
Capstone 1: Design a Cross-Cultural Study

Research Design Project

Estimated time: 3–4 weeks | Deliverables: Research proposal, IRB application mock-up, methodology justification memo, peer review


Overview

You have spent a semester learning how attraction works — and, more importantly, how to ask rigorous questions about it. You have watched Dr. Adaeze Okafor and Dr. Carlos Reyes wrestle with the challenge of studying desire across twelve countries without flattening the very cultural specificity they were trying to understand. You have read about the WEIRD problem, the replication crisis, the ethics of cross-cultural consent protocols, and the deep methodological tensions between evolutionary and social-constructionist frameworks.

Now it is your turn.

This capstone asks you to do what Okafor and Reyes did at the beginning of their careers: design a cross-cultural study of attraction from scratch. You will not be collecting data — this is a research design project. But the design must be rigorous enough that, in theory, you could. Your proposal should reflect everything the course has taught you about the craft of research: careful operationalization, transparent method choices, cross-cultural validity, and ethical responsibility.

This is one of the most intellectually demanding things you will do in this course, and it is supposed to be. Research design is where the rubber meets the road. Anyone can say "culture affects attraction." Saying how you would measure that, where you would look, who you would recruit, and what you would do when your instruments mean different things in different places — that requires real thinking.

You are ready for this. Let's get into it.


Learning Objectives

By completing this capstone, you will be able to:

  1. Formulate a testable research question in the domain of cross-cultural attraction science, distinguishing primary from secondary questions and identifying the assumptions embedded in each.
  2. Select and justify a research methodology for cross-cultural attraction research, weighing the trade-offs among experimental, survey, observational, and qualitative designs.
  3. Address cross-cultural equivalence as a concrete methodological challenge — including linguistic translation, conceptual equivalence, and measurement invariance.
  4. Apply ethical reasoning to cross-cultural research design, including IRB-level considerations about risk, consent, and cultural sensitivity.
  5. Write in the genre of the research proposal, producing a document that is clear, specific, and defensible.
  6. Give and receive constructive peer review on a research design.

Part I: Background — The Okafor-Reyes Study as Your Model

Dr. Okafor and Dr. Reyes did not arrive at the Global Attraction Project overnight. In Chapter 1, we watched them meet at a conference in Toronto and argue about whether a single research instrument could capture attraction across cultures without smuggling in Western assumptions. In Chapter 3, we saw the full architecture of their methodology: a three-component design combining large-scale surveys (administered in the local language at each site), behavioral observation protocols (standardized but culturally adapted), and in-depth qualitative interviews.

By Chapter 5, they were in the middle of an ethics board review that complicated their consent protocols in ways neither had anticipated — specifically around whether the concept of "individual informed consent" translated cleanly into cultural contexts where family involvement in romantic decisions is normative and expected. Reyes initially thought this was a procedural nuisance; Okafor argued, correctly, that it was a fundamental signal about what "voluntary participation" meant in different cultural contexts.

Their twelve countries were not chosen randomly. They selected them to maximize variation along several dimensions: individualism-collectivism, gender equality indices, religious diversity, economic development, and geographic spread. This was deliberate theoretical sampling, not convenience.

What does their study do well? It is ambitious in scope, rigorous about equivalence, and honest about limitations. What might it miss? Okafor herself has suggested, in several conference presentations, that the study's national-level sampling obscures enormous within-country variation — that "Nigerian attraction norms" is nearly as incoherent a category as "attraction norms in general," given Nigeria's approximately 250 ethnic groups and the vast urban-rural divide.

Your project can replicate part of what they did, extend it in a new direction, or challenge one of their design choices by proposing something better. All three approaches are valid — as long as you can defend your choices with evidence and logic.


Part II: Step-by-Step Project Guide

Step 1: Choose Your Two Countries and Justify Your Selection

Deliverable: 500-word country justification memo (included in final proposal)

The first decision you make — which two countries to study — shapes every subsequent decision. This is not a trivial choice, and you should not choose based on convenience, personal heritage, or what sounds interesting at first glance. You should choose strategically, based on what your research question requires.

What to consider when selecting countries:

Theoretical variance. You want two countries that differ meaningfully on the dimension you care about. If you are studying how individualism-collectivism shapes mate preference, you want one country high on individualism (say, Denmark or Australia) and one high on collectivism (say, South Korea or Pakistan). If both countries score similarly on your key dimension, you will not have much to compare.
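One way to make "theoretical variance" concrete is to check how far apart candidate country pairs actually sit on your key dimension before committing. A minimal sketch, using approximate Hofstede individualism scores (verify against Hofstede et al., 2010, before relying on them; the ranking logic is the point here):

```python
from itertools import combinations

# Approximate Hofstede individualism scores (0-100). Check these
# against Hofstede et al. (2010) before using them in a proposal.
individualism = {
    "Denmark": 74, "Australia": 90, "South Korea": 18,
    "Pakistan": 14, "Brazil": 38, "Japan": 46,
}

# Rank all country pairs by their gap on the dimension of interest:
# a larger gap means more theoretical variance to exploit.
pairs = sorted(
    combinations(individualism, 2),
    key=lambda p: abs(individualism[p[0]] - individualism[p[1]]),
    reverse=True,
)

for a, b in pairs[:3]:
    gap = abs(individualism[a] - individualism[b])
    print(f"{a} vs {b}: gap = {gap}")
```

A gap on one dimension is necessary but not sufficient: the same check should be run against confounding dimensions (economic development, religiosity) to see what else your comparison is silently varying.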

Practical feasibility. Research access, language resources, and ethical complexity vary enormously. You need to demonstrate that you could, in principle, conduct research in each country. This means you need at least one credible research partner or institutional base in each location. For this project, you may invent a collaborator (e.g., "Dr. X at the University of Y"), but they must be plausible.

Previous literature. Has any research been done on attraction in these two countries? What do you know already? Gaps in the existing literature are often good reasons to choose a location, but they are not sufficient on their own — you also need to be able to design valid instruments for a context you may not know intimately.

Ethical considerations. Some research contexts carry specific ethical risks. Researching same-sex attraction in countries where homosexuality is criminalized, for instance, creates serious safety obligations that must be addressed directly in your proposal.

In your 500-word memo, address:

  • Why these two countries? What specific theoretical dimension do they allow you to examine?
  • What do you already know about attraction research in these two contexts?
  • What makes this comparison interesting and non-obvious?
  • Are there ethical considerations specific to either country that you will need to address?

💡 Key Insight: Imagining Jordan, our sociology senior, taking on this project: Jordan might choose Brazil and Japan specifically because both have been subjects of idealized, romanticized narratives in Western popular culture — the "passionate Brazilian" and the "reserved Japanese" — that flatten enormous complexity. Jordan's interest is in whether the research data actually supports those stereotypes, or whether those stereotypes are themselves imported from outside. That framing gives the country choice a critical edge, not just a descriptive one.


Step 2: Develop Your Research Questions

Deliverable: Research questions section of final proposal

A good research question is specific, answerable, and theoretically motivated. "Do people in different countries attract differently?" is not a research question — it is a truism wrapped in vagueness. You need to be precise about:

  • What you are measuring (the construct)
  • Who you are studying (the population)
  • In what context (the setting)
  • What relationship you expect to find (the direction of your hypothesis, if any)

Your research design requires:

One primary research question — this is the central question your study is designed to answer.

Two secondary research questions — these are related questions that emerge naturally from your primary question and that your methods can also address, but that are not your main focus.

Examples of research questions at different quality levels:

Weak: "How does culture affect attraction?" Why it fails: "Culture" and "attraction" are both under-specified. This could mean almost anything.

Adequate: "Do college students in France and Mexico report different preferences for partner ambition?" Why it's better: Specific constructs, specific populations, specific countries. Why it could be stronger: It says nothing about why we expect a difference, or what theoretical framework predicts one.

Strong: "Among heterosexual adults aged 18–35, does the relative weight placed on partner economic ambition in long-term mate selection differ between a high-gender-equality country (Sweden) and a lower-gender-equality country (Egypt), and does this difference vary by the respondent's own gender?" Why it's strong: It specifies the population, the construct (weight placed on economic ambition), the comparison dimension (gender equality of country), the expected moderator (respondent gender), and the theoretical framework (sex-role theory of mate preference) — all without spelling that out explicitly.

Notice that strong research questions often include moderators. Real-world relationships rarely hold across all groups; specifying who you expect the finding to hold for (and who you do not) is a sign of theoretical sophistication.

⚖️ Debate Point: Okafor would push you to ask: whose question is this? Research questions that seem neutral often embed assumptions about what is worth studying and from whose perspective. As you write your question, ask yourself: if a research participant in one of your two countries read this question, would they recognize it as something meaningful in their own cultural context?


Step 3: Select Your Methodology

Deliverable: Methodology justification memo (800–1,000 words, included in final proposal)

This is the technical heart of your proposal. You must choose an overall research design and justify it. For cross-cultural attraction research, the main options are:

Option A: Survey/Questionnaire Design

Administer validated or adapted scales measuring your construct of interest across both countries. This is the most common approach in cross-cultural psychology and the backbone of the Okafor-Reyes study.

Strengths: Large samples, statistical power, can test measurement invariance, relatively cost-efficient.

Weaknesses: Social desirability bias, translation challenges, possible construct non-equivalence, reduced ecological validity.

Example scales you might adapt: Mate Preference Priority Questionnaire (Buunk et al.), Attachment Style questionnaires (ECR-R, covered in Chapter 11), Relationship Satisfaction scales, Objectification measures.

Option B: Behavioral Observation

Directly observe attraction-relevant behaviors in naturalistic or semi-naturalistic settings (e.g., speed-dating events, first meetings, social gatherings). This is the behavioral component of the Okafor-Reyes design.

Strengths: High ecological validity, avoids self-report bias, captures nonverbal behavior.

Weaknesses: Difficult to standardize across cultures, expensive, raises more ethical concerns about observation, small samples.

Option C: Qualitative Interviews

Conduct in-depth interviews with participants about their attraction experiences, relationship histories, and cultural norms around dating. This is Okafor's preferred component.

Strengths: Rich, contextualized data; can reveal unexpected dimensions; participant voice is centered; ideal for exploratory questions.

Weaknesses: Not generalizable, labor-intensive, analysis is complex, cross-cultural comparison is methodologically tricky.

Option D: Mixed Methods

Combine two or more of the above. This is the gold-standard approach for complex cross-cultural questions, but it is also the most demanding.

In your methodology memo, you must:

  1. Name your chosen design.
  2. Justify it with at least three reasons — this means three specific arguments, not just "it's well-suited." Why is this design right for your question, your two countries, and your sample?
  3. Acknowledge at least two significant limitations of your chosen design and explain how you will mitigate them.
  4. Specify your sampling strategy. How many participants? How will you recruit them? What are your inclusion/exclusion criteria?
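For item 4, "how did you determine it?" usually means a power analysis. A minimal sketch using the normal-approximation formula for a two-sided, two-group comparison of means (the effect size, alpha, and power values below are illustrative choices, not requirements of the assignment):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison
    of means. Uses the normal approximation, which slightly
    underestimates the exact t-test requirement."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile needed to reach target power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Example: detecting a small-to-medium cross-country difference
# (Cohen's d = 0.30) with 80% power at alpha = .05:
print(n_per_group(0.30))  # prints 175 (participants per country)
```

Stating a calculation like this in your sampling section, with a justified effect-size estimate drawn from prior literature, is far more defensible than a round number chosen by feel.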

🧪 Methodology Note: The Okafor-Reyes study uses what methodologists call a "partially emic, partially etic" design. The etic components are standardized across all 12 countries (allowing comparison). The emic components are allowed to vary by site (honoring cultural specificity). You do not have to use this design, but you should be aware of the emic-etic distinction and explain where your design falls on that spectrum.


Step 4: Address Cross-Cultural Equivalence

Deliverable: Equivalence section of final proposal

Cross-cultural research fails most often not because researchers choose the wrong country or the wrong method, but because they fail to ensure that their instruments mean the same thing in both cultural contexts. This is the problem of measurement equivalence (also called measurement invariance), and it is one of the hardest problems in cross-cultural psychology.

There are several distinct layers of equivalence you must address:

Linguistic equivalence. If you are using questionnaire items, they must be translated from English (or whatever the source language is) into each local language. Translation is not just a technical task — it is a theoretical one. Use back-translation: one translator renders the instrument into the target language; a second translator, who has not seen the original, renders it back into the source language; the two source-language versions are then compared. Discrepancies reveal where meaning has drifted.
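Comparing back-translations against originals is ultimately a human judgment task, but a crude similarity screen can help triage which items need the closest bilingual review. A toy sketch (the items and the 0.3 threshold are invented for illustration):

```python
from difflib import SequenceMatcher

def drift_score(original, back_translated):
    """0.0 = identical, 1.0 = completely different. A blunt proxy only:
    wording can change while meaning is preserved (and vice versa), so
    flagged items still require bilingual human review."""
    ratio = SequenceMatcher(None, original.lower(),
                            back_translated.lower()).ratio()
    return 1 - ratio

items = [
    ("How important is ambition in a long-term partner?",
     "How important is ambition in a long-term partner?"),
    ("I enjoy flirting at social gatherings.",
     "I like to show polite interest at social events."),
]

for original, back in items:
    flag = "REVIEW" if drift_score(original, back) > 0.3 else "ok"
    print(f"{flag}: {original!r}")
```

Note what the second item illustrates: the surface drift mirrors exactly the kind of conceptual gap around "flirting" described in Chapter 22, which no string-similarity score can adjudicate on its own.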

Conceptual equivalence. This is subtler and more important. It asks: does the underlying concept exist in both cultures? For example, in Chapter 22, the Okafor-Reyes study discovered that some East Asian participants did not have a readily available concept corresponding to "flirting" as an active, performative behavior — they had vocabulary for something closer to "attending with interest" or "subtle acknowledgment." The researchers had to decide whether to adapt their instrument (potentially losing comparability) or keep it standard (potentially measuring something slightly different in that context).

Metric equivalence. Do participants in both countries use rating scales in the same way? Research consistently shows that response style varies across cultures — some cultures show a strong preference for middle-of-the-scale responses; others show extreme responding. If you are comparing means across cultures, you need to test whether the metric is equivalent using confirmatory factor analysis or similar tools.
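Full invariance testing requires a CFA/SEM package, but the response-style problem itself is easy to illustrate. A toy diagnostic for 1–5 Likert data (the two samples below are invented to show the pattern):

```python
from collections import Counter

def response_style(ratings):
    """Share of extreme (1 or 5) and midpoint (3) responses on a
    1-5 scale. Large cross-sample gaps in these shares warn that raw
    mean comparisons may reflect scale use, not the construct."""
    counts = Counter(ratings)
    n = len(ratings)
    return {
        "extreme": (counts[1] + counts[5]) / n,
        "midpoint": counts[3] / n,
    }

# Invented illustration: sample A uses the scale endpoints heavily,
# sample B clusters at the midpoint.
sample_a = [1, 5, 5, 2, 5, 1, 4, 5, 1, 5]
sample_b = [3, 3, 4, 3, 2, 3, 3, 4, 3, 3]

print(response_style(sample_a))   # high "extreme" share
print(response_style(sample_b))   # high "midpoint" share
```

If the two countries differed like this, identical underlying attitudes would still produce different means — which is precisely why metric equivalence must be tested rather than assumed.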

In your equivalence section, address:

  • What specific concepts in your study might not translate cleanly across your two chosen countries?
  • How will you conduct translation, and who will do it?
  • How will you test for metric equivalence (you can describe the procedure even if you will not implement it yourself)?
  • What will you do if you discover partial non-equivalence? (Hint: there are statistical procedures for this, including partial invariance models.)

⚠️ Critical Caveat: The Okafor-Reyes study, for all its rigor, used a professional translation service for five of their twelve country adaptations, rather than genuine bilingual cultural insiders. Okafor has since described this as one of the study's significant limitations. Genuine cross-cultural research requires collaborators who are cultural insiders in each context — not just fluent in the language.


Step 5: Write a Mock IRB Application (Abridged)

Deliverable: Mock IRB application (included as an appendix to your final proposal)

All human subjects research in the United States must be reviewed and approved by an Institutional Review Board (IRB) before data collection begins. Chapter 3 introduced IRBs, and Chapter 5 showed how the Okafor-Reyes study ran into significant IRB complications around cross-cultural consent protocols.

Your mock IRB application is abridged — you will complete four sections rather than a full application. Write these as if you were genuinely submitting them for review.

Section 1: Project Description (300–400 words)

Describe your study in plain language. What are you studying, why does it matter, and what will participants actually do? This section is written for an IRB committee that may not be familiar with your field. Avoid jargon; define every technical term.

Section 2: Participant Population and Recruitment (200–300 words)

Who will your participants be? How will you recruit them? What are your inclusion and exclusion criteria? What is your target sample size, and how did you determine it? Are there any vulnerable populations involved (minors, prisoners, individuals in relationships of dependency)?

Section 3: Risk Assessment and Minimization (300–400 words)

What are the potential risks to participants? Be specific — this means psychological discomfort, breach of confidentiality, social risks (e.g., in contexts where discussing romantic/sexual topics is stigmatized), and any risks specific to your research context. For each risk, describe your mitigation strategy. Cross-cultural research almost always carries specific risks related to cultural context that domestic research does not.

Section 4: Consent Procedures (250–350 words)

How will you obtain informed consent? What does "voluntary participation" mean in each of your two cultural contexts? If either of your countries involves cultural norms in which individual autonomous consent is complicated — by family structures, by power dynamics between researcher and participant, or by institutional contexts — you must address this directly. Recall the discussion in Chapter 5 of how Okafor and Reyes had to redesign their consent procedures for several field sites.

🔵 Ethical Lens: IRB applications are not bureaucratic obstacles — they are where research ethics becomes concrete. The questions an IRB asks ("What is the risk to this specific person if they participate?") are the same questions that separate responsible science from extractive research. Take this section seriously even though it is a mock exercise.


Step 6: Write the Full Research Proposal

Deliverable: Research proposal (2,500–3,500 words, not counting the IRB appendix)

Your final proposal integrates and expands all the work you have done in Steps 1–5. It should read as a coherent, professional document — not as a collection of answers to separate prompts.

Required sections and approximate word targets:

  • Introduction (300–400 words): Why does this question matter? What is the intellectual and social significance of your research?
  • Literature Review (500–700 words): What do we already know? What are the key studies in this area? Where are the gaps?
  • Research Questions (200–300 words): Primary question plus two secondary questions, with theoretical justification.
  • Country Justification (400–500 words): Incorporate your Step 1 memo (revised and polished).
  • Methodology (600–800 words): Incorporate and expand your Step 3 memo; include instruments, sampling, and procedures.
  • Cross-Cultural Equivalence (300–400 words): Incorporate your Step 4 work.
  • Limitations (200–300 words): Honest discussion of what your study cannot do.
  • Conclusion (150–200 words): Why is this study worth doing? What would it contribute?
  • References: APA format.

The proposal should have a consistent voice — not "In step 3 I chose surveys" but "This study employs a cross-sectional survey design..."


Part III: Grading Rubric

Your project will be evaluated across five dimensions. Each dimension is worth 20 points, for a total of 100 points.


Dimension 1: Research Question Quality (20 points)

  • 18–20: Primary research question is specific, theoretically motivated, and clearly answerable by the proposed methods. Secondary questions extend logically from the primary. The question reveals genuine intellectual engagement with the course's frameworks — it is not a question Okafor and Reyes have already answered.
  • 14–17: Research question is mostly specific and theoretically grounded. Minor vagueness in operationalization or theoretical framing.
  • 10–13: Research question is adequate but under-specified. The theoretical motivation is implied rather than explicit. Secondary questions feel tacked on.
  • 6–9: Research question is too broad to be answerable, or not clearly connected to the proposed methods.
  • 0–5: Research question is missing, incoherent, or so general as to be meaningless in research terms.

Dimension 2: Methodology (20 points)

  • 18–20: Method choice is well-justified with at least three specific, compelling reasons. Sampling strategy is clearly described and appropriate. At least two limitations are acknowledged with realistic mitigation strategies. The emic-etic distinction is addressed.
  • 14–17: Method choice is justified but with some superficiality. Sampling is described. At least one limitation acknowledged.
  • 10–13: Method choice is stated but weakly justified. Sampling is vague. Limitations are acknowledged but not addressed.
  • 6–9: Method choice seems arbitrary. Sampling is missing or clearly inadequate.
  • 0–5: Methodology section is absent or incoherent.

Dimension 3: Cross-Cultural Equivalence (20 points)

  • 18–20: Demonstrates genuine understanding of the layers of cross-cultural equivalence (linguistic, conceptual, metric). Specific potential equivalence problems in the chosen countries are identified and addressed with concrete solutions. Shows awareness of the emic-etic tradeoff.
  • 14–17: Addresses equivalence with some depth. Most layers are covered.
  • 10–13: Addresses equivalence but only at a surface level (mentions "translation" without engaging with conceptual or metric equivalence).
  • 6–9: Equivalence is barely addressed or treated as a translation problem only.
  • 0–5: Equivalence is not addressed.

Dimension 4: IRB Application (20 points)

  • 18–20: All four IRB sections are complete and demonstrate genuine ethical reasoning. Risks are specific (not generic). Consent procedures reflect awareness of the cross-cultural complexity raised in Chapter 5. Risk mitigation strategies are realistic.
  • 14–17: All four sections present. Ethical reasoning is mostly sound with some superficiality.
  • 10–13: At least three sections present. Ethics feels perfunctory. Risks are generic ("participants may feel uncomfortable").
  • 6–9: Two or fewer sections. Serious gaps in ethical reasoning.
  • 0–5: IRB application is absent or clearly placeholder text.

Dimension 5: Writing Quality and Proposal Coherence (20 points)

  • 18–20: Proposal reads as a unified professional document. All sections are present and within word targets. APA citations are correct. Voice is consistent. Writing is clear, precise, and appropriately academic without being unnecessarily jargon-heavy.
  • 14–17: Proposal is mostly coherent. Minor inconsistencies in voice or citation style. Word targets mostly met.
  • 10–13: Proposal feels like answers to separate prompts rather than a unified document. Writing is adequate but uneven. Some sections are significantly under-developed.
  • 6–9: Major sections missing or extremely thin. Writing quality significantly impedes comprehension.
  • 0–5: Proposal is substantially incomplete.

Peer Review (separate grade, 10 points): Your peer review of a classmate's proposal will be assessed separately. Strong peer reviews: identify at least two genuine strengths, raise at least two specific methodological concerns, and offer constructive suggestions for improvement. Peer reviews that are purely complimentary ("this is great!") or purely critical without constructive direction will receive partial credit.


Part IV: What Strong Work Looks Like

The following is an abbreviated example of a strong research question section and a strong country justification memo. This is not a template to copy — it is a demonstration of the kind of thinking the assignment requires.


Example: Research Question Section (abbreviated)

This study asks: Among adults aged 18–40 engaged in heterosexual dating, does the weight placed on a potential partner's educational attainment in mate selection differ between Iceland and Morocco, and does this relationship vary by the respondent's own educational level?

This question is motivated by two bodies of theory in productive tension. From a human capital perspective (Oppenheimer, 1988; Sweeney & Cancian, 2004), rising female educational attainment in post-industrial societies has produced increasing homogamy — the tendency to partner with someone of similar educational status. Iceland, consistently ranked first in the world for gender equality (World Economic Forum, 2024), provides an ideal test case for this prediction. Morocco, while experiencing significant increases in female educational attainment over the past two decades, operates under a different gender-role framework shaped by Islamic family law and substantially lower gender equality indices (WEF rank: 136th). The prediction is that educational homogamy will be more pronounced in Iceland; the open question is whether this effect is symmetrical by respondent gender (i.e., whether women's preference for equally- or better-educated partners has shifted as much as men's in Iceland).

Secondary questions: (1) Does the relative weight placed on partner income versus partner educational credentials differ between the two countries? (2) Do age and marital history moderate the educational attainment preference in each country?


What makes this strong:

  • The primary question is specific: it names the construct (weight placed on educational attainment), the population (adults 18–40 in heterosexual dating), the comparison dimension (Iceland vs. Morocco), and the moderator (respondent's own education level).
  • The theoretical motivation is explicit: it names the theoretical framework (human capital / educational homogamy) and explains why these two countries are the right test case.
  • It makes a directional prediction while honestly acknowledging what remains an open question.
  • The secondary questions extend logically from the primary, rather than being unrelated add-ons.


Part V: Common Pitfalls

1. Treating countries as monoliths. Nigeria is not a single thing. Japan is not a single thing. Iceland is not a single thing. Every country contains enormous within-group variation by region, ethnicity, class, generation, urbanicity, and religion. Acknowledge this explicitly and, where possible, specify your sampling in a way that reflects the subpopulation you are actually studying.

2. Choosing countries based on stereotypes. "I want to compare France and Japan because France is romantic and Japan is reserved." This is a stereotype, not a theoretical rationale. Your country choice needs to be grounded in specific, measurable cultural dimensions (Hofstede's cultural dimensions, the World Values Survey, gender equality indices) — not in cultural mythology.

3. Confusing translation with equivalence. Translating your survey into Arabic or Korean is necessary but not sufficient. You must also ensure that the concepts in your survey mean what you think they mean in each cultural context. This is conceptual equivalence, and it requires more than a translation service.

4. Generic IRB risk sections. "Participants may experience mild discomfort discussing personal topics" is not a risk assessment. It is a phrase. What specific topics in your study could cause discomfort? For whom? In which cultural context? What will you do about it?

5. Forgetting the "so what." The introduction and conclusion of your proposal should answer the "so what" question directly. Why does it matter whether educational homogamy operates differently in Iceland and Morocco? What would we do differently — in theory, in practice, in policy — if we knew the answer?

6. Over-ambition without acknowledgment. Students sometimes write proposals that would require the resources of a major funded research center. That is fine — ambition is good. But you must also honestly acknowledge what you could not do, what you would need to approximate, and what your study's limitations are. Unapologetic over-ambition without self-awareness is a design flaw.


Part VI: Resources and Templates

Core Methodological Resources

  • Vandenberg, R.J., & Lance, C.E. (2000). "A Review and Synthesis of the Measurement Invariance Literature: Suggestions, Practices, and Recommendations for Organizational Research." Organizational Research Methods, 3(1), 4–70. — The foundational text on measurement invariance; technical but essential.

  • van de Vijver, F.J.R., & Leung, K. (1997). Methods and Data Analysis for Cross-Cultural Research. Sage. — The standard reference for cross-cultural methodology. Chapters 3 and 6 are particularly relevant.

  • Hofstede, G., Hofstede, G.J., & Minkov, M. (2010). Cultures and Organizations: Software of the Mind (3rd ed.). McGraw-Hill. — The source for individualism-collectivism and other cultural dimension scores by country.

  • World Values Survey (worldvaluessurvey.org) — Free access to decades of comparative attitudinal data across 90+ countries. Essential for grounding your country justification empirically.

Relevant Course Chapters

  • Chapter 3 — The Okafor-Reyes methodology; IRB overview; survey design; effect sizes.
  • Chapter 5 — Cross-cultural ethics and consent; the IRB complications Okafor and Reyes encountered.
  • Chapter 8 — Cross-cultural physical attractiveness standards; measurement issues.
  • Chapter 12 — Cognitive biases in attraction; survey measurement challenges.
  • Chapter 19 — Flirtation behavioral coding across cultures (Okafor-Reyes Year 3 data).
  • Chapter 22 — The "silence and space" finding; conceptual non-equivalence in action.
  • Chapter 25 — Racial preference data controversy; research ethics in sensitive areas.

IRB Resources

  • Your institution's IRB office website will have its own templates. Use those if available; otherwise, the PRIM&R (Public Responsibility in Medicine and Research) website offers standard templates at primr.org.
  • The Belmont Report (1979) remains the foundational ethical document for human subjects research. It is approximately 8,000 words and worth reading in full: hhs.gov/ohrp/regulations-and-policy/belmont-report.

Suggested Citation Management Tools

  • Zotero (free, open source)
  • Mendeley (free)
  • Your institution's library likely provides access to RefWorks or EndNote

A Final Word

Research design is an act of intellectual humility. Every choice you make — which countries, which method, which sample, which instrument — forecloses other possibilities. Good researchers do not pretend those trade-offs do not exist; they acknowledge them clearly and argue that the trade-offs they have made are worth making.

Okafor and Reyes have been wrestling with the trade-offs of the Global Attraction Project for five years. Some of their early choices have held up beautifully. Others they would make differently now. That is not failure — it is science.

Your job is not to design a perfect study. Your job is to design a study that you can defend, whose assumptions you can name, and whose limitations you have genuinely thought through. That, in the end, is what rigorous scholarship looks like.

Good luck.


This capstone project draws directly on the methodological frameworks introduced in Chapters 3 and 5, the cross-cultural findings discussed in Chapters 8, 19, and 22, and the ethical frameworks developed throughout Parts I and VI of the course. Appendix A (Research Methods Reference Guide) may also be helpful for methodology decisions.