Michael Ungar

Culturally Relevant Resilience Research

Posted on February 14, 2006

Michael Ungar (bio) describes his work with the International Resilience Project.

Q: You are involved, with many other people, in a multi-site, multi-national, mixed methods study on resilience, called the International Resilience Project. Its purpose is to develop and apply culturally-sensitive methods that will make a positive impact on policy and interventions for children at risk. How did you get interested in this project?
A: The idea was to move away from the more typical methods of studying resilience, which are unsatisfactory in many ways. They often rely on measures that have a Eurocentric bias, and they don't sufficiently account for the complexity of the cultural and social world of the child. We wanted to develop a way to conduct research that could be used internationally; one that would be both scientifically rigorous and culturally relevant.
Q: What's involved in developing a culturally-sensitive protocol?
A: The first step was to bring together a team. I first invited colleagues I knew both in Canada and internationally who shared an interest in this topic. Then I went searching for people from as many different contexts and cultures as I could find. I discovered I was always just one or two degrees of separation from a person interested in this research and found partners on every continent, in some very challenging environments, like those where there is war, or epidemics of AIDS, poverty, and many different experiences of marginalization. The team includes academics, community NGOs with some capacity for research, child advocates, and practitioners. I think our strength was in our diversity. At the moment, we are in communication with people in 22 countries. In order to make this work, I have encouraged the swapping of favors. I do some work for my partners, they do work for the IRP, and together we share resources. We've also managed to meet twice all together in Halifax.
Q: How did you choose the sites?
A: After building the research team, we needed to choose the sites. Our selection criteria were primarily aimed at maximizing variability. We weren't trying to say something about any particular population, but were trying to compare small groups of youth (60 or more) from different contexts and cultures in order to test methods and develop some research tools.

Of course, many of the team members already had research underway with the kind of youth we wanted to include in this study. For example, our Colombian partner, Dr. Luis F. Duque, was already conducting a longitudinal study of outcomes from a violence prevention program in Medellín, one of the most dangerous cities in the world. In some cases, though, team members found new communities to work with specifically for the purposes of this research. It made them think about resilience and where they might find resilient youth. For example, our Palestinian partner made contact with the boy scout troop leaders in the Ramallah refugee camp. It was a perfect place to meet young people who were coping well under extreme adversity. Site selection, then, was based on both convenience and word of mouth.

The sites we eventually chose were Halifax, Canada; Winnipeg, Canada (Aboriginal and non-Aboriginal youth); Sheshatshiu, Labrador in northern Canada; Tampa, Florida, USA; Medellín, Colombia; East Jerusalem, Palestinian Occupied Territories; Tel Aviv, Israel; Hong Kong, China; Moscow, Russia; Imphal, India; Serekunda, The Gambia; Moshi, Tanzania; and Cape Town, South Africa.
Q: You set up a meeting with team members to design the research. Who attended the meeting, and what did you accomplish?
A: We arranged a 2-day conference in Halifax, Canada, with one member from each site attending. We hired a professional facilitator to help keep us moving along in our process. I didn't want that role, as I wanted to make sure I could be there listening and participating. The facilitator, himself a Ph.D. in psychology, was excellent. He was able to push us along toward finding solutions to the problems of research design that we were there to address. It had cost tens of thousands of dollars for us all to meet, so it made sense to keep us on track and on schedule.

We reached consensus using a 3-phase process. The purpose of the first phase was to get to know each other and to talk about the operational definition of resilience. Each member gave a 15-minute presentation on his/her site, focusing on the challenges that the children face, and how they cope with them. The presenter also discussed the concept of resilience as it is defined in the community. In the second phase, we discussed the methods (participants, measures, etc.) and came to a consensus. The last phase was concerned with finalizing the design.
Q: How did you come to a consensus on methods, given that the sites differed so significantly?
A: Phase II, establishing the methodology, was quite time-consuming. We worked electronically for about 8 months before meeting face to face. But by then, we had enough ideas on the table to move quickly to finding compromise. It was of course necessary for everyone to be open to compromise. We represented, after all, a dozen or more cultures, from five continents, living in very different contexts, with different levels of education and from different disciplines. Even more challenging was the fact that we have experts in qualitative and quantitative methods, but few individuals who specialized in mixed method research. We addressed the following questions:
  • Who are the participants going to be?
  • What areas of people's lives will we study?
  • What measures will we use?
  • What ethical dilemmas might arise, and how will we deal with them?
  • What are the site-specific constraints and opportunities that we are likely to encounter?
Who are the participants going to be?
To decide on participants, we split into small groups and identified 'key informants'. During the large group discussion afterwards, we found many cultural differences, particularly when it came to deciding on the age of the children whom we would study. Some groups felt that 11-year-olds should be the focus, while other groups felt that late teens would be a better target. After discussing the differences in expectations across societies (i.e. the age at which children are expected to take care of themselves, take on jobs, become sexually active), we decided that each site should determine the age range that they would study, with the provision that it spanned a 3-year time frame.

What areas of people's lives will we study?
Deciding on what domains to look at again required breaking up into small groups. Each group got colored index cards with domains typed on them (based on a pre-meeting email survey that asked group members to rank domains in order of importance). We then had to sort them in categories for inclusion in the study: Yes, Maybe, and No. Groups were allowed a maximum of 12 cards in the Yes pile, and were given 30 minutes to reach consensus. We ended up with 32 domains under four broad topics. All of the agreed-on domains were seen to be relevant to each culture, and each had a theoretical foundation.

What measures will we use?
Next, we looked at what methods we might use that would be appropriate across sites, disciplines, and theoretical backgrounds. We agreed that in order to develop a valid measure of resilience, we needed to include the voices of the children we were studying. A semi-structured interview was proposed as a way of allowing each site to define resilience in its own way. Each site was given the tool to pilot and revise, according to its culture. Designing the interview was challenging. For example, items about sexual relationships were seen as acceptable in one culture (Canadian), but needed to be modified in the Muslim communities. Subtle differences in word meanings occurred as well. We changed 'camaraderie', which had a militant connotation in some cultures, to 'feeling part of a group', and 'protest against' was softened to 'disagree'. As well, some concepts promoted by our overseas partners proved difficult for westerners to grasp. For example, Chinese colleagues spoke of "self-betterment" but didn't mean "me improving myself." They wanted us to reflect a much more collectivist notion of personal development: what I do affects my family, my community, my ancestors, and those who come after me. Finding equivalent constructs in western individualistic cultures has proven difficult.

What ethical dilemmas might arise, and how will we deal with them?
We started by identifying and discussing ethical issues that we had encountered in previous work. We ended up with some areas that were common to all of the sites:
  • Confidentiality and safety: Disclosure of information might put participants at risk, particularly in societies where war or tribal conflict is occurring. We needed to be careful that participants were not seen as 'colluding with outsiders'.
  • Consent: There was a great deal of variation among sites concerning the issue of consent. Canadian standards, for example, generally require written consent from the parent. This was not the case in other societies, where verbal consent would be the only practical method.
  • Coercion: In many societies, both children and adults might feel a social obligation to participate in the study.
  • Local ethics boards: For several sites, local review boards did not exist, so we developed a set of 12 questions to help guide us in each community. Community members sitting on Local Advisory Committees answered questions such as, "Can people be hurt in any way by taking part in this research?" and "Do they understand what the information will be used for?"
  • Benefit to community: All team members felt that it was essential to offer some tangible benefits, in addition to money, for the communities being studied. For example, could the information gleaned from the study be used by local officials to improve children's health?
What are the constraints and opportunities that we are likely to encounter?
For this last issue, we divided into teams by site, and listed potential obstacles for conducting this research at our specific locations. For example, in some sites there were two or more cultural groups. Should we sample from all groups within the site? (We decided to do so, if at all possible). In certain sites, social norms might cause participants to give answers that they viewed to be 'correct', while others might not be comfortable answering personal questions. These site-specific constraints were brought to the table, and possible solutions were discussed.
Q: What did Phase 3 of this consensus process entail?
A: On the last day of the team meeting, we made decisions on the final design. First, we broke into groups of about 10 to write up suggested research templates. After viewing all the templates, we discussed their similarities and differences, and came to a tentative consensus. The design was finalized after the team meeting over a period of four months through email communication. Based on the consensus and on the communication that followed the meeting, we developed the following documents:

Protocol Manual
We wanted to develop a manual that was based on strong methodology, but was flexible enough so that the various sites could use it to gather the data they needed. We used flow charts to illustrate each of the 11 phases of the study, and used icons to show the people responsible for each step.

Qualitative Methods Toolbox
The toolbox clearly described in layman's terms the qualitative methods for data collection.

Fieldwork Checklist
We developed a checklist of administrative and data collection tasks to help site workers track their progress. We also included a suggested timeline. All this went ahead fairly well, though we had to be sure that our instructions were clear and could be easily translated. We also had to figure out how to go from our teams' general consensus on what to do, to finding a way to put it into practice that would work across so many different sites. It was this contextualization that took time.

We also had the enormous task of getting this through the Research Ethics Board. There was a struggle around consent issues: we wanted to allow verbal consent and, if written was necessary, to limit the amount of technical details in the forms. We ended up submitting the proposal 5 times before it was approved.

Legal documents were also needed to delineate the responsibilities of each site and of the funder. This involved lengthy negotiations and several revisions. It was difficult, for example, for the university to understand that the sites needed to have funds advanced to them, rather than to get reimbursed. Similarly, transferring documents to our partners seemed straightforward, but sending .pdf files and Word documents is not as simple as it sounds when you are asking community researchers to access their email in small towns at internet cafés with limited bandwidth.
Q: Based on your experience with beginning the process of conducting multi-site, cross-cultural research, what lessons have you learned?
A: There's no guide out there to show us the best way to approach this kind of research, but there is a great deal of need. I would highly recommend to anyone in a university embarking on an international project to understand university requirements before developing the protocol. I also think that the issue of adaptability is key. If you want to conduct research in the real world, with the diversity that it entails, then you need a flexible design. Our inclusion of qualitative and quantitative methods, with each informing the other, speaks to this need to have research that is both reliable and valid.

Based on personal communication with the researcher in February 2006 and a published article: Ungar, M., & Liebenberg, L. (2005). The International Resilience Project: A mixed methods approach to the study of resilience across cultures. In M. Ungar (Ed.), Handbook for working with children and youth: Pathways to resilience across cultures and contexts (pp. 211-226). Thousand Oaks, CA: Sage Publications.