Rationale and Guidelines for Implementing a Revenue Allocation Survey in Kenya

Dorcas Omowole
Dec 29, 2021

(Note: This paper was written in the first quarter of 2018, on 23 March 2018, as part of a Research Design course)

“…Neither economists nor political scientists can answer the crucial economic and political questions about the consequences of revenue sharing.”

- Henry Aaron, “The Honest Citizen’s Guide to Revenue Sharing”[i]

“Indeed, in virtually all federations in which the constitution shares power between the central and regional or state governments”, for each level to be “within a sphere co-ordinate and independent”,[ii] enough resources need to be allocated to each tier to justify its existence.[iii] This process involves distributing governmental functions, the harder task of allocating resources, and making simultaneous adjustments to the allocation of resources and functions between national and sub-national units.[iv] Irrespective of the origin of a federation, whether by aggregation or devolution, the task of allocation is not only onerous[v] but also critical: if not done properly, it can jeopardize national cohesion and integration efforts. Hence, a revenue allocation process that “imbues in the citizenry a sense of justice, equity and fairness”[vi] is of immense value.

The Constitution of Kenya, promulgated on 27 August 2010, devolved powers and resources to the 47 counties across the country.[vii] The devolution became effective with the election of county governors, deputy governors and representatives in March 2013.[viii] The Commission on Revenue Allocation (CRA), an independent commission, was set up under Article 215 of the 2010 Constitution with the core mandate of recommending the basis for equitable sharing of revenues raised nationally between the National and County governments.[ix] The drive for devolution stems from the assumption that effective implementation of devolution may address the political and economic grievances at the root of Kenya’s political conflict. The devolution effort relies on the necessary condition that the dispersal of powers and resources facilitates political accommodation and an equitable distribution of state resources and development.[x]

The Kenyan devolution and revenue allocation process has generated considerable attention and public discourse within and outside Kenya, from civil society and international organizations. Since the devolution and consequent revenue allocation process was instituted for Kenyans, it is important to understand Kenyans’ perceptions of the process. There is a need for positive returns on quality of life: considering the funds that have been invested in setting up new agencies and supporting parastatals, it is important to understand how valuable or worthwhile these investments have been from the viewpoint of Kenyans, and over time to compare investments against improvements in perceptions. This study will measure perceptions by asking select, pertinent questions using a questionnaire survey.

The research team recommends a survey for this study for the following reasons:

· Surveys are appropriate for data collection when data is to be gathered from many people, standardization is important, and answers are needed to a clearly defined set of questions.[xi]

· Surveys are generally conducted to provide data for descriptive inference and hypothesis testing.[xii] A carefully selected probability sample combined with a standardized questionnaire makes it possible to make refined descriptive assertions about a population.[xiii]

· A randomly distributed survey is most appropriate for measuring perceptions quantitatively so that they form a basis for decision making and for comparisons across time and groups.[xiv] Significant differences among subgroups can be tested. The research team also envisages that this study would be conducted periodically to monitor progress on the revenue allocation variables measured by the survey.

· In the spirit of democracy and grassroots participation, the research team believes it is most appropriate to talk to the people directly. Such a direct approach gives Kenyans a voice (participation and expression) with respect to revenue allocation.
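As a sketch of the subgroup comparisons mentioned above, the following Python fragment illustrates one common approach: comparing mean perception scores between two county subgroups with Welch's t statistic. The data and group labels are entirely invented for illustration; the survey's actual analysis plan is not specified in this paper.

```python
# Hypothetical illustration only: responses use the survey's five-point
# scale, coded 1-5 for analysis. The two groups and their values are
# invented data, not survey results.
from statistics import mean, stdev
from math import sqrt

group_a = [4, 5, 3, 4, 4, 5, 3, 4]   # e.g. respondents in one county
group_b = [2, 3, 2, 3, 1, 2, 3, 2]   # e.g. respondents in another county

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    return (mean(x) - mean(y)) / se

t = welch_t(group_a, group_b)
print(f"mean A = {mean(group_a):.2f}, mean B = {mean(group_b):.2f}, t = {t:.2f}")
```

A large t statistic would indicate that the difference in mean perceptions between the two subgroups is unlikely to be due to sampling variation alone, which is the kind of test the bullet above has in mind.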

Prior secondary research and qualitative studies have revealed characteristics of revenue allocation that now need to be measured quantitatively. The survey includes open-ended questions, such as general overviews and suggestions. Permission to contact respondents for clarifications or follow-up studies after the survey was also requested.

Questionnaire Design Considerations

Based on commendable advice from Krosnick and Presser (2010), in designing this survey we paid attention to word choice and the structural features of our questions: we used simple and familiar words and simple sentences, avoided words that different respondents might interpret differently, and avoided leading questions, double-barreled questions, and questions with single or double negations. The language used in this survey is conversational and designed to put respondents at ease.

Our survey made use of five-point rating scales to measure perceptions. The research team chose rating scales whose response options were exhaustive and mutually exclusive and that met the conditions for rating scales to work effectively. The points cover the entire measurement continuum, leaving out no regions, and are ordinal, progressing from one end of the continuum to the other. The meanings of adjacent points do not overlap. This ensures that the meaning associated with each point on the scale is unambiguous, so that the reliability and validity of measurement are not compromised.[i] The “nearness” of someone’s true judgment to the nearest conceptual division between adjacent scale points is associated with unreliability of responses: respondents nearer to a division are more likely to pick one option on one occasion and another option on a different occasion.[ii] Hence, we did not include “moderate” in the scale, to reduce the chances of this occurring and to avoid forcing respondents to choose between “agree” and “moderately agree”, or “inappropriate” and “moderately inappropriate”, when they cannot clearly distinguish between them. The research team also did not present, either verbally or visually, a “neutral” (midpoint) or “don’t know” option, to discourage satisficing.

According to Krosnick and Presser (2010), a respondent’s task becomes more difficult when rating scale points carry numerical rather than verbal labels. To make sense of a numerically labeled rating scale, respondents must first generate a verbal definition for each point and then match these definitions against their mental representation of the attitude of interest. Verbal labels therefore have an advantage: they clarify the meanings of the scale points while reducing respondent burden by removing a step from the cognitive processes entailed in answering the question. In designing our survey, we used verbal labels for all rating scales (the numbers in the questionnaire are for the interviewer’s ease in recording options and for data processing purposes). Various studies suggest that reliability is higher when all points are labeled with words than when only some are.[iii] Respondents also express greater satisfaction when scale points are verbally labeled.[iv]
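The division of labor described above, in which respondents see only verbal labels while interviewers record numeric codes, can be sketched as a simple lookup. The labels below are placeholders invented for illustration; the actual questionnaire wording is not reproduced in this paper.

```python
# Hypothetical sketch: mapping verbal scale labels (what the respondent
# hears) to the numeric codes an interviewer records for data processing.
# These five labels are invented placeholders, not the survey's wording.
LABEL_TO_CODE = {
    "very unfair": 1,
    "unfair": 2,
    "somewhat fair": 3,
    "fair": 4,
    "very fair": 5,
}

def record_response(label: str) -> int:
    """Return the numeric code for a verbal response, rejecting unknown labels."""
    code = LABEL_TO_CODE.get(label.strip().lower())
    if code is None:
        raise ValueError(f"unrecognized response label: {label!r}")
    return code

print(record_response("Fair"))  # codes exist only for data entry, never shown to respondents
```

Rejecting unrecognized labels at entry time is one simple way to catch interviewer transcription errors before they reach the dataset.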

In designing this survey, the research team was also concerned with minimizing the likelihood of satisficing and acquiescence. Satisficing occurs when a respondent provides answers with no intrinsic motivation to make them high quality: being less thoughtful about a question’s meaning, searching memory superficially, and integrating retrieved information sketchily, leading to the selection of inaccurate response choices.[v] While the same process operates in acquiescence, the factors underlying acquiescence include lower social status, less formal education, and lower intelligence. Satisficing and/or acquiescence are more likely to occur when a question is difficult, whether because it is hard to interpret, the interviewer stumbles on words, or there are distractions. Acquiescence also occurs when respondents have become fatigued by answering many prior questions, and when interviews are conducted by telephone rather than face-to-face. In addition to question difficulty, satisficing is also thought to depend on the respondent’s comprehension ability, pre-formulated judgments on the issue in question, and the importance the respondent attaches to the outcomes of the survey.[vi] Respondents may not be motivated to provide optimizing responses if they think this is yet another survey that will not translate into positive societal outcomes.

Since our survey is designed to be nationally representative, we would be talking to people across various literacy, cognition and motivation levels. This necessitates a commitment to implementing conditions that minimize acquiescence and satisficing by making the questions “more accessible”[vii] to respondents. One way we plan to make our survey more accessible is by translating the questionnaires into local language(s) in Kenya and having interviewers fluent in those languages conduct interviews with less literate respondents, or those of lower socio-economic class, in their own local language. The questionnaires would be translated into local languages and back-translated into English by another translator to check that the original meanings of words have been maintained. There was a compromise between having a more data-heavy survey and minimizing satisficing: the research team avoided questions involving long lists or mentally and visually taxing tasks, opting instead for a shorter questionnaire with straightforward questions and simple answer choices that respondents could answer easily while still addressing our research questions.[viii] Demographic questions were placed at the end, after the main questions. Part of our reason for choosing interviewer-led face-to-face interviews was to reduce satisficing; the presence of an interviewer reduces the number of “don’t know” responses.[ix]

Using guidelines from Krosnick and Presser (2010), question order was optimized by beginning the questionnaire with an open-ended question that was easy to answer, to get the respondent into a conversational mood and build rapport between respondent and researcher.[x] The first question delved directly into the topic of the survey. Questions on the same topic were grouped together and proceeded from general to specific: from the first section, “Perceptions on Revenue Allocation”, to “Perceptions on Revenue Allocation to Counties”, and then “Recommendations and Suggestions”. Sensitive questions, such as those on ethnicity and educational qualifications, were placed at the end of the questionnaire. Asking about ethnicity earlier might bias the respondent negatively by creating doubts about the real purpose of our survey. A respondent’s educational qualification is a sensitive question in some contexts, but it is a variable by which the research team hopes to disaggregate research findings. A low educational qualification is a sign of exclusion; hence, the research team wants to understand whether revenue allocation entrenches perceptions of exclusion among those who are already excluded in other ways.

Filter questions and instructions to interviewers were also stated clearly in the questionnaire so that respondents are not asked questions that do not apply to them. Asking respondents such questions might make the interviewer look unprepared, be irksome for the respondent, and may even cause loss of interest in the survey or satisficing. In Question 4 of the questionnaire, only respondents who indicated that the percentage allocated to counties was not adequate were asked to suggest a preferred percentage. Question 3 asked respondents what percentage of national revenue is allocated to counties, and serves as a control question for the respondent’s level of political knowledge.
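The filter logic described above can be sketched as a small routing function. The field names and the assumption that the interview continues at a next question after the filter are invented for illustration; the paper does not specify the questionnaire's internal identifiers.

```python
# Hypothetical sketch of the filter described above: the preferred-percentage
# question (Question 4) is asked only of respondents who said the current
# allocation is not adequate. Field and question names are invented.
def next_question(responses: dict) -> str:
    """Return the next question ID to administer, applying the Q4 filter."""
    if responses.get("allocation_adequate") == "not adequate":
        return "Q4_preferred_percentage"
    # Respondents who consider the allocation adequate skip Question 4.
    return "Q5"

print(next_question({"allocation_adequate": "not adequate"}))
print(next_question({"allocation_adequate": "adequate"}))
```

Encoding skip patterns this way, rather than leaving them to interviewer memory, is what prevents respondents from being asked questions that do not apply to them.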
The last question in the demographic section, asking respondents who they think was the actual winner of the 2013 elections, was included as a measure of political affiliation. Pro-Uhuru respondents are likely to answer Uhuru Kenyatta, while pro-Raila respondents are likely to answer Raila Odinga. The actual names of these politicians were not included in the questionnaire, but interviewers knew which code to enter for each politician. This strategy of not including names was adopted for safety reasons.

Strategies for Overcoming Difficulties in Measurement

The strategies for tackling difficulties in measurement included designing a clear and comprehensive consent form, pretesting our survey instrument (questionnaire), and ensuring our interviewers are adequately trained. The consent process included a form stating the objectives of the study in clear language, and interviewers obtained verbal consent. There would be no incentive for participating in the study, but we hope that respondents will consider contributing to the common good of peace and prosperity through the revenue allocation process in Kenya motivation enough.[xi]

Our interviewers also had identity cards in case respondents requested confirmation that they are researchers working with the Institute of Economic Affairs (IEA), Kenya. We did not use branded clothing because we did not want to draw unnecessary attention to our interviewers or risk them becoming victims of violence by those who do not understand the purpose of the survey. As representatives of IEA, our interviewers were trained to be very polite and to desist from extraneous conversations that might be construed as ethnically motivated or biased, so as not to put themselves in harm’s way. The interviewers were also trained to be respectful of the respondent’s space and time: to be conversant with the questionnaire and able to administer the questions without unnecessary pauses or breaks, to desist from acts that would make respondents uncomfortable, and to be observant and look for little things that would make the respondent comfortable, interested, and happy to complete the survey. Respect for the respondent’s space also involves dressing neatly and appropriately; sloppy or extravagant dressing could be a turn-off for some respondents. “Dress the way you want to be addressed” is a norm that affects the response and acceptance interviewers enjoy.[i]

An established policy think tank in Kenya, with whom we have partnered, has experience conducting large surveys, is committed to ensuring high quality data, and is aware of the need to train interviewers adequately to prevent or better manage difficulties in measurement. During the training of interviewers, the rationale for each question was clarified so that interviewers understand the importance of each question and of asking the questions as specified, and are able to identify contradictory responses. Interviewers worked in pairs to conduct mock interviews, becoming familiar with the questionnaire and able to ask the questions fluently and competently. Pilot interviews with actual respondents also took place before full fieldwork rollout. The research team gave feedback after the mock and pilot interviews, answered questions, and clarified expectations. Final versions of the English and local-language questionnaires were used during the mocks and pretest/pilot interviews. The pretest process is an important step for anticipating and mitigating problems in measurement. It would allow us to:

· Revise the initial draft of the questionnaire by identifying questions and answer categories that need to be improved to enhance questionnaire comprehension.

· Determine whether the questionnaire is too long or confusing, or whether additional transitional language is needed between questions.

· Perform basic analysis of the data generated via the pretest to discover any important omissions.

· Reinforce the training of research assistants and interviewers and improve their confidence in administering the survey.
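The basic pretest analysis mentioned above can be as simple as tabulating item nonresponse across pilot interviews to flag questions respondents skip or cannot answer. The pilot records below are invented for illustration; question IDs are placeholders.

```python
# Hypothetical sketch: per-question nonresponse rates from pilot interviews.
# A high rate flags a question for revision before fieldwork. Data invented.
from collections import Counter

pilot = [
    {"Q1": "yes", "Q2": None,   "Q3": "15%"},   # None marks a skipped item
    {"Q1": "no",  "Q2": None,   "Q3": None},
    {"Q1": "yes", "Q2": "fair", "Q3": "20%"},
]

missing = Counter()
for interview in pilot:
    for q, answer in interview.items():
        if answer is None:
            missing[q] += 1

for q in sorted(missing):
    rate = missing[q] / len(pilot)
    print(f"{q}: {rate:.0%} nonresponse")  # e.g. a question most pilots skip needs rework
```

In this invented example Q2 is skipped in two of three pilots, exactly the kind of omission the pretest is meant to surface.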

Furthermore, interviewers were encouraged to suggest revisions they think would enhance the comprehension and flow of the questionnaire, to increase their sense of ownership of the project. The research team expects that the team spirit generated during the feedback from the mock and pilot interviews will help form bonds and excitement that carry through the project and translate into better data quality. There will also be periodic meetings and follow-up calls with research assistants and interviewers at various completion stages of the survey, along with review of completed questionnaires, to address issues quickly, resolve questions, and keep the whole team updated on progress.

Conclusion

The research team agrees with Henry Aaron that “…neither economists nor political scientists can answer the crucial economic and political questions about the consequences of revenue sharing” without due consultation with the people. It is important that economists and political scientists work with the people to understand what the people want, why they want it, and how to go about providing it, towards ensuring that the consequences of revenue allocation are positive overall. A well-designed and properly implemented revenue allocation survey such as this one represents the views of the people and evaluates the extent to which their expectations regarding revenue allocation are being met.

Please note: Ideally, the Questionnaire and Consent Form would be placed in an appendix, but they are included in the paper to enhance its coherence.

References

[i] Babbie, E., “The Practice of Social Research”, Wadsworth Publishing Company, 1998. pp. 265.

[i] Kuncel, R. B. (1973). Response process and relative location of subject and item. Educational and Psychological Measurement, Vol 33, pp. 545–563.

[ii] Ibid

[iii] Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: the impact of survey question format. American Journal of Political Science, 37, 941–964.

[iv] Dickinson, T. L., & Zellinger, P. M. (1980). A comparison of the behaviorally anchored rating mixed standard scale formats. Journal of Applied Psychology, 65, 147–154.

[v] Krosnick, J. A. and Presser, S., “Question and Questionnaire Design”, Handbook of Survey Research (2nd Edition), James D. Wright and Peter V. Marsden (eds). San Diego, CA: Elsevier, 2009. pp.5.

[vi] Ibid, pp. 6.

[vii] Bleck, J., “Schooling Citizens: Education, Citizenship, and Democracy in Mali”, A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, August 2011. pp. 222.

[viii] Krosnick, J. A. and Presser, S., “Question and Questionnaire Design”, Handbook of Survey Research (2nd Edition), James D. Wright and Peter V. Marsden (eds). San Diego, CA: Elsevier, 2009. pp.

[ix] Babbie, E., “The Practice of Social Research”, Wadsworth Publishing Company, 1998. pp. 264.

[x] Krosnick, J. A. and Presser, S., “Question and Questionnaire Design”, Handbook of Survey Research (2nd Edition), James D. Wright and Peter V. Marsden (eds). San Diego, CA: Elsevier, 2009. pp.3.

[xi] D’arcy, M. and Cornell, A., “Devolution and Corruption In Kenya: Everyone’s Turn To Eat?”, Oxford University Press, Journal of African Affairs. Vol. 115. №459. 2016. pp.248.

[i] Aaron, H., “The Honest Citizen’s Guide to Revenue Sharing”, Tax Foundation Tax Review Vol. XXXII, №10. October, 1971. pp. 37 https://files.taxfoundation.org/legacy/docs/taxreview-1971-10.pdf

[ii] Wheare, K. C., 1963, Federal Government, (4th Edition), Oxford University Press, London. pp. 93.

[iii] Ojo, E. 2010, “The Politics of Revenue Allocation and Resource Control in Nigeria: Implications for Federal Stability”, Federal Governance, Vol. 7 №1, pp. 15. https://ojs.library.queensu.ca/index.php/fedgov/article/view/4387

[iv] Wheare, K. C., 1963, Federal Government, (4th Edition), Oxford University Press, London. pp. 93

[v] Ibid

[vi] Ojo, E. 2010, “The Politics of Revenue Allocation and Resource Control in Nigeria: Implications for Federal Stability”, Federal Governance, Vol. 7 №1, pp. 15. https://ojs.library.queensu.ca/index.php/fedgov/article/view/4387

[vii] “Devolution System Made Simple: A Popular Version of County Governance System”, Friedrich Ebert Stiftung, November 2012. pp. 3. http://library.fes.de/pdf-files/bueros/kenia/09856.pdf

[viii] Mwangi, S. K., ‘Devolution and Resource Sharing in Kenya’ October 22, 2013 https://www.brookings.edu/opinions/devolution-and-resource-sharing-in-kenya/

[ix] http://www.crakenya.org/cra-overview/

[x] Bosire, C. M., ‘Kenya’s Ethno-Politics and the Implementation of Devolution under the Constitution of Kenya 2010’ Workshop on Devolution and Local Development in Kenya. 26 June 2014, Nairobi Safari Club, Nairobi, Kenya. Conference proceedings. Proceeding №2. Swedish international centre for local democracy (ICLD). pp. 13. https://icld.se/static/files/forskningspublikationer/report-proceedings.pdf

[xi] Section III: An overview of qualitative and quantitative data collection methods, Data collection methods: some tips and comparison, The 2002 User-Friendly Handbook for Project Evaluation. The National Science Foundation. pp. 49. https://www.nsf.gov/pubs/2002/nsf02057/nsf02057_4.pdf

[xii] Kapiszewski, D., et al. 2015. Field research in political science, chapter 8. Cambridge: Cambridge University Press. pp. 278.

[xiii] Babbie, E., “The Practice of Social Research”, Wadsworth Publishing Company, 1998. pp. 265.

[xiv] Kapiszewski, D., et al. 2015. Field research in political science, chapter 8. Cambridge: Cambridge University Press. pp. 269
