Social Capital Community Survey, 2006
Citation
Putnam, R. D. (2019, December 19). Social Capital Community Survey 2006.
Summary
The 2006 Social Capital Community Survey was undertaken by the Saguaro Seminar at the John F. Kennedy School of Government at Harvard University. The SCCS consisted of a national sample and targeted samples in 22 American communities. The SCCS is a follow-up to the 2000 Social Capital Community Benchmark Survey, conducted nationally and in 41 American communities.
Social capital is the societal analogue of physical or economic capital -- the value inherent in friendship networks and other associations that individuals and groups can draw upon to achieve private or collective objectives. In recent years, the concept has received increasing attention as accumulating evidence demonstrates the independent relationship between social capital and a wide range of desirable outcomes: economic success, improved school performance, decreased crime, higher levels of voting, and better health. Within communities, recent research supports the belief that social capital fosters norms of social trust and reciprocity, facilitating communal goals. The concept's theoretical richness and practical significance are becoming increasingly well documented.
For more information, visit the Saguaro Seminar website.
The ARDA has added seven additional variables to the original data set to enhance the users' experience on our site.
Data File
Cases: 12,100
Weight Variable: FWEIGHT
Data Collection
January through August 2006 (Wave 1 was conducted from mid-January to late April 2006 nationwide and in 14 communities; Wave 2 was conducted from April to August 2006 nationwide and in 8 communities)
Funded By
Surdna Foundation, Audrey and Bernard Rapoport Foundation, Kansas Health Institute, Community Foundation for Greater Greensboro, Duluth Area Foundation, Gulf Coast Community Foundation of Venice, Kalamazoo Foundation, Maine Community Foundation, New Hampshire Charitable Foundation, San Diego Foundation, Winston-Salem Foundation
Collection Procedures
TNS Intersearch selected experienced interviewers to conduct the telephone survey interviews. Interviewers worked from centralized telephone interviewing facilities under continuous supervision of senior staff. The survey's large scale required use of multiple interviewing centers. All of the interviewing in Spanish was conducted by bilingual interviewers from one facility in California. A small number of interviews in Cantonese were conducted in the San Francisco survey by experienced survey interviewers fluent in Cantonese and English. Interviewers were thoroughly briefed on the specifics of the survey before beginning, using a customized Interviewer Guide prepared for this survey. Refresher briefings were administered periodically, especially on techniques of obtaining respondent cooperation. Interviews in progress were also intermittently monitored for quality control. Interviewers not performing up to standard were retrained and, if necessary, replaced.
To minimize the number of non-contacts, at least 11 attempts were made (initial dialing plus 10 call-backs) before sampled telephone numbers were replaced. In many cases - particularly when re-contact appointments were made and eventual contact seemed likely - there were more than 11 dialings to sampled numbers, in some cases as many as 30. Successive contact attempts were scheduled at different times of the day and week, and the full complement typically spanned a period of at least one month, often longer, to maximize the chance of eventual contact. To minimize the number of refusals and increase participation, skilled "refusal conversion" interviewers attempted to re-contact those initially opting out of the survey (or hanging up abruptly) and persuade the designated respondent in the household to agree to be interviewed. Such efforts did not include "hard refusals," in which the person answering was decidedly adamant about not participating, or was angry or abusive to the interviewer.
Sampling Procedures
Each sponsoring organization decided what specific area(s) were to be surveyed, how many interviews to conduct, and whether specific areas or ethnic groups were to be over-sampled. In most cases, the survey area was one county or a cluster of contiguous counties; some of the community samples are municipalities, and others are entire states. Most of the community surveys used proportionate sampling, that is, no over- or under-sampling of sub-areas or population groups. Most of the samples range in size from 400 to 800 interviews. The national sample (N = 2,741) was drawn randomly across the continental United States.
The Genesys system, a widely recognized random-digit-dial (RDD) survey telephone number generator, was used to produce the starting sample telephone numbers. Genesys is a list-assisted sampling procedure which generates numbers from all working residential hundred-banks (area code + exchange + digits 7 and 8; example: 215 654-78XX) of possible telephone numbers corresponding to the targeted geographic area - the boundaries of the community's geography, as specified by the sponsor. A hundred-bank is determined to be "working residential" if it contains at least one directory-listed residential phone number. As in all RDD telephone surveys, prefixes (area code + exchange combinations, sometimes called 10,000-banks) were selected which correspond to the survey area. The degree of correspondence is not perfect and depends, among other factors, on the size of the geographic unit being surveyed: the larger the area, the more likely that a phone number from a given prefix will fall within the indicated borders. Correspondence is very high with state lines, fairly high with large county boundaries, and lower with smaller counties; the same size/degree-of-fit relationship applies among municipalities. Irregularly shaped borders can also lessen the tightness of the correspondence. Most sponsors were willing to accept some degree of slippage between sample phone exchanges and desired geography - and tolerate an expected small percentage of their final sample falling outside the geographic definition of their community - rather than implement more expensive respondent screening.
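The list-assisted procedure described above can be sketched as follows. This is a minimal illustration, not the Genesys implementation; the bank list and counts are hypothetical.

```python
import random

def generate_rdd_sample(banks, n, seed=None):
    """Draw n distinct ten-digit numbers uniformly from the given
    'working residential' hundred-banks. Each bank is the 8-digit
    prefix fixing area code + exchange + digits 7 and 8; only the
    last two digits are generated at random (list-assisted RDD)."""
    rng = random.Random(seed)
    sample = set()
    while len(sample) < n:
        bank = rng.choice(banks)
        sample.add(bank + f"{rng.randrange(100):02d}")  # fill in the XX
    return sorted(sample)

# Hypothetical banks matching the 215 654-78XX example in the text.
banks = ["21565478", "21565479"]
numbers = generate_rdd_sample(banks, 5, seed=1)
```

Restricting generation to banks with at least one listed residential number is what makes the design "list-assisted": it avoids dialing into wholly non-residential blocks while still reaching unlisted households within working banks.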
Principal Investigators
Robert D. Putnam, Peter and Isabel Malkin Professor of Public Policy at the John F. Kennedy School of Government at Harvard University
Citation
These data are being provided as a public utility. If you use these data in any publication, please properly attribute these data as "The 2006 Social Capital Community Survey. Saguaro Seminar: Civic Engagement in America project at Harvard's Kennedy School, in conjunction with various community foundations across the U.S."
Survey Instrument
The 2006 SCCS built heavily on the 2000 SCCBS, which was developed by the Saguaro Seminar in conjunction with the Scientific Advisory Committee it formed. Questionnaire construction in 2000 followed an exhaustive process beginning with a listing of relevant content areas for the survey. Using this list, a thorough search was made to identify potential questions from previous surveys that would be suitable for use. Pertinent questions were borrowed from other surveys whenever possible, to facilitate comparisons. For the 2006 SCCS, we removed questions that were less predictive of important outcome variables in 2000 and added questions to test the intersection of diversity, social capital, and equality. The 2006 questionnaire developed for CATI programming underwent numerous redrafts and additional pre-testing (beyond the extensive 2000 pre-testing), and was revised several times before receiving final approval. It was then translated into Spanish, reviewed and revised, and ultimately fielded.
Because of budget limitations and the desire to avoid an extremely lengthy interview (to preserve response quality), several sections/questions were administered to randomly selected portions of the sample. Typically such questions in surveys are designed in "split ballots" (where some respondents get questions A, B, and C, and other respondents get equally long questions D, E, and F). The disadvantage is that one cannot examine, for example, how respondents who answered A also answered D, because no respondent receives questions from both ballots. Thus, for the 2006 SCCS we adopted the relatively innovative practice of randomizing by question (or by paired groups of questions) rather than by ballot. This way, one has data on the relationship between any two questions in the survey. In addition, a series of questions (39A-H) was administered only in Baton Rouge, Houston, and Arkansas, and in some cases in the national sample, to gauge how the influx of Katrina refugees affected social patterns and attitudes.
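The design difference can be sketched as follows; the question names and assignment probabilities are hypothetical. Under fixed split ballots the cross-ballot overlap is empty by construction, whereas independent per-question randomization leaves a random subset of respondents who received any given pair of questions.

```python
import random

rng = random.Random(42)

# Hypothetical optional modules, each asked of ~50% of respondents,
# assigned independently rather than bundled into fixed ballots.
optional_questions = {"A": 0.5, "B": 0.5, "D": 0.5, "E": 0.5}

def assign_modules(n_respondents):
    """Return, for each respondent, the set of optional questions asked."""
    return [{q for q, p in optional_questions.items() if rng.random() < p}
            for _ in range(n_respondents)]

assignments = assign_modules(1000)

# Under split ballots (A/B vs. D/E) no respondent answers both A and D;
# with per-question randomization roughly a quarter of respondents do.
both_a_and_d = sum(1 for asked in assignments if {"A", "D"} <= asked)
```

The expected overlap for any pair is the product of the two assignment probabilities times the sample size, so cross-question analyses remain possible on a smaller, randomly selected subsample.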
A copy of the printed questionnaire reflecting the CATI interview administered to respondents has been provided to the Roper Center and is also available on the Saguaro website.
Constructed Variables in Data Files
Several variables which do not appear on the questionnaire were computed or appended and included in the data files:
METSTAT - A "metropolitan status" code provided for each sample telephone number, measuring location of place of residence relative to MSA center city or if not part of an MSA: center city of MSA, same county as MSA center city but not center city, other county of MSA, in MSA with no center city, and non-MSA.
ETHNIC4 - A recoding of race and Hispanic ethnicity into four primary racial/ethnic categories: non-Hispanic white, non-Hispanic black, Hispanic, and Asian.
AGE - A recoding of year of birth into age.
WEIGHT - Weight for each sample.
FWEIGHT - Downweights oversampled areas to produce community weights.
RESPID - Respondent's unique identification number.
CALLD - Date of interview.
SAMP - Code representing the community interviewed or the sample.
CENSDIV - Code representing one of nine U.S. Census divisions.
CALL - Number of contact attempts required to complete interview.
LANGASK - Language of interview (English or Spanish).
Frequency-of-Activity Variables Combining the Initial Question with the Follow-Up Probe:
PARADE2 - Q56A: CPARADE and PARADE.
ARTIST2 - Q56B: CARTIST and ARTIST.
CARDS2 - Q56C: CCARDS and CARDS.
FAMVIS2 - Q56D: CFAMVISI and FAMVISIT.
CLUBS2 - Q56E: CCLUBMET and CLUBMEET.
FRNDHOM2 - Q56F: CFRDVIST and FRDVISIT.
FRNDRAC2 - Q56G: CFRDRAC and FRDRAC.
JOBSOC2 - Q56H: CJOBSOC and JOBSOC.
FRNDHNG2 - Q56I: CFRDHANG and FRDHANG.
TEAMSPT2 - Q56J: CSOCSPRT and SOCSPORT.
WWWCHAT2 - Q56K: CWWWCHAT and WWWCHAT.
PUBMEET2 - Q56L: CPUBMEET and PUBMEET.
NEIHOME2 - Q56M: CNEIHOME and NEIHOME.
HMEXNEI2 - Q56N: CHMEXNEI and HOMEXNEI.
VOLTIME2 - Q58: CVOLTIME and VOLTIMES.
Interviewers asked how often the respondent engaged in each activity (Q56A-N, Q58) in two different ways. First: "How many times in the past 12 months have you [participated in activity X]?" Those who could not, or did not, answer this initial, open-ended question were asked a follow-up probe specifying frequencies, to make it easier for the respondent to provide an answer: "Would you say you never did this, did it once, a few times, about once a month on average, twice a month, about once a week on average, or more often than that?" For each item, the variable name for the initial question ("How many times have you...") begins with a "C" (e.g., CPARADE), the variable name for the probe is similar but omits the "C" (e.g., PARADE), and the variable which combines the two distributions (those answering the initial question and those answering the probe) ends with a "2" (e.g., PARADE2). Users will most commonly want to use the combined variables in their analyses.
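The fallback logic that produces the combined "2" variables can be illustrated with a toy pandas extract. The variable names follow the list above, but the values and probe codes are made up; consult the codebook for the actual coding of the probe categories.

```python
import numpy as np
import pandas as pd

# Toy extract: CPARADE is the open-ended count; PARADE is the probe
# category asked only of those who did not answer the count (codes
# here are illustrative, e.g., 1 = "once").
df = pd.DataFrame({
    "CPARADE": [4, np.nan, 0, np.nan],
    "PARADE":  [np.nan, 3, np.nan, 1],
})

# PARADE2-style combined variable: use the initial count when present,
# otherwise fall back to the probe's answer.
df["PARADE2"] = df["CPARADE"].fillna(df["PARADE"])
```

Note that the two source variables are on different scales (a raw count versus a frequency category), so any real analysis should check how the archive maps probe categories onto the combined distribution before pooling them.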
Notes to Users
A) These 2006 Social Capital Community Survey data are supplied on an *as-is* basis (as delivered by the polling firm, with only modest changes from us to repolarize data or merge variables that needed to be combined). You will most likely need to do some massaging of these data before analysis. These data are not necessarily supplied in a form consistent with the 2000 Social Capital Community Benchmark Survey, so please take care in your analyses.
B) Some of these questions were asked only of respondents in Katrina-affected parts of the country, such as Arkansas, Baton Rouge, or Houston, and some questions were asked only of the national sample. [Most of these are questions 39A-H.] One question about trust in co-religionists (7C) was asked only in Winston-Salem, Greensboro, and of a random 33% of the national sample.
Some changes you may want to make to these files:
1) Create EDUC_ALL from EDUC and GED, and maybe create an intervalized ED_YEARS variable and create educational dummies. Create RACE_ALL from various racial responses.
2) Create work status dummies from LABOR and a 'working' dummy.
3) Create WORKTIME (counting nonworkers as 0).
4) Create dummies for marital status and create a LIVEPART variable (living with partner) counting those who are currently married as 0.
5) Create immigrant status variables and immigrant generation variables.
6) Reorder POLKNOW into POLKNOW2 so that its categories correspond to increasing political knowledge.
7) Racial trust: create RACETRST, TRSTOWN, RTSTBLK, RTSTWHT, RTSTASN, RTSTHIS variables of composite racial group trust (see 2000 SCCBS codebook to see how this was done).
8) Create indices (instructions on how these were constructed in 2000 are in our 2000 codebook, available from Roper); in some cases not all of the underlying questions were asked in 2006, so take great care before blindly replicating any 2000 indices:
- GRPINVLV, GRPINVL2
9) Recode RELMEM into RELMEM2 counting those with no religion as not church members.
10) Recode OFFICER & REFORM into 0 when respondent doesn't belong to any organization.
11) Create a new RELPART variable that codes this as 0 when respondent is not religious.
13) Create new JOBSOC3 that counts non-workers as 0.
14) Create new attitude variables towards inter-racial marriage that exclude the same racial/ethnic group (e.g., recode MARASN into MAR2ASN, but just for non-Asian respondents)
15) HISPNAT categories may not be consistent with 2000 SCCBS survey that we conducted.
16) For the Kansas, New Hampshire, and Arkansas samples, you can get a breakdown of where interviews took place using the MSA variable (for NH and Arkansas) or the MSA and QCELL variables (for Kansas).
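Several of the recodes above - e.g., items 9), 10), and 13) - can be sketched in pandas. All codings here (1 = yes, 0 = no, RELIG 0 = no religion, LABOR 1-2 = working, the GRPMEM name) are assumptions for illustration only; verify them against the codebook before applying anything to the real file.

```python
import numpy as np
import pandas as pd

# Toy extract with assumed codings (see lead-in); values are made up.
df = pd.DataFrame({
    "RELIG":   [1, 1, 0],        # 0 = no religion (assumed coding)
    "RELMEM":  [1, 0, np.nan],   # church member? (not asked of non-religious)
    "GRPMEM":  [1, 0, 1],        # belongs to any organization? (hypothetical name)
    "OFFICER": [1, np.nan, 0],
    "LABOR":   [1, 3, 2],        # 1-2 = working (assumed)
    "JOBSOC":  [5, np.nan, 2],
})

# 9) RELMEM2: count respondents with no religion as not church members.
df["RELMEM2"] = df["RELMEM"].where(df["RELIG"] != 0, 0)

# 10) OFFICER recoded to 0 when the respondent belongs to no organization.
df["OFFICER2"] = df["OFFICER"].where(df["GRPMEM"] == 1, 0)

# 13) JOBSOC3: count non-workers as 0.
df["JOBSOC3"] = df["JOBSOC"].where(df["LABOR"].isin([1, 2]), 0)
```

The common pattern is that a structurally skipped question (not asked because a filter question ruled it out) should usually be recoded to a substantive zero rather than left missing, so that listwise deletion does not silently drop the non-religious, non-members, or non-workers from the analysis.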