Categories
Education Employment Governance Trade and investment Transparency

Canada’s Economic Action Plan: Problems with ‘Canada Job Grant’ ad go beyond the fact the program doesn’t exist (updated May 22, 2013)

A big deal was made over the long weekend of the current federal government’s advertising of a ‘Canada Job Grant’ program that not only doesn’t exist, but hasn’t even been developed.

Apparently the government has spent in excess of $100M to date on such ‘Economic Action Plan’ ads. Imagine how many jobs that could have created. This is the latest manifestation of its view of Canadian workers as lazy dim-wits who just aren’t looking hard enough for all the jobs available. Cue up the cartoon unemployed Canadians looking despondent with question marks in their thought bubbles. So many things wrong with the ad, on so many levels. Let’s just move on to what little content it presented.

Categories
Governance Media Transparency

2011 NHS: Community organizations officially left in the dark

Today’s first release of the 2011 National Household Survey (NHS) data confirmed what we had previously written on December 6, 2012. It appears the data quality was so poor that Statistics Canada decided not to release data at either the dissemination area (DA) or census tract (CT) level. These are more commonly referred to as ‘community-level’ data.

NHS Focus on Geography Series

2011 National Household Survey: Data tables

NHS User Guide > Chapter 6 – Data dissemination for NHS standard products

It’s unclear at this point whether the community-level data will be released at a later date, or only provided on a paid-access basis. Given the obviously problematic data quality, the latter would be ill-advised.

As an example, the small community that received the Statscan letter referenced in the December 2012 post had a population of about 65,000 in 2006. That community was swallowed up in amalgamation and is now part of a census subdivision (CSD) with a population of 1,600,000. The lowest level geography made available in today’s 2011 NHS release was CSD. According to Statscan, the non-response rate for that amalgamated CSD was 21%. To put that in context, non-response in that small community’s now amalgamated CSD was equivalent to 5 communities its size. From a statistical perspective, that community has effectively disappeared.
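A quick back-of-the-envelope check of that equivalence, using the approximate figures above:

$0.21 \times 1{,}600{,}000 \approx 336{,}000 \approx 5 \times 65{,}000$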

That was the biggest news from the first 2011 NHS release: no community-level data (neither DA nor CT) was released.

Editor’s Note:
For context, census tract data has been available since 1941, dissemination area data since 2001 (enumeration area data, the DA equivalent prior to 2001, dates back to 1961).

Update (26/06/2013):
An update to this post can be found here.

Categories
Governance Transparency

2011 NHS: A few questions for the first release

Here are a few interesting questions we’d like to see answered. We’d be equally happy to see them even asked.

Methodology / data quality

1. How did Statscan manage to get the 2011 NHS response rate up over 2/3 when the 2011 Census 2B test of 2008 had a response rate of slightly less than 1/2?

2. a) What was the minimum number of questions completed before any manipulation (NRFU, CEFU, FEFU, E&I, etc) for which Statscan accepted a 2011 NHS questionnaire as complete?

b) Was the number and/or quality of responses deemed acceptable on a 2011 NHS questionnaire different from prior year Censuses (specifically 2006 and 2001)?

3. Will Statscan be releasing a per question response rate, similar to the 2011 Census 2B test of 2008 (if it hasn’t already)?

4. Were the data quality standards changed at the DA and CT levels (if immediately released) to reduce suppression given the circumstances, and what proportion of each were suppressed?

Alternatively: Will the DA and CT level data be released at some point (if not immediately released), and what proportion do you expect to be suppressed?

5. As recently as six months ago, Statscan was unable to say whether it would release the community-level data at all. When was the decision made (not) to release it and how was that decision reached?

Aboriginal Peoples

1. a) Why/how did Statscan’s 2006 2B Census estimates indicate certain Aboriginal populations increased by 30%+ between the 2001 and 2006 Censuses?

b) Were there possible response rate or methodological issues that contributed to this extraordinary jump in those Aboriginal populations?

2. If Aboriginal response rates were problematic even when the long-form was mandatory, how bad were they this time?

Immigration

1. a) Were first year immigrants, i.e. those who landed during the reference year 2010, included in the immigrant population count?

b) If so, why have those same first-year immigrants not previously been counted in the income statistics, and has anything been done to address this issue in the 2011 NHS?

Ethnic origin

1. Could you elaborate on the issue of whether/how the order of examples in the Census/NHS questionnaire affects the response rate for certain ethnic identities?

2. In particular, how has moving ‘Canadian’ up in the list of examples each year contributed to the increased ‘Canadian’ response, and was there any political input in the decision to move it up the list over the years?*
* Inquisitive readers may wish to request a copy of the April 2, 2008 release presentation, during which this question was first asked, to compare answers.

3. How do you classify responses of ‘French Canadian’ or ‘Canadien(ne)s français(es)’, as French or as Canadian?

4. a) Why/how did Statscan’s 2006 2B Census estimates indicate certain Asian ethnic populations increased by 40%+ between the 2001 and 2006 Censuses?

b) Were there possible response rate or methodological issues that contributed to this extraordinary jump in those Asian ethnic populations?

 

That should do for now. Will update if any other interesting questions for the first release come to mind.

 

Categories
Governance Media Transparency

2011 NHS: News since long-form Census cancellation of summer 2010

The media has once again taken up writing about the 2011 long-form Census cancellation in preparation for the first releases from its replacement, the 2011 National Household Survey (NHS). Readers would be hard-pressed to distinguish the media write-ups in recent days from those dating back to summer 2010.

What happened back in 2010

Munir Sheikh, who in his brief time as Chief Statistician gutted the agency’s social and environmental programs along with analytical research, somehow ended up a hero after resigning over the long-form Census cancellation. In the original article that broke the Census cancellation story, brief mention was made of cuts to programs and analytical research, along with an unsettling culture change coinciding with Mr. Sheikh’s arrival. The media failed to follow up, presumably because it didn’t fit with the developing narrative.

The official opposition at the time played politics with the issue – deservedly, it went on to suffer the greatest electoral defeat in party history. The federal government ultimately conceded to the addition of a couple of language questions to the short-form Census questionnaire.

There was no shortage of hypotheticals re the drop in data quality that would result from the changeover from a mandatory to a voluntary long-form. Reference was made to how municipalities could suffer from lower-quality data needed for urban planning, and how social/community groups working with vulnerable communities could suffer in the same way.

The cancellation of the Participation and Activity Limitation Survey (PALS) shortly thereafter received significantly less attention, though it was not completely unexpected: the PALS sample relied on a couple of questions on the long-form Census. The government promised to replace PALS with another survey.

The media eventually discovered that the voluntary 2008 Census test, conducted in preparation for the 2011 Census, contradicted then Industry Minister Tony Clement’s claim that StatsCan had assured him a voluntary survey could provide data of comparable quality to the long-form Census.  In 2008, less than half of the long-form test questionnaire recipients responded to it.

All the while, the federal government’s Gong Show about protecting Canadians from long-form Census questions asking for the number of bathrooms in their homes went on largely unabated (despite the obvious fact the questionnaire included no such question).

What’s happened since

Munir Sheikh pulled his golden parachute and landed at Queen’s University. The career government hatchet man (PDF), in no small part thanks to the mistaken hero narrative, ended up heading The Commission for the Review of Social Assistance in Ontario (PDF). Not surprisingly, among the report’s recommendations was a cut to benefits for the province’s disabled. We’ve touched on the impact The Great Recession that began in 2008 had on the poor and disabled in Canada’s largest province. (Notably, during his brief tenure Mr. Sheikh directed StatsCan employees not to use the word “recession”.)

The federal opposition made some noise about the impact the loss of the long-form Census would have on federal official languages policy, going so far as to pursue a court challenge on the basis of the Official Languages Act. What it seemed to wilfully ignore was the greater role the long-form played in evaluating the performance of Canada’s legislative/statutory framework. Among other things, the Census Guide 2B 2006 noted:

Questions 7 and 8 provide information on the number of people in Canada who have difficulties with daily activities, and whose activities are reduced because of a physical condition, a mental condition, or a health problem. The results are used to help Statistics Canada find out more about the barriers these persons face in their everyday lives…

Question 17 provides information about the ethnic and cultural diversity of Canada’s population. This information is required under the Multiculturalism Act (s. 3.(2)(d)) (PDF) and the Canadian Charter of Rights and Freedoms (PDF). It is also used extensively by ethnic and cultural associations, as well as by agencies and researchers, for activities such as health promotion, communications and marketing.

Questions 18, 20 and 21 provide information about Aboriginal or First Nation, Inuit and Métis peoples that is used to administer legislation and employment programs under the Indian Act and the Employment Equity Act. The information is also used by researchers and Aboriginal governments and associations to explore a wide variety of demographic and socio-economic issues. 

Question 19 tells us about the groups that make up the visible-minority population in Canada. This information is required for programs under the Employment Equity Act, which promotes equal opportunity for everyone.

The PALS relied on long-form Census questions 7 and 8 for its sample. That sample became less of a concern when the government scrapped PALS itself shortly after scrapping the long-form Census. In 2012, PALS was quietly replaced by the Canadian Survey on Disability (CSoD), which uses the less reliable National Household Survey (NHS) for its sample. Unlike PALS, and every major StatsCan social survey before it, the CSoD is a voluntary survey based on a sample taken from another voluntary survey. This will be the fate of all StatsCan social surveys that previously relied on the long-form Census.

Like numerous required periodic reports on the operation of federal statutes, the Minister’s Annual Report on the Operation of the Canadian Multiculturalism Act cites the Census as its data source on immigrant and ethno-cultural communities. In addition to the loss of the long-form Census, the Longitudinal Survey of Immigrants to Canada is classified as inactive and likely to be discontinued.

Canadians with disabilities and Canadian ethno-cultural communities are among the vulnerable minorities for whom data would become less reliable with the change from the Census to the voluntary NHS. The preceding are concrete examples of how the loss of the long-form Census has already had an adverse impact.

An example of how the loss of the mandatory long-form has already affected community organisations was previously touched on. Even if it ultimately does release data for lower-level geographies (census tracts, dissemination areas),  StatsCan’s uncertainty only six months prior to the first scheduled 2011 NHS releases speaks to the questionable quality of the data collected. (It will likely be compelled to release it eventually, irrespective of data quality.)

Going forward a few years, Canada’s Labour Force Survey (LFS), as notoriously unreliable as it already is, will become even more so. As succinctly stated here:

the long-form data is the basis for just about all of Statistics Canada’s important social measurements. The unemployment rate, for instance, is compiled from the monthly Labour Force Survey, but the sample used in that survey is based on the census data. Once the census data becomes voluntary, the unemployment rate will be considered less reliable, taking the heat off governments in times of rising unemployment.

Given that the LFS is used to determine qualifying hours and weeks entitlement for Employment Insurance (EI) benefits for the unemployed, the more unreliable LFS data will mean more questionable EI benefits denials and shorter benefits entitlements.

What’s the end game

A lot of the chatter following the government’s decision to scrap the long-form Census in 2010 accused it of trying to substitute ideology for fact-based public policy. It’s plausible. However, the government seemed to go out of its way to make the point that it could likely collect better, more accurate information at lower cost from administrative data. A quick browse of online comments in support of the government’s position at the time appeared to repeat this ‘administrative database’ idea, elaborating on how certain Scandinavian countries don’t take a census but instead rely on administrative records.

The ‘ideologically-driven’ narrative seemed to ignore the government’s penchant for passing legislation curtailing Canadian civil liberties in the name of protecting Canadians from fill-in-the-blank (pedophiles, youth gangs, terrorists, etc.). Little consideration was given to the possibility that the long-form Census was cancelled in the name of protecting Canadians’ privacy, with the intent of passing legislation that would egregiously breach that very privacy. It was not that long ago that a secret longitudinal administrative database (under a previous government) was dismantled, once discovered, for violating Canadians’ privacy.

Canada Scraps Citizen Database
Wired News Report   May 30, 2000

Privacy Commissioner applauds dismantling of database
Office of the Privacy Commissioner of Canada, May 29, 2000

That ‘super-database’ was innocuously named the Longitudinal Labour Force File (LLFF), HRDC PPU 335.

Given the options, well-informed Canadians would likely choose the far less intrusive long-form census over a secret government super-database that was previously found in breach of their privacy rights.

Categories
Employment Governance Transparency

November 2012 SEPH: Statistics Canada responds to ‘unclassified business’ jobs boom

Federal MP and Liberal labour critic Rodger Cuzner has seen fit to inquire of Chief Statistician Wayne Smith regarding the ‘unclassified business’ jobs boom. The following is the response his office received from Assistant Chief Statistician Peter Morrison. Suffice it to say Mr. Cuzner was not amused.

Apparently Statscan is aware of the issue. No indication is provided as to when the agency first became aware of it, why action hadn’t been taken sooner to address it, or what, if any, corrective action will be taken going forward. Nor is it explained why the issue was never mentioned in any of the SEPH Daily releases over the last 5 years.

To respond to Statscan’s equivocation:

Mr. Morrison is correct in stating that ‘unclassified businesses’ accounted for slightly less than 3% of all payroll employment in what he references as the most recent (September 2012) release. However, that is up from 0.6% in 2002 and 1.0% as recently as 2007. To put that roughly 2-percentage-point increase in context, total payroll jobs since 2007 increased just 4.6%. Another way of looking at it: ‘unclassified business’ payrolls accounted for 40% of all net payroll job growth since 2007.

Table 1 SEPH Payroll Employment, total

Industry (NAICS)              Sep 2002      Sep 2007      Sep 2012
All, incl. unclassified     13,336,052    14,733,517    15,418,425
All, excl. unclassified     13,261,937    14,583,004    14,991,930
Unclassified businesses         74,115       150,513       426,495
Unclassified share                0.6%          1.0%          2.8%

Table 2 SEPH Payroll Employment, net

Industry (NAICS)            2002 to 2012    2007 to 2012
All, incl. unclassified        2,082,373         684,908
All, excl. unclassified        1,729,993         408,926
Unclassified businesses          352,380         275,982
Unclassified share                 16.9%           40.3%

Source: CANSIM Table 281-0023 Employment (SEPH), unadjusted for seasonal variation, by type of employee for selected industries classified using the North American Industry Classification System (NAICS) monthly (persons), Statistics Canada
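For readers who want to verify the shares, a minimal sketch of the arithmetic behind Tables 1 and 2 follows; the figures are hard-coded from the tables above rather than pulled live from CANSIM:

```python
# Back-of-the-envelope check of Tables 1 and 2, using the CANSIM 281-0023 figures quoted above.
total = {"2002": 13_336_052, "2007": 14_733_517, "2012": 15_418_425}  # all industries, incl. unclassified (September)
unclassified = {"2002": 74_115, "2007": 150_513, "2012": 426_495}     # 'unclassified businesses' only (September)

# Unclassified share of total payroll employment (Table 1, bottom row)
for year, jobs in total.items():
    print(f"Sep {year}: unclassified share = {unclassified[year] / jobs:.1%}")

# Unclassified share of net payroll job growth (Table 2, bottom row)
for start in ("2002", "2007"):
    net_all = total["2012"] - total[start]
    net_unclassified = unclassified["2012"] - unclassified[start]
    print(f"{start} to 2012: {net_unclassified:,} of {net_all:,} net new payroll jobs ({net_unclassified / net_all:.1%})")
```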

With respect to Mr. Morrison’s reply, the gross payroll employment figure masks the impact of Statscan’s recent SEPH issue. The following charts look at the net change in payroll employment starting in 2002 and 2007, respectively. The first shows an up-tick in net ‘unclassified’ payroll jobs starting in 2007. The second appears to indicate ‘unclassified’ jobs “filled the gap”, so to speak, during the major labour market downturn spanning 2008 to 2011. Fake it ’til you make it?

SEPH Payroll Employment change (net),  2007-2012

In terms of Statscan’s process, Mr. Morrison’s reply is vague. A more detailed description of the SEPH methodology, along with information on the Remittance form (PD7), the Business Number (BN) application form and the Business Register (BR) questionnaire, is available. Section A4 of the BN application is fairly brief and specific in terms of identifying the nature of the business. The methodology document section titled ‘Conversion of BN level data’ indicates that industry information is ascertained from the BN, suggesting that some level of industry-level aggregation is performed by CRA. Why/how Statistics Canada is unable to sort/classify the BN industry information to its BR, or rather why in recent years it has been unable to do so to such a remarkable degree, is a critical question.

As a result, not only are Canadians left to guess the nature of 40% of the jobs businesses supposedly created over those five years, but those same ‘unclassified business’ hours and payrolls are excluded from the published data. Any analysis of employee pay or hours worked during that period would effectively exclude a substantial proportion of the jobs ‘created’ during that same period. To be blunt, that would render such analysis meaningless.

Also not mentioned is that the eventual reclassification of those ‘unclassified’ jobs would skew the SEPH data in any future period during which an adjustment is made. Statscan would have to explicitly inform readers every time it reclassified jobs from ‘unclassified’ to a specific industry in a given month, so readers don’t mistake reclassified jobs for those resulting from real economic activity. And we’re not talking a few jobs, but rather 350K more ‘unclassified’ jobs since 2002 (275K ‘created’ between September 2007 and 2012).

So why does this matter? Hopefully the figures above speak to the magnitude of the 2007-2012 SEPH problem. As has been noted a number of times over the years, including here, the Labour Force Survey is notoriously unreliable. The SEPH is considered the ‘real’ jobs report. In fact, it’s Canada’s only regular, national survey of employment as reported by employers. That it was this far off should be a serious concern. That the problem appears to have emerged during the same period the country went through a prolonged labour market downturn should be of even greater concern. Jobs created during a downturn would be expected to provide lower pay, with lower labour market demand putting downward pressure on wages. A prolonged labour market downturn also tends to encourage self-employment (self-employed incorporated or temps on agency payrolls, for example), which provides lower earnings than corresponding regular, full-time employment during a recession. At the same time the country was going through this labour market downturn, the federal government inexplicably boosted the number of temporary foreign worker admissions, the TFWP by design putting further downward pressure on Canadian wages. What impact, if any, these political/economic issues had on payroll job creation and wage earnings may have been partially, if not entirely, masked as a result of so many jobs going ‘unclassified’ in the SEPH between 2007 and 2012.

Note to readers: The tabulations above differ slightly when replicated using the current dataset from the referenced CANSIM table. That’s because Statscan revised the September 2012 SEPH data. Specifically, ‘All, incl. unclassified’ was changed from 15,418,425 to 15,446,225, ‘All, excl. unclassified’ from 14,991,930 to 15,016,505 and ‘Unclassified businesses’  from 426,495 to 429,720. In finding an extra 28 thousand jobs, only 3 thousand of which were ‘unclassified’, Statscan reduced its 2007-2012 ‘Unclassified’ ratio from 40.3% to 39.2%. One could ask how 28 thousand jobs were originally missed in a single month, given the revision was to the seasonally unadjusted payroll employment data…

Categories
Governance Transparency

2011 NHS: data may not be released at all due to data quality, communities to lose vital data source

A friend recently shared an interesting perspective on the perils of criticising the 2011 National Household Survey (NHS) before the data is even released. The thinking went something like this:

The current federal government had been signalling its intention to eliminate the long-form Census from the moment it took office in 2006. Unfortunately (for that government; fortunately for Canada), the process was too far along to stop, given it took office February 6 and Census day was May 16, 2006. The same government went on to cancel the 2011 long-form Census, replacing it with the voluntary NHS. Inevitably, the data quality would significantly deteriorate, likely to the point of being completely useless in many areas. This deterioration in data quality would subsequently provide justification for the government to announce the cancellation of the long-form Census/NHS altogether. It would be deemed too costly to maintain for the lousy-quality data it produced. Not so long ago, this would have seemed a somewhat far-fetched conspiracy theory; it’s a pretty interesting theory given the current state of Canadian federal politics.

Given this theory, any criticism of the NHS data before it even comes out could be used to build a narrative to justify cancelling the long-form survey altogether, ignoring critics’ true intent. This thinking is of course premised on the idea that the 2011 NHS data would be released much in the same way the 2006 Census 2B data was. Given that both long-form surveys took place in the same month (May) of their respective years, the difference in release dates between the two (2006 and 2011) should clearly indicate it’s not business as usual. But more on that shortly.

Municipal amalgamation in larger metropolitan areas across Canada over the last 10-15 years has resulted in the dissolution of geographic boundaries that once defined smaller, well-established communities. While whether (and how much) economic efficiency resulted from amalgamation is certainly a topic of debate, the fact is the dissolution of those boundaries did not change the reality in those communities. Community organisations continued to tend to the specific needs of their residents. In some cases, provincial health and social service organisations were re-organised or expanded in an effort to better meet community needs. Some communities, in collaboration with their respective newly-amalgamated municipalities, caught on to the idea of using more micro-level geographic (dissemination area – DA, census tract – CT) Census data to recreate the dissolved boundaries of their communities. In this way they could continue to discern the specific needs of their residents, much as they had prior to the formal dissolution of their communities’ boundaries.

At least that was the case prior to the long-form Census cancellation. As reported by numerous media outlets in the summer of 2010, the cancellation of the long-form Census and its replacement by the NHS would result in less reliable data. What did not garner as much attention back then was the practical impact the less reliable data would have. Statscan faces a rather unenviable task with respect to the 2011 NHS lower-level geographic data dissemination: release unreliable data and risk the reputation of the agency, or withhold it and risk the ire of community and social organisations across the country. It’s a lose-lose proposition – intentionally so, if you accept the proposed theory. Given the context, it should come as no surprise that the regular release schedule was pushed back.

In anticipation, some community organisations have inquired as to whether they will be able to update their community profiles with the 2011 NHS data. Not surprisingly, the formal response from Statscan:

The National Household Survey product line is currently under development, so we are unable to provide a response at this time regarding availability of data at small levels of geography.

i.e. We’re not sure if data for lower-level geographies will be released at all. Slightly different in wording, but materially different in meaning, from the information provided in the ‘Important findings’ section of this Census report (updated July 2012):

It is unknown at this point what the impact of the non-response will be on the quality of the NHS data, particularly for low geographic areas and small populations, and to what extent this quality will meet users’ needs.

The long-form Census had ~95% response rate and 20% sampling. The NHS had an estimated ~50% response rate (Statscan has since claimed 2/3 of households responded, but questions have been raised re the standard of what it deemed an acceptable response under NHS) and 30% sampling. The mandatory nature of the Census ensured that each household receiving a questionnaire had almost the same high likelihood of returning it, and that nearly the entire 20% sample would be captured. The voluntary NHS with its expected response rate would capture a 15% sample of whoever-felt-like-answering. Much like an online poll;  imagine policy decisions based on data that unreliable.
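To make the arithmetic explicit (a rough calculation using the approximate rates above):

$0.20 \times 0.95 \approx 0.19 \text{ (mandatory long-form)} \qquad 0.30 \times 0.50 = 0.15 \text{ (voluntary NHS)}$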

There are a couple of adjustments that could be made, like reweighting the distribution of the NHS responses to match the age/gender/language distributions from the short-form for a given geographic area. The better electoral polls do this, and we know how accurate those have been in recent federal and provincial elections. Such adjustments only go so far, since certain subgroups of the population (e.g. immigrants) are not evenly distributed across those three demographic traits, nor are they evenly distributed geographically at lower levels within a community (e.g. ethnic enclaves). What Statscan will invariably end up doing is using the 2006 Census as a base and estimating the 2011 NHS distributions from it along with other, more up-to-date surveys and administrative data. Which defeats the whole purpose of the exercise, since the 2011 Census was supposed to be the new benchmark.
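As a rough illustration of what such reweighting involves, here is a minimal sketch of cell-based post-stratification; this is not Statscan’s actual estimation method, and all counts below are invented for the example:

```python
# Toy post-stratification: give each respondent in a demographic cell a weight so that
# the weighted respondents reproduce a known benchmark (e.g. short-form Census counts).
# All figures are invented for illustration.
benchmark = {"15-29": 4_000, "30-49": 5_500, "50+": 6_500}   # 'true' population counts by age group
respondents = {"15-29": 300, "30-49": 700, "50+": 1_000}     # who actually answered the voluntary survey

weights = {cell: benchmark[cell] / respondents[cell] for cell in benchmark}
print(weights)  # each 15-29 respondent 'counts for' ~13 people, each 50+ respondent for ~6.5

# The catch noted above: a subgroup unevenly distributed *within* a cell (say, recent
# immigrants concentrated in a few neighbourhoods) isn't fixed by matching cell totals --
# if its members didn't respond, no amount of reweighting other respondents brings them back.
```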

In any case, the process is likely not going well if Statscan’s not sure whether it will release 2011 NHS data at lower-level geographies only six months prior to its first scheduled release. The demographics from the short-form (e.g. the sharp rise in young adults staying at or returning home) suggest a significant deterioration of socio-economic well-being since 2006. Unfortunately, a full and accurate measure of the impact will likely be difficult, if not completely impossible, to capture at the community level.

Categories
Governance Homelessness Housing Immigration Justice Media Transparency Youth

Census 2011: Prison population rose 17.3% as population in shelters rose by only 2.8% (updated 25/09/2012)

With the release today of its 2011 Census families and living arrangements report, media, and doubtless reader, attention was likely diverted by the news that Statistics Canada had mistakenly counted same-sex roommates as gay couples.  What did not receive much attention today was the  2011 Census collective dwelling type release.  The release figures indicated a rise in the prison population of 17.3% as the population in shelters rose by only 2.8% (relative to the figures provided in the 2006 Census collective dwelling type release).  Yet between 2006 and 2011, crime decreased dramatically and the country went through a severe economic downturn.  Given these facts, the opposite outcome would have been anticipated.

Categories
Employment Financial security Transparency

EI reform debate happens to coincide with sudden suppression of EI benefits payment data

If any web-savvy users stumble upon this post, it would be useful to know what changes were made to the Service Canada EI page since the budget was tabled (current page indicates recent changes on May 9, 2012).  It may be nothing…

Will fill in later with a few tables from the data that remains.

EI changes still under wraps but details coming ‘soon’
Meagan Fitzpatrick, CBC News May 18, 2012

CANSIM ‘data liberation’ was being heralded only a short while ago. Unfortunately, the party appears to have been short-lived. Just as the debate over Employment Insurance (EI) reform has started to heat up, Statistics Canada has apparently suppressed the only three tables providing data series on EI benefits payments. The general series prefix for EI benefits tables is 276. The suppressed tables are:

CANSIM Table 276-0005
Employment Insurance Program (E.I.), benefit payments by province and type of benefit monthly (Dollars), Jan 1943 to Mar 2011

CANSIM Table 276-0015
Employment Insurance Program (E.I.), weeks paid by province and type of benefit monthly (Number), Jan 1943 to Mar 2011

CANSIM Table 276-0016
Employment Insurance Program (E.I.), average weekly payments by province and type of benefit monthly (Dollars), Jan 1942 to Mar 2011

Will update when a more cogent explanation is provided.  As it stands, apparently updates to the referenced EI benefit payment table were discontinued in March 2011 after (unexplained) inconsistencies were discovered in the EI benefit payment data breakdown by benefit type.  The aggregated data (sum total for all benefit types) was still available, but only ‘on demand’.  As of May 2012, this month, the aggregated data is no longer available at all.

Yes, as of this month, just as the government started the EI reform  ‘debate’  intended to curtail benefits payments and weeks of entitlement, the only data to analyse the effects of such changes to the EI program will no longer be available.

As to why the explanation doesn’t appear to hold up to scrutiny: No explanation is provided as to how the data problems related to the EI benefit payments breakdown by benefit type led to the decision to suppress the aggregated data series as well. And no explanation is provided as to how the unexplained data problem with the breakdown by benefit type affected the tables related to EI benefit payments, but not the table on the number of recipients using the same breakdown:

CANSIM Table 276-0001
Employment Insurance Program (E.I.), income beneficiaries by province, type of income benefit, sex and age monthly (Persons), May 1975 to Feb 2012

Categories
Employment Transparency

The jobless (non)recovery: R4, R8 and why the spread matters

Canada’s job market struggles for growth
Tavia Grant, Globe and Mail, March 9, 2012

As usual, no mention of the rate that includes discouraged job-seekers, the involuntarily un(der)employed and those waiting to return to work.

Back at the outset of the ‘economic downturn’ that no one at Statscan was allowed to call a recession, a number of economists noted the LFS seemed slow to capture the rise in unemployment. That is in large part due to a quasi-cohort effect stemming from the design of the survey.

Nevertheless, the LFS did start to capture steadily increasing unemployment after October 2008, and that is commonly used as the starting point in analysing the employment effects of the recession. That month, the spread between the official unemployment rate (R4) and the ‘real’ unemployment rate (R8) was 2.4 percentage points. In July 2009, at the height of the reported LFS unemployment rate during the recession, that spread had grown to 3.7 percentage points. Today? You guessed it: 3.7.

And that’s without even getting into the issues of job losses by industry sector or self-employment during recessionary cycles.   One can’t honestly say there has been a recovery in the labour market with numbers like this.

Categories
Governance Poverty Transparency

Inaccurate measures of poverty: A brief Canadian history

Low income cut-off (LICO)

For several decades, Statistics Canada used the LICO as its primary measure of relative poverty in Canada. Although it was not formally referred to as the ‘poverty line’, that is the purpose it served for many. It was a general measure based on the share of income the average family spent on food, shelter and clothing (adjusted for family and community size), with a 20-percentage-point margin added to that share, and that was it. If a family’s income fell below its respective LICO threshold, it was counted as low-income. The LICOs were occasionally updated to account for increases in average family expenditure on the three basics. What the LICO lacked in precision, it compensated for as a historical index: LICO could be traced as far back as 1968 and it allowed for the creation of historical maps such as the ones published with the 2006 Census income release.

Low income before tax cut-offs, 2005 

Low income cut-off, after tax (LICO-AT)

The LICO was changed in the early 1990s. In 1991, LICOs based on after-tax income were published for the first time. It seemed reasonable at first glance: Canada supposedly had a progressive taxation system with tax rates that rose with income bracket, and the LICOs should reflect that. Problem? The Census (and SLID) income data was the basis for the denominator. The Family Expenditure Survey (FAMEX) and later the Survey of Household Spending (SHS) would be the basis for the numerator. The Census (and SLID) income tax data was notoriously unreliable, where it was available at all. If reliable income tax data was not available, then what were the after-tax LICOs based on?

There is no simple relationship, such as the average amount of taxes payable, to distinguish the two types of cut-offs.  Although both sets of low income cut-offs and rates continue to be available, Statistics Canada prefers the use of the after-tax measure…  The number of people falling below the cut-offs has been consistently lower on an after-tax basis than on a before-tax basis.

Basically, the amount of income tax paid was imputed using the tax brackets and aggregated Canada Revenue Agency T1 data, irrespective of how much tax a person or family actually paid. That’s inaccurate, as many high-income and/or wealthy families could defer, offset or reduce a significant share of their earnings through various registered investment plans, capital gains and other tax shelters. Lower-income families’ earnings, on the other hand, largely consist of wages and salaries, from which personal income taxes are usually already deducted. The lowest personal income tax rate is higher than the effective capital gains tax rate on the highest personal income tax bracket in Canada. Even if they do qualify for tax rebates, lower-income families are less likely to apply for or receive them, as they don’t generally benefit from professional tax preparation services. Some families with little income do not file returns at all.

While it was no more accurate than its newly-dubbed ‘before-tax’ counterpart, the after-tax LICO did have one notable effect: by lowering the cut-offs across the board, it also lowered the number of Canadians falling below them, effectively reducing low-income incidence.

Low income after-tax cut-offs, 2005

Low-income measure (LIM)

Also introduced in 1991, the LIM is a fixed percentage (50%) of median adjusted family income, where the adjustment accounts for family size. Canadians would only be considered low-income if their total family income was less than half the median same-size family’s income. Community size was not taken into account.
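A minimal sketch of the LIM idea; the square-root equivalence scale used below to ‘adjust’ income for family size is an assumption for illustration, not necessarily the adjustment Statscan used at the time:

```python
from math import sqrt
from statistics import median

# Toy LIM: a family is low-income if its adjusted income falls below half the median
# adjusted income. The sqrt(family size) adjustment is illustrative only.
def adjusted(income: float, size: int) -> float:
    return income / sqrt(size)

# Invented sample of (family income, family size) pairs
families = [(12_000, 1), (25_000, 1), (48_000, 2), (61_000, 3), (72_000, 4), (90_000, 4)]

lim_threshold = 0.5 * median(adjusted(inc, size) for inc, size in families)

for inc, size in families:
    flag = adjusted(inc, size) < lim_threshold
    print(f"income={inc:,}, size={size}: low-income under this toy LIM? {flag}")
```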

The LIMs are based on the Survey of Labour and Income Dynamics (SLID), a voluntary survey of a small number of households sampled from the LFS (in turn sampled from the Census).

Oh, and in many cases the more general LIMs also happened to further lower the income thresholds, effectively reducing low-income incidence.

LIM after-tax thresholds, 2006

Market Basket Measure (MBM)

Introduced in 2000, the MBM, unlike the LICO and LIM (relative measures of low income based on average or median family income and/or consumption), was a normative measure:

The MBM and the MBM disposable income were designed by a working group of Federal, Provincial and Territorial officials, led by Human Resources Development Canada (HRSDC) between 1997 and 1999. 

Bureaucrats got together to decide how poor people should live and created a basket of goods based on those assumptions, which, not surprisingly, were rather questionable. Probably the most questionable of these was:

The MBM thresholds are calculated as the cost of purchasing the following items: …Transportation costs, using public transit where available or costs associated with owning and operating a modest vehicle where public transit is not available.

A low-income family of four (two adults,  two children) living in a major urban area  (CA/CMA) where transit is available gets a couple of transit passes added to its basket for transportation costs.  Because the working poor in areas where transit is available presumably needn’t drive – not the Molly Maids, the delivery drivers, the general labour contractors, etc.  This assumption was plainly wrong and was easily demonstrated to be so by a simple calculation (using Q47 of the 2006 20% sample Census).  Nevertheless, it remained.
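The kind of ‘simple calculation’ alluded to above might look something like the following sketch; the file name, column names and codes are placeholders, not the actual 2006 Census microdata layout:

```python
import pandas as pd

# Hypothetical tabulation: among low-income workers living in transit-served CMAs/CAs,
# what share commute as car drivers? (Placeholder file and column names.)
df = pd.read_csv("census_2006_workers_sample.csv")

subset = df[(df["low_income_flag"] == 1) & (df["cma_has_transit"] == 1)]
mode_shares = subset["commute_mode"].value_counts(normalize=True)

print(mode_shares)
# If a substantial share commute as car drivers, the 'transit passes only' basket
# assumption understates transportation costs for the working poor.
```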

In addition to being an inaccurate measure, in many cases the MBMs further lowered income thresholds, effectively reducing low-income incidence.

MBM thresholds for reference family, 2007

‘Progress’

The historical ‘progress’ of low-income measurement in Canada can be demonstrated with an example.  A family of four (two adults, two children) living in Montréal:

LICO-BT (2005)       38,610
LICO-AT (2005)       32,556
LIM-AT (2006)         30,358
MBM (2007)            26,560
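A quick check of the overall drop, worked from the thresholds above:

$\dfrac{38{,}610 - 26{,}560}{38{,}610} \approx 0.31$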

With each subsequent new measure, the income cut-off was lowered, cut by nearly a third overall for that family of four living in Montréal. It seems ‘progress’ in Canadian poverty reduction is simply a matter of changing metrics.