The USA: Regional Employment guide provides expert legal commentary on key issues for businesses, covering the important developments in the most significant jurisdictions.
Last Updated: September 30, 2019
Employment law in the private sector, as with any area of the law, necessarily begins with definition. At this stage in the evolution of our labor and employment history, one might expect the fundamental concepts and definitions to be fairly well settled. To many, however, the picture is at times somewhat unsettled. From a positive point of view, this is arguably attributable to a vibrant economy’s need, and willingness, to adapt to the changes in our global and socio-economic, political and technological climate. At the same time, alternative solutions have created new challenges and, it has been argued, some unintended consequences.
A global entity seeking to establish or enhance its presence in the United States – or, more specifically, in a given region or regions of the United States – will encounter these issues in the context of what, at times, has been referred to as today’s “changing workplace.” No matter the nature of the entity, the “workplace” essentially remains the focus of any dispute, but the issues, relationships and considerations of today have in some measure been redefined and, indeed, expanded or delimited to the point where what traditionally was viewed as the “workplace” may, in certain circumstances, no longer be as we once knew it.
The ever-increasing use of social media and the evolution of its more sophisticated and complex vehicles have alone transformed the “workplace” to include or otherwise affect recipients not previously considered and to invoke entitlements and/or restrictions not previously recognized. The same applies to the impact upon the “workplace” of the “gig” economy, cyberspace, artificial intelligence, analytics and other technological inroads and advances. The “Me Too,” “Pay Equity,” “Whistle-blowing” and other related movements, by highlighting their own issues, have had their own impact, whether it be in areas of alleged sexual and other harassment, discrimination, retaliation, workplace and product safety, or financial and business misconduct. All of this has occurred in the face of a steady decline in the unionization of the private-sector workplace, accompanied, ironically, by efforts to expand the scope of the protections and restrictions of the labor laws to both the unorganized and organized employees in a workplace.
As a result, at times we see the parties, our federal, state and local governments, their agencies and the courts, arbitral and other forums, attempting to grapple with one form or another of the proverbial square peg in the round hole – struggling with what some might have considered relatively time-tested meanings of basic terms, notions of “civility,” “privacy,” “confidentiality,” “non-disclosure” and “due process,” or with alleged misclassifications to evade wage, overtime, pension, other benefits and/or other financial obligations.
Compounding these considerations are the overlapping and conflicting definitions and applications of terms and concepts that go to the very heart of the workplace disputes before the different – and alternative – forums our system provides in each region for such dispute resolution. A definition or application in one forum, or under one constitutional provision or statute or regulation, or contract, may well differ from the definition or application accorded that term in another forum or under another statute, regulation or contract.
Such issues may arise at a jurisdictional level, in the course of the discovery process, or at a procedural or substantive level. They may relate to the viability of a class or collective action, or the alleged waiver thereof. They may surface in the attempted enforcement of a covenant not to compete or not to solicit, or of a “no poaching” agreement, whether or not the individual is otherwise covered by a collective bargaining agreement.
The import of a regional perspective certainly compounds the issues even further, but at the same time may offer the global entity some insights, possibilities or options that otherwise might not be available in another region. The governance of this nation’s employment law is in so many respects a matter of federal law, and, accordingly, that is the primary focus of this Regional Guide. Even so, it is vital that the global entity understands that the interpretations of the federal laws in each region often may differ, whether it be in the decisions of the regional offices of the applicable federal agencies, the region’s federal district courts or its circuit courts of appeals, or, ultimately, the Supreme Court of the United States in its review of regional conflicts between or among the federal courts of appeals. Certain issues, moreover, may well be governed by state or other local law and, accordingly, while our primary focus must remain one of federal law, where especially pertinent to the specific needs and interests of a global entity, this Regional Guide may take note of the interplay of such federal employment law and a region’s state or other local law.
Against this background and in the belief that context matters, this Regional Guide seeks to provide a picture of the current socio-economic, political and legal climate, both in the United States generally and in the particular regions here covered. It addresses, in that context, the alternative approaches and arrangements the global entity will need to consider at its inception when defining and implementing its basic structure, its relationships with those who will be servicing it (whether in a non-union, union, or potential union setting), and the import of such decisions. The Guide emphasizes the significance, under our law, of the interviewing process, both as to the possibilities the process offers and the legal and practical constraints our laws may impose.
Among other legal developments this Regional Guide addresses are those terms and conditions that may be of particular importance, if not crucial, to the global entity and its decision as to where in the United States it might wish to establish or enhance its presence, inclusive of restrictive covenants against competition, solicitation or poaching (when enforceable and to what extent), confidentiality, trade secrets, benefits considerations, data and other privacy issues, workplace safety, immigration and related foreign workers' issues. It includes issues and developments in the areas of discrimination, harassment and retaliation, with regard to both pertinent safeguards and restrictions. Also discussed are key issues relative to the termination of the relationship, whether to be addressed at the outset of the relationship in anticipation of that possibility or at the time of the termination.
Most importantly, this Regional Guide also emphasizes what a global entity must understand about the types of disputes that may arise; the different internal and external alternative dispute forums and other forums in which such disputes might be heard; the options available to the entity either in anticipation of such disputes or once such disputes arise; the types of remedies it might seek or to which it might be exposed; and, to the extent relevant, whether there are any extraterritorial applications of the law that may or must be taken into consideration.
TRENDS AND DEVELOPMENTS
As will be seen, throughout this review we attempt to focus, as well, on the most recent trends and developments of which the global entity should be aware. Two of these developments, still in their evolutionary stages, are of such critical importance that their particular mention, with some elaboration, warrants special emphasis. They involve our ability even to define, much less measure and establish, certain elements of discrimination. The first development focuses on artificial intelligence and its increased introduction of analytics to the equation; the second concerns what has been termed “implicit bias.”
Discrimination: Artificial Intelligence — the Import of its Increased Introduction of Analytics and the Algorithms they Entail
Artificial intelligence, to be sure, poses the obvious concerns about job displacement, globally and here in the United States. None of that displacement, short-term or otherwise, can be minimized, particularly with regard to the impact on those least able to cope with it. How, and the extent to which, that displacement is addressed will present its own challenges, problems and solutions, including as to the innovative and effective internal and external education and training programs such anticipated and actual displacements will require. What is clear, and hopefully somewhat encouraging, is a heightened awareness on the part of our business and educational institutions (at all levels, public and private) of the roles they, themselves, can — and must — play in the development and implementation of such education and training programs, and of the benefits these programs can provide both to the recipients and to the institutions themselves, including with regard to diversity and inclusion. Indeed, more employers today not only have adopted diversity goals, but are incentivizing those involved in the hiring process to meet such goals.
That said, of concern at this relatively early stage of artificial intelligence is — as some commentators have observed — its “revolutionary” incorporation of analytics, and the algorithms they entail, into so many aspects of the employment process that most of us could hardly have foreseen. Whether in job postings and the formulation of job descriptions and responsibilities, the reviewing of resumes, the screening of video interviews, testing and other aspects of the hiring, job placement and promotional processes, more and more these algorithms are being used as predictors of behavior, working traits, qualifications or future performance — ostensibly to promote diversity and inclusion or otherwise to mitigate the possibility of unlawful bias. In part, this may be a further outcome of the #MeToo, Time’s Up, Pay Equity and other such movements.
While these algorithms are potentially productive and generally adopted in good faith, we are beginning to learn that their introduction may in certain material respects be problematic, if not outright questionable. Good intentions notwithstanding, the reliability of such algorithms as predictors of behavior or as a means of ascertaining the motivations of the decision-makers is, as will be seen, far from clear and very much the subject of ongoing challenge and debate, both among the social psychologists who helped develop the concept and within the legal community, including our judicial system, involved in its application.
Algorithms, we know, are dependent upon the decisions made when choosing, collecting and coding the information that will be the predicate for the models to be used, and then formulating these models. Actual or potential abuses of artificial intelligence, however, have manifested themselves in various contexts, eg:
So, too, even when seemingly indicative of a certain linkage, questions have been posed as to whether the linkage is merely one of correlation, rather than causation, as well as to our ability, using the data points selected and the models created, even to measure what we are trying to determine.
In short, the concern has been that the very process of introducing a presumably neutral model to avoid conscious or unconscious biases may well result in the substitution of a subjective process of its own that unintentionally reflects the very same or other biases that the analytics were designed to minimize, if not avoid.
In the words of David Lopez, former and longest-serving General Counsel of the US Equal Employment Opportunity Commission (EEOC), when testifying on 4 March 2019, before a Congressional Subcommittee, “[b]ad data inputs lead to bad results,” and “these digital tools present an even greater potential for misuse if they lock in and exacerbate our country’s longstanding disparities based on race, gender, and other characteristics.” House Subcommittee on Consumer Protection and Commerce of the US House Committee on Energy and Commerce: “Inclusion in Tech: How Diversity Benefits All Americans.” (https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=108901.)
Mr Lopez cited “mishaps”, “abuses” and even “horrors” that “highlight[ ] the need to examine algorithms and big data in the context of their effects on society and the need to have a framework in place that supports its ethical and just use.” He offered, as well, an abundance of “cautionary tales ... about the failure of predictive analytics to live up to our ideals of non-discrimination, opportunity, and privacy,” and spoke of the need for a “better under[standing]” and “increased scrutiny of outcomes” in light of the relatively new-found “prominence of predictive analytics and algorithms in decision-making and other aspects of society.” Indeed, he emphasized “an alarming number of mishaps with employment screening emanating from the elevation of statistical correlation between some variable” and “purported job performance, qualifications or qualities” and the “tendency of search results themselves to reflect stereotypes and bias.” (Emphasis added.)
Clearly, on the basis of his own research, Mr Lopez was genuinely concerned about our faith in analytics as the predictors they have been held out to be. If anything, his concern was that their introduction has exacerbated, rather than ameliorated, the problem of discrimination. Again, in his own words, “algorithms are often predicated on data that amplifies rather than reduces the already present biases in society — racial, ethnic, and socio-economic — in part because these issues may not be noticed or a consideration to the people creating the technology” (emphasis added). “Subjective judgements are made,” he pointed out, and “with those judgements comes the innate biases of the individuals making the decisions” (emphasis added).
Discrimination: Implicit Bias
In his analysis of his concerns about the reliability of analytics and its algorithms as a predictor of discriminatory behavior, Mr Lopez observed: “Despite many large tech companies actively trying to increase the diversity of their workforce, there are still factors at play leading to sub-optimal results that need to be discovered and ameliorated. One of these issues is likely ‘implicit bias’ in the hiring and employment context.”
Implicit Bias Defined
In essence, as social psychologists have defined the term “implicit bias,” we all are subject to our own inherent, “unconscious” or “indirect” biases which, though devoid of a conscious intent, are, in their opinion, nonetheless probative of discriminatory behaviors. As Mr Lopez expressed it in his testimony before the House Subcommittee, the “science of implicit bias” is predicated upon “the more subtle” and “automatic association of stereotypes or [subjective] attitudes about particular groups”; “[p]eople,” he explained, “can have conscious values that are still betrayed by their implicit biases”; and these “implicit biases” — unconscious though they may be — “are frequently better at predicting discriminatory behaviors than people’s conscious values and intentions” (emphasis added).
Disparate Treatment; Disparate Impact
Reliance upon stereotypical and subjective assumptions or judgments can occur in two distinct but, from a legal standpoint, crucially different contexts — one of “disparate treatment” and one of “disparate impact.” The distinction is especially pertinent to the issues of conscious and unconscious bias:
The Import of Mr Lopez’s Observations: Where Conscious Bias Alleged; Where Implicit (or Unconscious) Bias Alleged
As noted above, when speaking generally about the accuracy or reliability of the relatively new-found prominence of analytics and algorithms as predictors of discriminatory behaviors or motivations, Mr Lopez expressed very serious concerns. The gravity of his concerns cannot be overstated.
The “[a]lgorithms,” Mr Lopez stressed, “are often predicated on data that amplifies rather than reduces the already present biases in society” (emphasis added). He based these concerns on a number of factors common to both conscious and unconscious bias claims, including the “tendency of search results themselves to reflect stereotypes and bias”; inaccuracies in “statistical correlations” drawn; and the reality that “the people creating the technology” themselves might not even notice the problems or might even produce results reflective of their own individual “[s]ubjective judgements” and “innate biases.”
Mr Lopez’s misgivings about the questionable or misplaced reliance upon stereotypical assumptions and subjective judgments made clear he was referring to claims of both conscious and unconscious (implicit) bias. Indeed, he defined the “science of implicit bias” in terms of “the more subtle” and “automatic association of stereotypes or [subjective] attitudes about particular groups.”
If, by his own assessment, the predictive reliability of the analytics and their algorithms is clearly questionable when attempting to assess the conscious behaviors and motivations at issue in a disparate treatment claim, one might well presume, indeed expect, at the very least the same reserved and guarded assessment when attempting the more difficult task of unmasking or “betray[ing]” those supposedly bona fide conscious values and intentions on the basis of an asserted unconscious (implicit) bias. Quite the opposite, however: Mr Lopez concludes that these same analytics and algorithms, when assessing these “implicit biases,” unconscious though they may be, “are frequently better at predicting discriminatory behaviors than people’s conscious values and intentions” (emphasis added).
On what basis Mr Lopez reaches this conclusion, and how the predictive determinations of which he speaks will be made or measured, is unclear and remains to be seen. As briefly discussed below, however, the very nature and definition of the doctrine of implicit bias, the assumptions upon which it is based, how and under what circumstances it can or cannot be measured, and its reliability as a predictor of behavior, are very much in a state of flux, including on the part of those who were deeply involved in the creation and development of the doctrine.
To Be Resolved
Whether Mr Lopez’s more optimistic assessment of implicit bias as a better predictor of discriminatory behavior will prove valid will turn on a number of considerations in the evolution of the doctrine, including:
Tony Greenwald, Mahzarin Banaji and Brian Nosek, “Understanding and Interpreting IAT Results,” Implicit Association Test, https://implicit.harvard.edu/implicit/ireland/background/understanding.html.
These and other poignant quotes and excerpts further emphasize, by way of example, the basis for what appears to be a new-found consensus among many social psychologists that an individual’s score on the tests in question is not a reliable predictor of that individual’s likelihood of engaging in discriminatory behavior; that a fairly large number of studies do not support the conclusion that a group of persons showing higher bias on implicit measures of bias is more likely to discriminate than the group of persons showing lower bias on such measures; and that, if anything, further research is needed to examine the possible cumulative effects of implicit bias on employment outcomes. Many studies, in fact, not only fail to find a positive correlation between implicit bias and discriminatory behavior even when looking at aggregate data, but, instead, indicate the very opposite of the behavior that the implicit data would have predicted.
If, as the doctrine continues to evolve, the social psychologists and authors of the IAT themselves cannot confirm the reliability of the IAT or other tests to measure or otherwise predict an individual’s behavior and, indeed, caution against their use for such purposes even when compared with aggregate data, it remains to be seen whether the continued use of the implicit association tests as a basis for social framework “evidence” will instead be regarded as an improper substitution of one stereotype for another — particularly where the individual in question has not even taken the test in question and no such proffer is made.
Mindful of these developments, in this connection see Michael Selmi, “The Paradox of Implicit Bias and a Plea for a New Narrative,” 50 Ariz. St. L.J. 193 (Spring 2018), noting that behavior often labeled as “implicit” could “just as easily be described as ‘explicit.’” There, moreover, Professor Selmi urges a “move away from a focus on the unconscious, and the IAT, to concentrate instead on field studies that document discrimination in real world settings” (emphasis added). The idea of defining implicit bias as “unconscious, pervasive, and beyond one’s control,” he states, “is a message … that can be difficult to reconcile with our governing legal standards, which often turn on one’s ability to control one’s behavior,” and “is difficult to square with traditional notions of legal proof.” “Implicit bias,” he further notes, “has its greatest effect on spontaneous decisions but plays a lesser role in deliberative decisions” and is “most commonly identified with the controversial disparate impact theory where proof of intent is not required.” He cautions that implicit bias is tied to the IAT, and that the test “has limited predictive ability” (emphasis added).
What the future narrative will be remains to be seen.