As artificial intelligence (AI) becomes increasingly integrated into many workplaces, the U.S. Department of Labor (DOL) has been busy addressing the technology’s many implications.
In fact, the agency recently issued two sets of guidance that employers should be aware of. One discusses the legal compliance impact of AI, and the other suggests best practices for protecting workers’ well-being as AI-powered systems and processes are rolled out.
FLSA, FMLA, etc.
On April 29, Jessica Looman, administrator of the DOL’s Wage and Hour Division, distributed a Field Assistance Bulletin (No. 2024-1) to division staff that outlines how various federal labor laws apply to employers’ use of AI. The memo warns that “without responsible human oversight, the use of [AI] technologies may pose potential compliance challenges with respect to federal labor standards.”
Of particular interest are the specifics Looman provides regarding the Fair Labor Standards Act (FLSA) and the Family and Medical Leave Act (FMLA). Regarding the FLSA, for example, the memo warns that employers can’t dock employees’ pay based on activity or productivity metrics calculated by AI systems.
Nor can employers use AI (or other systems) to take adverse actions against employees who engage in protected activities. Looman gives an example of using “automated worker surveillance systems to detect, target, or monitor workers whom the employer suspects have filed a complaint” with the DOL.
When it comes to the FMLA, employers must be on guard for the possibility that AI-powered systems could:
- Wrongfully deny employees’ qualified leave under the act,
- Miscalculate rightfully earned paid time off, or
- Demand too much information from employees requesting leave.
The memo also warns: “Systems used to track leave use may not be used to target FMLA leave users for retaliation or discourage the use of such leave.”
In addition, the guidance covers the Providing Urgent Maternal Protections for Nursing Mothers Act (commonly referred to as the PUMP Act) and the Employee Polygraph Protection Act. The full text of the bulletin is available on the DOL’s website.
8 key principles
On May 23, the DOL published a document on its website entitled Artificial Intelligence and Worker Well-being: Principles for Developers and Employers. The guidance, which is based on “input from workers, unions, researchers, academics, employers, and developers, among others,” sets forth eight principles applicable to the development and deployment of AI in the workplace. The principles recommend that AI should be:
- Centered on worker empowerment (that is, employees should be informed and involved in its development, training, and use),
- Ethically developed to protect workers,
- Established under clear governance and human oversight,
- Transparent in how it’s used for both job applicants and employees,
- Compliant with laws regarding workers’ right to organize and other rights and protections,
- Used to enable employees to improve job quality,
- Introduced into the workplace with appropriate support and upskilling, and
- Managed with the responsible use of employees’ sensitive data.
Per the guidance, “The Principles are applicable to all sectors and intended to be mutually reinforcing, though not all Principles will apply to the same extent in every industry or workplace.” The full text is available on the DOL’s website.
More guidance ahead?
Both sets of guidance were prompted by an executive order issued by President Biden in October 2023 on the “safe, secure, and trustworthy” use of AI. Employers can likely expect further guidance at the federal and state levels as AI’s impact on various industries, and on workplaces in general, becomes clearer. We can help your organization assess its costs and return on investment in AI or any other type of technology.
© 2024