Is Lab Safety An Ethical Issue?
Analysis and advice on questions and issues of individual ethics and institutional integrity, from Jane Robbins.
Do you have a question or comment that you wish to make anonymously?
Click here to send it to me.
This week’s post is in response to an issue raised via the confidential post box. The questioner wondered, as one of two principal questions, whether laboratory safety fell into the category of an ethical issue.
The short answer is yes. Safety is, in fact, often referred to, in organizational terms, as a “terminal value”; most airlines, for example, would say that safety is their primary terminal value: something closely tied to their mission-critical goal of getting people and cargo entrusted to them from point A to point B. Such terminal values translate into rules of conduct that become a matter of duty in practical, everyday terms: for airlines, all the safety checks on the plane, pilots’ autonomy in the cockpit to abort, the security procedures, the flight attendant demonstrations and cross-checks, and so on. Without safety and a record of safety, there would be no business, no ability to fulfill the mission. So operationally it is sometimes said that such procedures are instrumental to supporting the terminal value -- indeed, to the very raison d’être of an organization.
In supporting mission in a particular way, safety, in theory and practice, is normative at its core. Lab safety, like airline safety, can be thought of in the stakeholder terms that airline safety procedures reflect. Beyond excellence at, say, flying or conducting a particular type of research, there is recognition that the very act or process of flying or running a lab affects others. So here we see how relational context (internal to external); rights and obligations; and consequences enter into thinking about what is or is not an ethical issue. Each lab might analyze its stakeholders differently, but at a minimum they likely include funders; scientists, technicians, students, administrators, custodians, and other lab workers; and the potential users of the lab’s outputs, such as patients, industrial firms, or consumers. Depending on the nature of the lab, they might also include the surrounding community in explicit population terms; the environment; or animals. Stakeholder theory recognizes the intrinsic value of all stakeholders regardless of their ability to influence the interests of the organization. Analyzing stakeholders is not as easy or obvious as it sounds. It is particularly tricky to know where (or where not) to draw the line, so stakeholder analysis often benefits from the assistance of detached, disinterested outsiders in identifying criteria and making judgments.
Historically, government regulation has played a role in identifying and ensuring the consideration of various stakeholders when institutions have failed to do so on their own, including after being given ample time to do so (conflict of interest rules, for example), or when stakeholder issues have important consequences for society or the national interest. So lab safety is also a matter of law and policy, primarily regulations related to the health and safety of workers (e.g., OSHA), interstate commerce (hazardous materials, controlled substances), environment, defense/national security, and others.
This means lab safety (and other things like it) can sometimes be looked at purely as a matter of compliance: something you have to do. The thing about a compliance mindset is that it largely cuts off thinking about the underlying reasons for the regulations, and moves institutions into a minimum-response mode that is seen as burdensome rather than considered within strategic and organizational terms, which includes ethical terms within their scope. In this it touches on something else my questioner wondered about: Why isn’t lab safety part of ethics training? Why is it so detached from discussions of ethics within the sciences?
A culture of compliance is part of it -- much ethics training in the sciences exists only because it is externally mandated (e.g., responsible conduct of research), and the amount may be rather minimal. Regulations in and of themselves, especially if they were developed as a reaction to a problem in the first instance, may lag current activities or be limited in scope: to covering, e.g., accident investigation, or episodic reporting and inspection, which further detaches the question of safety from the day-to-day. When they derive from multiple sources, they may breed confusion, or increase the chance that something might be missed.
But there may well be a larger organizational reason, a structural one, why there is little discussion about ethics in connection to lab safety. Labs are special-purpose entities; in organizational terms, many labs would be considered fragmentary and even temporary or, within a large company (or university), functional. Overseeing and administering the science of even a complex lab is not the same as leading an organization, which demands a quite different set of skills and a quite different role -- one that requires thinking about ethics and not just the science. No one is going to talk about ethics or integrity if it is not a terminal value -- something understood as crucial to the purpose of the organization and its long-term survival and success as a trusted source of science; a safe place to work; a good steward of funds; a responsible citizen. Actually, I don’t think we can expect every lab to take this on, because labs are, as I said, fragmentary from an organizational perspective. It is really university leadership’s job: to talk about it, to set expectations and infuse values, and to create the culture. Through recruiting, role modeling, training, procedures, and accountability, concern for the safety of all stakeholders becomes routinized and normative in lab operations -- and therefore part of operating culture.
Research on “disasters” as diverse as the Challenger failure, the BP oil spill, and both firm- and industry-specific financial collapses supports the notion that organizational leadership and organizational process -- connecting the pieces of complex structures, including acting as an agent for multiple stakeholders -- are both the source of and the solution to risks of ethics failure. What may on the surface look like an accident or “unanticipated” consequence often, through root cause analysis, is traced back to explicit decisions or failures of managerial concern. Stakeholder theory is, by definition, managerial in nature. The most recent NRC report on chemical safety (2011) recognizes the importance of the coordinating mechanism that management -- institutional management -- provides, and its role in culture.
A culture attending to risk is cheap, like washing hands in hospitals to reduce disease, or routinely following a simple checklist to avoid accidents. Accidents, scandals, and disasters from ethics failures with organizational roots are very costly. In fact, they have the potential to cost you your reputation, the trust of your supporters, your business -- or your lab. Aspiring to rather than complying with safety is an organizational choice of what you value for yourselves and your stakeholders. Safety thus falls squarely into the domain of ethics for organizations, and has performance and strategic effects.
Please post public comments under your full name, or send anonymous comments and questions using the link above or here. Comments will be incorporated into future discussion.