3: Ethics and Responsibility in Dental Care

As noted in Chapter 1, professionalism brings with it the responsibility to adhere to the highest ethical standards. Public trust is the greatest asset that the dental professions possess, and that trust has been hard-earned by the professions’ willingness to adhere to ethical practice and to follow through if there has been a breach. Even the most conscientious practitioner will find that ethical dilemmas arise frequently. For example, how does a practitioner respond to a patient who wants all her amalgam restorations removed because she believes they are the cause of her chronic fatigue? What is the practitioner’s obligation in treating a patient who is mentally unable to provide his own informed consent? Straight answers are not always easy to find, although ethical codes are intended to give the practitioner guidelines to follow.

This chapter discusses the place of professional ethics in dental care. We discuss the framework for ethical codes, the social and cultural background against which our ethical standards have evolved, ethics in patient care and research, and the role of the professional associations in defining ethical codes.

FRAMEWORK FOR ETHICAL STANDARDS

Ethics, a branch of philosophy and theology, is the systematic study of what is right and good with respect to conduct and character.27 Ethics has also been used as a generic term for various ways of understanding and examining the moral life.8 Our understanding of ethics can also be helped by defining some things that ethics is not: it is not a set of rules or restrictions, it is not religion, and it is neither relative nor subjective.24

The very nature of moral decisions means that much of the ethics literature asks questions rather than provides answers, and this lack of a “formula” to solve problems can be frustrating for some. Because ethics is the study of both the general nature of morals and of specific moral choices, ethical decisions can vary over time and between locations when cultural standards differ. This means, as noted earlier, that a formula for finding what is the right thing to do in specific circumstances cannot always be provided, only guidelines.

The dictum "First, do no harm" has been with us since the time of Hippocrates, around 400 BC, and from that point ethical principles have developed slowly over the centuries. Today there are four basic principles (Box 3-1) that have become widely accepted as guidelines for decision making in biomedical ethical dilemmas and that apply to dental professionals just as they do to physicians and nurses.

For a professional organization, these principles then need to be formulated as ethical standards. Standards can take the following forms: aspirational standards, which set out broad ideals for members to strive toward; educational standards, which combine principles with guidance on how to apply them in practice; and regulatory standards, which pair principles with detailed, enforceable rules of conduct.

Professions adopt ethical standards because that is part of the professional charge. A patient’s trust in a professional comes in part from the expectation that the professional’s behavior is governed by norms prescribed by the group.14 It is also a public expectation that ethical standards be developed and enforced by any profession, a requirement that comes with the privilege of self-regulation.

It was stated earlier that ethical standards are shaped in part by cultural forces, so it is well to examine briefly some of the social and cultural forces that underlie ethical expectations in the United States.

INDIVIDUAL VERSUS SOCIAL RESPONSIBILITY

Who is responsible for health? Is it society as a whole, or is health each individual’s responsibility? That is a broad question, to which the answer can only be, “Some of both.”

It is well understood today that individual lifestyle choices are a major factor in determining a person’s health status. Every educated person knows the basic rules: don’t smoke, drink in moderation, eat lots of fresh fruit and vegetables and a varied diet low in saturated fats, get enough sleep, exercise regularly, fasten the car’s seatbelt, maintain friendships and social contacts. But what about those individuals who are unfortunate enough to have genetic predispositions to disease or are mentally or physically handicapped? Or those who live in rundown neighborhoods where food choices are limited and there is little opportunity to practice a healthy lifestyle? Many people became addicted to cigarettes at a time when such addiction was not understood, and some became alcoholic or drug-addicted in response to social or personal pressures. The problems these conditions present can be compounded by the individual’s inability to pay for necessary medical care. What are the professions’ ethical obligations in these and related instances?

If we believed that health is solely an individual responsibility, we would shrug our shoulders, say "Bad luck," and be thankful that these bad things weren't happening to us. But we don't do that. All high-income nations accept some degree of public responsibility through health and social support systems for sick people. In many European countries, Canada, Australia, and New Zealand these systems can be extensive, usually more so than their counterparts in the United States. Arguments in the United States can turn toward whether such programs should exist at all, although most balk at suggestions that the last vestiges of a "safety net" should be removed. There is ongoing vigorous debate, however, about the extent of and eligibility criteria for public financing of health and welfare, and about the appropriate division between public and personal financing. American attitudes toward publicly financed social systems are generally not as positive as those in other developed countries, so it is worth looking at how American cultural attitudes toward individualism and social responsibility have evolved.

INDIVIDUALISM IN THE UNITED STATES

Americans rightly cherish their individual rights and freedoms; individualism has been a more powerful cultural force in the United States than in other countries.9 Many of the settlers who first immigrated to America (voluntary settlers at least) were leaving rigid social, religious, or political systems to seek a new life where they and their children could prosper in an environment that was free of the constraints they had left behind, and where hard work would create its own opportunities and yield its own rewards. An abundance of natural resources and a seemingly limitless frontier gave rise to the attitude that in America people could mold their own destinies largely by their own efforts. Although historical evidence shows that this belief is at best only partly true,9 it still remains a powerful cultural perception and is still the magnet attracting today’s immigrants.

The high point of unfettered laissez-faire capitalism occurred in western Europe around the mid-nineteenth century and in the United States a generation or so later. By the early years of the twentieth century, however, philosophies in Europe were turning away from individualism toward more shared responsibility for basics like housing, education, social security, and health care. Programs grew slowly, but by the 1970s, a network of national state-sponsored social programs was the norm in Europe, less so in the United States. Why the more hesitant growth in America? One reason suggested for the slower development in the United States is that America’s relative isolation from external political turbulence during its formative years allowed the development of a more introspective national character than was the case in Europe.28 Another reason, given that catastrophic events have a way of hastening social change, is that the social devastation wrought by two major wars in the first half of the twentieth century hastened the development of social welfare programs in Europe. The United States largely escaped the social devastation of those wars. It should be remembered, however, that the first Social Security Act in the United States was passed in 1935 in the midst of the Great Depression, which was a catastrophe by any measure.

Another major contribution to the individualist culture in America comes from what is referred to as the puritan ethic, which historians consider to have arrived with the first English colonists.11 Many of these and other pioneers were members of nonconformist religious groups who brought their rigid beliefs about human nature to the new land. These attitudes became part of the American national character, and as such they remain prominent today. Essentially, the puritan ethic is a set of beliefs and attitudes which holds that God rewards people for their honest toil in this life as well as in the next, and that individual wealth or poverty is justified and largely controllable by one’s own efforts. It follows that under the puritan ethic the accumulation of great personal wealth can be seen as a reward for virtue and hard work, just as poverty can be seen as a punishment for immorality or laziness. It logically follows that the puritan ethic also involves a strong aversion to paying taxes, especially when the funds can be “wasted” for social programs aimed at helping the “undeserving” poor.

The first serious questioning of individualism in the United States came during the widespread social distress caused by the Great Depression of the 1930s. Many people at that time lost everything through what was clearly no fault of their own, and the response of the federal government was a series of emergency relief measures aimed at avoiding total societal collapse. Most of these no longer exist, although Social Security has not only survived but has become institutionalized as a major entitlement that figures prominently in current political debate. World War II (1939–45) followed the Depression; the next wave of social activity came after the revelations about the extent and consequences of poverty in the United States during the early 1960s. The Eighty-ninth Congress (1964–65) passed a series of legislative measures intended to improve social equity, the main ones being the Medicare and Medicaid health programs (see Chapter 8), the Voting Rights Act, the Economic Opportunity Act, the Model Cities Act, and the Elementary and Secondary Education Act. This trend was slowed, and in some cases reversed, by the more conservative mood that set in during the 1980s and continues today. This public mood was also affected by a growing awareness of limited resources and a loss of faith in government's ability to solve complex social problems.

RIGHT TO HEALTH CARE

American attitudes, historically shaped by individualism and the puritan ethic, are evolving only slowly to the belief that access to health care is a right rather than a privilege.7 The “right to health care” is an emotional and often misunderstood concept, one frequently interpreted as “the right to health.” Of course, no one has a right to health. Health, an elusive entity to define, is a dynamic state influenced by genetic endowment, nutrition, housing, physical and social environment, life habits, personal attitudes and beliefs, and medical care received, quite possibly in that order of importance. Although medical care has probably been overvalued as a determinant of health,7,16,18 most individuals at some time in their lives have a need for it, sometimes an urgent need.

Health care has always been rationed in one way or another. The traditional rationing method has been fee for service, meaning that those who can afford care get it, whereas those who cannot afford it do not. The method is simple enough, and it conforms with the puritan ethic, although there is an untold cost in wasted human resources. This philosophy was challenged in the United States during the 1960s, when access to health care was extended to millions of poorer citizens through Medicaid and to the elderly through Medicare (see Chapter 7). As would be expected, one result of better access to care was that public expenditures on medical care increased substantially. When the public mood later swung toward controlling medical care expenditures as the first priority, "managed care" emerged in the 1980s and grew rapidly in the 1990s (see Chapter 7). Managed care appeared to reduce the rate of increase of medical expenditures, although it did so by introducing other forms of rationing (e.g., restricting covered services and limiting which physicians may be consulted). The ethical problems have multiplied as a result.

PROFESSIONAL ETHICS AND SELF-REGULATION

The American Dental Association (ADA) maintains a code of ethics, which is reviewed and amended periodically by the association's Council on Ethics, Bylaws, and Judicial Affairs. The current version of the code can be found on the ADA's website.4 It has three sections: Principles of Ethics, Code of Professional Conduct, and Advisory Opinions. The ADA classifies this code as aspirational, meaning that it is made up of broad principles, though parts of it seem closer to educational standards as defined earlier. It is primarily concerned with issues related to the care of patients, though it also deals with the handling of fees, referrals, criticism of colleagues, advertising, and specialty practice. The code has been modified over the years as new issues arise and as societal views on particular issues evolve. It is also influenced by judicial outcomes, which presumably reflect social values, as evidenced by the growth in the number of advisory opinions in the code. As an example, the 1982 statement on patient selection stated the following:

That statement was unchanged in later versions of the code, but there was an advisory opinion appended after 1988 that dealt primarily with treating infected patients. This opinion stated the following:
