Clinical peer review

Clinical peer review is the process by which health care professionals evaluate each other’s clinical performance. Clinical peer review is segmented by discipline; no interdisciplinary models have been described. Physician peer review is the most common form and is found in virtually all hospitals.[1] Peer review is also done in some settings by other clinical disciplines, including nursing and pharmacy. First used by Dans,[2] clinical peer review is the best term to refer collectively to all such activity.

Medical peer review is the process by which a committee of physicians examines the work of a peer and determines whether the physician under review has met accepted standards of care in rendering medical services. Depending on the specific institution, a medical peer review may be initiated at the request of a patient, a physician, or an insurance carrier. The term "peer review" is sometimes used synonymously with performance appraisal.

Definitions

The definition of a peer review body can be broad, including not only individuals but also (for example, in Oregon) "tissue committees, governing bodies or committees including medical staff committees of a [licensed] health care facility...or any other medical group in connection with bona fide medical research, quality assurance, utilization review, credentialing, education, training, supervision or discipline of physicians or other health care providers."[3]

The first definition of nursing peer review was published in 1988 by the American Nurses Association and is still applicable today. This definition includes the following statements: "The American Nurses Association believes nurses bear primary responsibility and accountability for the quality of nursing care their clients receive. Standards of nursing practice provide a means for measuring the quality of nursing care a client receives. Each nurse is responsible for interpreting and implementing the standards of nursing practice. Likewise, each nurse must participate with other nurses in the decision-making process for evaluating nursing care…Peer review implies that the nursing care delivered by a group of nurses or an individual nurse is evaluated by individuals of the same rank or standing according to established standards of practice…. Peer review is an organized effort whereby practicing professionals review the quality and appropriateness of services ordered or performed by their professional peers. Peer review in nursing is the process by which practicing registered nurses systematically assess, monitor, and make judgments about the quality of nursing care provided by peers as measured against professional standards of practice" (ANA 1988 p. 3).

Clinical peer review should be distinguished from the peer review that medical journals use to evaluate the merits of a scientific manuscript, from the peer review process used to evaluate health care research grant applications, and from the process by which clinical teaching might be evaluated. All these forms of peer review are conflated in the term medical peer review. Moreover, medical peer review has been used by the American Medical Association (AMA) to refer not only to the process of improving quality and safety in health care organizations,[4] but also to the process by which adverse actions involving clinical privileges or professional society membership may be pursued.[5]

History

The first documented description of a peer review process is found in the Ethics of the Physician written by Ishaq bin Ali al-Rahawi (854–931) of al-Raha, Syria. His work, as well as later Arabic medical manuals, states that a visiting physician must always make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the physician's notes were examined by a local medical council of other physicians, who would review them to decide whether his or her performance had met the required standards of medical care. If their review was negative, the practicing physician could face a lawsuit from a maltreated patient.[6]

Medical audit, which remains the predominant mode of peer review in Europe, is a focused study of the process and/or outcomes of care for a specified patient cohort using pre-defined criteria, typically organized around a diagnosis, procedure or clinical situation.[7][8] In the United States, The Joint Commission revised its standards in 1979, dispensing with the audit requirement and calling for an organized system of Quality Assurance (QA). The objective of a medical peer review committee thus became to investigate the medical care rendered in order to determine whether accepted standards of care had been met. Contemporaneous with this change, hospitals and physicians adopted generic screening to improve quality of care, despite warnings from the developers of these screens that they had not been validated for this purpose, having originally been developed to evaluate no-fault malpractice insurance plans.[9]

The focus on the question of whether the standard of care had been met persisted despite many criticisms,[2][10][11][12] but is increasingly recognized as outdated, having been displaced over the past decade by Quality Improvement (QI) principles.[11][12]

Overview

The objective of a medical peer review committee is to investigate the medical care rendered in order to determine whether accepted standards of care have been met. The professional or personal conduct of a physician or other healthcare professional may also be investigated. If a medical peer review committee finds that a physician has departed from accepted standards, it may recommend limiting or terminating the physician's privileges at an institution. Remedial measures including education may also be recommended.

In nursing, as in other professions, peer review applies professional control to practice and is used by professionals to hold themselves accountable for their services to the public and the organization. Peer review plays a role in affecting the quality of outcomes, fostering practice development, and maintaining professional autonomy. The American Nurses Association guidelines define peer review as the process by which practitioners of the same rank, profession, or setting critically appraise each other’s work performance against established standards. Professionals, who are best acquainted with the requirements and demands of the role, are both the givers and the receivers of this feedback.

The medical peer review system is a quasi-judicial one, similar in some ways to the grand jury / petit jury system. First, a complainant requests an investigation. The medical Chief of Staff then makes discretionary appointments of staff members to an ad hoc committee, which conducts the investigation in whatever manner it deems appropriate. There is no standard for due process, impartiality, or information sources; the review may draw on the literature or an outside expert.

A physician who has been charged (and sanctioned) may have the right to request a hearing, with counsel allowed. A second panel of physicians is convened as the 'petit jury', and a hearing officer is appointed. In a process akin to voir dire, the accused physician may attempt to disqualify panel members on the basis of reasonable suspicion of bias or conflict of interest.

The Patient Safety and Quality Improvement Act of 2005 (Public Law 109-41) created Patient Safety Organizations, whose participants are granted immunity in civil, criminal, and administrative proceedings,[13] enabling them to act in parallel with peer review boards, using root cause analysis and the evaluation of "near misses" in systems failure analysis.

Types

By physicians

Today, physician peer review is most commonly done in hospitals, but may also occur in other practice settings, including surgical centers and large group practices. The primary purpose of peer review is to improve the quality and safety of care. Secondarily, it serves to reduce the organization’s vicarious malpractice liability and to meet regulatory requirements; in the US, these include accreditation, licensure and Medicare participation.[14] Peer review also supports the other processes that healthcare organizations have in place to assure that physicians are competent and practice within the boundaries of professionally accepted norms.[citation needed]

Physicians have been doing peer review, in varying degrees, for a long time. It was well documented in the 11th century and likely originated much earlier.[15] In the early 20th century, peer review methods appear to have evolved in relation to the pioneering work of Codman’s End Result System[16] and Ponton’s concept of the Medical Audit.[17] In reviewing this history, Lembcke, himself a major contributor to audit methodology, noted the pre-emptive influence of the hospital standardization program promoted by the American College of Surgeons (ACS) following WWI.[18] The Joint Commission on Accreditation of Hospitals took over this role from the ACS in 1952. Medicare legislation, enacted in 1965, was a boon to the Joint Commission: the conditions for hospital participation required a credible medical care review program, and the regulations further stipulated that Joint Commission accreditation would guarantee payment eligibility.[19] What was once a sporadic process became hardwired in most hospitals following the audit model. The widespread creation of new programs was hampered, however, by limitations in the available process models, tools, training and implementation support.[citation needed]

Medical audit is a focused study of the process and/or outcomes of care for a specified patient cohort using pre-defined criteria. Audits are typically organized around a diagnosis, procedure or clinical situation.[20] The audit process can be effective in improving clinical performance.[21] Medical audit remains the predominant mode of peer review in Europe.[citation needed]

In the 70s, the perceived ineffectiveness of medical audit led to revisions of the Joint Commission standards in 1979. Those modified standards dispensed with the audit requirement and called for an organized system of Quality Assurance (QA). Around the same time, hospitals and physicians faced escalating malpractice insurance costs. In response to these combined pressures, they began to adopt "generic screens" for potential substandard care. These screens had originally been developed to evaluate the feasibility of a no-fault medical malpractice insurance plan and were never validated as a tool to improve quality of care. Despite warnings from the developers, their use became widespread.[22] In the process, a QA model for peer review evolved with a narrow focus on the question of whether or not the standard of care had been met. It has persisted despite many criticisms of its methods and effectiveness.[2][23][11][12] Today, its methods are increasingly recognized as outdated and incongruent with the Quality Improvement (QI) principles that have been successfully adopted into the field of health care over the past decade.[11][12]

There is good evidence that the contemporary peer review process can be further improved. The American College of Obstetricians and Gynecologists has offered a Voluntary Review of Quality of Care Program for more than two decades. Perceived issues with the adequacy of peer review were an explicit reason for requesting this service in 15% of participating hospitals, yet recommendations for improving the peer review process were made to 60% of them.[24] A 2007 study of peer review in US hospitals found wide variation in practice. The more effective programs had more features consistent with quality improvement principles,[1] and there were substantial opportunities for program improvement. The implication is that a new QI model for peer review seems to be evolving.[citation needed]

A 2009 study confirmed these findings in a separate sampling of hospitals.[25] It also showed that important differences among programs predict a meaningful portion of the variation on 32 objective measures of patient care quality and safety.[26]

A four-year longitudinal study of 300 programs identified the quality of case review and the likelihood of self-reporting of adverse events, near misses and hazardous conditions as additional multivariate predictors of the impact of clinical peer review on quality and safety, medical staff perceptions of the program, and clinician engagement in quality and safety initiatives.[27] Despite a persistently high annual rate of major program change, about 80% of programs still have significant opportunity for improvement. It is argued that the outmoded QA model perpetuates a culture of blame that is toxic to efforts to advance quality and high reliability among both physicians and nurses.

External peer review

The 2007 study showed that the vast majority of physician peer review is done "in house": 87% of hospitals send less than 1% of their peer review cases to external agencies. The external review process is generally reserved for cases requiring special expertise for evaluation or for situations in which the independent opinion of an outside reviewer would be helpful (see independent review). The process is significantly more costly than in-house review, since the majority of hospital review is done as a voluntary contribution of the medical staff.

Mandated external peer review has not played an enduring role in the US, but was tested in the 70s. A 1972 amendment to the Social Security Act established Professional Standards Review Organizations (PSROs) with a view to controlling escalating Medicare costs through physician-organized review.[28] The PSRO model was not considered effective and was replaced in 1982 by a further act of Congress, which established Utilization and Quality Control Peer Review Organizations (PROs). This model too was fraught with limitations; studies of its methods called into question its reliability and validity for peer review.[29] A survey of Iowa state medical society members in the early 90s regarding perceptions of the PRO program illustrated the potential harm of a poorly designed program.[30] Furthermore, the Institute of Medicine issued a report identifying the system of care as the root cause of many instances of poor quality. As a result, in the mid-90s, the PROs changed their focus and methods and began to de-emphasize their role as agents of external peer review. The change was completed by 2002, when they were renamed Quality Improvement Organizations.[31]

Nursing

Nursing peer review appears to have gained momentum as a result of the growth of hospital participation in the American Nurses Credentialing Center’s Magnet Recognition Program.[32] Even so, less than 7% of U.S. hospitals have qualified. Magnet hospitals are required to have had a peer review evaluation process in place, designed to improve practice and performance for all RNs, for at least two years.[33] The literature on nursing peer review is more limited than that on physician peer review,[34] and has focused more on annual performance appraisal than on case review.[35] No aggregate studies of clinical nursing peer review practices have been published, although more sophisticated studies have been reported.[36]

Much of what is referred to as "peer review" in clinical practice is actually a form of annual performance evaluation. The annual performance review is a managerial process and does not meet the definition of peer review or deliver its intended outcomes. Other organizational practices may violate the peer review guidelines set forth by the ANA in 1988.[37] The most frequent violation is the performance of direct care peer review by managers. One reason for the confusion is that the ANA guidelines for peer review had been out of print before being reprinted and updated in 2011.[38]

The early ANA Peer Review Guidelines (1988) and Code of Ethics for Nurses (2001) focus on maintaining standards of nursing practice and upgrading nursing care in three contemporary focus areas for peer review: (a) quality and safety, (b) role actualization, and (c) practice advancement. Each area has an organizational, unit, and individual focus.[39] Six peer review practice principles, grounded in the 1988 ANA Guidelines, may help to assure an evidence-based and consistent approach to peer review:

1. A peer is someone of the same rank.
2. Peer review is practice focused.
3. Feedback is timely, routine and a continuous expectation.
4. Peer review fosters a continuous learning culture of patient safety and best practice.
5. Feedback is not anonymous.
6. Feedback incorporates the developmental stage of the nurse.

Written, standardized operating procedures for peer review also need to be developed and adopted by the direct care staff and incorporated into the professional practice model (shared governance) bylaws.[40]

Confusion exists about the differences between the professional peer review process, the annual performance review (APR), and the role of peer evaluation. The APR is a managerial human resource function performed with direct reports, aimed at defining, aligning and recognizing each employee’s contribution to the organization’s success. In contrast, professional peer review is conducted within the professional practice model and is not a managerial accountability. Peer evaluation is the process of getting feedback on one’s specific role competencies or "at work" behaviors from the people one works with, both within the department and in other departments. "Colleague evaluation" is a more appropriate term than "peer evaluation", as this is not a form of professional peer review.[41]

Composition of peer review boards

There is no single standard composition for medical peer review bodies, nor are bodies with different compositions distinguished by different names. Reviews may be carried out by state medical boards (with differing standards for membership), hospital administration, senior staff, department heads, etc., or a combination of these.

State medical boards conduct peer review of their licensees. Depending on the state, boards may be composed of physicians only or may also include attorneys and other non-physicians, and physicians may serve on the board in primarily advisory capacities. The peer review committees themselves may include board physicians, physicians unaffiliated with the board, or both, whether or not the board is run by physicians from that state.

In hospitals, only a peer review committee authorized by the physician medical staff may take action regarding a physician's medical privileges at that institution. A committee convened by the hospital administration or another group within the hospital may make disciplinary recommendations to the medical staff.

Departmental peer review committees are composed of physicians, while hospital-based performance-appraisal and systems-analysis committees may include nurses or administrators with or without the participation of physicians.

Although medical staff bodies utilize hospital attorneys and hospital funds to try peer review cases, the California Medical Association discourages this practice; California legislation requires separation of the hospital and medical staff.[42]

Nursing professionals have historically been less likely to participate in, or be subject to, peer review.[43][44] This is changing,[45][46] as is the previously limited extent of the literature on nursing peer review (for example, no aggregate studies of clinical nursing peer review practices had been published as of 2010).[47]

In response to the Health Care Quality Improvement Act of 1986 (HCQIA) (P.L. 99-660), executives of national medical associations and health care organizations formed the non-profit American Medical Foundation for Peer Review and Education[48] to provide independent assessment of medical care.

Abuse

Sham peer review is a name given to the abuse of a medical peer review process to attack a doctor for personal or other non-medical reasons.[49]

Controversy exists over whether medical peer review has been used as a competitive weapon in turf wars among physicians, hospitals, HMOs, and other entities. The American Medical Association conducted an investigation of medical peer review in 2007 and concluded that while it is easy to allege misconduct, proven cases of malicious peer review are rare.[50]

Abuse is also referred to as "malicious peer review" by those who consider it endemic, and they allege that the creation of the National Practitioner Data Bank under the 1986 Healthcare Quality Improvement Act (HCQIA) facilitates such abuse, creating a 'third-rail' or a 'first-strike' mentality by granting significant immunity from liability to doctors and others who participate in peer reviews.

The California legislature framed its statutes so as to allow that a peer review can be found in court to have been improper due to bad faith or malice, in which case the peer reviewers' immunities from civil liability "fall by the wayside".[51]

Many medical staff bylaws specify guidelines for the timeliness of peer review, in compliance with JCAHO standards.

Some physicians allege that sham peer review is routinely conducted in retaliation for whistleblowing, although a study of the phenomenon did not support this charge.

Cases of alleged sham peer review include Khajavi v. Feather River Anesthesiology Medical Group,[50][52][53] Mileikowsky v. Tenet,[54][55][56] and Roland Chalifoux.[57][58]

Defenders of the Health Care Quality Improvement Act state that the National Practitioner Data Bank protects patients by helping to prevent errant physicians who have lost their privileges in one state from traveling to practice in another. Physicians who allege they have been affected by sham peer review are also less able to find work when they move to another state, as Roland Chalifoux did.[58] Moreover, neither opponents nor supporters of the NPDB can be completely satisfied: as Chalifoux's case shows, just as physicians who were unjustly accused may be deprived of work in this way, those who have erred might still find work in other states.

References

  1. Edwards MT, Benjamin EM. The process of peer review in US hospitals. Journal of Clinical Outcomes Management. 2009 Oct;16(10):461-467.
  2. Dans PE. Clinical peer review: burnishing a tarnished image. Annals of Internal Medicine 1993;118(7):566-568.
  3.
  4.
  5.
  6. Ray Spier (2002), "The history of the peer-review process", Trends in Biotechnology 20 (8), p. 357-358 [357].
  7. Shaw CD. Aspects of audit in British hospitals. BMJ 1980 (May31):1314-1316.
  8. Jamtvedt G, Young JM, Kristofferen DT, O’Brien MA, Oxman AD. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Quality and Safety in Health Care 2006;15:433-436.
  9. Sanazaro PJ, Mills DH. A critique of the use of generic screening in quality assessment. JAMA. 1991;265(15):1977-1981.
  10. Goldman RL. The reliability of peer assessments: A meta-analysis. Evaluation and the Health Professions 1994;17(1):3-21.
  11. Berwick DM. Peer review and quality management: are they compatible? Quality Review Bulletin 1990;16(7):246-251.
  12. Edwards MT. Peer review: a new tool for quality improvement. The Physician Executive Journal of Medical Management 2009;35(5):54-59.
  13.
  14. Haines S, Hospital peer review systems: An overview. Health Matrix 1984-5; 2(4):30-32.
  15. Ajlouni KM, Al-Khalidi U. Medical records, patients outcome, and peer review in eleventh-century Arab medicine. Annals of Saudi Medicine 1997;17(3):326-327.
  16. Codman EA. A Study in Hospital Efficiency. Boston, MA: T Todd Company; 1917.
  17. Ponton TR, Gauging efficiency of hospital and its staff. Modern Hospital 1928;31(Aug):64-68.
  18. Lembcke PA. Evolution of the medical audit. JAMA. 1967;199(8):111-118.
  19. Legge D. Peer review in the USA: an historical perspective. Medical Journal of Australia 1981;1:709-711.
  20. Shaw CD. Aspects of audit in British hospitals. BMJ 1980 (May31):1314-1316.
  21. Jamtvedt G, Young JM, Kristofferen DT, O’Brien MA, Oxman AD. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Quality and Safety in Health Care 2006;15:433-436.
  22. Sanazaro PJ, Mills DH. A critique of the use of generic screening in quality assessment. JAMA. 1991;265(15):1977-1981.
  23. Goldman RL. The reliability of peer assessments: A meta-analysis. Evaluation and the Health Professions 1994;17(1):3-21.
  24. Lichtmacher A. Quality assessment tools: ACOG Voluntary Review of Quality Program, peer review reporting system. Obstet Gynecol Clin North Am. 2008;35(1):147-162.
  25. Edwards MT. Clinical peer review program self-evaluation for US hospitals. Am J Med Qual. 2010;25(6):474-480.
  26. Edwards MT. The objective impact of clinical peer review on hospital quality and safety. Am J Med Qual. 2010; published online before print December 15, doi:10.1177/1062860610380732.
  27. Edwards MT. A Longitudinal Study of Clinical Peer Review's Impact on Quality and Safety in U.S. Hospitals. J Healthcare Manage 2013;58(5):369-384.
  28. Institute of Medicine, Lohr KN, ed. Medicare: A Strategy for Quality Assurance. Washington, DC: National Academy Press; 1990. (see chapter 7)
  29. Rubin HR, Rogers WH, Kahn KL, Rubenstein LV, Brook RH. Watching the doctor watchers: how well do peer review organization methods detect hospital quality of care problems? JAMA. 1992;267(17):2349-2354.
  30. Roth RR, Porter PJ, Bisbey GR, May CR. The attitudes of family physicians toward the peer review process. Archives of Family Medicine 1993;2(12):1271-5.
  31. The American Health Quality Foundation. Quality Improvement Organizations and Health Information Exchange. Washington, DC: March 6, 2006. (page 14)
  32. http://www.nursecredentialing.org/Magnet/ProgramOverview/GrowthoftheProgram.aspx
  33. Davis KK, Capozzoli J, Parks J. Implementing Peer Review: Guidelines for Managers and Staff. Nursing Admin Q. 2009;33(3):251-257
  34. Rout A, Roberts P. Peer review in nursing and midwifery: a literature review. J Clin Nursing. 2008;17:427-442.
  35. Hitchings KS, Davies-Hathen N, Capuano TA, Morgan G, Bendekovits R. Peer case review sharpens event analysis. J Nurs Care Qual. 2008;23(4):296-304.
  36. Pearson ML, Lee JL, Chang BL, Elliott M, Kahn KL, Rubenstein LV. Structured implicit review: a new method for monitoring nursing care quality. Med Care. 2000 Nov;38(11):1074-1091.
  37. American Nurses Association. (1988). Peer review in nursing practice. Kansas City, MO
  38. Haag-Heitman, B & George, V. (2011). Nursing Peer Review: Principles and Practice. American Nurse Today (Special Magnet Edition), 6 (3), 61-63.
  39. Haag-Heitman, B. & George, V. (2011). Peer Review In Nursing: Principles for Successful Practice. Sudbury, MA: Jones and Bartlett
  40. Haag-Heitman, B. & George, V. (2010). Guide for Establishing Shared Governance: A Starter’s Tool Kit. Silver Spring, MD: American Nurses Credentialing Center (ANCC)
  41. George, V. & Haag-Heitman, B. (2011). Nursing Peer Review: The Manager’s Role. Journal of Nursing Management, 19(2).
  42.
  43. http://www.nursecredentialing.org/Magnet/ProgramOverview/GrowthoftheProgram.aspx
  44. Davis KK, Capozzoli J, Parks J. Implementing Peer Review: Guidelines for Managers and Staff. Nursing Admin Q. 2009;33(3):251-257
  45. Pearson ML, Lee JL, Chang BL, Elliott M, Kahn KL, Rubenstein LV. Structured implicit review: a new method for monitoring nursing care quality. Med Care. 2000 Nov;38(11):1074-1091.
  46. Hitchings KS, Davies-Hathen N, Capuano TA, Morgan G, Bendekovits R. Peer case review sharpens event analysis. J Nurs Care Qual. 2008;23(4):296-304.
  47. Rout A, Roberts P. Peer review in nursing and midwifery: a literature review. J Clin Nursing. 2008;17:427-442.
  48. American Medical Foundation for Peer Review and Education, home page.
  49.
  50. "Inappropriate Peer Review. Report of the Board of Trustees of the American Medical Association."
  51. California law allows "aggrieved physicians the opportunity to prove that the peer review to which they were subject was in fact carried out for improper purposes, i.e., for purposes unrelated to assuring quality care or patient safety".
  52.
  53.
  54. Mileikowsky v. Tenet Healthsystem (April 18, 2005) 128 Cal.App.4th 531, 27 Cal.Rptr.3d 171.
  55.
  56. Mileikowsky v. West Hills Hosp. and Medical Center (2009) 45 Cal.4th 1259, 203 P.3d 1113, 91 Cal.Rptr.3d 516.
  57. Horvit M and Jarviss J, "Board revokes doctor's license," Fort Worth Star-Telegram (TX), 12 June 2004, p.1B
  58. Mitchell M, "Former Texas neurosurgeon granted licenses in West Virginia," Fort Worth Star-Telegram (TX), 7 July 2005.
