Investing in an Early Developing Country

Globalization was the core consideration in selecting the type of company. Since I was more inclined toward a service company, particularly in the banking and financial services sector, I settled on Capital One Financial, a reputed banking corporation with a global presence and a place among the Fortune 500 companies. Capital One Financial Corp. is a bank holding company based in the U.S. that focuses on auto loans, home loans, and credit card banking, along with savings products (Icon Group International, Inc. Staff and Icon Group Ltd 12-15). A member of the Fortune 500, the corporation helped establish the mass marketing of credit cards in the early 1990s; it is currently the fourth-largest customer of the United States Postal Service, and its deposit portfolio is ranked fifth in the country (Paige 14). Capital One Financial is the parent corporation of Capital One Auto Finance (COAF), based in Plano, Texas. After acquiring PeopleFirst, it became the largest Internet auto lender and one of the top-ranked US auto lenders overall (Hitt et al 85). Kenya is my country of choice for investment for a number of reasons. First, Kenya is the fastest-growing economy in the region, and its robust performance makes it a viable destination for investment (Ndung’u, Collier and Adam 89-92). Commercially, Kenya has made numerous gains, and its financial sector, along with the general economic environment, operates on contemporary economic standards. Kenya’s financial and banking sector is among the most robust and lucrative, not only in East Africa but worldwide. Therefore, investing in the Kenyan financial and banking sector is a lucrative idea. The investment plan by Capital One Financial in Kenya’s financial and banking system will be organized in a number of stages to achieve the required results (Goodman and Downes 106).
In essence, the investment program will reflect the relevant realities in Kenya regarding the investment protocols that must be followed. The investment will be made through joint ventures, which represent the most convenient way of investing in Kenya. Capital One Financial will therefore seek a joint venture with local banks in Kenya, through which it will launch its services and operations in conjunction with the local bank. The local partner will be chosen carefully to make certain that the concerns and goals of the investing company are safeguarded. Nevertheless, the option of foreign direct investment (FDI) will be left open so that Capital One Financial may invest directly in the Kenyan financial system; this, however, will depend on the probability of success of FDI by the company and on the consent of the Kenyan authorities. Financial banking is the discipline of managing money and other valuables pertaining to a particular business. Banks offer basic loans, deposits, and financial advice, and they also facilitate transactions in complicated financial instruments such as private equity, bonds, and mutual funds (IBP USA Staff 56-61). Top-performing candidates typically perceive careers in banking as the pinnacle of accomplishment, and areas such as treasury, equity trading, investment banking, and private banking are perceived as the most rewarding jobs for new graduates.

Global Paper and Products Industry: Porter’s Six Forces Analysis

Porter’s six forces analysis of the global paper industry involves the following factors: the threat of new entrants, rivalry among existing firms, the threat of substitute products or services, the bargaining power of buyers, the bargaining power of suppliers, and the relative power of other stakeholders. The factors are explained below.

Threat of New Entrants

Economies of Scale
Global paper usage has grown alongside growth in Gross Domestic Product (GDP), and in certain cases paper usage has held steady even where GDP growth was almost stagnant. The global paper industry produces about $750 billion of paper products each year and comprises many small enterprises globally (Scheihing, 2005).

Product Differentiation
Product differentiation is one of the most challenging means of expanding or intensifying a business or industry. With the intensive use of the internet, paper usage is decreasing day by day; as a result, firms in the global paper industry are trying to diversify or differentiate their products and expand their product lines, especially on the basis of quality (Scheihing, 2005).

Capital Requirement
A new entrant to the paper industry initially requires around $4.5 million in capital, which can be a demanding threshold for positioning itself in the industry (Scheihing, 2005).

Switching Cost
The switching cost in the paper industry is low. The scope for new entrants is therefore high, as entrants can switch to another industry at minimal cost at any time if they cannot compete in the paper industry (Uronen, 2010).

Access to Distribution Channels
The paper industry’s distribution channel comprises various dealers, wholesalers, and retailers who serve the ultimate customers in the corporate and educational sectors. Notably, such well-developed channels encourage new entrants to enter the existing market (Uronen, 2010).

Cost Disadvantages Independent of Size
High installation and maintenance costs reduce the probability of new entrants. However, because firms are free to determine their own size, the scope for new entrants rises, indicating a moderate overall threat of new entrants (Uronen, 2010).

Government
In relation to the global paper industry, governments have implemented various rules, norms, laws, and regulations. In addition, many associations protest against the paper industry over its use of forest products and the resulting deforestation (Uronen, 2010).

Rivalry among Existing Firms

Number of Competitors
Many paper mills and companies operate within the global paper industry; the top five competitors include Paper Associates Pty. Ltd., International Paper Company, Kimberly-Clark de México, Georgia-Pacific LLC, and Svenska Cellulosa Aktiebolaget (SCA), among others (SKC, 2012).

Rate of Industry Growth
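The qualitative assessments above can be recorded side by side so the forces are easy to compare. The following is an illustrative sketch only: the ratings and the `summarize` helper are assumptions for illustration, drawn loosely from the analysis, not from the cited sources.

```python
# Illustrative sketch: recording a Porter's six forces assessment as data,
# with qualitative ratings (assumed here) taken from the analysis above.

PAPER_INDUSTRY_FORCES = {
    "threat of new entrants": "moderate",      # ~$4.5M capital needed, but switching costs are low
    "rivalry among existing firms": "high",    # many mills, five dominant competitors
    "threat of substitutes": "rising",         # internet use is displacing paper
    "bargaining power of buyers": "not assessed",
    "bargaining power of suppliers": "not assessed",
    "relative power of other stakeholders": "high",  # regulation, anti-deforestation groups
}

def summarize(forces):
    """Return one line per force, e.g. 'rivalry among existing firms: high'."""
    return "\n".join(f"{name}: {rating}" for name, rating in forces.items())

print(summarize(PAPER_INDUSTRY_FORCES))
```

Keeping the ratings in one structure makes it straightforward to re-score a force as new evidence (for example, on buyer or supplier power) becomes available.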

The Key Aspects of Criminal Law

death, or the intent to cause grievous bodily harm; almost all kinds of criminal offenses require a demonstration of mens rea to establish culpability for the offense (Mens rea, 2006). In this case, it can be seen that Amelia’s mens rea was to cause the death of her husband, and therefore, prima facie, Amelia is guilty of first-degree murder or manslaughter. However, there are mitigating circumstances, which shall presently be considered. In this case, Amelia was a victim of continuous torture and violence over a period of time and could be said to be suffering from battered wife syndrome.
In the leading case of R v. Ahluwalia (1992) 4 AER 889, the woman killed her abusive and aggressive husband and claimed provocation as the precipitating cause for her actions. The Court questioned the jury as to whether an educated woman living in the UK could have lost her self-control to such an extent that she needed to take recourse to murdering her husband. The defense pleaded that the loss of self-control was caused by the physical and emotional battering the wife had endured over a long period of time, which forced her to take such an extreme step. Considering the mitigating circumstances of the case, the Court called for a retrial on the basis of the fresh medical evidence that emerged (Judgments – Regina v. Smith (On Appeal from the Court of Appeal (Criminal Division))). The features of the Ahluwalia case were also seen in R v. Thornton (No 2) 1996 2 AER 1023, which is akin to the Ahluwalia case. In this case, the defendant claimed to be suffering from a mental disease, and the Court ordered a retrial on these extenuating grounds (Judgments – Regina v. Smith (On Appeal from the Court of Appeal (Criminal Division))).
As far as the mens rea regarding Amelia’s contribution to the death of her baby is concerned, in all probability mens rea cannot be established, since there is no ostensible intention of either killing the baby or causing her grievous bodily harm.

Wk7articles

Critical Appraisal of the Evidence: Part I (AJN, July 2010, Vol. 110, No. 7)

In May’s evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital’s expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database’s own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed, six of the 79 studies found in CINAHL, and the one study found in the Cochrane Database of Systematic Reviews, because they best answered the clinical question. As a final step, at Lynne’s recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each retained study, looking for any relevant studies they hadn’t found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.

RAPID CRITICAL APPRAISAL

The next time Rebecca and Carlos meet, they discuss the next step in the EBP process: critically appraising the 26 studies.
They obtain copies of the studies by printing those that are immediately available as full text through library subscription or those flagged as “free full text” by a database or journal’s Web site. Others are available through interlibrary loan, when another hospital library shares its articles with Rebecca and Carlos’s hospital library. Carlos explains to Rebecca that the purpose of critical appraisal isn’t solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine:
• its level of evidence
• how well it was conducted
• how useful it is to practice
Once they determine which studies are “keepers,” Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two installments of the series). These final steps will determine whether overall findings from the evidence review can help clinicians improve patient outcomes. Rebecca is a bit apprehensive because it’s been a few years since she took a research class. She shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca’s EBP team. Chen’s spirit of inquiry encourages Rebecca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse manager on their medical–surgical unit, who agrees to let them use their allotted continuing education time to work on this project, after they discuss their expectations for the project and how its outcome may benefit the patients, the unit staff, and the hospital.

Learning research terminology.
At the first meeting of the new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sample, independent variable, and dependent variable. The glossary also defines some of the study designs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qualitative, and descriptive studies. (For the definitions of these terms and others, see the glossaries provided by the Center for the Advancement of Evidence-Based Practice at the Arizona State University College of Nursing and Health Innovation [http://nursingandhealth.asu.edu/evidence-based-practice/resources/glossary.htm] and the Boston University Medical Center Alumni Medical Library [http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cfm#R].)

Determining the level of evidence. The team begins to divide the 26 studies into categories according to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study’s design by reviewing its abstract. They also create an “I don’t know” pile of studies that don’t appear to fit a specific design. When they find studies that don’t actively answer the clinical question but may inform thinking, such as descriptive research, expert opinions, or guidelines, they put them aside. Carlos explains that they’ll be used later to support Rebecca’s case for having a rapid response team (RRT) in her hospital, should the evidence point in that direction.

Hierarchy of Evidence for Intervention Studies

Level I: Systematic review or meta-analysis. A synthesis of evidence from all relevant randomized controlled trials.
Level II: Randomized controlled trial. An experiment in which subjects are randomized to a treatment group or control group.
Level III: Controlled trial without randomization. An experiment in which subjects are nonrandomly assigned to a treatment group or control group.
Level IV: Case-control or cohort study. Case-control study: a comparison of subjects with a condition (case) with those who don’t have the condition (control) to determine characteristics that might predict the condition. Cohort study: an observation of a group(s) (cohort[s]) to determine the development of an outcome(s) such as a disease.
Level V: Systematic review of qualitative or descriptive studies. A synthesis of evidence from qualitative or descriptive studies to answer a clinical question.
Level VI: Qualitative or descriptive study. Qualitative study: gathers data on human behavior to understand why and how decisions are made. Descriptive study: provides background information on the what, where, and when of a topic of interest.
Level VII: Expert opinion or consensus. Authoritative opinion of an expert committee.
Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.

After the studies, including those in the “I don’t know” group, are categorized, 15 of the original 26 remain and will be included in the RCA: three systematic reviews that include one meta-analysis (Level I evidence), one randomized controlled trial (Level II evidence), two cohort studies (Level IV evidence), one retrospective pre-post study with historic controls (Level VI evidence), four preexperimental (pre-post) intervention studies with no control group (Level VI evidence), and four EBP implementation projects (Level VI evidence). Carlos reminds Rebecca and Chen that Level I evidence, a systematic review of randomized controlled trials or a meta-analysis, is the most reliable and the best evidence to answer their clinical question.

Using a critical appraisal guide. Carlos recommends that the team use a critical appraisal checklist (see Critical Appraisal Guide for Quantitative Studies) to help evaluate the 15 studies. This checklist is relevant to all studies and contains questions about the essential elements of research (such as purpose of the study, sample size, and major variables).

Critical Appraisal Guide for Quantitative Studies

1. Why was the study done? Was there a clear explanation of the purpose of the study and, if so, what was it?
2. What is the sample size? Were there enough people in the study to establish that the findings did not occur by chance?
3. Are the instruments of the major variables valid and reliable? How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)?
4. How were the data analyzed? What statistics were used to determine if the purpose of the study was achieved?
5. Were there any untoward events during the study? Did people leave the study and, if so, was there something special about them?
6. How do the results fit with previous research in the area? Did the researchers base their work on a thorough literature review?
7. What does this research mean for clinical practice? Is the study purpose an important clinical issue?
Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.

The questions in the critical appraisal guide seem a little strange to Rebecca and Chen. As they review the guide together, Carlos explains and clarifies each question. He suggests that as they try to figure out which are the essential elements of the studies, they focus on answering the first three questions: Why was the study done? What is the sample size? Are the instruments of the major variables valid and reliable? The remaining questions will be addressed later on in the critical appraisal process (to appear in future installments of this series).

Creating a study evaluation table. Carlos provides an online template for a table where Rebecca and Chen can put all the data they’ll need for the RCA. Here they’ll record each study’s essential elements that answer the three questions and begin to appraise the 15 studies. (To use this template to create your own evaluation table, download the Evaluation Table Template at http://links.lww.com/AJN/A10.)

EXTRACTING THE DATA

Starting with Level I evidence studies and moving down the hierarchy list, the EBP team takes each study and, one by one, finds and enters its essential elements into the first five columns of the evaluation table (see Table 1; to see the entire table with all 15 studies, go to http://links.lww.com/AJN/A11). The team discusses each element as they enter it, and tries to determine if it meets the criteria of the critical appraisal guide.

Table 1. Evaluation Table, Phase I (summary of the first entries; the original tabular layout did not survive extraction). Chan PS, et al. Arch Intern Med 2010;170(1):18-26: SR of 18 studies (5 databases searched; 13 adult and 5 pediatric acute care hospitals), effect of RRT on HMR and CR. McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529: Cochrane SR of 2 studies (6 databases searched; 24 adult hospitals; attrition NR), effect of RRT on HMR. Winters BD, et al. Crit Care Med 2007;35(5):1238-43: SR of 8 studies from 1990-2005, only studies with a control group included (average 500 beds; attrition NR), effect of RRT on HMR and CR. Hillman K, et al. Lancet 2005;365(9477):2091-7: cluster RCT (MERIT) of 23 Australian hospitals (average 340 beds; 12 intervention, 11 control; no attrition), effect of an RRT protocol over 6 months on HMR (unexpected deaths, excluding DNRs), CR (excluding DNRs), and UICUA. Shaded columns in the original indicate where data will be entered in future installments of the series.
Abbreviations: CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; HMR = hospital-wide mortality rates; NR = not reported; RCT = randomized controlled trial; RRT = rapid response team; SR = systematic review; UICUA = unplanned ICU admissions.
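The seven-level evidence hierarchy described above behaves like a lookup table: given a study's design, you can read off its level of evidence. A minimal sketch, assuming Python and a hypothetical `level_of_evidence` helper (not part of the article):

```python
# Illustrative sketch: the hierarchy of evidence for intervention studies as
# a lookup from study design to level (I = strongest, VII = weakest).

EVIDENCE_LEVELS = {
    "systematic review or meta-analysis": "I",
    "randomized controlled trial": "II",
    "controlled trial without randomization": "III",
    "case-control or cohort study": "IV",
    "systematic review of qualitative or descriptive studies": "V",
    "qualitative or descriptive study": "VI",
    "expert opinion or consensus": "VII",
}

def level_of_evidence(design):
    """Return the hierarchy level (I-VII) for a study design, ignoring case."""
    return EVIDENCE_LEVELS[design.strip().lower()]

print(level_of_evidence("Randomized controlled trial"))  # prints: II
```

Recording the design name consistently in the evaluation table (as Carlos advises for study purposes) is what makes this kind of mapping usable.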
These elements, such as purpose of the study, sample size, and major variables, are typical parts of a research report and should be presented in a predictable fashion in every study so that the reader understands what’s being reported.

As the EBP team continues to review the studies and fill in the evaluation table, they realize that it’s taking about 10 to 15 minutes per study to locate and enter the information. This may be because when they look for a description of the sample, for example, it’s important that they note how the sample was obtained, how many patients are included, and other characteristics of the sample, as well as any diagnoses or illnesses the sample might have that could be important to the study outcome. They discuss with Carlos the likelihood that they’ll need a few sessions to enter all the data into the table. Carlos responds that the more studies they do, the less time it will take. He also says that it takes less time to find the information when study reports are clearly written, and that usually the important information can be found in the abstract.

Rebecca and Chen ask if it would be all right to take out the “Conceptual Framework” column, since none of the studies they’re reviewing have conceptual frameworks (which help guide researchers as to how a study should proceed). Carlos replies that it’s helpful to know that a study has no framework underpinning the research and suggests they leave the column in. He says they can further discuss this point later on in the process when they synthesize the studies’ findings.

As Rebecca and Chen review each study, they enter its citation in a separate reference list so that they won’t have to create this list at the end of the process. The reference list will be shared with colleagues and placed at the end of any RRT policy that results from this endeavor.

Carlos spends much of his time answering Rebecca’s and Chen’s questions concerning how to phrase the information they’re entering in the table. He suggests that they keep it simple and consistent. For example, if a study indicated that it was implementing an RRT and hoped to see a change in a certain outcome, the nurses could enter “change in [the outcome] after RRT” as the purpose of the study. For studies examining the effect of an RRT on an outcome, they could say as the purpose, “effect of RRT on [the outcome].” Using the same words to describe the same purpose, even though it may not have been stated exactly that way in the study, can help when they compare studies later on.

Rebecca and Chen find it frustrating that the study data are not always presented in the same way from study to study. They ask Carlos why the authors or journals wouldn’t present similar information in a similar manner. Carlos explains that the purpose of publishing these studies may have been to disseminate the findings, not to compare them with other like studies. Rebecca realizes that she enjoys this kind of conversation, in which she and Chen have a voice and can contribute to a deeper understanding of how research impacts practice.

As Rebecca and Chen continue to enter data into the table, they begin to see similarities and differences across studies. They mention this to Carlos, who tells them they’ve begun the process of synthesis! Both nurses are encouraged by the fact that they’re learning this new skill.

The MERIT trial is next in the stack of studies and it’s a good trial to use to illustrate this phase of the RCA process. Set in Australia, the MERIT trial examined whether the introduction of an RRT (called a medical emergency team, or MET, in the study) would reduce the incidence of cardiac arrest, unplanned admissions to the ICU, and death in the hospitals studied. See Table 1 to follow along as the EBP team finds and enters the trial data into the table.

Design/Method. After Rebecca and Chen enter the citation information and note the lack of a conceptual framework, they’re ready to fill in the “Design/Method” column. First they enter RCT for randomized controlled trial, which they find in both the study title and introduction. But MERIT is called a “cluster-randomised controlled trial,” and cluster is a term they haven’t seen before. Carlos explains that it means that hospitals, not individuals or patients, were randomly assigned to the RRT. He says that the likely reason the researchers chose to randomly assign hospitals is that if they had randomly assigned individual patients or units, others in the hospital might have heard about the RRT and potentially influenced the outcome. To randomly assign hospitals (instead of units or patients) to the intervention and comparison groups is a cleaner research design.

… the RRTs were activated and provided their protocol for calling the RRTs. However, these elements might be helpful to the EBP team later on when they make decisions … continue the work, as long as Carlos is there to help.

In applying these principles for evaluating research studies to your own search for the evidence to answer your PICOT question, remember that this series can’t contain all the available information about research methodology. Fortunately, there are many good resources available in books and online. For example, to find out more about sample size, which can affect the likelihood that researchers’ results occur by chance (a random finding) rather than that the intervention brought about the expected outcome, search the Web using terms that describe what you want to know. If you type “sample size findings by chance” in a search engine, you’ll find several Web sites that can help you better understand this study essential.
Be sure to join the EBP team in the next installment of the se- ries, “Critical Appraisal of the Evidence: Part II,” when Rebecca and Chen will use the MERIT trial to illustrate the next steps in the RCA process, complete the rest of the evaluation table, and dig a little deeper into the studies in order to detect the “keepers.” ▼ Ellen Fineout­Overholt is clinical profes­ sor and director of the Center for the Advancement of Evidence­Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwellis clinical associate professor and pro­ gram coordinator of the Nurse Educator Evidence­Based Practice Mentorship Program, and Kathleen M. Williamsonis associate director of the Center for the Advancement of Evidence­Based Practice. Contact author: Ellen Fineout­Overholt, ellen.fineout­overholt@asu.edu. REFERENCE 1.Hillman K, et al. Introduction ofthe medical emergency team (MET) system: a cluster-randomised con- trolled trial. Lancet 2005;365(9477): 2091-7. Keep the data in the table consistent by using simple, inclusive terminology. To keep the study purposes consistent among the studies in the RCA, the EBP team uses inclu- sive terminology they developed after they noticed that different trials had different ways of de- scribing the same objectives. Now they write that the purpose of the MERIT trial is to see if an RRT can reduce CR, for cardiopulmo- nary arrest or code rates, HMR, for hospital-wide mortality rates, and UICUA for unplanned ICU admissions. They use those same terms consistently throughout the evaluation table. Sample/Setting. A total of 23 hospitals in Australia with an average of 340 beds per hospi- tal is the study sample. Twelve hospitals had an RRT (the inter- vention group) and 11 hospitals didn’t (the control group). Major Variables Studied. 
The independent variable is the vari- able that influences the outcome (in this trial, it’s an RRT for six months). The dependent vari- able is the outcome (in this case, HMR, CR, and UICUA). In this trial, the outcomes didn’t include do-not-resuscitate data. The RRT was made up of an attending phy- sician and an ICU or ED nurse. While the MERIT trial seems to perfectly answer Rebecca’s PICOT question, it contains ele- ments that aren’t entirely relevant, such as the fact that the research- ers collected information on how about implementing an RRT in their hospital. So that they can come back to this information, they place it in the last column, “Appraisal: Worth to Practice.” After reviewing the studies to make sure they’ve captured the essential elements in the evalua- tion table, Rebecca and Chen still feel unsure about whether the in- formation is complete. Carlos reminds them that a system-wide practice change—such as the change Rebecca is exploring, that of implementing an RRT in her hospital—requires careful consid- eration of the evidence and this is only the first step. He cautions them not to worry too much about perfection and to put their efforts into understanding the information in the studies. He re- minds them that as they move on to the next steps in the critical appraisal process, and learn even more about the studies and proj- ects, they can refine any data in the table. Rebecca and Chen feel uncomfortable with this uncer- tainty but decide to trust the pro- cess. They continue extracting data and entering it into the table even though they may not com- pletely understand what they’re entering at present. They both realize that this will be a learn- ing opportunity and, though the learning curve may be steep at times, they value the outcome of improving patient care enough to 52 AJN ▼ July 2010 ▼ Vol. 110, No. 7 ajnonline.com Critical Appraisal of the Evidence: Part II Digging deeper—examining the “keeper” studies. 
By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" As part of their rapid critical appraisal (RCA) of the 15 potential "keeper" studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table.
In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL
Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a "good" research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible.
Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.

ajn@wolterskluwer.com AJN ▼ September 2010 ▼ Vol. 110, No. 9

Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see "Searching for the Evidence," and "Critical Appraisal of the Evidence: Part I," Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential "keeper" studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary.
Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews. The MERIT trial1 is next in the EBP team's stack of studies.

Example of a Rapid Critical Appraisal Checklist
Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments
1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials?
B. Does the review include a detailed description of the search strategy used to find the relevant studies?
C. Does the review describe how the validity of the individual studies was assessed (such as methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)?
D. Are the results consistent across studies?
E. Did the analysis use individual patient data or aggregate data?
2. What are the results?
A. How large is the intervention or treatment effect (odds ratio, relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence interval)?
3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review?
B. Is it feasible to implement the findings in my practice setting?
C. Were all clinically important outcomes considered, including both risks and benefits of the treatment?
D. What is my clinical assessment of the patient, and are there any contraindications or circumstances that would keep me from implementing the treatment?
E. What are my patients' and their families' preferences and values concerning the treatment?
© Fineout-Overholt and Melnyk, 2005.
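The checklist's question-and-answer structure lends itself to a simple written record. Below is a hypothetical sketch only, not part of the article's method: the question wording is abbreviated and the answers are invented to show how a reviewer might flag validity questions that weren't answered "Yes."

```python
# Hypothetical sketch: recording one study's RCA checklist answers in a
# simple mapping (question text abbreviated, answers invented), then
# flagging validity-section items that weren't answered "Yes".

checklist = {
    "1A. Studies are randomized controlled trials?": "Yes",
    "1B. Detailed search strategy described?": "Yes",
    "1C. Validity of individual studies assessed?": "Yes",
    "1D. Results consistent across studies?": "No",
    "1E. Individual patient data or aggregate data?": "Aggregate",
    "2A. Intervention effect size reported?": "Yes",
    "2B. Precision (confidence interval) reported?": "Yes",
}

# Each "No" in the validity section (questions beginning with "1") must
# be weighed: does it make the findings untrustworthy, or merely less
# than ideal?
concerns = [q for q, a in checklist.items() if q.startswith("1") and a == "No"]
print(concerns)
```

As the team's discussion makes clear, a "No" here is a prompt for judgment, not an automatic disqualifier.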
(In the checklist, each question is answered Yes or No; question 1E is answered Patient or Aggregate.)

As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.

ARE THE RESULTS OF THE STUDY VALID?
The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that if the study's conclusion is to be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether or not rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of "yes," "no," or "unknown."

Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn't randomly assign participants, then bias could be present.
In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: "Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups."2

Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they're destined for the intervention group or with obvious indifference if they're intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.

Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their "no" answer to this question makes the study findings invalid. Carlos says that a single "no" may or may not mean that the study findings are invalid. It's their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn't affirmative, they must each ask themselves: does this "no" make the study findings untrustworthy to the extent that I don't feel comfortable using them in my practice?
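Cluster randomization as described above can be sketched in a few lines. This is a hypothetical illustration only: the hospital labels and the random seed are invented, and MERIT's actual allocation procedure is not reproduced here, just the idea of allocating whole clusters rather than individual patients.

```python
# Hypothetical sketch of cluster randomization: whole hospitals
# (clusters), not individual patients, are randomly allocated to the
# intervention (RRT) or control group. Labels and seed are invented.
import random

hospitals = [f"Hospital {i + 1}" for i in range(23)]  # 23 clusters, as in MERIT

rng = random.Random(2005)  # fixed seed so the example is reproducible
shuffled = rng.sample(hospitals, k=len(hospitals))
intervention = shuffled[:12]  # 12 hospitals implement an RRT
control = shuffled[12:]       # 11 hospitals continue usual care

print(len(intervention), len(control))  # 12 11
```

Because every patient in a given hospital lands in the same group, word of the intervention spreading inside a hospital can't contaminate that hospital's comparison group, which is exactly the rationale Carlos gives.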
Were reasons given to explain why subjects didn't complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it's important to look for an explanation for why any subjects didn't complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.

Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).

Rapid Critical Appraisal of the MERIT Study
1. Are the results of the study valid?
A. Were the subjects randomly assigned to the intervention and control groups? (Yes / No / Unknown)
Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (control) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially influencing the outcome.
B. Was random assignment concealed from the individuals enrolling the subjects?
(Yes / No / Unknown)
An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been collected; thus the assignments were concealed from both researchers and participants.
C. Were the subjects and providers blind to the study group? (Yes / No / Unknown)
Hospitals knew to which group they'd been assigned, as the intervention hospitals had to put the RRTs into practice. Management, ethics review boards, and code committees in both hospitals knew about the intervention. The control hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals didn't have a placebo strategy to match the intervention hospitals' educational strategy for how to implement an RRT (a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the groups that gave approval, you wouldn't have known your hospital was participating in a study on RRTs; this lessens the chance of confounding variables influencing the outcomes.
D. Were reasons given to explain why subjects didn't complete the study? (Yes / No / Not Applicable)
This question is not applicable as no hospitals dropped out of the study.
E. Were the follow-up assessments long enough to fully study the effects of the intervention? (Yes / No / Unknown)
The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of cardiopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However, the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.
F. Were the subjects analyzed in the group to which they were randomly assigned? (Yes / No / Unknown)
All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-to-treat basis.
However, in their discussion, the authors attempt to provide a reason for the disappointing study results; they suggest that because the intervention was "inadequately implemented," the fidelity of the intervention was compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder: in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.
G. Was the control group appropriate? (Yes / No / Unknown)
See notes to question C. Controls had no time built in for education and training as the intervention hospitals did, so this time wasn't controlled for, nor was there any known attempt to control the organizational "buzz" that something was going on. The study also didn't account for the variance in how RRTs were implemented across hospitals. The researchers indicate that the existing code teams in control hospitals "did operate as [RRTs] to some extent." Because of these factors, the appropriateness of the control group is questionable.
H. Were the instruments used to measure the outcomes valid and reliable? (Yes / No / Unknown)
The primary outcome was the composite of HMR (that is, unexpected deaths, excluding do not resuscitates [DNRs]), CR (that is, no palpable pulse, excluding DNRs), and UICUA (any unscheduled admissions to the ICU).
I. Were the demographics and baseline clinical variables of the subjects in each of the groups similar? (Yes / No / Unknown)
The researchers provided a table showing how the RRT and control hospitals compared on several variables. Some variability existed, but there were no statistical differences between groups.
2. What are the results?
A. How large is the intervention or treatment effect?
The researchers reported outcome data in various ways, but the bottom line is that the control group did better than the intervention group. For example, RRT calling criteria were documented more than 15 minutes before an event by more hospitals in the control group than in the intervention group, which is contrary to expectation. Half the HMR cases in the intervention group met the criteria compared with 55% in the control group (not statistically significant). But only 30% of CR cases in the intervention group met the criteria compared with 44% in the control group, which was statistically significant (P = 0.031). Finally, regarding UICUA, 51% in the intervention group compared with 55% in the control group met the criteria (not significant). This indicates that the control hospitals were doing a better job of documenting unstable patients before events occurred than the intervention hospitals.
B. How precise is the intervention or treatment?
The odds ratio (OR) for each of the outcomes was close to 1.0, which indicates that the RRT had no effect in the intervention hospitals compared with the control hospitals. Each confidence interval (CI) also included the number 1.0, which indicates that each OR wasn't statistically significant (HMR OR = 1.03 [0.84–1.28]; CR OR = 0.94 [0.79–1.13]; UICUA OR = 1.04 [0.89–1.21]). From a clinical point of view, the results aren't straightforward. It would have been much simpler had the intervention hospitals and the control hospitals done equally badly; but the fact that the control hospitals did better than the intervention hospitals raises many questions about the results.
3. Will the results help me in caring for my patients?
A. Were all clinically important outcomes measured? (Yes / No / Unknown)
It would have been helpful to measure cost, since participating hospitals that initiated an RRT didn't eliminate their code team. If a hospital has two teams, is the cost doubled? And what's the return on investment?
There’s also no mention of the benefits of the code team. This is a curious question . . . maybe another PICOT question? B. What are the risks and benefits of the treatment? This is the wrong question for an RRT. The appropriate question would be: What is the risk of not adequately introduc- ing, monitoring, and evaluating the impact of an RRT? C. Is the treatment feasible in my clinical setting? Yes No Unknown We have administrative support, once we know what the evidence tells us. Based on this study, we don’t know much more than we did before, except to be careful about how we approach and evaluate the issue. We need to keep the following issues, which the MERIT researchers raised in their discussion, in mind: 1) allow adequate time to measure outcomes; 2) some outcomes may be reliably measured sooner than others; 3) the process of implementing an RRT is very important to its success. D. What are my patients’ and their families’ values and expectations for the outcome and the treatment itself? We will keep this in mind as we consider the body of evidence. ajn@wolterskluwer.com AJN ▼ September 2010 ▼ Vol. 110, No. 9 45 Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hos- pitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT’s unexpected findings.1 Was the control group appro- priate? Carlos explains that it’s challenging to establish an ap- propriate comparison or control group without an understanding of how the intervention will be implemented. 
In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team in discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar?
Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?
As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome.
In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning

AJN ▼ September 2010 ▼ Vol. 110, No. 9

A Sampling of Statistics

Odds Ratio (OR)
Simple definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
Important parameters: If an OR is equal to 1, then the intervention didn't make a difference. Interpretation depends on the outcome: if the outcome is good (for example, fall prevention), the OR is preferred to be above 1; if the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Understanding the statistic: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84–1.28). The odds of HMR in the intervention group were about the same as HMR in the comparison group.
Clinical implications: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Simple definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
Important parameters: If an RR is equal to 1, then the intervention didn't make a difference. Interpretation depends on the outcome: if the outcome is good (for example, fall prevention), the RR is preferred to be above 1; if the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Understanding the statistic: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic review(a) as 0.66 (95% CI, 0.54–0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implications: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Simple definition: The range in which clinicians can expect to get results if they present the intervention as it was in the study.
Important parameters: CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study. CI should be narrow around the study finding, not wide. If a CI contains the number that indicates no effect (for OR it's 1; for effect size it's 0), the study finding is not statistically significant.
Understanding the statistic: See the two previous examples. In the Chan PS, et al., 2010 systematic review,(a) the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54–0.80.
Clinical implications: The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X̄)
Simple definition: Average.
Important parameters: Caveat: averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color.
Understanding the statistic: Children with hair color different from the predominant hair color aren't captured and are considered outliers (those who don't converge around the mean). In the Dacey MJ, et al., 2007 study,(a) before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implications: Introducing an RRT decreased the average CR by more than 50% (7.6 to 3 per 1,000 discharges per month).

(a) For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.

as an RRT prior to RRT implementation.

How precise is the intervention or treatment? Chen wants to tackle the precision of the findings and starts with the OR for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the OR of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.

WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?
The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.
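The CI logic the team applies (no 1.0 in the interval means a statistically significant ratio statistic) can be sketched as follows. The 2x2 counts in the example are hypothetical; the two intervals checked at the end are the ones reported above (RR 0.66, 95% CI 0.54–0.80, and OR 1.03, 95% CI 0.84–1.28). The log-scale Wald interval shown is a standard textbook approximation, not necessarily the method any of the cited studies used.

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """Relative risk for a 2x2 table (a events / b non-events in the
    intervention group, c events / d non-events in the control group),
    with an approximate 95% CI computed on the log scale (Wald method)."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

def significant(lo, hi, null_value=1.0):
    """A ratio statistic (OR or RR) is statistically significant
    when its CI excludes the null value of 1.0."""
    return not (lo <= null_value <= hi)

# Intervals reported in the articles discussed above:
print(significant(0.54, 0.80))  # True  -> RR 0.66 for cardiopulmonary arrest
print(significant(0.84, 1.28))  # False -> OR 1.03 for HMR in MERIT
```

The same `significant` check with a null value of 0 would apply to an effect-size CI, matching the sidebar's note that "no effect" is 1 for an OR and 0 for an effect size.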
Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.

What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospital had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just "yes" or "no."

Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.

What are my patients' and their families' values and expectations for the outcome and the treatment itself?
Carlos asks Rebecca and Chen to discuss with their patients and their patients' families their opinion of an RRT and if they have any objections to the intervention. If there are objections, the patients or families will be asked to reveal them.

The EBP team finally completes the RCA checklists for the 15 studies and finds them all to be "keepers." There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to include it anyway because it's considered a landmark study. All the studies they've retained have something to add to their understanding of the impact of an RRT on CR, HMR, and UICUA. Carlos says that now that they've determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.

Be sure to join the EBP team for "Critical Appraisal of the Evidence: Part III" in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCES
1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.
2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation].
Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials.pdf.

Critical Appraisal of the Evidence: Part III
The process of synthesis: seeing similarities and differences across the body of evidence.
By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. See details below.
In September’s evidence- based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospi- tal’s expert EBP mentor, and Chen M., Rebecca’s nurse colleague, ra- pidly critically appraised the 15 articles they found to answer their clinical question—“In hospital- ized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?”—and determined that they were all “keepers.” The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid re- sponse team (RRT) in their hos- pital. Carlos reminds them that evaluation and synthesis are syn- ergistic processes and don’t neces- sarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time. STARTING THE EVALUATION Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 stud- ies and projects (see “Critical Ap- praisal of the Evidence: Part I,” July). Now each takes a stack of the “keeper” studies and system- atically begins adding to the table any remaining data that best re- flect the study elements pertain- ing to the group’s clinical question (see Table 1; for the entire table with all 15 articles, go to http:// links.lww.com/AJN/A17). They had agreed that a “Notes” sec- tion within the “Appraisal: Worth to Practice” column would be a good place to record the nuances of an article, their impressionsof it, as well as any tips—such as what worked in calling an RRT— that could be used later when they write up their ideas for ini- tiating an RRT at their hospital, if the evidence points in that direc- tion. 
Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

Need Help with Evidence-Based Practice? Chat with the Authors on November 16! On November 16 at 3 PM EST, join the "Chat with the Authors" call. It's your chance to get personal consultation from the experts! Dial-in early! U.S. and Canada, dial 1-800-947-5134 (International, dial 001-574-941-6964). When prompted, enter code 121028#. Go to www.ajnonline.com and click on "Podcasts" and then on "Conversations" to listen to our interview with Ellen Fineout-Overholt and Bernadette Mazurek Melnyk.

AJN ▼ November 2010 ▼ Vol. 110, No. 11

Table 1. Final Evaluation Table

Chan PS, et al. Arch Intern Med 2010;170(1):18-26.
Conceptual framework: None.
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 5 databases from 1950–2008 and "grey literature" from MD conferences. Included only 1) RCTs and prospective studies with 2) a control group or control period and 3) hospital mortality well described as outcome. Excluded 5 studies that met criteria due to no response to e-mail by primary authors.
Sample/Setting: N = 18 out of 143 potential studies. Setting: acute care hospitals; 13 adult, 5 peds. Average no. beds: NR. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR (including DNR, excluding DNR, not treated in ICU, no HMR definition). DV2: CR.
Measurement: RRT: was the MD involved? (13/16 studies reporting team structure). HMR: overall hospital deaths (see definition). CR: cardio and/or pulmonary arrest; cardiac arrest calls.
Data analysis: Frequency; relative risk.
Findings: CR: in adults, 21%–48% reduction in CR; RR 0.66 (95% CI, 0.54–0.80); in peds, 38% reduction in CR; RR 0.62 (95% CI, 0.46–0.84); 7/11 adult and 4/5 peds studies had significant reduction in CR. HMR: in adults, HMR RR 0.96 (95% CI, 0.84–1.09); in peds, HMR RR 0.79 (95% CI, 0.63–0.98).
Appraisal: Worth to practice: Weaknesses: potential missed evidence with exclusion of all studies except those with control groups; grey literature search limited to medical meetings; only included HMR and CR outcomes; no cost data. Strengths: identified no. of activations of RRT/1,000 admissions; identified variance in outcome definition and measurement (for example, 10 of 15 studies included deaths from DNRs in their mortality measurement). Conclusion: RRT reduces CR in adults, and CR and HMR in peds. Feasibility: RRT is reasonable to implement; evaluating cost will help in making decisions about using RRT. Risk/benefit (harm): benefits outweigh risks.

McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529.
Conceptual framework: None.
Design/Method: SR (Cochrane review). Purpose: effect of RRT on HMR. Searched 6 databases from 1990–2006. Excluded all but 2 RCTs.
Sample/Setting: N = 2 studies. Acute care settings in Australia and the UK. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR.
Measurement: HMR: Australia: overall hospital mortality without DNR; UK: Simplified Acute Physiology Score (SAPS) II death probability estimate.
Data analysis: OR.
Findings: OR of Australian study, 0.98 (95% CI, 0.83–1.16); OR of UK study, 0.52 (95% CI, 0.32–0.85).
Appraisal: Worth to practice: Weaknesses: didn't include full body of evidence; conflicting results of retained studies, but no discussion of the impact of lower-level evidence; recommendation "need more research." Conclusion: inconclusive.

Winters BD, et al. Crit Care Med 2007;35(5):1238-43.
Conceptual framework: None.
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 3 databases from 1990–2005. Included only studies with a control group.
Sample/Setting: N = 8 studies. Sample size (range, 2,183–199,024). Study lengths (range, 4–82 months). Average no. beds: 500. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR. DV2: CR.
Measurement: HMR: overall death rate. CR: no. of in-hospital arrests. Criteria for RRT initiation (common: respiratory rate, heart rate, blood pressure, mental status change; not all studies, but noteworthy: oxygen saturation, "worry").
Data analysis: Risk ratio.
Findings: HMR: observational studies, risk ratio for RRT on HMR, 0.87 (95% CI, 0.73–1.04); cluster RCTs, risk ratio for RRT on HMR, 0.76 (95% CI, 0.39–1.48). CR: observational studies, risk ratio for RRT on CR, 0.70 (95% CI, 0.56–0.92); cluster RCTs, risk ratio for RRT on CR, 0.94 (95% CI, 0.79–1.13).
Appraisal: Worth to practice: Strengths: provides comparison across studies. Conclusion: some support for RRT, but not reliable enough to recommend as standard of care; includes ideas about future evidence generation (conducting research)—finding out what we don't know.

CI = confidence interval; CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; DV = dependent variable; HMR = hospital-wide mortality rates; ICU = intensive care unit; IV = independent variable; MD = medical doctor; NR = not reported; OR = odds ratio; Peds = pediatrics; RCT = randomized controlled trial; RR = relative risk; RRT = rapid response team; SR = systematic review; UK = United Kingdom

The team members discuss the evolving patterns as they complete the table. The three systematic
evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the "keepers." The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence.

Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study.1-4 In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see "Critical Appraisal of the Evidence: Part II," September). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs.

Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation. Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians authoring the reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project.
While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them—looking carefully for the details that inform their clinical question.

Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose "Design/Method," "Sample/Setting," "Findings," and "Appraisal: Worth to Practice" (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis.

SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE
Design/Method. The team starts with the "Design/Method" column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table (one in which data are extracted from the evaluation table to better see the similarities and differences between studies) (see Table 2). The synthesis table makes it clear that there is less higher-level and more lower-level evidence, which will impact the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes.

Sample/Setting. In reviewing the "Sample/Setting" column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. There were several types of hospitals represented (4 teaching, 4 community, 4 no mention, 2 acute care hospitals, and 1 public hospital).
The evidence they’ve col- lected seems applicable, since their hospital is a community hospital. Findings. To help the team better discuss the evidence, Car- los suggests that they refer to all projects or studies as “the body of evidence.” They don’t want to get confused by calling them all studies, as they aren’t, but at the same time continually referring to “studies and projects” is cum- bersome. He goes on to say that, as part of the synthesis process, it’s important for the group to determine the overall impact of the intervention across the body of evidence. He helps them create a second synthesis table contain- ing the findings of each study or project (see Table 31-15). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don’t seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn’t evaluate this outcome, so it may not be fair to write it off just yet. Table 2: The 15 Studies: Levels and Types of Evidence 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Level I: Systematic review or meta-analysis X X X Level II: Randomized con- trolled trial X Level III: Controlled trial without randomization Level IV: Case-control or cohort study X X Level V: Systematic review of qualitative or descrip- tive studies Level VI: Qualitative or descriptive study (includes evidence implementation projects) X X X X X X X X X Level VII: Expert opinion or consensus Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice. 2nd ed. Philadelphia: Wolters Kluwer Health / Lippincott Williams and Wilkins; 2010. 1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al. 
(2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.

Table 3. Effect of the Rapid Response Team on Outcomes. (A grid recording, for each of the 15 articles, numbered as in the Table 2 key, the effect of the RRT on HMR in adults, HMR in peds, CRO, CR in peds and adults, and UICUA. a = higher-level evidence; b = statistically significant findings; c = statistical significance not reported; d = non-ICU mortality was reduced. CR = cardiopulmonary arrest or code rates; CRO = code rates outside the ICU; HMR = hospital-wide mortality rates; NE = not evaluated; NR = not reported; UICUA = unplanned ICU admissions.)

The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest.16 That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it. Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research.
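The tally in Table 2 can be restated as a quick sanity check. The counts come straight from the table; treating Level IV and below as "lower-level" for this purpose mirrors the team's reasoning rather than any formal rule.

```python
from collections import Counter

# Level-of-evidence counts from Table 2 (the 15 "keeper" articles).
level_counts = Counter({"I": 3, "II": 1, "IV": 2, "VI": 9})

assert sum(level_counts.values()) == 15  # every article assigned one level

# How much of the body of evidence sits at Level IV or below:
lower_level = level_counts["IV"] + level_counts["VI"]
print(f"{lower_level} of 15 articles are lower-level evidence")  # 11 of 15
```

The 11-of-15 split is what makes the synthesis table "clear that there is less higher-level and more lower-level evidence," which in turn tempers the reliability of the overall findings.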
The findings produced at this level of evidence are typically less reliable than those at higher levels of evidence; however, Carlos notes that two articles having level-VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which increases the reliability of the results. Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes). In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may impact non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised.
As the EBP team continues to discuss probable outcomes, Rebecca points to one study's data in the "Findings" column that shows a financial return on investment for an RRT.9 Carlos remarks to the group that this is only one study, and that they'll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that the important outcomes to measure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost.

Appraisal: Worth to Practice. As the team discusses their synthesis and the decision they'll make based on the evidence, Rebecca raises a question that's been on her mind. She reminds them that in the "Appraisal: Worth to Practice" column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an academic medical center. Chen reminds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she's already hearing a buzz of excitement about their project, that their colleagues across all disciplines have been eager to hear the results of their review of the evidence. In addition, Carlos says that many resources in their hospital will be available to help them get started with their project and reminds them of their hospital administrators' commitment to support the team.

Table 4. Defined Criteria for Initiating an RRT Consult
(4 = Hillman K, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 13 = Benson L, et al.; 15 = Bader MK, et al.)
Respiratory distress (breaths/min): [4] airway threatened; respiratory arrest; RR <5 or >36. [8] RR <10 or >30. [9] RR <8 or >30; unexplained dyspnea. [13] RR <8 or >28; new-onset difficulty breathing. [15] RR <10 or >30; shortness of breath.
Change in mental status: [4] change in LOC; decrease in Glasgow Coma Scale of >2 points. [8] ND. [9] unexplained change. [13] sudden decrease in LOC with normal blood glucose. [15] decreased LOC.
Tachycardia (beats/min): [4] >140. [8] >130. [9] unexplained >130 for 15 min. [13] >120. [15] >130.
Bradycardia (beats/min): [4] <40. [8] <60. [9] unexplained <50 for 15 min. [13] <40. [15] <40.
Blood pressure (mmHg): [4] SBP <90. [8] SBP <90 or >180. [9] hypotension (unexplained). [13] SBP >200 or <90. [15] SBP <90.
Chest pain: [4] cardiac arrest. [8] ND. [9] ND. [13] complaint of nontraumatic chest pain. [15] complaint of nontraumatic chest pain.
Seizures: [4] sudden or extended. [8] ND. [9] ND. [13] repeated or prolonged. [15] ND.
Concern/worry about patient: [4] serious concern about a patient who doesn't fit the above criteria. [8] NE. [9] nurse concern about overall deterioration in patients' condition without any of the above criteria (p. 2077). [13] nurse concern. [15] uncontrolled pain; failure to respond to treatment; unable to obtain prompt assistance for unstable patient.
Pulse oximetry (SpO2): [4] NE. [8] NE. [9] NE. [13] <92%. [15] <92%.
Other: [8] color change of patient. [9] unexplained agitation for >10 min. [13] CIWA >15 points. [15] UOP <50 cc/4 hr; color change of patient (pale, dusky, gray, or blue); new-onset limb weakness or smile droop; sepsis: ≥2 SIRS criteria.
cc = cubic centimeters; CIWA = Clinical Institute Withdrawal Assessment; hr = hour; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; ND = not defined; NE = not evaluated; RR = respiratory rate; SBP = systolic blood pressure; SIRS = systemic inflammatory response syndrome; SpO2 = arterial oxygen saturation; UOP = urine output

ACTING ON THE EVIDENCE
As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate.
They decide to take the criteria for activating an RRT from several successful studies/projects and put them into a synthesis table to better see their major similarities (see Table 4; studies 4, 8, 9, 13, and 15). From this combined list, they choose the criteria for initiating an RRT consult that they'll use in their project (see Table 5). The team also begins discussing the ideal makeup for their RRT. Again, they go back to the evaluation table and look over the "Major Variables Studied" column, noting that the composition of the RRT varied among the studies/projects.

Table 5. Defined Criteria for Initiating an RRT Consult at Our Hospital

Pulmonary
  Ventilation: color change of patient (pale, dusky, gray, or blue)
  Respiratory distress: RR < 10 or > 30 breaths/min, or unexplained dyspnea, or new-onset difficulty breathing, or shortness of breath

Cardiovascular
  Tachycardia: unexplained > 130 beats/min for 15 min
  Bradycardia: unexplained < 50 beats/min for 15 min
  Blood pressure: unexplained SBP < 90 or > 200 mmHg
  Chest pain: complaint of nontraumatic chest pain
  Pulse oximetry: SpO2 < 92%
  Perfusion: UOP < 50 cc/4 hr

Neurologic
  Seizures: initial, repeated, or prolonged
  Change in mental status: sudden decrease in LOC with normal blood glucose; unexplained agitation for > 10 min; new-onset limb weakness or smile droop

Concern/worry about patient
  Nurse concern about overall deterioration in patients' condition without any of the above criteria

Sepsis (≥ 2 of the following SIRS criteria)
  Temp > 38°C; HR > 90 beats/min; RR > 20 breaths/min; WBC > 12,000, < 4,000, or > 10% bands

cc = cubic centimeters; hr = hours; HR = heart rate; LOC = level of consciousness; min = minute; mmHg = millimeters of mercury; RR = respiratory rate; SBP = systolic blood pressure; SpO2 = arterial oxygen saturation; Temp = temperature; UOP = urine output; WBC = white blood count
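One way to see how the Table 5 vital-sign thresholds combine is as an any-criterion-met rule: crossing a single threshold is enough to trigger a consult. The sketch below is a hypothetical illustration only (the function and field names are ours, not part of the article or of any hospital system); it encodes just the numeric criteria, with upper bounds read as "greater than":

```python
# Illustrative sketch of the quantitative Table 5 thresholds for an RRT consult.
# Duration qualifiers ("for 15 min") and the clinical-judgment criteria
# (nurse concern, color change, seizures, mental status, sepsis) are omitted.

def rrt_consult_indicated(v: dict) -> bool:
    """Return True if any quantitative Table 5 threshold is crossed."""
    checks = [
        v["rr"] < 10 or v["rr"] > 30,        # respiratory distress (breaths/min)
        v["hr"] > 130,                        # unexplained tachycardia (beats/min)
        v["hr"] < 50,                         # unexplained bradycardia (beats/min)
        v["sbp"] < 90 or v["sbp"] > 200,      # systolic blood pressure (mmHg)
        v["spo2"] < 92,                       # pulse oximetry (%)
        v["uop_4hr"] < 50,                    # urine output (cc per 4 hr)
    ]
    return any(checks)

# Example: an SpO2 of 90% alone is enough to indicate a consult.
print(rrt_consult_indicated(
    {"rr": 18, "hr": 88, "sbp": 118, "spo2": 90, "uop_4hr": 120}))  # True
```

The subjective criteria in Table 5 (nurse concern, color change, new-onset weakness) would still require bedside judgment and cannot be reduced to thresholds.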
Some RRTs had active physician participation (n = 6), some had designated physician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respiratory therapist (RT). All RRT members had expertise in intensive care, and many were certified in advanced cardiac life support (ACLS). They agree that their team will be composed of ACLS-certified members. It will be led by an acute care nurse practitioner (ACNP) credentialed for advanced procedures, such as central line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians willing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who's been on staff the longest of the three, says that interdisciplinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT.

As the team ponders the evidence, they begin to discuss the next step, which is to develop ideas for writing their project implementation plan (also called a protocol). Included in this protocol will be an educational plan to let those involved in the project know information such as the evidence that led to the project, how to call an RRT, and outcome measures that will indicate whether or not the implementation of the evidence was successful. They'll also need an evaluation plan. From reviewing the studies and projects, they also realize that it's important to focus their plan on evidence implementation, including carefully evaluating both the process of implementation and project outcomes.

Be sure to join the EBP team in the next installment of this series as they develop their implementation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCES
1. Chan PS, et al. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 2010;170(1):18-26.
2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529.
3. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43.
4. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.
5. Sharek PJ, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74.
6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA 2008;300(21):2506-13.
7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care 2004;13(4):251-4.
8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82.
9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.
10. McFarlan SJ, Hensley S. Implementation and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4):307-13.
11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma 2007;62(5):1223-8.
12. Bertaut Y, et al. Implementing a rapid-response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008;26(2):37-42.
13. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf 2008;34(12):743-7.
14. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90, 126.
15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205.
16. Institute for Healthcare Improvement. Establish a rapid response team. n.d. http://www.ihi.org/IHI/topics/criticalcare/intensivecare/changes/establisharapidresponseteam.htm.

Evidence-based practice (EBP) is an approach that enables psychiatric mental health care practitioners, as well as all clinicians, to provide the highest quality of care using the best evidence available (Melnyk & Fineout-Overholt, 2005). One of the key steps of EBP is to critically appraise evidence to best answer a clinical question. For many mental health questions, understanding levels of evidence, qualitative inquiry methods, and the questions used to appraise the evidence is necessary to implement the best qualitative evidence into practice. Drawing conclusions and making judgments about the evidence are imperative to the EBP process and clinical decision making (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008).
The overall purpose of this article is to familiarize clinicians with qualitative research as an important source of evidence to guide practice decisions. In this article, an overview of the goals, methods, and types of qualitative research, and the criteria used to appraise the quality of this type of evidence, will be presented.

Kathleen M. Williamson, PhD, RN, associate director, Center for the Advancement of Evidence-Based Practice, Arizona State University, College of Nursing & Healthcare Innovation, Phoenix, Arizona; Kathleen.Williamson@asu.edu.

QUALITATIVE BELIEFS

Qualitative research aims to generate insight into, describe, and understand the nature of reality in human experiences (Ayers, 2007; Milne & Oberle, 2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski, 2004; Speziale & Carpenter, 2003; Thorne, 2000). Qualitative researchers are inquisitive and seek to understand how people think and feel about the circumstances in which they find themselves, and use methods to uncover and deconstruct the meaning of a phenomenon (Saddler, 2006; Thorne, 2000). Qualitative data are collected in a natural setting. These data are not numerical; rather, they are full and rich descriptions from participants who are experiencing the phenomenon under study. The goal of qualitative research is to uncover the truths that exist and develop a complete understanding of reality and the individual's perception of what is real. This method of inquiry is deeply rooted in descriptive modes of research. "The idea that multiple realities exist and create meaning for the individuals studied is a fundamental belief of qualitative researchers" (Speziale & Carpenter, 2003, p. 17). Qualitative research is the studying, collecting, and understanding of the meaning of individuals' lives using a variety of materials and methods (Denzin & Lincoln, 2005).

WHAT IS A QUALITATIVE RESEARCHER?
Copyright © 2009 The Author(s)

Table 1. Most Commonly Used Qualitative Research Methods

Ethnography
  Purpose: describe the culture of a people
  Research question(s): What is it like to live . . . ? What is it . . . ?
  Sample size (on average): 30-50
  Data sources/collection: interviews, observations, field notes, records, chart data, life histories

Phenomenology
  Purpose: describe phenomena, the appearance of things, as the lived experience of humans in a natural setting
  Research question(s): What is it like to have this experience? What does it feel like?
  Sample size (on average): 6-8
  Data sources/collection: interviews, videotapes, observations, in-depth conversations

Grounded theory
  Purpose: develop a theory rather than describe a phenomenon
  Research question(s): questions emerge from the data
  Sample size (on average): 25-50
  Data sources/collection: taped interviews, observation, diaries, and memos from the researcher

Source: Adapted from Polit and Beck (2008) and Speziale and Carpenter (2003).

Qualitative researchers commonly believe that individuals come to know and understand their reality in different ways. It is through the lived experience and the interactions that take place in the natural setting that the researcher is able to discover and understand the phenomenon under study (Miles & Huberman, 1994; Patton, 2002; Speziale & Carpenter, 2003). To ensure the least disruption to the environment/natural setting, qualitative researchers carefully consider the best research method to answer the research question (Speziale & Carpenter, 2003). These researchers are intensely involved in all aspects of the research process and are considered participants and observers in the setting or field (Patton, 2002; Polit & Beck, 2008; Speziale & Carpenter, 2003). Flexibility is required to obtain data from the richest possible sources of information. Using a holistic approach, the researcher attempts to capture the perceptions of the participants from an "emic" approach (i.e., from an insider's viewpoint; Miles & Huberman, 1994; Speziale & Carpenter, 2003).
Often, this is accomplished through the use of a variety of data collection methods, such as interviews, observations, and written documents (Patton, 2002). As the data are collected, the researcher simultaneously analyzes them, identifying emerging themes, patterns, and insights within the data. According to Patton (2002), qualitative analysis engages exploration, discovery, and inductive logic. The researcher uses a rich literary account of the setting, actions, feelings, and meaning of the phenomenon to report the findings (Patton, 2002).

COMMONLY USED QUALITATIVE DESIGNS

According to Patton (2002), "Qualitative methods are first and foremost research methods. They are ways of finding out what people do, know, think, and feel by observing, interviewing, and analyzing documents" (p. 145). Qualitative research designs vary by type and purpose: the data collection strategies used and the type of question or phenomenon under study. To critically appraise qualitative evidence for its validity and use in practice, an understanding of the types of qualitative methods, as well as how they are employed and reported, is necessary. Many of the methods are rooted in the anthropology, psychology, and sociology disciplines. The methods most commonly used in health sciences research are ethnography, phenomenology, and grounded theory (see Table 1).

Ethnography

Ethnography has its traditions in cultural anthropology and describes the values, beliefs, and practices of cultural groups (Ploeg, 1999; Polit & Beck, 2008). According to Speziale and Carpenter (2003), the characteristics central to ethnography are that (a) the research is focused on culture, (b) the researcher is totally immersed in the culture, and (c) the researcher is aware of her/his own perspective as well as the perspectives of those in the study. Ethnographic researchers strive to study cultures from an emic approach.
The researcher, as a participant observer, becomes involved in the culture to collect data, learn from participants, and report on the way participants see their world (Patton, 2002). Data are primarily collected through observations and interviews. Analysis of ethnographic results involves identifying the meanings attributed to objects and events by members of the culture. These meanings are often validated by members of the culture before the results are finalized (called member checks). This is a labor-intensive method that requires extensive fieldwork.

Journal of the American Psychiatric Nurses Association, Vol. 15, No. 3

Phenomenology

Phenomenology has its roots in both philosophy and psychology. Polit and Beck (2008) reported, "Phenomenological researchers believe that lived experience gives meaning to each person's perception of a particular phenomenon" (p. 227). According to Polit and Beck, there are four aspects of the human experience that are of interest to the phenomenological researcher: (a) lived space (spatiality), (b) lived body (corporeality), (c) lived human relationships (relationality), and (d) lived time (temporality). Phenomenological inquiry is focused on exploring how participants in the experience make sense of the experience, how they transform the experience into consciousness, and the nature or meaning of the experience (Patton, 2002). Interpretive phenomenology (hermeneutics) focuses on the meaning and interpretation of the lived experience to better understand its social, cultural, political, and historical context. Descriptive phenomenology shares vivid reports and describes the phenomenon. In a phenomenological study, the researcher is an active participant/observer who is totally immersed in the investigation. It involves gaining access to participants who can provide rich descriptions, through in-depth interviews, to gather all the information needed to describe the phenomenon under study (Speziale & Carpenter, 2003).
Ongoing analyses of direct quotes and statements by participants occur until common themes emerge. The outcome is a vivid description of the experience that captures its meaning and communicates the phenomenon under study clearly and logically (Speziale & Carpenter, 2003).

Grounded Theory

Grounded theory has its roots in sociology and explores the social processes that are present within human interactions (Speziale & Carpenter, 2003). The purpose is to develop or build a theory rather than to test a theory or describe a phenomenon (Patton, 2002). Grounded theory takes an inductive approach in which the researcher seeks to generate emergent categories and integrate them into a theory grounded in the data (Polit & Beck, 2008). The research does not start with a focused problem; the problem evolves and is discovered as the study progresses. A feature of grounded theory is that data collection, data analysis, and sampling of participants occur simultaneously (Polit & Beck, 2008; Powers, 2005). Researchers using grounded theory methodology are able to critically analyze situations, recognize that they are part of the study rather than removed from it, recognize bias, obtain valid and reliable data, and think abstractly (Strauss & Corbin, 1990). Data collection is through in-depth interviews and observations. A constant comparative process is used for two reasons: (a) to compare every piece of data with every other piece to more accurately refine the relevant categories and (b) to assure the researcher that saturation has occurred. Once saturation is reached, the researcher connects the categories, patterns, or themes into the overall picture that emerged, which leads to theory development.

ASPECTS OF QUALITATIVE RESEARCH

The most important aspect of qualitative inquiry is that participants are actively involved in the research process rather than receiving an intervention or being observed for some risk or event to be quantified.
Another aspect is that the sample is purposefully selected, based on experience with a culture, social process, or phenomenon, to collect information that is rich and thick in description. The final essential aspect of qualitative research is that one or more of the following strategies are used to collect data: interviews, focus groups, narratives, chat rooms, and observation and/or field notes. These methods may be used in combination with each other. The researcher may choose to use triangulation strategies (of data collection, investigator, method, or theory) and use multiple sources to draw conclusions about the phenomenon (Patton, 2002; Polit & Beck, 2008).

SUMMARY

This is not an exhaustive list of the qualitative methods that researchers could choose to answer a research question; other methods include historical research, feminist research, the case study method, and action research. All qualitative research methods are used to describe and discover meaning and understanding, or to develop a theory, and to transport the reader to the time and place of the observation and/or interview (Patton, 2002).

THE HIERARCHY OF QUALITATIVE EVIDENCE

Clinical questions that require qualitative evidence to answer them focus on human response and meaning.

Table 2. Subquestions to Further Answer, "Are the Study Findings Valid?"

Participants
• How were they selected?
• Did they provide rich and thick descriptions?
• Were the participants' rights protected? If so, how?
• Did the researcher eliminate bias?
• Was the group or population adequately described?

Sample
• Was it adequate?
• Was the setting appropriate to acquire an adequate sample?
• Was the sampling method appropriate?
• Do the data accurately represent the study participants?
• Was saturation achieved?

Data collection
• How were the data collected?
• Were the tools adequate?
• How were the data coded?
• How accurate and complete were the data?
• Does gathering the data adequately portray the phenomenon?

Source: Adapted from Powers (2005), Polit and Beck (2008), Russell and Gregory (2003), and Speziale and Carpenter (2003).

An important step in the process of appraising qualitative research as a guide for clinical practice is the identification of the level of evidence, or the "best" evidence. The level of evidence is a guide that helps identify the most appropriate, rigorous, and clinically relevant evidence to answer the clinical question (Polit & Beck, 2008). The evidence hierarchy for qualitative research ranges from the opinion of authorities and/or reports of expert committees, to a single qualitative research study, to metasynthesis (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). A metasynthesis is comparable to a meta-analysis (i.e., systematic review) of quantitative studies. A metasynthesis is a technique that integrates the findings of multiple qualitative studies on a specific topic, providing an interpretative synthesis of the research findings in narrative form (Polit & Beck, 2008). This is the strongest level of evidence with which to answer a clinical question. The higher the level of evidence, the stronger the evidence is to change practice. However, all evidence needs to be critically appraised based on (a) the best available evidence (i.e., level of evidence), (b) the quality and reliability of the study, and (c) the applicability of the findings to practice.

CRITICAL APPRAISAL OF QUALITATIVE EVIDENCE

Once the clinical issue has been identified, the PICOT question constructed, and the best evidence located through an exhaustive search, the next step is to critically appraise each study for its validity (i.e., quality), reliability, and applicability for use in practice (Melnyk & Fineout-Overholt, 2005).
Although there is no consensus among qualitative researchers on quality criteria (Cutcliffe & McKenna, 1999; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Sandelowski, 2004), many have published excellent tools that guide the process of critically appraising qualitative evidence (Duffy, 2005; Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Speziale & Carpenter, 2003). They all base their criteria on three primary questions: (a) Are the study findings valid? (b) What were the results of the study? (c) Will the results help me in caring for my patients? According to Melnyk and Fineout-Overholt (2005), "The answers to these questions ensure relevance and transferability of the evidence from the search to the specific population for whom the practitioner provides care" (p. 120). Using the questions in Tables 2, 3, and 4, one can evaluate the evidence and determine whether the study findings are valid, whether the methods and instruments used to acquire the knowledge are credible, and whether the findings are transferable. The qualitative process contributes to the rigor, or trustworthiness, of the data (i.e., the quality). "The goal of rigor in qualitative research is to accurately represent study participants' experiences" (Speziale & Carpenter, 2003, p. 38). The qualitative attributes of validity include credibility, dependability, confirmability, transferability, and authenticity (Guba & Lincoln, 1994; Miles & Huberman, 1994; Speziale & Carpenter, 2003). Credibility is having confidence in the truth of the data and their interpretations (Polit & Beck, 2008). The credibility of the findings hinges on the skill, competence, and rigor of the researcher in describing the content shared by the participants, and on the ability of the participants to accurately describe the phenomenon (Patton, 2002; Speziale & Carpenter, 2003).
Cutcliffe and McKenna (1999) reported that the most important indicator of the credibility of findings is when a practitioner reads the study findings, regards them as meaningful and applicable, and incorporates them into his or her practice.

Table 3. Subquestions to Further Answer, "What Were the Results of the Study?"
• Was the purpose of the study clear?
• Is the research design appropriate for the research question?
• Is the description of findings thorough?
• Do the findings fit the data from which they were generated?
• Are the results logical, consistent, and easy to follow?
• Were all themes identified, useful, creative, and convincing of the phenomena?
Source: Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Table 4. Subquestions to Further Answer, "Will the Results Help Me in Caring for My Patients?"
• What meaning and relevance does this study have for my patients?
• How would I use these findings in my practice?
• How does the study help provide perspective on my practice?
• Are the conclusions appropriate to my patient population?
• Are the results applicable to my patients?
• How would patient and family values be considered in applying these results?
Source: Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Confirmability refers to the way the researcher documents and confirms the study findings (Speziale & Carpenter, 2003). Confirmability is the process of confirming the accuracy, relevance, and meaning of the data collected. Confirmability exists if (a) the researcher identifies whether saturation was reached and (b) records of the methods and procedures are detailed enough that they can be followed by an audit trail (Miles & Huberman, 1994).
Dependability is a standard that demonstrates whether (a) the process of the study was consistent, (b) the data remained consistent over time and conditions, and (c) the results are reliable (Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). For example, if study methods and results are dependable, the researcher consistently approached each occurrence in the same way with each encounter, and results were coded with accuracy across the study. Transferability refers to the probability that the study findings have meaning and are usable by others in similar situations (i.e., generalizable to others in that situation; Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). To determine whether the findings of a study are transferable and can be used by others, the clinician must consider the potential client to whom the findings may be applied (Speziale & Carpenter, 2003). Authenticity is when the researcher fairly and faithfully shows a range of different realities and develops an accurate and authentic portrait of the phenomenon under study (Polit & Beck, 2008). For example, if a clinician were in the same environment the researcher describes, he or she would experience the phenomenon similarly. All mental health providers need to become familiar with these aspects of qualitative evidence and hone their critical appraisal skills to improve the outcomes of their clients.

CONCLUSION

Qualitative research aims to impart the meaning of the human experience and to understand how people think and feel about their circumstances. Qualitative researchers use a holistic approach in an attempt to uncover truths and understand a person's reality. The researcher is intensely involved in all aspects of the research design, collection, and analysis processes. Ethnography, phenomenology, and grounded theory are some of the designs a researcher may use to study a culture, a phenomenon, or a theory.
Data collection strategies vary based on the research question, method, and informants. Methods such as interviews, observations, and journals allow information-rich participants to provide detailed literary accounts of the phenomenon. Data analysis occurs simultaneously with data collection and is the process by which the researcher identifies themes, concepts, and patterns that provide insight into the phenomenon under study.

One of the crucial steps in the EBP process is to critically appraise the evidence for its use in practice and determine the value of the findings. Critical appraisal is the review of the evidence for its validity (i.e., strengths and weaknesses), reliability, and usefulness for clients in daily practice. "Psychiatric mental health clinicians are practicing in an era emphasizing the use of the most current evidence to direct their treatment and interventions" (Rice, 2008, p. 186). Appraising the evidence is essential for assurance that the best knowledge in the field is being applied in a cost-effective, holistic, and effective way. To do this, clinicians must integrate the critically appraised findings with their own abilities and their clients' preferences. As professionals, clinicians are expected to use the EBP process, which includes appraising the evidence to determine whether the best results are believable, useable, and dependable. Clinicians in psychiatric mental health must use qualitative evidence to inform their practice decisions. For example, how do clients newly diagnosed with bipolar disorder, and their families, perceive the life impact of this diagnosis?
Having a well-done metasynthesis that provides an accurate representation of the participants' experiences, and that is trustworthy (i.e., credible, dependable, confirmable, transferable, and authentic), will provide insight into the situational context, human response, and meaning for these clients and will assist clinicians in delivering the best care to achieve the best outcomes.

REFERENCES

Ayers, L. (2007). Qualitative research proposals—Part I. Journal of Wound, Ostomy and Continence Nursing, 34, 30-32.
Cutcliffe, J. R., & McKenna, H. P. (1999). Establishing the credibility of qualitative research findings: The plot thickens. Journal of Advanced Nursing, 30, 374-380.
Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Duffy, M. E. (2005). Resources for critically appraising qualitative research evidence for nursing practice. Clinical Nurse Specialist, 19, 288-290.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.
Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005). Evidence-based practice in nursing and healthcare. Philadelphia: Lippincott Williams & Wilkins.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Milne, J., & Oberle, K. (2005). Enhancing rigor in qualitative description: A case study. Journal of Wound, Ostomy and Continence Nursing, 32, 413-420.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Ploeg, J. (1999). Identifying the best research design to fit the question. Part 2: Qualitative designs. Evidence-Based Nursing, 2, 36-37.
Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Lippincott Williams & Wilkins.
Powers, B. A. (2005). Critically appraising qualitative evidence. In B. M. Melnyk & E. Fineout-Overholt (Eds.), Evidence-based practice in nursing and healthcare (pp. 127-162). Philadelphia: Lippincott Williams & Wilkins.
Rice, M. J. (2008). Evidence-based practice in psychiatric care: Defining levels of evidence. Journal of the American Psychiatric Nurses Association, 14(3), 181-187.
Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence-Based Nursing, 6, 36-40.
Saddler, D. (2006). Research 101. Gastroenterology Nursing, 30, 314-316.
Sandelowski, M. (2004). Using qualitative research. Qualitative Health Research, 14, 1366-1386.
Speziale, H. J. S., & Carpenter, D. R. (2003). Qualitative research in nursing: Advancing the humanistic imperative. Philadelphia: Lippincott Williams & Wilkins.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. London: Sage.
Thorne, S. (2000). Data analysis in qualitative research. Evidence-Based Nursing, 3, 68-70.


Attention-Deficit Hyperactivity Disorder in Twins and How Often Siblings Are Affected or Diagnosed

Barbara Maclure
Dr. Daniel Kuchinka
Keiser University

ADHD in Twins and Siblings

Introduction

Attention-Deficit Hyperactivity Disorder (ADHD), sometimes simply called hyperactivity, is a disorder that begins during childhood. According to studies, twins are more likely to acquire this condition than singletons, and a child who has an identical twin with ADHD has a higher chance of developing the disorder (Faraone & Larsson, 2017). The most common symptoms of the disorder include a continued inability to listen, difficulty focusing on a specific task for a prolonged time, and an inability to control impulses. Children with this condition manifest these behaviors more often and more severely than their agemates. A person suffering from ADHD may experience difficulty in schoolwork, family life, personal tasks, or friendships. ADHD is one of the best-known disorders of childhood, and it is known to affect 3%-5% of school-aged children. ADHD is more prevalent in boys than in girls (Burke & Loeber, 2015). Although the symptoms of this condition may disappear with advancing age, it can persist into adolescence or even adulthood. It has been estimated that 2% of all adults exhibit ADHD.

Diagnosis

Diagnosis of this condition is difficult because many children are sometimes hyperactive, inattentive, and impulsive. In diagnosing the condition, experts use the guidelines listed in the Diagnostic and Statistical Manual of Mental Disorders. The guidelines require that a child manifest behavior typical of this condition before reaching the age of seven (Lenzi, 2018). This behavior is expected to last for about six months and has to occur more regularly than in other children of the same age.
The behavior must also be exhibited in two or more settings, such as at home and at school, rather than in a single setting. There is ongoing controversy over the diagnosis of this condition: physicians in America diagnose ADHD more often than in any other country in the world. Critics have cited this discrepancy as evidence against psychologists and clinicians, arguing that children with this condition are simply naturally active children whom parents and teachers find to be a nuisance (Langer, Garbe, & Tobias Banaschewski, 2015).

How Twins Get Diagnosed

Children and adults with this condition consistently manifest varying degrees of hyperactivity, impulsiveness, and inattention. Inattention means that people with the condition have difficulty focusing their minds on a single item. For example, they may become bored by an assignment or a given task within minutes, may have trouble listening, may make careless mistakes, and may daydream. Such children may nonetheless concentrate on a task they find interesting. Hyperactivity involves constant motion, as if driven by a motor: at school, affected children may fidget or touch things constantly, disturb their peers, and talk incessantly. Impulsiveness makes them act before they think; they may make inappropriate comments, interrupt conversations in class, and engage in activities likely to cause them harm. Children with this condition may also manifest severe learning problems because of their inability to pay attention, follow instructions, or complete assigned tasks. Additionally, their aggressive behavior can make them unpopular with other children.
Consequently, children with this condition are often criticized by others and constantly corrected by their parents and teachers, who may mistakenly assume that the behavior is intentional. The child's poor academic performance, poor social relations, and negative feedback may lead to low self-esteem and other emotional challenges (Jain, 2016).

Causes

Even scientists do not yet know what causes ADHD (Freitag & Retz, 2010). Nonetheless, they have discarded several theories that were once widely accepted. One is the theory of undetectable or minor brain damage resulting from birth complications or infections. Another attributes ADHD to the consumption of refined food additives or refined sugar; scientists rejected it because there was no evidence that children with ADHD benefited from diets that restricted food colorings or sugar. Most scientists have likewise rejected the claim that poor parenting causes ADHD. The majority believe the condition is biological and that its primary cause is an abnormality within the brain (Jain, 2016). Studies have shown that in people with ADHD, the part of the brain that regulates attention span is much less active than in people without the condition. The condition also appears to run in families, which does not rule out genetic factors. Faraone and Larsson (2017) report that after decades of research, genes are known to play a critical role in ADHD as well as in the condition's comorbidity with a range of other disorders.
Adoption, family, and twin studies reveal that this condition runs in families and is highly heritable, with heritability estimated at 74%, a finding that has motivated the search for ADHD susceptibility genes. According to Wallis (2016), it is now generally agreed that ADHD has a primarily genetic as well as biological basis. Nonetheless, despite the identification of various candidate genes, none has been found to have a significant individual effect, so the genetics of the condition remain elusive.

Treatment

Although there is not yet an effective cure for ADHD, a variety of treatments can greatly assist children with the disorder, including counseling, medication, and training in social skills. Medication is the most common form of ADHD treatment and may be useful in reducing the symptoms of the disorder. Doctors generally regard stimulants as safe, although they may cause side effects such as nervousness, loss of appetite, insomnia, or stomachache (Schwarz, 2013). Drug therapy is known to slow the rate of growth, but normal growth resumes during adolescence. It is recommended that children take these drugs during school time and skip them on weekends, when schools are closed, to reduce the adverse side effects likely to arise. According to Burke and Loeber (2015), the Stop Now and Plan (SNAP) program can help children develop problem-solving and emotion-regulation skills and prosocial behavior, as well as reduce parental stress. The use of therapy for treatment is highly encouraged. Counseling, for example, has been found to help children recognize and deal with negative feelings, and social skills training can help children recognize how their behavior affects others and develop more appropriate behavior (Lenzi, 2018).
Children with ADHD may also benefit from specialized academic tutors who can guide them in breaking school assignments into parts so they can address them efficiently. Results indicate that independent processes may produce effective behavioral outcomes, with specificity concerning the mechanisms related to different treatment results (Burke & Loeber, 2015).

Conclusion

Twin and family studies of ADHD in children and adolescents have shown a heritable component of 60-80% across reported cases. According to Freitag and Retz (2010), the rate of persistence or remission of this disorder across an individual's lifespan shows the heterogeneity of the condition, which includes an inattentive subtype and a combined subtype. No conclusion can be drawn regarding the general inheritance pattern, as family studies and twin studies reveal different modes of inheritance (Freitag & Retz, 2010). However, studies agree on integrating sex differences into the genetic risk of ADHD, and on the role of the environment in shaping ADHD+CD, another subtype of the condition. A further subtype with genetic roots is ADHD that persists into adulthood, a strongly genetically influenced form of the disorder. Diagnostic criteria differ depending on environmental factors and rating-scale methods. Research into ADHD etiology is ongoing, aiming to understand the condition better and to treat it; however, its specific causes are still unknown, which slows effective diagnosis and treatment.

References

Burke, J. D., & Loeber, R. (2015). Mechanisms of behavioral and affective treatment outcomes in a cognitive behavioral intervention for boys. Springer Science and Business Media.

Faraone, S. V., & Larsson, H. (2017). Genetics of attention deficit hyperactivity disorder. Open Access.

Freitag, C. M., & Retz, W. (2010).
Family and twin studies in attention-deficit hyperactivity disorder. Psychology and Psychiatry.

Jain, R. (2016). Current and investigational medication delivery systems for treating attention-deficit/hyperactivity disorder. The Primary Care Companion for CNS Disorders.

L., C. D., T., B., C., S., C. M., & A., Z. (2017). A systematic review of the quality of life and functional outcomes in randomized placebo-controlled studies of medications for attention-deficit/hyperactivity disorder. European Child & Adolescent Psychiatry, 1283-1307.

Langer, I., Garbe, E., & Tobias Banaschewski, R. T. (2015). Twin and sibling studies using health insurance data: The example of attention-deficit/hyperactivity disorder (ADHD). Open Access.

Lenzi, F. C. (2018). Pharmacotherapy of emotional dysregulation in adults with ADHD: A systematic review and meta-analysis. Neuroscience and Biobehavioral Reviews.

Retz, K. (2015). Attention-deficit hyperactivity disorder (ADHD) in adults: Key issues in mental health. Basel: Karger.

Rommel, A. S., Rijsdijk, F., Greven, C. U., & Kuntsi, P. A. (2015). A longitudinal twin study of the direction of effects between ADHD symptoms and IQ. PLoS ONE.

Schwarz, A. (2013, March 31). A.D.H.D. seen in 11% of U.S. children as diagnoses rise. New York Times. Retrieved 2 August 203

Week 2 Reflection Discussion

Reflection and Discussion Forum, Week 2

Reflect on the assigned readings for the week. Identify what you thought was the most important concept(s), method(s), term(s), and/or any other thing that you felt was worthy of your understanding.

Also, provide a graduate-level response to each of the following questions:

1. Marco Manager supervises three employees at a bank. Several times over the last three months, money has been missing from a specific employee's till at the end of the shift. Marco Manager has worked with this employee for five years and considers this employee a friend. What ethical dilemmas does this present for Marco Manager?

2. Cash Right Now, LLC provides very high interest loans to people with poor credit scores who have a high probability of defaulting on the loan. Many people do in fact default on these loans; however, Cash Right Now, LLC makes a substantial profit overall, even accounting for these defaults. The people who borrow from Cash Right Now, LLC are unlikely to obtain credit elsewhere. Discuss whether Cash Right Now, LLC's business practices are ethical, considering that it charges much higher interest rates than traditional banks.

[Your post should be at least 450 words and in APA format (including Times New Roman, font size 12, double spaced). Post the body of your paper in the discussion thread, then attach a Word version of the paper for APA review.]

Week 1 Discussions

Week 1

Respond to classmates in a minimum of 175 words each; posts must be substantive responses.

T.W.

Pharmacotherapy, or the use of drugs for treatment, is based on two concepts: pharmacokinetics and pharmacodynamics. Most simply put, pharmacokinetics (PK) is how the body affects the drug, while pharmacodynamics (PD) is how the drug affects the body. Please explain the pharmacokinetics and pharmacodynamics of either a stimulant, antidepressant, antipsychotic, mood stabilizer, or anti-anxiety medication.

The pharmacokinetics of a drug depends on several factors: the way the drug is administered, the length of time the drug takes to work, how fast it gets there, how much of the drug actually reaches the intended site, and its fat solubility. The walls of the intestines, blood vessels, and neurons are composed of fats called lipids; therefore, the more fat soluble a drug is, the more easily it crosses the blood-brain barrier. Pharmacodynamics is how the drug affects the body. This process involves 11 mechanisms; the text, however, uses the broad terms agonism and antagonism. One assists or facilitates cell functioning, and the other interferes with it. An agonist facilitates an action and increases the probability of the cell firing naturally, while an antagonist interferes with an action and decreases that probability. In turn, one will make the drug take longer to act, or keep it from being effective at all, and the other will allow the drug to take its course and work as intended.

Ingersoll, R. E., & Rak, C. F. (2016). Psychopharmacology for mental health professionals: An integrative approach (2nd ed.). Belmont, CA: Cengage Learning.

H.G.

Pharmacokinetics concerns how one's body metabolizes and eliminates drugs. The most common way individuals receive antidepressants is oral administration. Once a person has taken the medication, it must travel through the bloodstream and pass through membranes to reach the brain, where it may begin to work, or it may be distributed throughout the body. The process may take longer for some people because of the way the medication is administered. How much of the medication reaches the brain determines whether an individual's dose should stay the same or be changed. The most common drug-metabolizing enzymes belong to the cytochrome P450 family. With positive results, the medication begins to improve one's depression symptoms.

Pharmacodynamics is how drugs affect the nervous system. Medications change the way the nervous system processes different things. For example, antidepressants may change the way an individual experiences depression, or they may reduce the symptoms of depression by changing different functions within the brain. Both pharmacokinetics and pharmacodynamics are involved in how a medication affects the body and brain, and in the process by which a medication moves through the body's systems.

Reference: Ingersoll, R. E., & Rak, C. F. (2016). Psychopharmacology for mental health professionals: An integrative approach (2nd ed.). Belmont, CA: Cengage Learning.