gms | German Medical Science

GMS Journal for Medical Education

Gesellschaft für Medizinische Ausbildung (GMA)

ISSN 2366-5017

Development, testing and generalizability of a standardized evaluation form for the assessment of patient-directed reports in the new final medical licensing examination in Germany

article assessment methods

  • corresponding author Lena Selgert - Institut für medizinische und pharmazeutische Prüfungsfragen (IMPP), Mainz, Germany
  • author Bernd Bender - Institut für medizinische und pharmazeutische Prüfungsfragen (IMPP), Mainz, Germany
  • author Barbara Hinding - Institut für medizinische und pharmazeutische Prüfungsfragen (IMPP), Mainz, Germany
  • author Aline Federmann - Institut für medizinische und pharmazeutische Prüfungsfragen (IMPP), Mainz, Germany
  • author André L. Mihaljevic - Universitätsklinikum Heidelberg, Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Heidelberg, Germany
  • author Rebekka Post - "Was hab' ich?" gGmbH, Dresden, Germany
  • author Ansgar Jonietz - "Was hab' ich?" gGmbH, Dresden, Germany
  • author John Norcini - SUNY Upstate Medical University, Department of Psychiatry, New York, USA
  • author Ara Tekian - University of Illinois at Chicago, College of Medicine, Illinois, USA
  • author Jana Jünger - Institut für medizinische und pharmazeutische Prüfungsfragen (IMPP), Mainz, Germany

GMS J Med Educ 2021;38(3):Doc71

doi: 10.3205/zma001467, urn:nbn:de:0183-zma0014672

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zma/2021-38/zma001467.shtml

Received: March 31, 2020
Revised: August 10, 2020
Accepted: September 21, 2020
Published: March 15, 2021

© 2021 Selgert et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Abstract

Background: As doctors often fail to explain diagnoses and therapies to patients in an understandable and appropriate way, improving doctor-patient communication is essential. Current medical training and examinations focus on verbal rather than written communication. Following the premise that “assessment drives learning”, the final medical licensing examination in Germany has been further developed by the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP). As part of discharge management, candidates have to prepare a report for the patient that is easily understandable and provides all important information about the hospital stay.

Aim: A standardized evaluation form for formative and summative feedback was developed and tested with regard to applicability and the fulfilment of test quality criteria, especially reliability, in order to assess students' written communication skills.

Methodology: In an expert consensus procedure, a draft of a standardized evaluation form was developed. This form was revised after an initial trial run on patient-directed reports written by students in their last year of medical studies. Afterwards, twenty-one patient-directed reports were evaluated by fourteen different examiners. Reliability was tested by calculating the generalizability coefficient and by analysing inter-rater reliability.

Results: The first test of the evaluation form on patient-directed reports indicated its practicability and its usefulness as an instrument for assessing students' written communication skills. The analyses of inter-rater reliability showed that the degree of agreement partly differed between the two groups of examiners. The calculated G-coefficient indicates high reliability. The content validity of the evaluation form was ensured by the comprehensive medical expertise involved in the development process.

Conclusion: Assessing written patient-directed communication is a benefit of the newly developed last part of the medical licensing examination in Germany. Continuous formative assessment and feedback based on the evaluation form is intended to improve the written communication skills of future doctors. Furthermore, a better understanding of their diagnosis and treatment, as well as a trusting relationship with their doctor, may empower patients in the medical decision process and lead to fewer discharge errors in the future. For consistent use of the evaluation form, standardized examiner training should be implemented.

Keywords: communication, education, patient participation


1. Introduction

Under the Patients' Rights Act of the German Civil Code, every patient has the right to be fully informed [1]. Nevertheless, several studies show that doctors often fail to explain diagnoses and therapies to patients in an understandable and appropriate way, so improving doctor-patient communication is essential: 22% of patients receive incomprehensible answers to their questions, and 29% receive incomprehensible explanations of examination results from their doctors. As a result, 39% of patients feel left alone with their worries and fears [2].

Incorrect communication by medical professionals caused up to 33% of the discharge errors recorded in a study between October 2012 and September 2013 [3]. Poor communication during discharge leads to medication errors, poor wound care, inadequate nutrition, rehospitalization, life-threatening situations, avoidable and unnecessary medical services and procedures, as well as additional work for nursing services and increased costs for the health system. Providing sufficient written information for patients is essential to ensure their adherence to therapy and to implement preventive measures [2], [4], [5], [6], [7]. In particular, communicating without medical terminology is emphasized as a meaningful strategy to empower patients in their decision-making process [8].

To improve the doctor-patient-communication of future doctors, Jünger et al. developed a longitudinal communication curriculum [9], [10]. Based on this curriculum the essential learning objectives of doctor-patient-communication can be integrated into medical training and assessment [11].

As assessment drives learning [12], the final part of the medical licensing examination in Germany was further developed by the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP) to meet the needs of patients and to optimally prepare medical students for their first day at work. Common assessments of communication skills, e.g. objective structured clinical examinations, often use simulated patients and focus on verbal communication skills [13]. To increase authenticity, the new workplace-based examination involves real patients: on a surgical or internal medicine ward as well as in outpatient care (see figure 1 [Fig. 1]). As part of the improved discharge management, candidates will prepare an evidence-based patient report for the post-discharge attending physician and a report for the patients themselves that is easily understandable and provides them with all important information [14].

Use of simple language is one of the most common strategies used by doctors, nurses and pharmacists to improve communication with their patients [15]. Earl et al. showed the impact of a health literacy module on the improvement of students' written patient education materials in the areas of readability, message content, numeracy, statistics and concepts of patient activation; the simplification of medical language, however, remained difficult [16]. In summary, a number of international studies have already dealt with doctor-patient communication in general and written communication in particular [17], [18], [19], [20], [21]. So far, these studies have dealt primarily with the relationship between patients' health literacy and communication, the readability and comprehensibility of written patient information materials, and the benefits of written communication strategies. A larger randomized controlled study based on an understandable patient report was conducted with 417 patients by the initiative “Was hab’ ich?” gGmbH: its physicians provided patients with an easily understandable patient report after discharge from hospital. The study showed significant effects of the patient reports on the patients’ understanding of examination results, medication indications and prescriptions [22]. The language used in the reports was characterized by simple words; short, complete and simple sentences; positive language; and the avoidance of medical terms. Relevant background information was provided and the text had a logical structure [23], [24]. On the basis of this collection of criteria for writing in patient-directed language, “Was hab’ ich?” designed a template for the preparation of such reports by medical students. So far, there has been no evaluation instrument for these patient-directed reports.

The aim of this study was to develop and test a standardized evaluation form for the formative and summative assessment of reports that covers the important aspects of patient-directed writing. Because it is based on real rather than simulated patients and situations, the form had to be usable for individual cases and to cover all kinds of settings and diseases.

This evaluation form was based on the collection of criteria by “Was hab’ ich?” gGmbH, a literature analysis and expert opinions. The applicability of the form and the test quality criteria, especially reliability, were tested.


2. Methodology

2.1. Development of a standardized evaluation form for patient-directed reports

In order to assess the quality of written communication in patient-directed reports by medical students, a first draft of an evaluation form was drawn up based on a literature analysis and on a collection of important criteria for patient-directed writing by “Was hab’ ich?” gGmbH and the IMPP [23], [24]. In August 2018, a group of 27 medical experts from eight German faculties, consisting of specialists in general medicine, internal medicine, anaesthesia, psychiatry, surgery, and psychosomatic medicine and psychotherapy as well as psychologists, all of whom were participants in the German “Master of Medical Education” (MME) study programme, and of five students prepared a precise evaluation form in a consensus procedure: the first draft was revised in a small group of seven experts and afterwards discussed in the whole group until consensus was reached.

In October 2018, the evaluation form was tested on a total of ten students at the Department of Surgery of the University Hospital of Heidelberg [25]. These students were in their 4th to 6th year of medical studies. Each student wrote a patient-directed report by filling out a standardized template. These reports were assessed by eleven physicians and three students who had been involved in the development of the evaluation form in August 2018. Based on these experiences, the evaluation form was revised by the initiative “Was hab’ ich?” gGmbH and the IMPP.

2.2. Test and revision of the evaluation form

In January and February 2019, the revised evaluation form was tested extensively on twenty-one patient-directed reports written by students in their last (6th) year of medical studies (PJ) at the interprofessional training ward (HIPSTA) of the University Hospital of Heidelberg. On this ward, medical students and trainees from different health professions treat patients together under the supervision of medical and nursing facilitators for four weeks [26]. One part of the practical training is writing patient-directed reports in addition to the conventional discharge report to the attending physician. The PJ-students receive structured training in patient-directed writing from the initiative “Was hab’ ich?” gGmbH. This training includes elements such as the reliable recognition and avoidance of medical terms, the explanation of background information, and simply structured writing.

These twenty-one patient-directed reports were evaluated by two groups of examiners, each consisting of two physicians from the initiative “Was hab’ ich?” gGmbH. The first group used the developed evaluation form with the precise sub-items. The second group evaluated the reports based only on the three main criteria, without knowledge of the more precise sub-items [27].

2.3. Evaluation study

After revision of the evaluation form based on the first testing at HIPSTA, the twenty-one patient-directed reports were evaluated by a total of fourteen examiners. These examiners comprised members of the initiative “Was hab’ ich?” gGmbH who were experienced in patient-directed writing, physicians from the IMPP, and general practitioners, as these are the physicians who usually receive discharge reports. The corresponding discharge report for the attending post-discharge physicians was available for comparison. The sample used for the statistical analyses included all existing evaluations by all examiners of the twenty-one reports. Since not all examiners evaluated all reports, this resulted in a sample of n=205 evaluations. The sample for the analyses of inter-rater reliability consisted of the evaluations of nine examiners who had fully evaluated eleven of the twenty-one reports. As no personal data were analysed, no approval by the ethics committee was needed.

2.4. Test quality criteria and statistical methods

First, descriptive statistics in the form of mean values and standard deviations were analysed separately for each of the three categories. In addition, bivariate correlations between the categories were calculated using Pearson’s correlation coefficient [28].
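As a minimal sketch of this step, the pairwise correlation between two categories can be computed with NumPy; the score vectors below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical category scores (0-5 scale) from the same set of
# evaluations; the values are illustrative only.
content = np.array([4.0, 2.0, 5.0, 3.0, 1.0, 4.0])
lay_language = np.array([3.0, 2.0, 5.0, 4.0, 2.0, 3.0])

# Pearson's r between the two categories
r = np.corrcoef(content, lay_language)[0, 1]
```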

Reliability is given when the respective instrument is free of measurement error. It can be assumed if there is a high degree of intercorrelation between the individual parts of a measuring instrument. Reliability was analysed by calculating the generalizability coefficient [29]. Repeated measurements by the same examiners lead to an overestimation of reliability due to practice effects when Cronbach’s alpha is used [30]. To correct for this, variance components were calculated for the factors examiner, report and background of the examiner (“Was hab’ ich?” vs. IMPP and general practitioners) with regard to the scores given in all evaluations, based on generalizability theory [29], [31]. This helped to identify possible sources of measurement error in the evaluations of the reports [32]. The relative error variance is determined from the calculated variance components, and from it the G-coefficient can be computed. The G-coefficient estimates whether the results can be transferred to the study population or whether interaction effects between the facets and the participants make the results specific to the study sample. A G-coefficient of 1 indicates that the available data and results can be perfectly generalized to all evaluations outside the study. A high value for this coefficient thus indicates high reliability [29].
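The variance-component logic can be illustrated with a simplified one-facet sketch (reports crossed with examiners only; the study additionally modelled the examiners' background, and its data matrix was incomplete). The data matrix and decomposition below are illustrative assumptions, not the study's actual computation:

```python
import numpy as np

def g_coefficient(ratings):
    """One-facet generalizability coefficient for a fully crossed
    reports x raters matrix with one score per cell."""
    n_reports, n_raters = ratings.shape
    grand = ratings.mean()
    report_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Sums of squares from a two-way ANOVA without replication
    ss_report = n_raters * ((report_means - grand) ** 2).sum()
    ss_rater = n_reports * ((rater_means - grand) ** 2).sum()
    ss_resid = ((ratings - grand) ** 2).sum() - ss_report - ss_rater

    ms_report = ss_report / (n_reports - 1)
    ms_resid = ss_resid / ((n_reports - 1) * (n_raters - 1))

    # Variance components (negative estimates truncated at zero)
    var_resid = ms_resid
    var_report = max((ms_report - ms_resid) / n_raters, 0.0)

    # Relative error variance for a decision based on n_raters raters
    rel_error = var_resid / n_raters
    return var_report / (var_report + rel_error)

# Hypothetical 5 reports x 4 examiners score matrix (0-5 scale)
scores = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 4],
    [1, 2, 1, 2],
], dtype=float)
g = g_coefficient(scores)
```

The coefficient approaches 1 when the variance between reports dominates the residual (rater-by-report) variance, which is the sense in which a high G-coefficient indicates generalizable evaluations.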

The evaluation agreement in the form of inter-rater reliability [33] was calculated on the basis of a sample of nine examiners who fully evaluated eleven of the twenty-one reports; the other examiners evaluated only some of these reports. Inter-rater reliability was calculated separately for each of the three categories across the eleven reports, based on Kendall’s coefficient of concordance W [34]. The evaluation scale of the developed form was a rating scale ascending from 0 to 5 points. As at least an ordinal scale level could be assumed, Kendall’s W was the appropriate coefficient, unlike Fleiss’ kappa, for example, which requires categorical data [34]. The aim was to determine whether or not the examiners were consistent in their evaluations within one evaluation category.
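For illustration, Kendall's W can be computed from a raters x reports score matrix as follows. This is a minimal sketch without the tie correction (with many tied scores on a 0-5 scale, the corrected W would be somewhat higher), and the data passed in would be hypothetical:

```python
import numpy as np

def average_ranks(x):
    """Rank values in x, assigning tied values their average rank."""
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):
        tied = x == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for one category.

    ratings: (m raters, n reports) array of ordinal scores.
    Sketch without the tie correction.
    """
    m, n = ratings.shape
    ranks = np.vstack([average_ranks(row) for row in ratings])
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

W ranges from 0 (no agreement in the rank orderings) to 1 (all raters rank the reports identically), which matches its use here as a measure of consistency within one evaluation category.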

Furthermore, the content validity of the evaluation form was ensured by the comprehensive medical expertise involved in the development process. This means that the developed categories of the evaluation form represent exactly the content that is intended [35]. To avoid bias, the examiners did not personally know the students who had prepared the patient-directed reports.


3. Results

3.1. Development and first test of the evaluation form

The first draft contained the categories “Selection of content”, “Medical correctness”, “Report structure and syntax”, “Linguistic design” and “Grammar”. Each category could be awarded zero to five points and was weighted with ten to thirty percent. Following the first test, the main evaluation criteria were consolidated into three: “Selection of content and medical correctness”, “Transfer of medical language into lay language” and “Easily understandable language”. The selection of content was presented more clearly and in a more differentiated way. The three central evaluation categories were provided with specific sub-items explaining the rating categories in detail; each could again be awarded zero to five points. The category “Selection of content and medical correctness” was given a slightly higher weighting of forty percent; the other two categories were weighted at thirty percent each.

Following the first testing at HIPSTA, the previous version of the evaluation form was revised and made more specific. The second category was renamed “Lay language and background information”. To reduce the number of sub-items, those of the category “Selection of content and medical correctness” in particular were consolidated from eight to five.

3.2. Tested version of the evaluation form

A standardized evaluation form for patient-directed reports with three main evaluation categories was successfully created. The percentage weighting remained forty percent for “Selection of content and medical correctness” and thirty percent each for the categories “Lay language and background information” and “Easily understandable language”.
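The weighting can be read as a simple linear combination of the three 0-5 category scores. The aggregation rule below is an assumption for illustration only; the text specifies the percentages but not how the form combines them:

```python
def total_score(content, lay_language, understandable):
    """Weighted overall score on the form's 0-5 scale.

    Assumes the stated percentages (40% / 30% / 30%) are applied as
    linear weights to the three category scores -- an illustrative
    reading, since the form's exact aggregation rule is not given.
    """
    return 0.40 * content + 0.30 * lay_language + 0.30 * understandable
```

Under this reading, a report scored 4, 3 and 3 in the three categories would receive 0.40 × 4 + 0.30 × 3 + 0.30 × 3 = 3.4 of 5 points.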

The sub-items served to specify the content of these evaluation categories (see figure 2 [Fig. 2]).

3.3. Evaluation study
3.3.1. Descriptive statistics and correlations

Based on the sample of 205 individual valuations by the examiners, mean values, standard deviations and paired correlations between the three categories were calculated.

First, an overview of mean values and standard deviations for each of the three categories of the evaluation form can be provided (see table 1 [Tab. 1]).

The mean values and standard deviations of the three categories were almost identical. This shows that, on average, roughly the same number of points was awarded across the evaluations by all fourteen examiners for the twenty-one reports. Thus, no category was rated noticeably better or worse on average.

Since the results for the descriptive statistics already suggested a possible correlation, this was checked using bivariate correlations between the three categories (see table 2 [Tab. 2]).

The results of the pairwise correlations showed medium and high positive correlations between the three categories, all of which were highly significant (p<0.001). The strongest correlation was found between the categories “Lay language and background information” and “Easily understandable language” (r=0.61). In contrast, the correlations between “Selection of content and medical correctness” and “Lay language and background information” (r=0.45) and between “Selection of content and medical correctness” and “Easily understandable language” (r=0.31) were of medium strength.

3.3.2. Reliability

The inter-rater reliability was calculated separately for each of the three categories across the eleven reports, both for the examiners from “Was hab’ ich?” and for those from the IMPP and general practitioners, in order to compare the degree of agreement of these two groups (see table 3 [Tab. 3]).

The degree of agreement in the first category was substantially higher for the examiners of “Was hab’ ich?” compared to the examiners from the IMPP and general practitioners. Regarding the two other categories the degree of agreement was moderate in both groups.

The calculated G-coefficient was 0.72 based on all 205 evaluations. This rather high value indicates that the evaluation results for the patient-directed reports are not limited to the sample of the study, but can be transferred to evaluations outside the study.

3.3.3. Revision of the evaluation form based on the evaluation

Based on these results the evaluation form was slightly revised: The category “Lay language and background information” was specified to “Provision of background information and patient-understandable use of technical terms”. The category “Easily understandable language” was specified to “Patient-understandable language style, readability and everyday speech” to enable better differentiation of these two categories.

The explanation of the item “Your medication” was supplemented with “Explains the intake schedule, gives intake instructions.” and “Indicates relevant interactions and/or adverse effects.”, and the explanation of the behavioural recommendations under “The next steps” with “hygiene, wound care, nutrition, exercise, drinking quantity, nicotine consumption”.


4. Discussion

A multi-stage conception and revision process was successfully used to create a standardized evaluation form for the assessment of patient-directed reports. This represents a noticeable improvement in students' training in the field of doctor-patient communication.

It has been shown that medical students who have undergone written communication training and regularly translate findings use better explanations than untrained students when talking to standardized patients in a simulated physician-patient contact [36]. This corresponds to the self-perception of the students working at “Was hab’ ich?” gGmbH: they are united in their opinion that translating doctors’ reports into written patient reports improves their ability to communicate in a way that patients can understand better [37].

In this study, students had to prepare patient-directed reports in their last year of medical training. The test of the newly developed evaluation form on twenty-one of these patient-directed reports showed the practicability as well as the usefulness of the instrument for assessing the students' written communication skills. As the evaluation form was developed by a range of medical experts, the individual evaluation categories represent the most important steps of writing a patient-directed report. The implementation of specific sub-items supports the examiners in interpreting the categories.

Medium or high correlations between the three categories could be observed. Above all, the high correlation between lay language and patient-understandable language is an indicator of a good reliability of the evaluation form. In contrast, the medium-strong correlations of the content selection with lay language as well as with patient-understandable language are an indication that the evaluations of content and language can be independent of each other. Those students who, in the opinion of the examiners, used appropriate lay language in the patient reports and communicated the background information well were also able to write in language that was understandable to the patient. In contrast, an appropriate selection of content and technical correctness was not necessarily dependent on the use of lay language or easily understandable language.

The analyses of inter-rater reliability showed that the degree of agreement in the evaluations for all three categories partly differed between the two groups of examiners. The examiners of the initiative “Was hab’ ich?” had a higher level of agreement in the category “Selection of content and medical correctness” than the examiners of the IMPP and the general practitioners. This is an indication of different starting conditions among the examiners, which can be attributed to their different disciplinary backgrounds and especially their different previous experience with patient-understandable language. This finding highlights the need for uniform training of examiners on standards for writing a patient-directed report before the evaluation form is used in the national licensing examination.

The calculated G-coefficient showed that the data and results of the study can be applied to evaluations outside the study and are therefore generalizable, which supports the reliability of the evaluation form.

As the content validity of the developed evaluation form was given, the test quality criteria were mostly fulfilled. The evaluation form is thus a useful instrument for rating patient-directed reports along central evaluation categories.

One limitation is that the results were obtained only from reports written at a single interprofessional training ward in Heidelberg. In further research, it would be interesting to evaluate and validate the assessment based on a larger sample of examiners and on reports written on conventional wards or in the outpatient area of different faculties. Examiner training on assessing patient-directed reports with the developed evaluation form should be mandatory. This training is intended to ensure a uniform assessment standard, to avoid differing interpretations and weightings of sub-items, and to contribute to higher assessment agreement. The inter-rater reliability of trained examiners should then be analysed.

Continuous formative assessment and feedback based on the evaluation form would improve the medical training of PJ-students. As patients are the recipients of these reports, comparing the patients' own assessment of the patient-directed reports with the assessment by medical experts would be of great interest. Combining these two sources of feedback can ensure that the patient-directed reports deliver the important information needed to avoid discharge errors in the future.

The improvement of verbal communication skills resulting from this improvement in medical training should be investigated in further studies.


5. Conclusion

Assessing written patient-directed communication is a benefit of the newly developed last part of the medical licensing examination in Germany. Continuous formative assessment and feedback based on the evaluation form is intended to improve the written communication skills of future doctors. Furthermore, a better understanding of their diagnosis and treatment, as well as a trusting relationship with their doctor, may empower patients in the medical decision process and lead to fewer discharge errors in the future. To reach this goal, clear instructions and training for writing a patient-directed report must be part of the medical curriculum. For consistent use of the evaluation form, standardized examiner training should be implemented.


List of abbreviations

  • G-coefficient = Generalizability coefficient
  • HIPSTA = Interprofessional training ward of the University Hospital of Heidelberg
  • IMPP = German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy
  • MME = Master of Medical Education
  • PJ = Elective clerkships in the final (6th) year
  • r = Pearson’s correlation coefficient r

Funding

This project was funded by the Bertelsmann Stiftung (duration: 1.10.2017 – 30.06.2021).


Current professional roles of the authors

Dr. med. Lena Selgert

  • Physician
  • Research assistant at the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP)

Bernd Bender

  • Graduate sociologist
  • Research assistant at the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP)

Dr. phil. Barbara Hinding

  • Graduate psychologist
  • Research assistant at the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP)
  • Research areas: medical conversation and interprofessional communication in teaching and assessment, implementation of communication curricula in medical education and advanced training.

Aline Federmann, M.A.

  • Graduate sociologist
  • Research assistant at the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP)

Prof. Dr. med. André L. Mihaljevic

  • Physician and medical study program coordination (Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital)
  • Medical director of the interprofessional training ward in Heidelberg - (HIPSTA)
  • Clinical scientist (deputy spokesman of the surgical study network CHIR-Net)

Rebekka Post

  • Physician working at the initiative "Was hab' ich?" gGmbH

Ansgar Jonietz

  • Co-founder and CEO of the initiative "Was hab' ich?" gGmbH
  • Computer scientist

John J. Norcini, Ph.D.

  • Research Professor in the Department of Psychiatry at SUNY Upstate Medical University
  • President Emeritus of FAIMER

Ara Tekian, Ph.D., MHPE

  • Professor, Department of Medical Education, and Associate Dean, International Education, University of Illinois at Chicago College of Medicine, Illinois, USA

Prof. Dr. med. Jana Jünger, MME (Bern)

  • Director of the German National Institute for state examinations in Medicine, Pharmacy and Psychotherapy (IMPP)
  • Development of the post-graduation study program Master of Medical Education (MME), Germany
  • Member of the MME-study program management and lecturer for the modules Assessment, Education Research and Evaluation
  • Management of various programs on the implementation of communication curricula in medical training and the development of new examination formats for assessing communicative skills.

Competing interests

The authors declare that they have no competing interests.


References

1.
Bundesministerium für Gesundheit. Patientenrechte. Berlin: Bundesministerium für Gesundheit; 2019. Available from: https://www.bundesgesundheitsministerium.de/themen/praevention/patientenrechte/patientenrechte.html
2.
Stahl K, Nadj-Kittler M. Picker report 2016. Vertrauen braucht gute Verständigung. Erfolgreiche Kommunikation mit Kindern, Eltern und erwachsenen Patienten. Hamburg: Picker Institut Deutschland gGmbH; 2016.
3.
NHS England. Review of National Reporting and Learning System (NRLS) incident data relating to discharge from acute and mental health trusts - August 2014. London: NHS England; 2014.
4.
Williams H, Edwards A, Hibbert P, Rees P, Prosser Evans H, Panesar S, Carter B, Parry G, Makeham M, Jonas A, Avery A, Sheikh A, Donaldson L, Carson-Stevens A. Harms from discharge to primary care: mixed methods analysis of incident reports. Br J Gen Pract. 2015;65(641):829-837. DOI: 10.3399/bjgp15X687877
5.
Hesselink G, Zegers M, Vernooij-Dassen M, Barach P, Kalkman C, Flink M, Öhlen G, Olsson M, Bergenbrant S, Orrego C, Suñol R, Toccafondi G, Venneri F, Dudzik-Urbaniak E, Kutryba B, Schoonhoven L, Wollersheim H; European HANDOVER Research Collaborative. Improving patient discharge and reducing hospital readmissions by using Intervention mapping. BMC Health Serv Res. 2014;14:389. DOI: 10.1186/1472-6963-14-389
6.
Pinelli V, Papp KK, Gonzalo JD. Interprofessional communication patterns during patient discharges: A social network analysis. J Gen Intern Med. 2015;30(9):1299-1306. DOI: 10.1007/s11606-015-3415-2
7.
Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in Communication and Information Transfer between Hospital-Based and Primary Care Physicians: Implications for Patient Safety and Continuity of Care. JAMA. 2007;297(8):831-841. DOI: 10.1001/jama.297.8.831
8.
Noordman J, van Vliet L, Kaunang M, van den Muijsenbergh M, Boland G, van Dulmen S. Towards appropriate information provision for and decision-making with patients with limited health literacy in hospital-based palliative care in Western countries: a scoping review into available communication strategies and tools for healthcare providers. BMC Palliat Care. 2019;18(1):37. DOI: 10.1186/s12904-019-0421-x
9.
Jünger J, Mutschler A, Kröll K, Weiss C, Fellmer-Drüg E, Köllner V, Ringel N. Ärztliche Gesprächsführung in der medizinischen Aus- und Weiterbildung: Das nationale longitudinale Mustercurriculum Kommunikation. Med Welt. 2015;66:189-192.
10.
Sator M, Jünger J. From Stand-Alone Solution to Longitudinal Communication Curriculum - Development and Implementation at the Faculty of Medicine in Heidelberg. Psychother Psych Med. 2015;65(05):191-198. DOI: 10.1055/s-0034-1398613
11.
Jünger J, Weiss C, Fellmer-Drüg E, Semrau J. Verbesserung der kommunikativen Kompetenzen im Arztberuf am Beispiel der Onkologie: Ein Projekt des Nationalen Krebsplans. Forum. 2016;31:473-478. DOI: 10.1007/s12312-016-0162.1
12.
van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, van Tartwijk J. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-214. DOI: 10.3109/0142159X.2012.652239
13.
Nikendei C, Jünger J. OSCE - hands on instructions for the implementation of an objective structured clinical examination. GMS Z Med Ausbild. 2006;23(3):Doc47. Available from: https://www.egms.de/static/de/journals/zma/2006-23/zma000266.shtml
14.
Jünger J. Kompetenzorientiert prüfen im Staatsexamen Medizin. Bundesgesundheitsbl. 2018;61:171-177. DOI: 10.1007/s00103-017-2668-9
15.
Schwartzberg JG, Cowett A, VanGeest J, Wolf MS. Communication techniques for patients with low health literacy: a survey of physicians, nurses, and pharmacists. Am J Health Behav. 2007;Suppl 1:96-104. DOI: 10.5555/ajhb.2007.31.supp.S96
16.
Earl GL, Harris EM, Dave M, Estriplet-Jiang, J. Implementing a health literacy module fostering patient-centered written communication in a cardiovascular prevention elective course. Curr Pharm Teach Learn. 2019;11(7):702-709. DOI: 10.1016/j.cptl.2019.03.008 External link
17.
Lopez Ramos C, Williams JE, Bababekov YJ, Chang DC, Carter BS, Jones PS. Assessing the Understandability and Actionability of Online Neurosurgical Patient Education Materials. World Neurosurg. 2019;130:588-597. DOI: 10.1016/j.wneu.2019.06.166 External link
18.
Roberts HJ, Zhang D, Earp BE, Blazar P, Dyer GSM. Patient self-reported utility of hand surgery online patient education materials. Musculoskeletal Care. 2018;16(4):458-462. DOI: 10.1002/msc.1360 External link
19.
Davis TC, Fredrickson DD, Arnold C, Murphy PW, Herbst M, Bocchini JA. A polio immunization pamphlet with increased appeal and simplified language does not improve comprehension to an acceptable level. Patient Educ Couns. 1998;33(1):25-37. DOI: 10.1016/s0738-3991(97)00053-0 External link
20.
Rubin DT, Ulitsky A, Poston J, Day R, Huo D. What is the most effective way to communicate results after endoscopy? Gastrointest Endosc. 2007;66(1):108-112. DOI: 10.1016/j.gie.2006.12.056 External link
21.
Schumaier AP, Kakazu R, Minoughan CE, Grawe BM. Readability assessment of American Shoulder and Elbow Surgeons patient brochures with suggestions for improvement. JSES Open Access. 2018;2(2):150-154. DOI: 10.1016/j.jses.2018.02.003 External link
22.
"Was hab' ich?" gGmbH. Patientenbriefe wirken. Ergebnisbericht zum Projekt "Mehr Gesundheitskompetenz durch Patientenbriefe". Hamburg: "Was hab' ich?" gGmbH; 2019.
23.
Netzwerk Leichte Sprache. Die Regeln für Leichte Sprache. Münster: Netzwerk Leichte Sprache; 2017. Zugänglich unter/available from: https://www.leichte-sprache.org/wp-content/uploads/2017/11/Regeln_Leichte_Sprache.pdf External link
24.
Bredel U, Maaß C. Duden Leichte Sprache. Theoretische Grundlagen, Orientierung für die Praxis. Berlin: Dudenverlag; 2016.
25.
Selgert L, Samigullin A, Lux R, Gornostayeva M, Hinding B, Schlasius-Ratter U, Hendelmeier M, Mihaljevic AL, Wienand S, Schneidewind S, Bintaro P, Jonitz A, Jünger J. Weiterentwicklung des medizinischen Staatsexamens in Deutschland: Prüfung am Patienten. In: Gemeinsame Jahrestagung der Gesellschaft für Medizinische Ausbildung (GMA), des Arbeitskreises zur Weiterentwicklung der Lehre in der Zahnmedizin (AKWLZ) und der Chirurgischen Arbeitsgemeinschaft Lehre (CAL). Frankfurt am Main, 25.-28.09.2019. Düsseldorf: German Medical Science GMS Publishing House; 2019. DocP-05-03.DOI: 10.3205/19gma287 External link
26.
Mihaljevic AL, Schmidt J, Mitzkat A, Probst P, Kenngott T, Mink J, Fink CA, Ballhausen A, Chen J, Cetin A, Murrmann L, Müller G, Mahler C, Götsch B, Trierweiler-Hauke B. Heidelberger interprofessionelle Ausbildungsstation (HIPSTA): a practice- and theory-guided approach to development and implementation of Germany's first interprofessional training ward. GMS J Med Educ. 2018;35(3):Doc33. DOI: 10.3205/zma001179 External link
27.
Post R, Jonietz A, Selgert L, Lux R, Mihaljevic AL, Jünger J. Entwicklung, Testung und Validierung eines Bewertungsbogens zur Beurteilung laienverständlicher Patientenbriefe. In: Gemeinsame Jahrestagung der Gesellschaft für Medizinische Ausbildung (GMA), des Arbeitskreises zur Weiterentwicklung der Lehre in der Zahnmedizin (AKWLZ) und der Chirurgischen Arbeitsgemeinschaft Lehre (CAL). Frankfurt am Main, 25.-28.09.2019. Düsseldorf: German Medical Science GMS Publishing House; 2019. DocP-05-02. DOI: 10.3205/19gma286 External link
28.
Weins C. Uni- und bivariate deskriptive Statistik. In: Wolf C, Best H, editors. Handbuch der sozialwissenschaftlichen Datenanalyse. Wiesbaden: VS-Verlag für Sozialwissenschaften; 2010. p.65-89. DOI: 10.1007/978-3-531-92038-2_4 External link
29.
Field S, Egan R, Beesley T. Applying G-Theory and Multivariate G-Analysis to improve Clinical Data quality and performance assessment accuracy. OHSE Working Paper. San Francisco, CA: Academia.edu; 2017. Zugänglich unter/available from: https://www.academia.edu/37118667/_Applying_G-Theory_and_Multivariate_G-Analysis_to_improve_Clinical_Data_quality_and_performance_assessment_accuracy_2017_OHSE_Working_Paper External link
30.
Schmolck P. Begleittext: Methoden der Reliabilitätsschätzung. München: Universität der Bundeswehr München; 2007. Zugänglich unter/available from: https://dokumente.unibw.de/pub/bscw.cgi/1787978 External link
31.
Cardinet J, Tourneur Y, Allal L. Extension of generalizability theory and its applications in educational measurement. J Educ Measurement. 1981;18(4):183-204. DOI: 10.1111/j.1745-3984.1981.tb00852.x External link
32.
Bühl A. SPSS 16. Einführung in die moderne Datenanalyse. 11th ed. München: Pearson-Studium; 2008.
33.
Döring N, Bortz J. Datenerhebung. In: Döring N, Bortz J, editors. Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften. Berlin, Heidelberg: Springer; 2016. p.321-577. DOI: 10.1007/978-3-642-41089-5_10 External link
34.
Bortz J, Lienert GA, Boehnke K. Verteilungsfreie Methoden in der Biostatistik. 3rd ed. Heidelberg: Springer Medizin Verlag; 2008.
35.
Rammstedt B. Reliabilität, Validität, Objektivität. In: Wolf C, Best H, editors. Handbuch der sozialwissenschaftlichen Datenanalyse. Wiesbaden: VS-Verlag für Sozialwissenschaften; 2010. p.239-258. DOI: 10.1007/978-3-531-92038-2_11 External link
36.
Bittner A, Bittner J, Jonietz A, Dybowski C, Harendza S. Translating medical documents improves students' communication skills in simulated physician-patient encounters. BMC Med Educ. 2016;16:72. DOI: 10.1186/s12909-016-0594-4 External link
37.
Operation Karriere. Unsere Vision ist es, Arzt und Patient auf Augenhöhe zu bringen. Köln: Deutscher Ärzteverlag GmbH; 2017. Zugänglich unter/available from: https://www.operation-karriere.de/karriereweg/bewerbung-berufsstart/unsere-vision-ist-es-arzt-und-patient-auf-augenhoehe-zu-bringen.html External link
38.
Kühnel SM, Krebs D. Statistik für die Sozialwissenschaften. Grundlagen, Methoden, Anwendungen. 4th ed. Reinbek: Rowohlt; 2007.
39.
Schmidt RC. Managing Delphi surveys using nonparametric statistical techniques. Decision Sci. 1997;28(3):763-774. DOI: 10.1111/j.1540-5915.1997.tb01330.x External link