Center for Dental Informatics

For immediate release (9/13/10)
Contact: Titus Schleyer, DMD, PhD

Center for Dental Informatics will present papers, workshops, panels and posters at two biomedical informatics conferences

The Center for Dental Informatics (CDI) at the School of Dental Medicine, University of Pittsburgh, will present multiple papers, workshops, panels and posters at the American Medical Informatics Association 2010 Annual Symposium and the 1st ACM International Health Informatics Symposium in November 2010.

The research to be presented spans a wide range of topics, from terminologies and natural language processing of dental discussion group postings to an information model for general dentistry and an investigation of social tagging versus controlled indexing. "More than anything, our research highlights the breadth and depth of trainee and faculty talent at the CDI," said Dr. Titus Schleyer, Associate Professor and Director of the Center for Dental Informatics. "It is a privilege to work with such a group of smart and motivated people."

The Center typically presents at informatics as well as dental conferences, highlighting the interdisciplinary nature of its work. For more information, please visit the Center's Web site.

Presentations at the AMIA and IHI conferences:

AMIA 2010 Workshop

Bonnie Kaplan, Linda Hogan, Joan E Kapusnik-Uner, Gail Keenan, Ross Koppel, Christoph “Chris” U Lehmann, Nigam Shah, Miguel Humberto Torres-Urquidy, Charlotte Weaver. Making "Meaningful Use" More Meaningful and Useful

"Meaningful use" requires collecting and sharing quality indicator data and making clinical record information available to practitioners, patients, and government agencies. Criteria currently are focused on what would be meaningful for accreditation, reimbursement, and health care policy more than they are focused on what would be meaningful for patients and clinicians who care for them. This presents opportunities for future modifications to the criteria. Presenters representing nine AMIA WGs will summarize insights from their WGs pertaining to how meaningful use criteria affect patient care in terms of safety, cost, quality, and practice; how these criteria may influence development and adoption of EHRs; and meaningful data collection for policy and research. We will focus on both "meaningful," and "use" by asking the overarching questions of: meaningful to whom? meaningful for what purposes? meaningful under what circumstances? In the process, we will suggest future changes to the criteria. Following the presentations, presenters and attendees will discuss meaningful use and identify additional issues, recommendations, and suggested actions for AMIA.

AMIA 2010 Panel

Pieczkiewicz DS, Patel VL, Kaufman DR, Kushniruk A, Thyvalikakath TP. Usability at the Research-Production Interface: What in situ testing can tell us

As health information technology (HIT) is increasingly deployed across institutions providing care, understanding the usability of such systems in actual production becomes crucial. While HIT usability studies exist, they often assess individual components of isolated, idealized systems rather than the organic, dynamic, multi-vendor implementations present at many institutions. The recent mandates for HIT deployment in the United States and elsewhere provide a window of opportunity for in situ usability studies of “live” HIT systems at the time of their implementation and beyond. This panel will present and discuss current work and thinking on the usability testing of live HIT systems in clinical medicine, nursing, and dentistry. Participants will discuss such topics as the theoretical bases for in situ testing; challenges and lessons learned from prior usability studies; the generalizability of results; the relation of usability to the “meaningful use” of HIT; and the possibilities for standardizing usability testing in the health care domains.

AMIA 2010 Posters

Humberto Torres-Urquidy and Titus K Schleyer. Use of context and synonymy to improve terminology development

We investigated the use of context and synonymy to measure the adequacy of concept construction as part of the development of a dental diagnostic terminology. We substituted terms in 147 sentences, with each term belonging to a concept previously identified in the sentence. The adequacy of the substitution was then evaluated by a dentist. Overall, 70% of the substitutions were considered identical, 27% were similar and 3% were different. Using context allows for terminology improvements.
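The substitution step the abstract describes can be illustrated with a small sketch: each term in a sentence is replaced by another label from the same concept (synonym set), and the resulting sentence would then be judged for adequacy by a dentist. The concept table and sentence below are invented examples, not the study's data.

```python
# Toy concept table: concept -> interchangeable labels (invented examples).
concepts = {
    "caries": ["caries", "tooth decay"],
    "radiograph": ["radiograph", "x-ray"],
}

def substitute(sentence, term, concept):
    """Swap `term` for an alternative label from the same concept."""
    alternatives = [t for t in concepts[concept] if t != term]
    return sentence.replace(term, alternatives[0])

original = "The radiograph shows interproximal caries."
variant = substitute(original, "radiograph", "radiograph")
# The variant sentence would then be rated as identical, similar, or
# different in meaning compared to the original.
```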

Tanja Bekhuis, Marcos Kreinacke, Heiko Spallek, and Mei Song. Using the Natural Language Toolkit to reduce the number of messages for in-depth content analyses: a case study

To understand the information needs of practicing dentists, we intend to analyze peer communications posted to the Internet Dental Forum over the last two years. However, we first need to reduce the data set (N=14,476 messages) to ensure the feasibility of in-depth content analyses. We use the Natural Language Toolkit v.2.0 and Python v.2.6 to discover clinical topics of consistent interest and then identify a manageable subset of messages with relevant content for further analysis.
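The corpus-reduction idea in this abstract — surface frequently recurring clinical terms, then keep only the messages that mention them — can be sketched in plain Python. The study itself used NLTK v.2.0; the messages, stop list, and frequency threshold below are invented placeholders for illustration only.

```python
import re
from collections import Counter

# Toy stand-ins for forum messages (the real corpus had N=14,476).
messages = [
    "Any advice on bonding agents for a class II composite restoration?",
    "Our office is switching practice management software next month.",
    "Composite restoration shade matching is still hit or miss for me.",
    "Looking for a good CE course on implant placement.",
    "Implant placement torque values: what do you use?",
]

def tokenize(text):
    """Lowercase and split into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# 1. Count term frequencies across the whole corpus.
freq = Counter(tok for msg in messages for tok in tokenize(msg))

# 2. Treat repeated content words as candidate topics -- a crude proxy
#    for "clinical topics of consistent interest".
stopwords = {"a", "an", "any", "on", "for", "is", "the", "do", "you",
             "our", "me", "what", "use", "still", "good", "next"}
topics = {t for t, n in freq.items() if n >= 2 and t not in stopwords}

# 3. Keep only messages mentioning at least one candidate topic.
subset = [m for m in messages if topics & set(tokenize(m))]
```

With NLTK, the frequency step would typically use `nltk.FreqDist` and its stop-word corpus instead of the hand-rolled counter and stop list above.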

Amit Acharya, Thankam P. Thyvalikakath, and Titus K Schleyer. Conceptual clinical information model for a general dental record

Currently, there are no standard information models for the domain of general dentistry. This lack of a standardized information framework has led to the development of various electronic dental record designs supporting different levels of patient information coverage. This project developed a clinical information model consisting of 155 classes with 986 information items and two primary types of relationships. The entire clinical information model was organized under 63 subject areas.

AMIA 2010 Paper

Danielle Hyunsook Lee and Titus K Schleyer. MeSH term explosion and author rank improve expert recommendations

Information overload is an often-cited phenomenon that reduces the productivity, efficiency and efficacy of scientists. One challenge for scientists is to find appropriate collaborators in their research. The literature describes various solutions to this problem of expertise location, but most current approaches do not appear to be very suitable for expert recommendations in biomedical research. In this study, we present the development and initial evaluation of a vector space model-based algorithm to calculate researcher similarity using four inputs: 1) MeSH terms of publications; 2) MeSH terms and author rank; 3) exploded MeSH terms; and 4) exploded MeSH terms and author rank. We developed and evaluated the algorithm using a data set of 17,525 authors and their 22,542 papers. On average, our algorithms correctly predicted 2.5 of the top 5/10 coauthors of individual scientists. Exploded MeSH and author rank outperformed all other algorithms in accuracy, followed closely by MeSH and author rank. Our results show that the accuracy of MeSH term-based matching can be enhanced with other metadata, such as author rank.
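The core computation this abstract describes — representing each researcher as a vector of the MeSH terms on their publications, optionally "exploding" each term to include its ancestor headings, and comparing researchers in a vector space model — can be sketched as follows. The miniature MeSH-like hierarchy and researcher profiles are invented for illustration; the paper's actual algorithm, including the author-rank weighting, is more involved.

```python
import math
from collections import Counter

# Toy fragment of a MeSH-like hierarchy: term -> parent (None at root).
parents = {
    "Dental Caries": "Tooth Diseases",
    "Periodontitis": "Periodontal Diseases",
    "Tooth Diseases": "Stomatognathic Diseases",
    "Periodontal Diseases": "Stomatognathic Diseases",
    "Stomatognathic Diseases": None,
}

def explode(term):
    """Return the term plus all of its ancestors (MeSH 'explosion')."""
    out = []
    while term is not None:
        out.append(term)
        term = parents.get(term)
    return out

def profile(terms, exploded=False):
    """Term-frequency vector over a researcher's publication MeSH terms."""
    if exploded:
        terms = [t for term in terms for t in explode(term)]
    return Counter(terms)

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

a = ["Dental Caries", "Dental Caries"]   # one researcher's MeSH terms
b = ["Periodontitis"]                    # another researcher's MeSH terms

plain = cosine(profile(a), profile(b))                  # no shared terms
exploded = cosine(profile(a, True), profile(b, True))   # shared ancestor
```

Without explosion, the two toy researchers share no terms and score zero; with explosion, the shared ancestor "Stomatognathic Diseases" yields a nonzero similarity, which is the intuition behind the accuracy gain the abstract reports.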

1st ACM International Conference on Health Informatics Paper

Danielle Hyunsook Lee and Titus K Schleyer. A comparison of MeSH terms and CiteULike social tags as metadata for the same items

In this paper, we examine the degree of difference between two types of metadata for biomedical articles generated by different groups of people. The first type of metadata is social tags, which are assigned to articles by their readers in the absence of a controlled vocabulary. The second type is index terms, which are assigned by professionally trained indexers and domain experts using a controlled vocabulary. When the two kinds of metadata are assigned to the same item, we might expect them to overlap to a large extent and to be able to substitute for one another. In this study, we compared social tags and index terms for a set of papers that appear in both CiteULike and MEDLINE, and assessed their differences. Due to the idiosyncratic nature of social tags, we preprocessed the tags through normalization, stop-word removal, stemming and spell-checking. Our results show that social tags and Medical Subject Headings (MeSH) index terms have little overlap and embody largely heterogeneous understandings of items.
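The preprocessing pipeline the abstract lists — normalization, stop-word removal, stemming and spell-checking — followed by an overlap measure, can be sketched as below. The stop list, spelling table, and suffix stripper are crude illustrative stand-ins, not the study's actual tools, and the tag sets are invented examples.

```python
# Toy stop list and spell-correction table (invented stand-ins).
STOPWORDS = {"the", "a", "an", "of", "misc", "toread"}
SPELLFIX = {"carries": "caries"}

def stem(word):
    """Very crude suffix stripper standing in for a real stemmer."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(labels):
    """Normalize, drop stop words, spell-correct, and stem a label set."""
    cleaned = set()
    for label in labels:
        label = label.lower().replace("-", " ").strip()  # normalization
        for word in label.split():
            if word in STOPWORDS:                        # stop-word removal
                continue
            word = SPELLFIX.get(word, word)              # spell-checking
            cleaned.add(stem(word))                      # stemming
    return cleaned

def jaccard(a, b):
    """Set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

tags = preprocess({"Dental-Carries", "toread", "implants", "xrays"})
mesh = preprocess({"Dental Caries", "Dental Implants"})
overlap = jaccard(tags, mesh)
```

After this cleanup, the two vocabularies can be compared on equal footing; the study's finding was that even then the overlap between social tags and MeSH index terms remains small.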