EBSE: Structured Abstracts
Anyone who has been through university has likely had to write an abstract at some point. For such a short piece of writing, abstracts can prove remarkably tricky to produce, and it can be difficult to compress the context, goals, methods and conclusions of your research into as few as 150 words. Making matters worse, after the title your abstract is the first point of contact a reader will have with your work. If your abstract is poorly written, a reader might skip over it completely; if it doesn’t clearly or accurately describe your research, the reader may dismiss it as irrelevant and move on.
In short, good abstracts are both hard and necessary.
This is where structured abstracts come in. Traditional abstracts are the familiar “blob of text” at the front of a paper or in the search results. A structured abstract divides the abstract into a series of small sections under headings. These headings are typically along the lines of Context, Aims, Method, Results and Conclusion. You can probably already see the benefits of a structured abstract.
Ease of Writing
Firstly, they are easier to write, since the format asks for only one or two sentences under each heading. The structure discourages you from adding unnecessary or distracting information and ensures that you don’t leave out anything vital.
Ease of Reading
As mentioned above, the abstract is one of the first parts of your research a reader will see; the better it is, the less likely it is to be the last. If you are flicking through a set of studies trying to find something relevant, structured abstracts significantly improve comprehension. Does the area or goal sound interesting? Is it the kind of study you were hoping for? Are the conclusions interesting? Instead of wading through (or, more likely, skimming over) a wall of text, the information is available at a glance.
Secondary Studies
Maybe not relevant to everyone, but anybody who has done a secondary study understands the horror of being faced with hundreds, possibly thousands, of titles and abstracts. Maintaining focus and discipline while reading through the seemingly endless papers, making informed and repeatable judgements about each one, is extraordinarily difficult. Structured abstracts can make this process much easier. Far too often an abstract will neglect to mention whether the paper is an experiment, a case study, a survey, an opinion piece, or something else entirely. I’ve lost count of the times I found an abstract that sounded promising, spent days acquiring the full paper, and only then discovered it was a technical demonstration rather than a case study, or a secondary study, or, in a depressing number of cases, from the wrong field entirely.
Structured abstracts aren’t a magical cure for poor reporting, but they do make it easier to write clear, complete and accurate abstracts.
A number of studies have been published on the subject; a sample appears below. The first was published at the EASE Conference, which uses structured abstracts in its proceedings. You can decide for yourself whether it’s easier to read than the others…
Preliminary results of a study of the completeness and clarity of structured abstracts
Context: Systematic literature reviews largely rely upon using the titles and abstracts of primary studies as the basis for determining their relevance. However, our experience indicates that the abstracts for software engineering papers are frequently of such poor quality they cannot be used to determine the relevance of papers. Both medicine and psychology recommend the use of structured abstracts to improve the quality of abstracts.
Aim: This study investigates whether structured abstracts are more complete and easier to understand than non-structured abstracts for software engineering papers that describe experiments.
Method: We constructed structured abstracts for a random selection of 25 papers describing software engineering experiments. The original abstract was assessed for clarity (assessed subjectively on a scale of 1 to 10) and completeness (measured with a questionnaire of 18 items) by the researcher who constructed the structured version. The structured abstract was reviewed for clarity and completeness by another member of the research team. We used a paired ‘t’ test to compare the word length, clarity and completeness of the original and structured abstracts.
Results: The structured abstracts were significantly longer than the original abstracts (size difference =106.4 words with 95% confidence interval 78.1 to 134.7). However, the structured abstracts had a higher clarity score (clarity difference= 1.47 with 95% confidence interval 0.47 to 2.41) and were more complete (completeness difference=3.39 with 95% confidence intervals 4.76 to 7.56).
Conclusions: The results of this study are consistent with previous research on structured abstracts. However, in this study, the subjective estimates of completeness and clarity were made by the research team. Future work will solicit assessments of the structured and original abstracts from independent sources (students and researchers).
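For readers curious about the statistics in the Method section above, a paired ‘t’ test is straightforward to reproduce. The sketch below uses only the Python standard library; the clarity scores are made-up illustrative numbers, not the study’s actual data, and the calculation simply shows the mechanics of comparing paired scores for original versus structured abstracts.

```python
import math
import statistics

# Hypothetical clarity scores (scale 1-10) for the same ten papers,
# judged once for the original abstract and once for the structured
# version. Illustrative numbers only, not the study's measurements.
original = [4, 5, 6, 5, 7, 4, 6, 5, 5, 6]
structured = [6, 7, 7, 6, 8, 6, 7, 7, 6, 8]

# A paired test works on the per-paper differences.
diffs = [s - o for s, o in zip(structured, original)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)

# Paired t statistic: mean difference divided by its standard error.
t = mean_diff / se

# 95% confidence interval, using the t critical value for n-1 = 9
# degrees of freedom (2.262, from standard tables).
t_crit = 2.262
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)

print(f"mean difference = {mean_diff:.2f}")
print(f"t = {t:.2f}")
print(f"95% CI: {ci[0]:.2f} to {ci[1]:.2f}")
```

The same pattern scales directly to the study’s word-length and completeness comparisons: compute per-paper differences, then test whether their mean is distinguishable from zero.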
Presenting Software Engineering Results using Structured Abstracts: A Randomised Experiment
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). 57 participants reported their preferences regarding structured abstracts: 13 (23%) had no preference; 40 (70%) preferred structured abstracts; four preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability. Although care must be taken to develop appropriate structures for different types of article, we recommend that Software Engineering journals and conferences adopt structured abstracts.
Reporting computing projects through structured abstracts: a quasi-experiment
Previous work has demonstrated that the use of structured abstracts can lead to greater completeness and clarity of information, making it easier for researchers to extract information about a study. In academic year 2007/08, Durham University’s Computer Science Department revised the format of the project report that final year students were required to write, from a ‘traditional dissertation’ format, using a conventional abstract, to that of a 20-page technical paper, together with a structured abstract. This study set out to determine whether inexperienced authors (students writing their final project reports for computing topics) find it easier to produce good abstracts, in terms of completeness and clarity, when using a structured form rather than a conventional form. We performed a controlled quasi-experiment in which a set of ‘judges’ each assessed one conventional and one structured abstract for its completeness and clarity. These abstracts were drawn from those produced by four cohorts of final year students: two preceding the change, and the two following. The assessments were performed using a form of checklist that is similar to those used for previous experimental studies. We used 40 abstracts (10 per cohort) and 20 student ‘judges’ to perform the evaluation. Scored on a scale of 0.1–1.0, the mean for completeness increased from 0.37 to 0.61 when using a structured form. For clarity, using a scale of 1–10, the mean score increased from 5.1 to 7.2. For a minimum goal of scoring 50% for both completeness and clarity, only 3 from 19 conventional abstracts achieved this level, while only 3 from 20 structured abstracts failed to reach it. We conclude that the use of a structured form for organising the material of an abstract can assist inexperienced authors with writing technical abstracts that are clearer and more complete than those produced without the framework provided by such a mechanism.
Empirical evidence about the UML: A Systematic Literature Review
As part of my work with EPIC I contributed to a systematic literature review of empirical evidence concerning the UML. This study recently appeared in the journal Software – Practice and Experience, published by Wiley.
The study set out to assess the current state of research into the UML, specifically in the areas of metrics, comprehension, model quality, methods and tools, and adoption. We identified and reviewed nearly 50 publications and arrived at the conclusion that “[d]espite indications that a number of problems exist with UML models, researchers tend to use the UML as a ‘given’ and seem reluctant to ask questions that might help to make it more effective.”
Abstract:
The Unified Modeling Language (UML) was created on the basis of expert opinion and has now become accepted as the ‘standard’ object-oriented modelling notation. Our objectives were to determine how widely the notations of the UML, and their usefulness, have been studied empirically, and to identify which aspects of it have been studied in most detail. We undertook a mapping study of the literature to identify relevant empirical studies and to classify them in terms of the aspects of the UML that they studied. We then conducted a systematic literature review, covering empirical studies published up to the end of 2008, based on the main categories identified. We identified 49 relevant publications, and report the aggregated results for those categories for which we had enough papers – metrics, comprehension, model quality, methods and tools and adoption. Despite indications that a number of problems exist with UML models, researchers tend to use the UML as a ‘given’ and seem reluctant to ask questions that might help to make it more effective.