Spoken texts
Basic structure: spoken texts
The spoken material transcribed for the BNC is also organized into
texts, which are subdivided into
divisions, made up of w and mw
elements grouped into s elements in the same way as written
texts. However, there are a number of other elements specific to spoken
texts, and their hierarchic organization is naturally not the same as that of
written texts. For this reason, a different element (stext)
is used to represent a spoken text.
In demographically sampled spoken texts, each distinct conversation
recorded by a given respondent is treated as a distinct div
element. All the conversations from a single respondent are then
grouped together to form a single stext
element. Context-governed spoken texts, however, do not use the
div element: the stext element for a context-governed
text is composed only of u elements, not grouped
into any unit smaller than the stext itself.
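Under these conventions, the overall shape of the two kinds of spoken text can be sketched as follows (the speaker codes and utterance content are illustrative placeholders, not taken from the corpus):

```xml
<!-- demographically sampled: one div per conversation -->
<stext>
 <div>
  <u who="PS000"><s>...</s></u>
  <u who="PS001"><s>...</s></u>
 </div>
 <div>
  <!-- a second conversation by the same respondent -->
 </div>
</stext>

<!-- context-governed: u elements directly inside stext, no div level -->
<stext>
 <u who="PS000"><s>...</s></u>
</stext>
```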
The s elements making up a spoken text are grouped not
into p or other similar elements, but instead into u
elements. Each u (utterance) element marks a stretch of
uninterrupted speech from a given speaker (see section ). Interspersed within and between u
elements, a variety of other
elements indicate para-linguistic phenomena noticed
by the transcribers (see section ).
The methods and principles applied in transcription and
normalisation of speech are defined in a BNC working paper TGCW21
Spoken Corpus Transcription Guide, and have also been
described in subsequent publications (e.g. Crowdy 1994). The editorial tags
discussed in section above are also used to
represent normalisation practice when dealing with transcribed speech.
Utterances
The term utterance is used in the BNC to refer to a
continuous stretch of speech produced
by one participant in a
conversation, or by a group of participants. Structurally, the
corresponding element behaves in a similar way to the p
element in a written text — it groups a sequence of s
elements together.
The who attribute is required on every u: its function is to
identify the person or group of people making the utterance, using the
unique code defined for that person in the appropriate section of the
header. A simple example follows:
<u who="PS1LW">Mm mm.</u>
The code PS1LW used here will be specified as
the value for the xml:id attribute of some person
element within the header of the text from which this example is
taken. Where the speaker cannot be confidently identified, or where
there is more than one speaker, a special
code is used; see further discussion at .
Paralinguistic phenomena
In transcribing spoken language, it is necessary to select from the
possibly very large set of distinct paralinguistic phenomena which might
be of interest. In the texts transcribed for the BNC, encoders were
instructed to mark the following such phenomena:
voice quality: for example, whispering, laughing, etc., both as discrete
events and as changes in voice quality affecting passages within an utterance.
non-verbal but vocalised sounds: for example, coughs, humming noises, etc.
non-verbal and non-vocal events: for example, passing lorries, animal
noises, and other matters considered worthy of note.
significant pauses: silence, within or between utterances, longer than was
judged normal for the speaker or speakers.
unclear passages: whole utterances or passages within them which were
inaudible or incomprehensible for a variety of reasons.
speech management phenomena: for example, truncation, false starts, and
correction.
overlap: points at which more than one speaker was active.
Other aspects of spoken texts are not explicitly recorded in
the encoding, although their headers contain considerable amounts of
situational and participant information.
In many cases, because no standardized set of descriptions was predefined,
transcribers gave very widely differing accounts of the same
phenomena. An attempt has however been made to normalize the descriptions for
some of these elements in the BNC XML editions.
The elements used to mark these phenomena are, in
alphabetical order: event, pause, shift, trunc, unclear, and vocal.
The value of the dur attribute of the pause element is
normally specified only if it is greater than 5 seconds, and its
accuracy is only approximate.
With the exception of the trunc element, which is a
special case of the editorial tags discussed in section above, all of these elements are empty, and may appear anywhere within
a transcription.
The following example shows an event, several pauses and a patch of
unclear speech:
You gotta Radio Two with that.
Bloody
pirate station wouldn't you?
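An utterance combining these empty elements might be encoded along the following lines (the speaker code, the desc value, and the placement of the empty elements within the example above are illustrative assumptions):

```xml
<u who="PS000">
 <s>You gotta <unclear/> Radio Two with that. <pause dur="6"/></s>
 <s>Bloody <event desc="radio"/> pirate station <pause/> wouldn't you?</s>
</u>
```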
Where the whole of an utterance is unclear, that is, where no speech
has actually been transcribed, the unclear element is used on
its own, with an optional who attribute to indicate who
is speaking, if this is identifiable. For example:
<u who="XX">....</u>
<unclear who="YY"/>
<u who="XX">...</u>
Here YY's remarks, whatever they are, are too unclear to be
transcribed, and so no u element is provided.
The values used for the desc attribute of the
event element are not constrained in the current version of
the corpus, and more than a thousand different values exist.
A list of the most frequent values is given in .
As noted above, a distinction is made between discrete vocal events,
such as laughter, and changes in voice quality, such as words which are
spoken in a laughing tone. The former are encoded using the
vocal element, as in the following example:
<vocal desc="laugh"/>, you'll have to take that off there yeah you can
The shift element is used instead where the laughter
indicates a change in voice quality, as in the following example:
And erm and then we went and got my fruit and veg and then we went in Top Marks and got them so we never got we went through for a video really, never got round to looking for a video did we?
When no
new attribute is supplied on a
shift element, the meaning is that the voice quality
indicated reverts to a normal or unmarked state. Hence, in this example,
the passage between the tags <shift new="laughing"/>
and <shift/> is spoken with a laughing intonation.
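A minimal sketch of such a shift pair, using words from the example above (the speaker code and the exact placement of the tags are illustrative assumptions):

```xml
<u who="PS000">
 <s>And erm <shift new="laughing"/>and then we went and got
    my fruit and veg<shift/> and then we went in Top Marks</s>
</u>
```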
A list of values currently used for the new
attribute is given below in section .
Alignment of overlapping speech
By default it is assumed that the events represented in a
transcription are non-overlapping and that they are transcribed in
temporal sequence. That is, unless otherwise specified, it is implied
that the end of one utterance precedes the start of the next following
it in the text, perhaps with an interposed pause element.
Where this is not the case, the align element is used.
The with attribute of an align element
may be thought of as identifying some point in time. Where two or
more align elements specify the same value for this
attribute, their locations are assumed to be synchronised.
The following example demonstrates how this mechanism is used to
indicate that one speaker's attempt to take the floor has been
unsuccessful:
Poor old Luxembourg's beaten.
You you've you've absolutely just gone straight over it
I haven't.
and forgotten the poor little country.
This encoding is the CDIF equivalent of what might be presented in a
conventional playscript as follows:
W0001: Poor old Luxembourg's beaten. You, you've, you've absolutely just
gone straight over it --
W0014: (interrupting) I haven't.
W0001: (at the same time) and forgotten the poor little country.
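Under the alignment mechanism described above, this playscript might correspond to markup along the following lines (the sync-point value p1 and the exact placement of the align elements are illustrative assumptions):

```xml
<u who="W0001">
 <s>Poor old Luxembourg's beaten.</s>
 <s>You you've you've absolutely just gone straight over it</s>
 <align with="p1"/>
</u>
<u who="W0014">
 <align with="p1"/>
 <s>I haven't.</s>
</u>
<u who="W0001">
 <align with="p1"/>
 <s>and forgotten the poor little country.</s>
</u>
```

All three align elements share the value p1, so the end of the first utterance, the start of the interruption, and the start of the continuation are all treated as synchronised.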