The Humanities, the new empiricism and the World Wide Web

Henry S. Thompson
ICCS/HCRC
School of Informatics
University of Edinburgh
 
World Wide Web Consortium
 
Markup Technology Ltd.
 
5 April 2007

1. A short history of computational linguistics

At first closely parallel to, latterly increasingly separated from, the history of linguistic theory since 1960.

Situated in relation to the complex interactions between linguistics, psychology and computer science:

Image:source="diagram.png">Network relating the various disciplines, vintage 1980

Originally all the computational strands except the 'in service to' ones were completely invested in the Chomskian rationalist inheritance.

A corresponding commitment to formal systems, representationalist theories of mind/so-called 'strong AI'

2. The empir[icist] strikes back

Starting in the late 1970s, in the research community centred around the (D)ARPA-funded Speech Understanding Research effort, with its emphasis on evaluation and measurable progress, things began to change.

(D)ARPA funding significantly expanded the amount of digitised and transcribed speech data available to the research community

Instead of systems whose architecture and vocabulary were based on linguistic theory (in this case acoustic phonetics), new approaches based on statistical modelling and Bayesian probability emerged and quickly spread
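In outline (this is the generic noisy-channel formulation, not a reconstruction of any particular system), the idea is to choose the word string W that is most probable given the acoustic evidence A:

```latex
% Noisy-channel formulation (illustrative; not any particular system's model).
% W ranges over candidate word strings, A is the acoustic evidence.
\[
\hat{W} = \arg\max_{W} P(W \mid A)
        = \arg\max_{W} \frac{P(A \mid W)\,P(W)}{P(A)}
        = \arg\max_{W} P(A \mid W)\,P(W)
\]
```

Here P(A | W) is the acoustic model, P(W) the language model, and P(A) is constant across candidates, so it drops out of the maximisation.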

"Every time I fire a linguist my system's performance improves" (Fred Jellinek, head of speech recognition at IBM, c. 1980)

3. Case study: Automatic Speech Recognition

Speech recognition, that is, at least the transcription, if not the understanding, of ordinary spoken language, is one of the major challenges facing Applied Computational Linguistics.

One of the reasons this is hard is masked by the fact that our perception of speech is hugely misleading: we hear distinct words, as if there were breaks between each one, but at the level of the actual sound this is not the case. For example, here's a display of the sound corresponding to an eight-word phrase:

Image:source="mother0.gif">waveform, energy and f0 display for an eight-word phrase

Despite this evident difficulty, the fact is that people can easily wreak a nice beach. Sorry, . . . recognise speech.

This is not just a matter of getting the word boundaries right or wrong. The next problem facing a speech recogniser, whether human or mechanical, is that the signal underdetermines the percept:

Image:source="chart1.png">crude phonetic transcription: r ε k ə n a i s b ii ʧ

You heard:

Image:source="chart2.png">recognise speech

But I said:

Image:source="chart3.png">wreak a nice beach

And there are more possibilities:

Image:source="chart.png">Overlapping possible words

What's going on here? How do we do this?

4. Instructive versus selective interaction in complex systems

Biology has gone down this road first and furthest

The immune system was an early and revolutionary example

A simpler example (oversimplified here) is bone growth

How do we get the required array of parallel lines of rectangular cells?

The naive instructional view is that there's somehow some kind of blueprint, which some agent (enter Hume's paradox) appeals to in laying down the cells:

Image:source="instruct.png">Homunculus refering to diagram of bone while adding a cell

The truth appears to be selective: cells appear in new bone with all possible orientations, and the ones that are not aligned with the main stress lines die away:

Image:source="select1.png">Bone with random orientation of cells

Image:source="select2.png">Bone with non-aligned cells disappearing

Image:source="select3.png">Bone with pnly aligned cells remaining

5. What kind of selection for ASR?

So how do we select the right path through the word lattice?

Is it on the basis of a small number of powerful things, like grammar rules and mappings from syntax trees to semantics?

Image:source="head1.png">Cross section of skull with trees and rules

Or a large number of very simple things, like word and bigram frequencies?

Image:source="head2.png">Cross section of skull with adding machine and scales

The probability-based approach performed much better than the rule-based approach
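As a hedged sketch of what the "many simple things" option looks like, here is a toy bigram language model choosing among the candidate carvings from the previous section. The probabilities are invented for illustration; a real system estimates them from corpus counts and combines them with acoustic scores:

```python
import math

# Toy bigram language model: score each candidate transcription and keep the
# best one.  The numbers below are made up for illustration.

BIGRAM_LOGPROB = {
    ("<s>", "recognise"):     math.log(1e-4),
    ("recognise", "speech"):  math.log(1e-2),
    ("recognise", "beach"):   math.log(1e-6),
    ("<s>", "wreak"):         math.log(1e-5),
    ("wreak", "a"):           math.log(1e-3),
    ("a", "nice"):            math.log(1e-3),
    ("nice", "speech"):       math.log(1e-5),
    ("nice", "beach"):        math.log(1e-4),
}
UNSEEN = math.log(1e-9)  # crude floor for bigrams we have no estimate for

def score(words):
    """Log-probability of a word string under the toy bigram model."""
    pairs = zip(["<s>"] + words, words)
    return sum(BIGRAM_LOGPROB.get(pair, UNSEEN) for pair in pairs)

candidates = [
    ["recognise", "speech"],
    ["recognise", "beach"],
    ["wreak", "a", "nice", "speech"],
    ["wreak", "a", "nice", "beach"],
]

best = max(candidates, key=score)
print(" ".join(best))  # "recognise speech" wins on these made-up numbers
```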

6. Up the speech chain

The publication of 6 years of digital originals of the Wall Street Journal in 1978 (?) provided the basis for moving the Bayesian approach up the speech chain to morphology and syntax

Many other corpora have followed, not just for American English

And the Web itself now provides another huge jump in the scale of resources available

To the point where even semantics is at least to some extent on the probabilistic empiricist agenda

7. The new intellectual landscape

Whereas in the 1970s and 1980s there was real energy and optimism at the interface between computational and theoretical linguistics, the overwhelming success of the empiricist programme in the applied domain has separated them once again

While still using some of the terminology of linguistic theory, computational linguistics practitioners are increasingly detached from theory itself, which has suffered a, perhaps connected, loss of energy and sense of progress.

Within cognitive psychology, there is significant energy going into erecting a theoretical stance consistent with at least some of the new empiricist perspective.

But the criticism voiced 25 years ago by Herb Clark, who described cognitive psychology as "a methodology in search of a theory", remains pretty accurate.

And within computer science in general, and Artificial Intelligence in particular, the interest in "probably nearly correct" solutions, as opposed to constructively true ones, is dominant.

8. The future, the Web, . . .

The Semantic Web project is, intriguingly, stuck in a time warp.

It's busy reconstructing the Knowledge Representation systems and methodologies of the 80s and 90s, on a Web-scale.

But it has not taken the recent history of AI seriously, at least not yet.

The stated goal, and existing practice, of the SemWeb effort is the rules, facts and inference story, not the Bayesian/machine learning/probably nearly correct story.

I suspect the Statistical Semantic Web will emerge pretty soon. . .

9. What about the rest of us?

One can point to lots of examples of the new empiricism in the humanities more widely, from Le Roy Ladurie and the growth of social history to parallel developments in literary criticism

It's tempting to suppose that this is all just "theorem envy"

But that would be unfair to the genuine value that this kind of work has produced

But what it hasn't produced is much in the way of real explanation (as opposed to description)

10. The drunk and the lamppost

Just as we do well to always prefer "cock up" to "conspiracy" when seeking answers to "Why did things go wrong?" questions, so the old joke about the drunk and the lamppost is often . . . illuminating with respect to "Why are they doing that?" questions

The bottom line is that aggregation, correlation and description are always easier than explanation

The huge growth in the availability of data of all kinds has inevitably distorted the balance of effort -- the pickings are easy on the frontier, after all.

But the hard problems remain, and in time, as the growth curves flatten out, we'll get back to them.