Celebrating enthusiasm:
25 years of the teacher-cum-programmer

John Higgins, Stirling, July 2004

In 1985 John Self wrote a book, several chapters of which were a litany of hatred against people he called "teacher-cum-programmers" or T-c-Ps. These wicked people (myself included) used their spare time to write software for their students. Not only was their software inefficient, using too much memory and running more slowly than professionally written programs, but T-c-Ps sometimes “re-invented the wheel” because they had not studied the academic literature and did not know what had been done already. Their software was written in BASIC, which real programmers despise. Above all, it failed to support curriculum objectives or belong within the syllabus. Therefore it was pernicious.

I am proud to have been one of these T-c-Ps. We used the freedom given to us by the new cheap machines that started to become available in the late 1970s and 1980s, the Commodore PET, the Tandy TRS-80, the Sinclair ZX81 and Spectrum, the NewBrain and the BBC Micro, to play around and to try out ideas. Not all the ideas worked, of course, but some of them worked quite well. However, they still seemed to annoy people like John Self because they were not the product of careful syllabus planning and integration.

The earliest CALL programs, dating back to the mid 1960s, took the form of drill/quiz programs, sometimes called the WRONG--TRY AGAIN approach. They were used only in universities or military training schools, since hardly anyone else could afford the software or the equipment. However unimaginative they were, they had one advantage over ordinary classwork: privacy. Many students liked the idea of having a place to practise without being shown up by their mistakes. They could repeat activities, treating them as a kind of game, trying to improve on their best score. It would be difficult to use written exercises in the same way.

Then, around 1979, a different kind of software started to be written. The pioneer of this “non-tutorial approach” was Tim Johns of Birmingham University, a T-c-P if ever there was one. He developed two kinds of program. The first, based on mutilating a text and challenging the student to re-create it by guesswork, he called “analytic”. Examples of these were his JUMBLER and MASKER, the second of which I elaborated into a program called ECLIPSE. The second type, which consisted of giving rules to the machine so that it could create language, he called “synthetic”. Neither kind of program needed a lot of memory, so both could be written for the new personal computers that were emerging, the smallest of which provided only one kilobyte (about one thousand characters) of memory.

I remember a conference in 1981 at which we attended a presentation by a big American university team, who showed photographs of their large mainframe computer and printouts of sessions with students. In the next session Tim pulled his ZX81 out of his jacket pocket, attached it to a black-and-white television, and showed his S-ENDING program, which could supply the third-person ending to any verb, real or nonsense, that the user entered. The challenge for the learner was to see if it ever made a mistake, but it operated with a virtually complete set of rules for this particular inflection, all coded within 1K of memory. Learners loved trying out words, would see if they could catch it out, and then wanted to know how the program worked.
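Tim's actual rule set is not recorded here, but the kind of morphology S-ENDING packed into 1K can be sketched in a few lines. The rules and the irregular list below are my own reconstruction in modern Python, not his BASIC:

```python
def third_person(verb):
    """Supply the third-person singular -s ending for a verb,
    real or nonsense. A reconstruction of the kind of rules
    S-ENDING encoded, not the original program."""
    # A few common irregulars come first.
    irregular = {"be": "is", "have": "has", "do": "does", "go": "goes"}
    if verb in irregular:
        return irregular[verb]
    # Sibilant endings take -es: pass -> passes, watch -> watches.
    if verb.endswith(("s", "x", "z", "ch", "sh")):
        return verb + "es"
    # Consonant + y changes to -ies: try -> tries (but play -> plays).
    if verb.endswith("y") and len(verb) > 1 and verb[-2] not in "aeiou":
        return verb[:-1] + "ies"
    # Verbs ending in -o generally take -es: echo -> echoes.
    if verb.endswith("o"):
        return verb + "es"
    return verb + "s"
```

Because the rules key on spelling alone, a nonsense verb like "zorch" comes back as "zorches", which is exactly what made the catch-it-out game possible.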

I had the good fortune to meet Tim Johns in 1980 and to work with him for several years. At the time I was working for the British Council and was just starting to learn to program in BASIC. My very first effort consisted of five crudely drawn pictures: an elephant, a crocodile, a cat, a mouse, and a fly. The computer randomly generated questions of the pattern What has the cat eaten? or What has eaten the cat? and the user had to pick a larger or smaller creature according to the word order. A right answer would lead to the eaten animal being temporarily erased, a wrong one to a “Try again” message.

When I had got the program working, I started by arranging it so that there would be two example question-and-answer pairs and then ten test items, the kind of structure I was used to in language laboratory drills. Then I stopped and thought: what was I doing, deciding in advance what the learner wanted to do? If a learner wanted twenty examples and no test items, why not let them do it that way? The computer could go on making up random items for ever if desired. That led me to the distinction I have often made since then between the computer as magister, controlling the quantity and order of activities, and the computer as pedagogue or slave, providing an opportunity for practice or fun but not making decisions about what the learner does next. I have always felt that the computer works far better as slave.
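The endless supply of items is the easy part to see in outline. This is a hypothetical reconstruction of the generator, using the animal list and question patterns described above; the original BASIC and its graphics are not reproduced:

```python
import random

# The five creatures, ordered from largest to smallest.
ANIMALS = ["elephant", "crocodile", "cat", "mouse", "fly"]

def make_item(rng=random):
    """Generate one random question and the set of acceptable answers.

    'What has the cat eaten?'  -> any smaller creature is right
    'What has eaten the cat?'  -> any larger creature is right
    """
    # Pick an animal with at least one neighbour on each side.
    i = rng.randrange(1, len(ANIMALS) - 1)
    animal = ANIMALS[i]
    if rng.random() < 0.5:
        question = f"What has the {animal} eaten?"
        answers = ANIMALS[i + 1:]   # everything smaller
    else:
        question = f"What has eaten the {animal}?"
        answers = ANIMALS[:i]       # everything larger
    return question, answers
```

The point of the magister/pedagogue distinction is visible here: nothing in the generator dictates how many items a learner takes, or in what mixture of example and test.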

Many of the programs we wrote, especially the text puzzles and simulations, were designed to be used by small unsupervised groups. The computer was something to be talked about rather than just “talked to”. One objection often raised was that students would use their first language, but I was reassured about this when I observed small groups who regularly turned up for extra lunchtime sessions in a computer room. They were working on small simulations which I had written: one called PHOTOFIT, in which they had to type in descriptions of a face and the computer would draw it for them, and one called MURDER, in which they had to investigate a murder by asking the suspects two questions, “Where were you?” and “Who was with you?”, until they caught one of them lying. I was working at the back of the room and I am sure the students soon forgot I was there. They often came with friends who shared their language, and the talk in the first few minutes was often mostly in Spanish or Chinese. But when students repeated an activity and became more familiar with it, more and more English was introduced into their talk, and confidence grew. They were not unwilling to use English and would do so as soon as it became easy for them.
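The lie-catching mechanic of MURDER can be reconstructed in outline: each suspect names a place and a companion, truthful stories cross-check, and a story that no companion confirms gives the culprit away. The names and alibis below are invented for illustration; the original BASIC is long gone:

```python
# Each suspect's answers to "Where were you?" and "Who was with you?"
# Invented sample data: Carol's story is not confirmed by Alice.
ALIBIS = {
    "Alice": ("library", "Bob"),
    "Bob":   ("library", "Alice"),
    "Carol": ("garden",  "Alice"),
}

def find_liar(alibis):
    """Return the suspect whose claimed companion does not confirm
    both the place and the pairing, or None if every story checks out."""
    for suspect, (place, companion) in alibis.items():
        other_place, other_companion = alibis[companion]
        if other_place != place or other_companion != suspect:
            return suspect
    return None
```

The learners' job, of course, was to do this cross-checking themselves, question by question, in English.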

Tim Johns was responsible for another huge innovation in classroom computer use when, with Mike Scott, he wrote MICRO-CONCORD, a concordance program for learners rather than researchers. Concordancers are like giant indexes: you enter a word or phrase and the computer prints out every sentence containing that word so that you can study all the contexts. The first version of Tim’s program, written in the mid 1980s, ran on a Sinclair Spectrum. It could search a text bank of about 50,000 words, too small to be useful for vocabulary work but large enough if you wanted to search for common words like prepositions or modal verbs. Later versions for the PC could search much larger text libraries, and with a modern machine and with a program like Mike Scott’s more recent WORDSMITH you can search through millions of words in seconds. This was a real case of the computer as slave, reading a whole library and coming back with pages of citations, but leaving it to the user to work out the significance. Sometimes the significance jumps out at you. If you have ever had a student who has written: “In my country cars drive on same side of the road as in England”, ask them to do a concordance search for the word same.
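The core of any concordancer is small: find every occurrence of the search word and display it with a fixed window of context on each side, the keyword-in-context layout. A minimal sketch in Python, not the MICRO-CONCORD or WORDSMITH algorithm itself:

```python
import re

def concordance(text, word, width=30):
    """Return keyword-in-context lines: every whole-word occurrence
    of `word`, aligned in a centre column with `width` characters
    of context on each side."""
    lines = []
    pattern = r"\b" + re.escape(word) + r"\b"
    for m in re.finditer(pattern, text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        # Right-align the left context so the keyword forms a column.
        lines.append(f"{left:>{width}}  {m.group(0)}  {right:<{width}}")
    return lines
```

Run it over even a modest text bank searching for same and the centre column shows the determiner the standing in front of it line after line, which is exactly the pattern the student's sentence is missing.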

I have made another contribution in this area of “computer as tool” with a program called FINDWORD which searches a dictionary for words which match a specification for spelling, pronunciation or part of speech. You can use it to make lists of all words which, for instance, contain the spelling <ph> but not the sound /f/, or all words which end in <ate> pronounced /eit/ to see if they are all verbs, or all words which can be used both as preposition and conjunction, and so on.
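The idea behind such a query can be sketched with a toy lexicon. The entries and the function below are illustrative assumptions of mine; FINDWORD's real dictionary and query language were far richer:

```python
# Tiny invented sample: (spelling, pronunciation, part of speech).
LEXICON = [
    ("photograph", "/ˈfəʊtəgrɑːf/", "noun"),
    ("shepherd",   "/ˈʃepəd/",      "noun"),
    ("telephone",  "/ˈtelɪfəʊn/",   "noun"),
    ("create",     "/kriˈeɪt/",     "verb"),
    ("climate",    "/ˈklaɪmət/",    "noun"),
]

def findword(spelling=None, sound=None, pos=None, exclude_sound=None):
    """Return every word whose entry matches all the conditions given."""
    hits = []
    for word, pron, part in LEXICON:
        if spelling and spelling not in word:
            continue
        if sound and sound not in pron:
            continue
        if exclude_sound and exclude_sound in pron:
            continue
        if pos and part != pos:
            continue
        hits.append(word)
    return hits
```

With this in hand, a query like findword(spelling="ph", exclude_sound="f") picks out shepherd from the sample, the kind of exception list the program was built to produce.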

One problem of the pioneering days was the number of different machines that appeared, each with its own operating system and dialect of BASIC. Programs had to be re-written for each of them. In general the programs were short enough for this to be feasible, though tedious, but it was something of a relief when the IBM PC became a standard and one version of a program could be guaranteed a wide user base. Then, however, came Windows, and everything had to be re-written again, and then again for Windows 95 and Windows XP. Updating and maintaining programs became much more of a chore, and one rarely had the time or support for the task. But Martin Holmes in a recent contribution to the TESLCA-L newsletter points out that good programmers are often tinkerers, so perhaps we like it that way. Now that the Internet has largely replaced the stand-alone computer as the means of reaching students, I doubt whether there are many T-c-Ps out there programming in BASIC now. Instead they are all creating web pages in HTML using JavaScript and Flash. They have the same problems as we did as new language versions appear, but they still have the enthusiasm and experimentalism to ensure that there are plenty of hits as well as misses, plenty of really engaging activities to help learners enjoy and apply the language they are acquiring.

There has always been a problem for those who create supplementary material such as puzzles, games, and even supplementary reading. Teachers would look at all these things and say, “Yes, it is good and will help my students to improve their language and enjoy speaking. But I will not have time to use it. I only have three periods a week, and I have to complete the course book and prepare the class for the exam.” They used to have a firm belief that any reduction in the amount of teaching they did was bound to lead to a corresponding reduction in learning, and they were sure that no student would ever voluntarily do work outside class hours beyond the prescribed homework tasks. I hope that now the work of the T-c-Ps and the growth of the Internet have shown that when resources are available, easy to find and fun to use, students are often ready to spend time and attention on them. Learning a language is learning to mean, and there is now an abundance of written and spoken meaningful language out there on the Internet and lots of things to do with it. Lucky learners!

REFERENCES

Higgins, John and Johns, Tim (1984). Computers in Language Learning. Collins. Pages 88-94.

Self, John (1985). Microcomputers in Education: a critical appraisal of educational software. Brighton: The Harvester Press. Pages 141-163.


I came across this article on a backup disk recently and remember receiving an invitation to write it for the launch issue of a journal in 2004, but have completely forgotten who it was from, what the new journal was to be called or whether the article was eventually published. If anyone remembers seeing it and can remind me of its provenance, I would be glad of the information.

Chief T-c-P and inspiration for us all was, of course, Tim Johns. The other T-c-Ps to celebrate included Chris Jones, Chris Harrison, Vance Stevens, Scott Windeatt, Philippe Roquigny, Burkhardt Leuschner, Joseph Rézeau, Tony Williams, Graham Davies, Peter Roe and Martin Holmes. I may have forgotten some other names. There are not many of that generation left now (2025).