Error Analysis

Errors were considered a wrong response to the stimulus, to be corrected immediately after they were made. Unless corrected promptly, an error became a habit and the wrong behavioral pattern would stick.

If learners made any mistake while repeating words, phrases, or sentences, the teacher corrected it immediately. Errors were regarded as something to avoid, and making an error was considered fatal to the proper language learning process.

Error Analysis: Its Roots And Development

Contrastive Analysis

Two languages were systematically compared, identifying points of similarity and difference between native languages (NLs) and target languages (TLs).
"The most effective materials are those that are based upon a scientific description of the language to be learned, carefully compared with a parallel description of the native language of the learner."

The importance of contrastive analysis for language teaching materials rests on three claims:
  1. Individuals tend to transfer the forms and meanings, and the distribution of forms and meanings, of their native language and culture to the foreign language and culture.
  2. Those elements that are similar to his native language will be simple for him, and those elements that are different will be difficult.
  3. Where two languages were similar, positive transfer would occur; where they were different, negative transfer, or interference, would result.

Corder: Introduction Of The Concept "Error Analysis"

Corder claims that in language teaching one noticeable effect is to shift the emphasis away from teaching towards a study of learning.

For learners themselves, errors are necessary, since the making of errors can be regarded as a device the learner uses in order to learn. Corder claims that the errors of a learner, whether adult or child, are

  • not random, but in fact systematic.
  • not 'negative' or 'interfering' in any way with learning a TL (target language) but, on the contrary, a necessary positive factor, indicative of hypothesis testing.


Errors have played an important role in the study of language acquisition in general and in examining second and foreign language acquisition in particular.

Errors are believed to be an indicator of the learners' stages in their target language development. From the errors that learners commit one can determine their level of mastery of the language system.

The discovery of errors has thus a double purpose:

  • It is DIAGNOSTIC because it can tell us the learner's state of knowledge at a given point during the learning process.
  • It is PROGNOSTIC because it can tell course organizers to reorient language learning materials on the basis of the learners' current problems.

Boundary Between Error and Non-Error

Errors deviate from what is regarded as the norm. The problem, however, is that there is not always firm agreement on what the norm is. Languages have different varieties or dialects with rules that differ from the standard. Native speakers of a language sometimes have different rules, and their individual codes are called IDIOLECTS. This amounts to saying that there is not always a clear-cut boundary between errors and non-errors.

The difference between native speakers and foreign language learners as regards errors is believed to derive from competence. Foreign language learners commit errors largely because of the paucity of their knowledge of the target language, whereas native speakers' deviations are mere performance lapses, such as slips of the tongue or slips of the pen.

Relation Of Errors To Tasks

Control is a term introduced in second and foreign language acquisition literature to account for the discrepancy between competence and performance. That is, learners may well have acquired certain forms of the target language, but they may not be able to produce them correctly because they have not mastered their use.

Learners may have more control over linguistic forms for certain tasks, while for others they may be more prone to error. Krashen's Monitor Model suggests that tasks which require learners to focus attention on content are more likely to produce errors than those which force them to concentrate on form.

Compared to spontaneous speech, planned discourse allows for greater use of metalinguistic knowledge and results in fewer errors. Time seems to play a determining role. Poor learners need more time to produce speech material because they have little control over their linguistic awareness.

Learners can monitor, that is, modify, their utterances under three conditions:
  1. time.
  2. focus on form.
  3. knowledge of the rule.

Relation Of Errors To Context

Certain linguistic environments have a facilitative effect, prompting learners to produce target-like forms, while others are debilitating, inducing error.


There are two kinds of errors:

A GLOBAL ERROR is one which involves "the overall structure of a sentence".
A LOCAL ERROR is one which affects "a particular constituent."

For example:
Global error: I like take taxi but my friend said not that we should be late for
Local error: If I heard from him I will let you know.
The first sentence is the kind that a language teacher would mark as erroneous as a whole, whereas in the second sentence only heard would be marked as erroneous.

Errors fall into four main categories:

  1. omission of some required element
  2. addition of some unnecessary or incorrect element
  3. selection of an incorrect element.
  4. misordering of elements.
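This four-way taxonomy can be sketched mechanically. The hypothetical classify_edits helper below uses Python's difflib to label the edits needed to turn a learner's sentence into its target form; it is an illustrative sketch only, not a linguistic tool, and misordering (which surfaces as a paired delete/insert) is deliberately left unhandled.

```python
import difflib

def classify_edits(produced, target):
    """Label the edits needed to turn a learner's sentence into the
    target form, using the taxonomy above: an extra word is an
    addition, a missing word an omission, a wrong word a selection."""
    p, t = produced.split(), target.split()
    labels = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, p, t).get_opcodes():
        if tag == "delete":        # word present in learner output only
            labels.append(("addition", p[i1:i2]))
        elif tag == "insert":      # word missing from learner output
            labels.append(("omission", t[j1:j2]))
        elif tag == "replace":     # incorrect word chosen
            labels.append(("selection", p[i1:i2]))
    return labels
```

For instance, classify_edits("She go to the school", "She goes to school") labels "go" as a selection error and "the" as an addition.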

Strategies for Curriculum Materials Development

There are three strategies for developing curriculum materials. We turn first to adopting existing materials.

Adopting materials in a rational manner is not as easy as it might at first appear.
1) First, it is necessary to decide what types of materials are desirable.
2) Second, all available materials of these types should be located just in case they might prove useful.
3) Third, some form of review/evaluation procedure must be set up to pare this list down to only those materials that should be seriously considered, so that final choices can be made.
4) Fourth, some strategy for the regular review of these adopted materials must be set up to make sure that they do not become irrelevant to the needs of the students and the changing conditions in the program.

Deciding on Types of Materials

To adopt materials, the curriculum developer must make decisions concerning which types of materials are suitable. Materials can be based on many different approaches and can be organized around a number of different syllabuses. They can also be presented in a number of media and take many physical forms in any one of those media.

The following list of possible media for materials may help with these deliberations:
books                              teachers' books
workbooks                          magazines
journals                           pictures
maps                               charts/graphs/diagrams
cassette tapes (language, genuine)
video tapes (language, authentic)
video/disc/computer combinations
computer software

Locating Materials
Three sources of information immediately spring to mind that can help in finding existing materials that might be suitable: publishers' catalogs, "Books Received" sections of journals, and teachers' shelves.

Publishers' catalogs 
include addresses for some of the most famous publishers of ESL materials. Many of these publishers also produce materials for other languages, so catalog lists should provide at least a starting point for any language teacher looking for published materials.

To make even a short list of candidates for materials that might be adopted, hands-on examination is necessary. Most publishers are happy to send teachers desk copies of their materials. A desk copy is a textbook, manual, workbook, or other form of material sent free of charge for consideration by teachers who might adopt the material in their courses. The teacher may usually keep a desk copy even if student copies are not subsequently ordered.

Examination copies, also called review copies, are also sent so that they can be considered for adoption in courses. However, examination copies are only free of charge if the teacher subsequently orders the material(s) for his or her students within a certain number of days (usually 60 or 90 days).

Remember that publishers' catalogs are designed to sell language teaching materials. Hence they are best used as a source list of available materials, not as the definitive word on the quality of those materials.

Another source of relatively up-to-date information on language materials is the "Books Received" section that is found in many of the prominent language teaching journals. These "Books Received" are usually listed near the back of a journal. Such listings are usually fairly current. However, since such lists include only the author, title, and publisher, sending for desk or review copies will still be necessary.

One last source of information about materials should not be overlooked. The teachers' shelves within the program may be full of materials that could prove interesting and useful. More to the point, teachers are more likely to have experience with materials they already own.

Evaluating Materials

Whether materials are found in publishers' catalogs, "Books Received" sections of journals, or teachers' shelves, firsthand examination will eventually be necessary to
determine the suitability of the materials for a particular program. This process might safely be called materials evaluation.

The "reviews" in professional journals and newsletters typically reflect only the views of one individual. If possible, seek out two or three reviews of a book or other materials. One review can be helpful, but a number of reviews will offer a more comprehensive picture of the book or materials under consideration. It is also a good idea to establish a file of reviews that might be of interest to program faculty and administrators.

Firsthand review of materials is clearly the most personal and thorough method for evaluating them. Stevick suggested that materials should be evaluated in terms of qualities, dimensions, and components as follows:
a)  Three qualities: Strength, lightness, transparency (as opposed to weakness, heaviness, opacity)
b) Three dimensions: Linguistic, social, "topical"
c)  Four components: Occasions for use, sample of language use, lexical exploration, exploration of structural relationships.

Brown suggests a checklist that contains more detail. It considers materials from five perspectives: background, fit to curriculum, physical characteristics, logistical characteristics, and teachability. All of these judgments can be made only with the materials physically in hand.

In the checklist, materials background refers to information about the author's and the publisher's credentials. It also considers the amounts and types of experience the author has had in teaching and administration, as well as in curriculum and materials development.

Logistical characteristics might include such mundane (but important) issues as the price and number of auxiliary parts (that is, audiovisual aids, workbooks, software, unit tests, and so forth) that are required, as well as the availability of the materials, time that it will take to ship them, and the like.

Finally, the teachability of the materials should be appraised. This decision may hinge on whether there is a teacher's edition; an answer key, annotations to help teachers explain and plan activities, unit reviews, and so forth. It is also important to ask the teachers if they think the set of materials will work and is otherwise  acceptable to them.

Ongoing Review of Materials

Even after a set of materials is in place for each course, the materials evaluation process must continue while they are being used, as well as after each implementation period. Teachers can keep notes on their reactions to the materials as they use them. Such notes can be as simple as scribbling in the margins of the teacher's edition, or as formal as typed reviews of the materials in question.

Best E-Learning Software to Learn For a Career Transition into Instructional Design

I find the same few coming up again and again: Adobe, TechSmith, etc. There are also open source (free) alternatives available that are pretty good, often listed alongside their closed source counterparts. E-learning doesn't have to break the bank!

The 30-day trials are a good introduction, and tutorial programmes are offered for the most popular packages; they'll get you started much more quickly than going through the standard documentation. Also, a lot of the tutorials on Adobe software are produced by Adobe employees and are available on their website for free.

This is just so that you know what there is and what can and can't be done with what's available. It allows you to make more informed decisions when planning your learning interactions and designing courses. It may also inspire you with some fresh ideas. As with any new venture, you'll probably spend a lot of time re-inventing the wheel and trying to transfer what you "already know" onto the new medium but I think that's just part of the process.

I'd also recommend having a look at what other people have done with the software. Screencasts are great for "How to..." tutorials for software, but I think you have to get a bit more creative to teach/present stuff that's outside the "electronic environment". Oxford University Press "give away" their on-line e-learning EFL/ESL resources.

Multimedia-wise, a digital camera and a portable audio recorder and microphone are also essential in my experience. If your budget allows, experimenting with video is also very quick and easy. Of course, to get professional results you need to make some wise investments in equipment. It doesn't have to cost a fortune, and the technical support staff at your college/university could probably give you some great advice on that front. For a quick introduction, I've written some blog articles about audio and video for e-learning.

Also, whatever you produce in terms of learning interactions has to exist within a context. I'd recommend getting used to a variety of learning management systems (LMSs) and their idiosyncrasies. For example, SCORM is supposed to be a cross-platform standard, but you'll find that there are different versions of SCORM that work more or less well with particular LMSs. Again, there are always good open source options available, so you needn't break the bank; in fact, one of the most widely used LMSs is open source.

I'll also second what Melissa said in this discussion - the same rules apply to e-learning as to any other kind. Keep learners and their learning objectives firmly in focus with everything you do.

I hope this helps!

Collaborative Learning – Is it changing the face of e-learning?

The training industry, and especially the e-learning industry, has evolved from ILT through online courses and blended courses to rapid e-learning, audio/video, and a range of instructional simulations and interactivities. Earlier, companies would convert manuals and instructor materials into slideshows for training purposes. Now, e-learning programs offer engaging, interactive, and virtual experiences. A year or two ago, when recession affected industries, people focused on learning to retain their jobs.

Recent times have seen learning happen through social media tools. From YouTube to blogging, podcasts to micro-blogs, social news and bookmarking to wikis, social media tools have taken e-learning to another level.

The shift towards social learning is mainly because organizations have started recognizing the tremendous need to build, manage and formalize their social and collaborative learning programs.

Organizations are rethinking their training strategies and models to accommodate learning programs under ‘learning environments’ that offer collaborative learning and built-in social media tools. According to Wikipedia, collaborative learning refers to various methodologies and environments where learners engage and actively interact to learn or attempt to learn something together.

A collaborative learning environment in an organization enables learners to converse with contemporaries, present as well as defend ideas and perspectives, exchange diverse beliefs, question other conceptual frameworks and get actively engaged. Learning in a collaborative environment can take place at any time. It can happen when individuals are in discussion in a group or over the Internet.

Some organizations may offer ILT training on a need basis, but over 70% of learning happens while reading, watching and listening or simply by talking with one another.

There are many new tools and platforms similar to LMSs to manage, track, and facilitate people learning and working together. It is only a matter of time before collaborative learning happens on the move through mobile phones, BlackBerry devices, and related mobile devices.

Google has Google Wave, Microsoft has SharePoint and Live Services, and Adobe has Connect, while a few companies such as Saba, Plateau, and Taleo are creating new tools and platforms to facilitate communication and knowledge-sharing.

What are your thoughts on Collaborative Learning? Will organizations be able to create learning environments to enhance informal and collaborative learning? Please comment and share your knowledge.

Key Elements for Effective Localization

When you go global with your business, it is important that the product you market blends with the intended country. Suppose you create courseware in French for Company X, based in Egypt. Company X also has a presence in Dubai and Saudi Arabia and needs course material in English or Arabic. The company wants to train its employees on the same course across various locations. Will the courseware created in Egypt help the employees in Dubai and Saudi Arabia? Obviously not. So, how will Company X train its employees in Dubai and Saudi Arabia on the same course?

One option is to translate the French e-learning courseware into the target language. Translation simply means changing the source language of the software, documentation, learning material, user manual, etc. into a target language of the intended country. The disadvantage of word-for-word translation is that it can yield awkward, unintentionally funny, or even offensive literal renderings.

The other alternative is localizing the product for the intended country. So what is localization?

Localization, abbreviated as L10n, is the process of translating documentation, software, learning materials, user manuals, etc. for a foreign market. It involves translating and adapting the text from the source language to the target language (including spelling and grammar issues), semantic analysis of the source content, support of different character sets, and handling the formatting of information such as dates, times, addresses, phone numbers, colors, currency, and local culture and habits. By localizing the product, the company markets it to the target audience by integrating both the culture and the language of the intended country.
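As a concrete illustration of the formatting side of L10n, the sketch below hand-rolls a tiny table of per-locale conventions for dates, decimal separators, and currency. The locale codes and the CONVENTIONS table are hypothetical simplifications; a real project would rely on the platform's locale database or a library such as Babel rather than maintaining such a table by hand.

```python
from datetime import date

# Hypothetical, hand-rolled locale conventions for illustration only.
CONVENTIONS = {
    "fr-EG": {"date": "{d:02d}/{m:02d}/{y}", "currency": "{amt} EGP", "decimal": ","},
    "ar-SA": {"date": "{d:02d}/{m:02d}/{y}", "currency": "{amt} SAR", "decimal": "."},
    "en-AE": {"date": "{m:02d}/{d:02d}/{y}", "currency": "AED {amt}", "decimal": "."},
}

def format_date(loc, when):
    """Render a date using the target locale's day/month ordering."""
    c = CONVENTIONS[loc]
    return c["date"].format(d=when.day, m=when.month, y=when.year)

def format_price(loc, amount):
    """Render a price with the locale's decimal separator and currency."""
    c = CONVENTIONS[loc]
    amt = "{:.2f}".format(amount).replace(".", c["decimal"])
    return c["currency"].format(amt=amt)
```

With these conventions, format_date("fr-EG", date(2010, 3, 5)) gives "05/03/2010" while format_date("en-AE", date(2010, 3, 5)) gives "03/05/2010": the same data, localized.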

During the localization process, the linguist is the most important person to have on board. He or she is a native speaker and regional expert of the target country, must be aware of its verbal characteristics, cultural differences, language-specific humor, forbidden subjects, and so on, and must know how to deal with them accordingly.

At the end of the L10n process, the product should:

1- Be appropriate for the target business/country
2- Appear custom-designed for the end user’s cultural and linguistic background
3- Retain the original meaning of the course/product.

Though many companies claim to offer translation and localization services, localization of content is best done by experts in linguistic services with years of experience and a stable team of cross-country expert linguists. Failure in accurate localization can have dire effects, such as insulting the culture of the targeted country and its people, apart from causing embarrassment to you.

Here are a few tips to avoid common localization pitfalls:

1- Write and/or create materials using simple terms and words, to make localization easier.
2- Do not embed text in an image. During localization, the image would have to be re-created with the text superimposed on it. Create text and graphics on different layers.
3- Define font properties in an external XML file or style sheet. A CSS file will allow you to define font properties for individual languages in one accessible place.
4- Applications handling localizable content should support the character set of your target language.
5- As with fonts, do not embed text in scripts. Also avoid language constructions that combine text and numbers.

Avoid integrating content through a mix of different technologies, formats, and tools. The more complex the creation process, the more complex the localization process will be.
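One way to honor the "do not embed text" tips is to keep every user-visible string in an external, per-language catalog and merge in a fallback language for untranslated keys. The catalogs and key names in the sketch below are hypothetical, purely for illustration; in practice each catalog would be a separate resource file handed to the localization team.

```python
import json

# Hypothetical per-language message catalogs (inline JSON for the sketch).
CATALOGS = {
    "en": '{"welcome": "Welcome to the course", "next": "Next"}',
    "fr": '{"welcome": "Bienvenue au cours", "next": "Suivant"}',
}

def load_strings(lang, fallback="en"):
    """Merge the requested catalog over the fallback catalog so an
    incomplete or missing translation never leaves a blank label."""
    strings = json.loads(CATALOGS[fallback])
    strings.update(json.loads(CATALOGS.get(lang, "{}")))
    return strings
```

Because the text lives outside the images, scripts, and layout, swapping languages is a data change rather than a re-authoring job.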

When a company localizes its content to meet the demands of business abroad, it adds a personal touch and comforts the end user, who can read and interpret the product/courseware in his/her own language. The need to train a culturally and linguistically diverse workforce effectively is very important, and using the targeted country's own language as a medium is considered the best way to do so.

Resistance to Instructional Design?

I've encountered significant resistance in my career with arguments ranging from "we don't have the time," "institutional research isn't real-world," to "you are a purist and that will never work."

In most cases I lost all the "fights" to put in place proven processes that are founded in real-world research because the experience of the superiors was limited and their leadership style was not conducive to servant leadership.

Eventually I elected to implement best practices in a grassroots manner, whereby the details were hidden from the view of the superiors; the superiors only saw that the work was getting accomplished.

Instructional design is very much a victim of this resistance, IMHO, because those outside the craft do not understand the skillset, and the eLearning craze confused the craft of instructional design with computer programming. Even within the Learning & Development field, there is significant misunderstanding of the skillset of a well-trained instructional designer. Additionally, many in the Learning & Development field have not partnered with the operations and sales departments in a strategic manner. I have seen department after department settle into the back seat and allow the other departments to drive the car alone. It is rare to get business objective information, such as sales goals, service standards, and financial goals, for use in the proper design of an instructional intervention or behavior change program. I have been told "we just don't need to know that because it is above and beyond our department." I couldn't disagree more.

However, I have not given up. Instead I have made my career path about diversifying my business experience and blending it with my advanced education and research of behavioral psychology to eventually find myself in the position to confidently link business performance to the true role of learning & development and the many skillsets found within.

Curriculum Testing

Now we will explore the most important types of decisions (tests) that must be made in most language programs: proficiency, placement, diagnostic, and achievement.

Making Decisions with Tests
The four different types of tests, proficiency, placement, diagnostic, and achievement are probably emphasized because they fit neatly with four of the fundamental types of decisions that must be made in language programs.

Teachers sometimes find themselves in the position of having to determine how much of a given language their students have learned and retained.
General proficiency is used to describe what students should have attained by the time they finish the program. It is a decision that must be made by the administrators, teachers, and contract negotiators involved.
For example, TOEFL is an overall English language proficiency test that is widely used to judge students for admissions decisions. The proficiency levels of students when they enter the program must also be measured.

Beyond contractual issues, entry and exit level proficiencies are crucial for understanding the overall boundaries of a program. What level of overall proficiency do the students have when they come to us? And what level will they have when they leave us? Answering these two fundamental questions will help planners in making many different types of curriculum decisions.

Checking at the beginning of the curriculum development process to see if program objectives are set at the appropriate level for the students is far more productive than waiting until after the program is firmly in place, at which point costly materials, equipment, and staff decisions have already been made.
However, such decisions must be made carefully because proficiency tests are not designed to measure specific types of language teaching and learning, and most definitely not the specific types of language teaching and learning that are taking place in a particular language center.

In short, proficiency decisions  involve tests that are general in nature (and not specific to any particular program) because proficiency decisions require general estimates of students' proficiency levels. Such decisions may be necessary in determining exit and entrance standards for a curriculum, in adjusting the level of goals and objectives to the true abilities of the students, or in making comparisons across programs. Despite the fact that proficiency decisions are general in nature, they are nevertheless very important in most language programs.

Also relatively general in purpose, placement decisions are necessary because of the desirability of grouping students of similar ability levels together in the same classes within a program. Some teachers feel that they can do better teaching when they can focus in each class on the problems and learning points appropriate to students at a particular level.
Placement tests are designed to facilitate the grouping of students according to their general level of ability. The purpose of a placement test is to show which students in a program have more of, or less of, a particular ability, knowledge, or skill.
The placement of students into levels may be based on something entirely different from what is taught in the levels of the program.

In short, placement decisions should be based on instruments that are either designed with a specific program in mind or, at least, seriously examined for their appropriateness to a specific program. The tests upon which placement decisions are based should either be specifically designed for a given program (and/or track within a program) or, at least, carefully examined and selected to reflect the goals and ability levels in the program. Thus a placement test will tend to apply only to a specific program and will be narrower in purpose than a proficiency test.

Students' achievement is the amount that has been learned. To make any decisions related to student achievement and how to improve it, planners must have some idea of the amount of language that each person is learning in a given period of time (with very specific reference to a particular program).
To help with such decisions, tests can be designed that are directly linked to the program goals and objectives. These achievement tests will typically be administered at the end of a course or program to determine how effectively students have mastered the desired objectives.

The information gained in this type of testing can also be put to good use in reexamining the needs analysis, in selecting or creating materials and teaching strategies, and in evaluating program effectiveness. Thus the development of systematic achievement tests is crucial to the evolution of a systematic curriculum.

In short, achievement decisions are central to any language curriculum. We are in the business of fostering achievement in the form of language learning. In fact, this book promotes the idea that the purpose of curriculum is to maximize the possibilities for students to achieve a high degree of language learning. The tests used to monitor such achievement must be very specific to the goals and objectives of a given program and must be flexible in the sense that they can readily be made to change in response to what is learned from them about the other elements of the curriculum. In other words, well-considered achievement decisions are based on tests from which a great deal can be learned about the program. These tests should, in turn, be flexible and responsive in the sense that their results can be used to effect changes and to continually assess those changes against the program realities.

The last category of decisions is concerned with diagnosing problems that students may have during the learning process. This type of decision is clearly related to achievement decisions, but here the concern is with obtaining detailed information about individual students' areas of strength and weakness.
The purpose is to help students and their teachers to focus their efforts where they are most needed and where they will be most effective. In this context, "areas of strength and weakness" will refer to examining the degree to which the specific instructional objectives of the program are part of what students know about the language or can do with it. While achievement decisions are usually centered on the degree to which these objectives have been met at the end of a program or course, diagnostic decisions are normally made along the way as the students are learning the language. As a result, diagnostic tests are typically administered at the beginning or in the middle of a course.

In short, diagnostic decisions are focused on the strengths and weaknesses of each individual vis-à-vis the instructional objectives  for purposes of correcting deficiencies "before it is too late." Hence, diagnostic decisions are aimed at fostering achievement by promoting strengths and eliminating weaknesses.

The definition for a criterion-referenced test (CRT) is:
A test which measures a student's performance according to a particular standard or criterion which has been agreed upon. The student must reach this level of performance to pass the test, and a student's score is therefore interpreted with reference to the criterion score, rather than to the scores of other students.

This is markedly different from the definition for a norm-referenced test (NRT) given in the same source:
a test which is designed to measure how the performance of a particular student or group of students compares with the performance of another student or group of students whose scores are given as the norm. A student's score is therefore interpreted with reference to the scores of other students or groups of students, rather than to an agreed criterion score.

The essential difference between these definitions is that the performance of each student on a CRT is compared to a particular standard called a criterion level (for example, if the acceptable percent of correct answers were set at 70 percent for passing, a student who answered 86 percent of the questions correctly would pass), whereas on an NRT a student's performance is compared to the performances of other students in whatever group has been designated as the norm (for example, regardless of the actual number of items correctly answered, if a student scored in the 84th percentile, he or she performed better than 84 out of 100 students in the group as a whole).
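The two interpretations in this paragraph can be restated directly as code. The function names are mine, and the 70 percent criterion simply echoes the example above:

```python
def crt_result(pct_correct, criterion=70):
    """CRT: compare the student's percent-correct score to a fixed
    criterion level; other students' scores are irrelevant."""
    return "pass" if pct_correct >= criterion else "fail"

def percentile_rank(score, group_scores):
    """NRT: the percent of the norm group scoring below this student.
    (Percentile conventions vary; this is one common simple form.)"""
    below = sum(1 for s in group_scores if s < score)
    return 100 * below / len(group_scores)
```

So a student answering 86 percent of the items correctly passes the CRT regardless of peers, while a student whose score exceeds those of 84 out of 100 norm-group members sits at the 84th percentile on the NRT.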

In administering a CRT, the principal interest is in how much of the material on the test is known by the students. Hence the focus is on the percent of material known, that is, the percent of the questions that the student answered correctly, in relation both to the material taught in the course and to a previously established criterion level for passing.

In administering an NRT, the concerns are entirely different. Here, the focus is on how each student's performance relates to the scores of all the other students, not on the actual number (or percent) of questions that the student answered correctly.

In short, CRTs are designed to examine the amount of material known by each individual student (usually in percent terms), while NRTs examine the relationship of a given student's performance to the scores of all other students (usually in percentile or other standardized-score terms).

The two types of tests also differ in:
(1) The kinds of things that they are used to measure,
(2) The purpose of the test,
(3) The distributions of scores that will result,
(4) The design of the test, and
(5) The students' knowledge of the test questions beforehand.
Each of these differences is explored in turn below.

Used to Measure
In general, NRTs are more suitable for measuring general abilities or proficiencies. Examples would include reading ability in Spanish or overall English language proficiency. CRTs, on the other hand, are better suited to giving precise information about individual performance on well-defined learning points.

Purpose of Testing
The purpose of an NRT must be to generate scores that spread the students out along a continuum of general abilities or proficiencies in such a way that differences among the individuals are reflected in the scores.

In contrast, the scores on CRTs are viewed in absolute terms, that is, a student's performance is interpreted in terms of the amount, or percent, of material known by that student. Since the purpose of a CRT is to assess the amount of knowledge or material known by each individual student, the focus is on individuals rather than on distributions of scores. Nevertheless, as I will explain next, the distributions of scores for the two families of tests can be quite different in interesting ways.

Distribution of Scores
For an NRT to be effective, some students should score very low, others very high, and the rest everywhere in between. Indeed, the way items for an NRT are generated, analyzed, selected, and refined will typically lead to a test that produces scores that fall into a normal distribution, or "bell curve." For a CRT, by contrast, it is perfectly logical and acceptable to have a very homogeneous distribution of scores, whether the test is given at the beginning or end of a period of instruction.
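The contrast in expected distributions can be sketched numerically. The two score sets below are invented purely to illustrate the difference in spread; a well-functioning NRT should show a large standard deviation, while an end-of-course CRT may bunch scores near the top.

```python
# Hypothetical end-of-course scores for ten students on each test type.
import statistics

nrt_scores = [35, 42, 50, 55, 60, 65, 70, 78, 85, 95]  # wide spread
crt_scores = [82, 85, 88, 88, 90, 91, 93, 95, 96, 98]  # homogeneous, high

for name, scores in [("NRT", nrt_scores), ("CRT", crt_scores)]:
    # The standard deviation summarizes how spread out the scores are.
    print(f"{name}: mean={statistics.mean(scores):.1f}, "
          f"sd={statistics.stdev(scores):.1f}")
```

With these invented data, the NRT scores spread across most of the scale while the CRT scores cluster within about fifteen points of one another, which is exactly the pattern the two test families are designed to produce.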

Test Design
An NRT is likely to be relatively long and to be made up of a wide variety of item types. It usually consists of a few subtests on rather general language skills, for example, reading and listening comprehension, grammar, writing, and the like. These subtests tend to be relatively long (30-50 items) and to cover a wide variety of item formats.

In comparison, CRTs are much more likely to be made up of numerous, but shorter, subtests. Each of the subtests will usually represent a different instructional objective for the given course, with one subtest for each objective. For example, if a course has 12 instructional objectives, the CRT associated with that course might have 12 subtests.

Students' Knowledge of Test Questions
Because of the general nature of what NRTs are testing and the usual wide variety of items, students rarely know in any detail what types of items to expect. The students might know what item formats they will encounter, for example, multiple-choice grammar items, but seldom will they be able to predict actual language points.

However, on a CRT, students should probably know exactly what language points will be tested, as well as what item types to expect. If the instructional objectives for a course are clearly stated and if those objectives are the focus of instruction, then the students should know what to expect on the test.


Test Qualities
Proficiency decisions
  Detail of information: Very general
  Focus: General skills prerequisite to program entry
  Purpose of decision: Compare an individual's overall ability with that of others
  Type of comparison: Comparison with other institutions
  When administered: Before entry or at the end of a program
  Interpretation of scores: Spread of scores
  Type of test: NRT

Placement decisions
  Detail of information: General
  Focus: Learning points drawn from the entire program
  Purpose of decision: Find each student's appropriate level
  Type of comparison: Comparisons within the program
  When administered: Beginning of a program
  Interpretation of scores: Spread of scores
  Type of test: NRT

Achievement decisions
  Detail of information: Specific
  Focus: Objectives of the course or program
  Purpose of decision: Determine the amount of learning with regard to the objectives
  Type of comparison: Comparison to course or program objectives
  When administered: End of courses
  Interpretation of scores: Degree to which objectives have been learned
  Type of test: CRT

Diagnostic decisions
  Detail of information: Very specific
  Focus: Objectives of the course or program
  Purpose of decision: Inform students and teachers of the objectives that still need work
  Type of comparison: Comparison to course or program objectives
  When administered: Beginning or middle of courses
  Interpretation of scores: Degree to which objectives have been learned
  Type of test: CRT

Many language tests are, or should be, situation specific. This is to say, a test can be very effective in one situation with one particular group of students and be virtually useless in another situation or with another group of students.

Other practical considerations include the initial and ongoing costs of the test and the quality of all of the materials provided. Is the test easy to administer? What about scoring? Is that reasonably easy, given the type of test questions involved? Is the interpretation of scores clearly explained, with guidelines for presenting the scores to the teachers and students?

Clearly, then, a number of factors must be considered even when adopting an already published test for a program. Ideally, the program would have a resident expert,
someone who can help everyone else to make the right decisions. If no such expert is available, it may be advisable to read up on the topic yourself.


A. General background information
1. Title
2. Author
3. Publisher and date of publication
4. Published reviews available

B. Theoretical orientation
1. Test family (norm-referenced or criterion-referenced)
2. Purpose of decision (proficiency, placement, achievement, or diagnosis)
3. Language methodology orientation (approach and syllabus)

C. Practical orientation
1.   Target population (age, level, nationality, language/dialect, educational background, and so forth)
2.   Skills tested (for instance, reading, writing, listening, speaking, structure, vocabulary, pronunciation)
3.   Number of subtests and separate scores
4.   Types of items, and whether they reflect appropriate techniques and exercises (receptive: true-false, multiple-choice, matching; productive: fill-in, short-response, essay, extended discourse task)

D. Test characteristics
1. Norms
a. Standardization sample
b. Type of standardized scores
2. Descriptive statistics (central tendency, dispersion, and item characteristics)
3. Reliability
a. Types of reliability procedures used
b. Degree of reliability for each procedure

4. Validity
a. Types of validity procedures used
b. Do you buy the above validity argument(s)?

5. Practicality
a. Cost of test booklets, cassette tapes, manual, answer sheets, scoring templates, scoring services, and any other necessary test components
b. Quality of the items listed immediately above (paper, printing, audio clarity, durability, and so forth)
c. Ease of administration (time required, proctor/examinee ratio, proctor qualifications, equipment necessary, availability and quality of directions for administration, and so forth)
d. Ease of scoring (method of scoring, amount of training necessary, time per test, score conversion information, and so forth)
e. Ease of interpretation (quality of guidelines for the interpretation of scores in terms of norms or other criteria)

Proficiency, placement, achievement, and diagnostic tests can be developed and fitted to the specific goals of the program and to the specific population studying in it. That might mean first developing achievement and diagnostic tests (which are based entirely on the needs of the students and the objectives of the specific program), while temporarily adopting previously published proficiency and placement tests. Later, a program-specific placement test could be developed so that the reasons for separating students into levels in the program are related to the things that the students can learn while in those levels. It is rarely necessary or even useful to develop program-specific proficiency tests because of their interprogrammatic nature.
Naturally, all of these decisions are up to the teachers, administrators, and curriculum developers in the program in question.

Adapting a test to a specific situation will probably involve some variant of the following strategy:
  1. Administer the test to the students in the program.
  2. Select those items that appear to be doing a good job of spreading out the students for an NRT, or a good job of measuring the learning of the objectives with that population for a CRT.
  3. Create a shorter, more efficient, revised version of the test that fits the ability levels of the specific population of students.
  4. Create new items that function like those that were working well in order to have a test of sufficient length.
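One common way to operationalize step 2 is classical item analysis: for an NRT, keep items whose item facility (proportion correct) falls in a middle range, so that they help spread students out; for a CRT, keep items that substantially more students answer correctly after instruction than before (a difference index). The sketch below assumes this approach; the data, cut-off values, and function names are all hypothetical illustrations.

```python
# Sketch of classical item analysis for step 2 of the adaptation strategy.
# Each response list holds 1 (item answered correctly) or 0 (incorrectly),
# one entry per student. Cut-offs here are hypothetical, not standards.

def item_facility(responses):
    """Proportion of students answering the item correctly."""
    return sum(responses) / len(responses)

def nrt_keep(responses, low=0.30, high=0.70):
    """For an NRT, mid-range facility items help spread students out;
    items nearly everyone (or no one) gets right separate nobody."""
    return low <= item_facility(responses) <= high

def crt_keep(pre_responses, post_responses, min_gain=0.30):
    """For a CRT, keep items whose facility rises after instruction
    (difference index = post facility - pre facility)."""
    return item_facility(post_responses) - item_facility(pre_responses) >= min_gain

# Facility 0.5 sits in the middle range, so the item is kept for an NRT.
print(nrt_keep([1, 0, 1, 0, 1, 0, 1, 1, 0, 0]))    # -> True

# Facility rises from 0.2 before instruction to 0.8 after, so the item
# is sensitive to the objective and is kept for a CRT.
print(crt_keep([0, 0, 1, 0, 0], [1, 1, 1, 0, 1]))  # -> True
```

A fuller analysis would also examine item discrimination for the NRT case, but the facility and difference-index checks above capture the core of "spreading out the students" versus "measuring the learning of the objectives."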

A checklist for successful testing:

A. Purposes of test
1. Clearly defined (theoretical and practical orientations)
2. Understood and agreed upon by staff

B. Test itself

C. Physical needs arranged
1. Adequate and quiet space
2. Enough time in that space for some flexibility
3. Clear scheduling

D. Pre-administration arrangements
1. Students properly notified
2. Students signed up for test
3. Students given precise information (where and when the test will be, as well as what they should do to prepare and what they should bring with them, especially identification if required)

E. Administration
  1. Adequate materials in hand (test booklets, answer sheets, cassette tapes, pencils, scoring templates, and so forth) plus extras
  2. All necessary equipment in hand and tested (cassette players, microphones, public address system, videotape players, blackboard, chalk, and so forth) with backups where appropriate
  3. Proctors trained in their duties
  4. All necessary information distributed to proctors (test directions, answers to obvious questions, schedule of who is to be where and when, and so forth)

F. Scoring
  1. Adequate space for all scoring to take place
  2. Clear scheduling of scoring and notification of results
  3. Sufficient qualified staff for all scoring activities
  4. Staff trained in all scoring procedures

G. Interpretation
1. Clearly defined uses for results
2. Provision for helping teachers interpret scores and explain them to students
3. A well-defined place for the results in the overall curriculum

H. Record keeping
1. All necessary resources for keeping track of scores
2. Ready access to the records for administrators and staff
3. Provision for eventual systematic termination of records

I. Ongoing research
1.   Results used to full advantage for research
2.   Results incorporated into overall program evaluation plan