
Dear Habermas: A Justice Site



Basic Statistics

Mirror Sites:
CSUDH - Habermas - UWP - Archives

California State University, Dominguez Hills
University of Wisconsin, Parkside
Soka University Japan - Transcend Art and Peace
Created: January 17, 2003
Latest Update: January 22, 2003

jeannecurran@habermas.org
takata@uwp.edu

Teaching Modules: Interval Measurement

Site Copyright: Jeanne Curran and Susan R. Takata and Individual Authors, January 2003.
"Fair use" encouraged.

On Friday, January 17, 2003, Scott Christiansen wrote:
Subject: interval measurement

I was hoping you could shed some light for me on the topic of interval measurements.

For example: suppose I were training company employees in a new and more efficient online accounting system, and I then tested everyone at the end of the training course by giving them five different sets of budget data to enter into the system.

How could I use this information to make an interval measurement?

Thank you for your assistance and time.

Scott Christiansen

On Friday, January 17, 2003, jeanne responded:

Hi, Scott. I don't recognize you as a student in either Susan's classes or mine, so we welcome you as a community learner.

This is a good question, especially because it's arisen from something you actually want to do. That helps our students see where statistics fits into reality out there. Let me see if I can help.

First, you want to use interval measurement. What does interval measurement really mean? Let's try to situate it among the kinds of measurement we generally use in the social sciences.

  • nominal data or measurement:

    Nominal refers to data that you can break out into categories, but there is no hierarchical relationship between the categories, just names. Consider Accounting Staff, Sales Staff, Supervisors, and Clerical Staff, for example. You could count how many of each category you had in your sample, but you couldn't say much about your sample except what proportion belonged to each category.

    You could draw a histogram and show the distribution of your sample over categories visually, but that wouldn't seem to help with what you want to do. Still, you might want to consider collecting this nominal data anyway, because it might help in evaluating whether you are successfully communicating the details of your program to each of those four groups, in the respective ways they might need to work with the program.

  • ordinal data or measurement:

    Ordinal data refers to data that you can break into categories that form a scale: high - medium - low, for example. With ordinal data you can tell which group scored highest or did best on your test; but you couldn't say by how much, just that one group of trainees did better than the others. But that still might not provide the evaluative information you're looking for.

  • interval data or measurement:

    Interval data refers to data that not only ranks on a scale, but also lets you measure how much higher or lower one score is than another. Interval data are data that can be measured along a scale with equal intervals: one mistaken entry, two mistaken entries, three mistaken entries, for example. With this data you can tell whether each trainee did better than the others, and by how much. And if your scale of measurement starts from zero, no mistaken entries, you can even use it as ratio data to tell whether one trainee did twice as well or three times as well as another.

The actual measurement you would want to use here is the actual number of mistaken entries. You would collect that measurement for each of the five different sets of budget data. Then you could analyze the data by comparing how each trainee did on the five different sets of accounting data. That would measure the trainee's consistency over time and data sets.

You could also measure how well each group of trainees did on the different sets of accounting data.

You could generate a total score for each trainee over the five sets of data, and again compare trainees and groups.
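If it helps to see the bookkeeping, here's a minimal sketch in Python of the scoring described above. The trainee names and error counts are made up for illustration:

```python
# Hypothetical error counts: each trainee has five numbers,
# one count of mistaken entries per budget data set.
errors = {
    "Alice": [3, 2, 1, 1, 0],
    "Bob":   [5, 4, 4, 3, 3],
    "Carla": [2, 2, 2, 1, 1],
}

# Total errors per trainee across the five data sets.
totals = {name: sum(counts) for name, counts in errors.items()}

# Consistency over time: how much did the error count drop
# from the first data set to the last?
improvement = {name: counts[0] - counts[-1] for name, counts in errors.items()}

for name in errors:
    print(name, totals[name], improvement[name])
```

The same dictionaries could be summed over groups of trainees to compare group performance across the five data sets.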

You could put all this into a fifteen-page report and look terribly impressive, or maybe even more impressive if you did it with PowerPoint slides.

Hope this helps. Good luck, jeanne

On Saturday, January 18, 2003, Scott responded:

Jeanne-

Thank you for your quick response back to me. However, I do have a question regarding your response.

Under interval measurement you said, "...and if your scale of measurement starts from zero...". It is my understanding that interval measurement has no absolute zero.

Would you please offer me further assistance in clearing this up? Maybe it is me that is confused, but I thought interval had no absolute zero and ratio did have an absolute zero.

Thank you for your help!

Scott

On Saturday, January 18, 2003, jeanne responded:

You're right, Scott. That's the difference between an interval and a ratio scale. I was just trying to avoid confusing those who didn't recall their intro statistics so well. If the range of possible incorrect or correct entries starts with zero, then you can satisfy the requirements of a ratio scale and say that those who get ten right as opposed to five right have done twice as well. But if your range of possible scores goes from 100 to 800, as it used to, and maybe still does, on the Law School Admission Test, you can't use the ratio interpretation.
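A tiny Python sketch of the distinction, with made-up scores. The point is that "twice as well" statements survive only on a scale with a true zero:

```python
# Error-free entries: the scale starts at a true zero,
# so the ratio of two scores is meaningful.
correct_a, correct_b = 10, 5
print(correct_a / correct_b)  # 2.0 -- "twice as well" makes sense

# Test-style scores on a 100-800 scale: no true zero.
lsat_a, lsat_b = 400, 200
# 400/200 == 2, but the "twice as good" reading is an artifact:
# shift the whole scale by its 100-point floor and the ratio changes.
print((lsat_a - 100) / (lsat_b - 100))  # 3.0 -- ratio depends on the arbitrary zero
```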

In your evaluation of how well your trainees have learned to enter data for the new accounting program, you could easily have a zero. Just remember that it matters little whether you call your data nominal, ordinal, interval, or ratio. It matters that you understand how to use the different measures to meet your needs in real life.

Knowing that your data is ordinal may tell you how to pick the appropriate statistical "test" to tell whether your results have statistical significance, that is, whether you could have gotten those results by chance when there really is no difference at all. In your example that would make no sense, because you are testing your entire population. You don't need to make any statistical inferences about a much larger population, so you don't need to "test" to see if your results could have been obtained randomly, without your program having had any effect at all.

Hope this answers your question. jeanne

On Tuesday, January 21, 2003, Scott Christiansen wrote:

Subject: significant question

Hi Jeanne-

Thank you for your assistance in clearing up my question regarding interval measurements. I do, however, have another question for you. A co-worker mentioned "statistically significant." No one seems to know what this means, though. Is this something that you are aware of?

Thanks,
Scott

On Wednesday, January 22, 2003, jeanne responded:

Hi, Scott. I liked the "significant question" subject line. And, yes, I've heard of "statistically significant." It's kind of fun talking to you and your co-workers. You sound just like I imagine some of our kids will sound one day, especially if we don't start making sense to them.

Actually, I snuck the "statistically significant" piece into the last memo, but I didn't want to scare anyone by pointing it out. I said:

"Knowing that your data is ordinal may tell you how to pick the appropriate statistical "test" to tell whether your results have statistical significance, that is, whether you could have gotten those results by chance when there really is no difference at all. In your example that would make no sense, because you are testing your entire population. You don't need to make any statistical inferences about a much larger population, so you don't need to "test" to see if your results could have been obtained randomly, without your program having had any effect at all."

See it there in bright blue? I'm sure the reason I snuck it in was because I teach statistics, and I just couldn't bear to leave out the single most important term in that course. So let's see if I can explain it in words that will make sense to you.

  1. First, what does "significant" mean?

    Something is significant when it is important, when it has meaning that we can use in understanding other things, like the data we've collected. If your evaluation study showed that three people out of ten can now effectively use your new accounting system, would that be significant? There is no real "right" or "wrong" answer to that question. If the three people are the accountant supervisors who will then teach it to their staff, I reckon that would be an important fact. So you could say it was significant. But if this were a training session where your program was supposed to have done the training for the staff, I reckon the accountant supervisors wouldn't be so impressed, since they'd still have seven staff members they'd have to teach themselves. So maybe then your results are not significant. Well, they might be significant to the supervisors in deciding whether they want to buy your new program. So then you could conclude that your results were significant.

    You see how it's kind of a shell game? What your results mean depends a lot on what you want them to mean and how you present them. If you want the supervisors to buy your program, I'd recommend that you not graphically display and show off those results that say you only succeeded in teaching three out of ten staffers. Statistics is as much common sense as it is numbers. So just use your head. And learn good presentation skills. Always be prepared to "explain" any results that don't give the impression you're trying to make. Describe the ten staffers as staffers who have "computer fear," then show that after just one session with your program, you got most of them to increase their comfort level and three to actually master input. That's called "salesmanship," not "statistics."

  2. Second, why don't we just say "significant"? Why "statistically significant"?

    Now I'd like to go back to my question: why isn't "significant" enough? Why do we say "statistically significant"? Yeah, it really does mean something; it's not just fancy professional jargon. Most of the time when we try to provide "scientific" measures of things in the social sciences, we take samples. We take a "sample" of AIDS patients, a sample of juvenile delinquents, a sample of soldiers, a sample of Republicans, a sample of staffers. But we actually want to apply our findings to all AIDS patients, to all juvenile delinquents, etc., or to AIDS patients or juvenile delinquents in general. When we don't have access to everyone we want to apply the results to, we use mathematics to test the extent to which the results from our sample should hold for all the people we want to apply the conclusion to.

    There are statistical "tests" that help us do this. No one, I hope, really calculates this stuff by hand anymore. And even if you are forced to calculate, you're very likely to end up asking someone like me a few years later if I've heard of "statistically significant." What you really do, if you want to know whether your data are statistically significant, is go to someone in your analysis department, or a friend who's still in school, and ask them to run the Statistical Package for the Social Sciences (SPSS) or one of the similar programs for you. For them to do that, you need to give them a table of your data, with definitions for how you measured the data, and explain to them whether the measurement is nominal, ordinal, or interval. For nominal and ordinal data, you can have them run a Chi-Square. And for interval data, you can ask them to run a t-test.
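For those without SPSS handy, here's a sketch of what the t-test arithmetic looks like computed by hand in Python. The two groups and their error counts are invented, and a real analysis would also want the p-value that SPSS (or a library routine like scipy.stats.ttest_ind) reports alongside the statistic:

```python
import math
import statistics as st

# Hypothetical error counts per trainee for two training groups.
group_a = [0, 1, 1, 2, 2]   # group A made fewer mistakes
group_b = [3, 3, 4, 5, 5]   # group B made more

# Pooled two-sample t statistic (equal-variance form).
na, nb = len(group_a), len(group_b)
ma, mb = st.mean(group_a), st.mean(group_b)
# st.variance is the sample variance (n - 1 in the denominator).
sp2 = ((na - 1) * st.variance(group_a) + (nb - 1) * st.variance(group_b)) / (na + nb - 2)
t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
print(round(t, 2))  # a large negative t: group A's mean error count is much lower
```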

    Now, that sounds terribly important or significant. Actually in most social science settings, these tests are largely meaningless, but they sound good and can often get your work published. So you should all know how to get one if you want to be impressive. Start making contacts now, so you won't have to pull all-nighters when you really want to say you've got something statistically significant.

  3. What does the Chi-Square Test tell you?

    Chi-Square tells you the probability that you could have gotten the results you got with your sample by simple random chance. That is, the test tells you that you really found "something," but it doesn't tell you much more. If the Chi-Square Test result is "significant" at the alpha=0.05 level of significance, that means there is only a 5% chance that you could have gotten those test results from a group drawn randomly from all those people from whom you could conceivably have gathered test scores, i.e., who were in your population, when there is really no difference at all.
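Here is a small Python sketch of the Chi-Square arithmetic on an invented 2x2 table, compared against the usual alpha = 0.05 cutoff of 3.841 for one degree of freedom:

```python
# Made-up 2x2 table: rows = trained with the new program vs. the old way,
# columns = passed vs. failed the data-entry test.
observed = [[18, 2],
            [10, 10]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected counts under "no difference", then the chi-square statistic:
# sum over cells of (observed - expected)^2 / expected.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# For a 2x2 table (1 degree of freedom), the alpha = 0.05 critical value is 3.841.
print(round(chi2, 2), chi2 > 3.841)
```

A statistic above the cutoff is what SPSS would report as "significant at the 0.05 level."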

    . . . . More tomorrow. Sorry. Gotta go. jeanne