Dot Physics

Thoughts on school science fairs

Last Friday I volunteered as a science fair judge. It took half a day, but I did get free food and tons of things to blog about. There are so many things to say about science fairs that I don’t really know where to begin, and I might not even address all the issues. Here is what I would like to talk about (in no particular order):

  • What is the purpose of a science fair?
  • How do you win a science fair? Tips.
  • What about judging? Are the normal methods reliable?
  • Data Analysis tips for middle schoolers
  • Creativity vs. the Internet vs. parents.
  • Social Science Fair posters? Er?

Why science fairs?

If I were in charge of science fairs (clearly, I am not), the primary goal would be to promote science and to give students a chance to do some real science. I suppose that is the stated goal of actual science fairs. However, I really don’t think science fairs promote a sound understanding of the nature of science.

My two biggest problems are the rigid format of “The Scientific Method” and, as part of that, the use and misuse of the word hypothesis. To me, science is about building models. Models can be physical (like a ship model), numerical, or conceptual. A hypothesis is what a model predicts. If you go around and look at how “hypothesis” is used on these posters, it is clear that the science fair is not helping.

What would I do differently? I really don’t know. Make the science fair less structured, maybe. Not forcing everyone to follow the same format might help.


What about judging?

At the science fair I judged, each judge was given a set of judging sheets with poster numbers, and it seemed that each poster was judged by two judges. I couldn’t tell whether the other judge assigned to a given poster also saw all of the same posters I did. However, you may be able to see the problem. Suppose that I am a hard judge. I try to be fair and accurately place each poster on what I think of as a universal scale. Another judge may score much higher because he or she doesn’t want to discourage young scientists. So the problem is that our two scores don’t mean the same thing. Now suppose another poster was judged by two easy judges and no hard judges. Their project may not be that great, but it could still get a high score.
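
One standard fix for this calibration problem is to standardize each judge’s scores before combining them, so a hard judge’s numbers and an easy judge’s numbers land on the same scale. Here is a minimal sketch of the idea in Python; the scores are made up for illustration, and nothing like this was done at the actual fair:

```python
from statistics import mean, pstdev

def standardize(scores):
    """Convert one judge's raw scores to z-scores so that a hard
    judge's 70 and an easy judge's 90 can mean the same thing."""
    m, s = mean(scores), pstdev(scores)
    if s == 0:
        return [0.0 for _ in scores]
    return [(x - m) / s for x in scores]

# Hypothetical raw scores from a hard judge and an easy judge
# over the same four posters.
hard_judge = [60, 70, 65, 75]
easy_judge = [85, 95, 90, 100]

# After standardizing, both judges produce identical z-scores,
# so their numbers can be averaged fairly.
print(standardize(hard_judge))
print(standardize(easy_judge))
```

This only works if each judge sees enough posters for their personal scale to show up in the data, which is exactly why the two-judges-per-poster scheme is shaky.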

Another problem is that judges look for different things. I like projects that may not have turned out perfectly, but were clearly unique and had data that might be heading towards answering a question. Other judges might like a good presentation. Perhaps this focus could be settled by having a judging sheet, but for me the judging sheet had too many different criteria to evaluate.

In the end, if you are a student with a science fair project and you don’t win, don’t take it too seriously. Sometimes the winner is somewhat arbitrary (if you become a scientist, it will be good practice for being rejected – think: rejected papers, rejected grants). I even wonder about my own reliability. Would I judge the same poster the same way if the order were different? Actually, the judging of science fairs could itself be an interesting project. Take a normal science fair and have way too many judges (but only some count). Do they all agree? What if the judges were “trained” – would that make a difference?

Data Analysis Tips for Middle Schoolers

A lot of these projects need some serious help. Here is an example of the kind of thing you will see (I am making this up). Suppose I hit golf balls of two different brands to see which goes farther. Here is the data I collect.


“So, my hypothesis was correct. Clearly Brand B is better.” I see lots of stuff like this. Students often go straight for comparing an average without regard for the spread in the data. Also, 4 hits is probably not enough. At this point, I don’t really have more to add. I am not sure what level of data analysis is appropriate for middle schoolers. Let me think about this and get back to it.
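To see why the spread matters, here is a quick sketch with made-up numbers: compute both the mean and the standard deviation before declaring a winner. The distances below are hypothetical, like the golf ball example itself.

```python
from statistics import mean, stdev

# Hypothetical distances (yards) for four hits of each brand.
# Brand B's average is higher, but the spreads overlap heavily.
brand_a = [200, 240, 210, 230]
brand_b = [205, 245, 215, 235]

for name, hits in [("Brand A", brand_a), ("Brand B", brand_b)]:
    print(f"{name}: mean = {mean(hits):.0f}, spread (std dev) = {stdev(hits):.1f}")

# The 5-yard difference between the means is far smaller than the
# ~18-yard spread within each brand, so four hits can't tell the
# brands apart.
```

When the difference between the averages is small compared to the spread within each group, “my hypothesis was correct” is not a conclusion the data supports.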

How do you win?

This is a tough question. It obviously depends on the judges and the scoring sheets that are used. Here are a few tips.

  • I know everyone thinks real scientists are messy, but you need to have a nice and neat poster. Have everything clearly labeled and organized. I hate to say it, but color and font probably matter also. I would go with something that looks nice but nothing too flashy. Just a guess there.
  • Follow the rules and the guidelines. If the guidelines say to have a notebook available, have one. If they say to clearly display your references, look up something more than Wikipedia. Actually, it wouldn’t be a bad idea to have at least one non-internet reference. During my judging, I was secretly hoping to see someone reference (
  • Be creative. Nothing is more awesome than seeing a student with a great (and doable) idea. However, don’t get crazy creative. Just normal creative. Also, try stuff that you actually think of yourself. I think it is ok if everything doesn’t work out perfectly (it is science, after all).
  • Data Analysis. Do it.
  • Don’t change a whole bunch of variables at one time. Ideally, just change one thing.

Note to parents

I know you want your child to win and I know you want to help. However, let your child make mistakes and try things even if you know they won’t work. That is how they learn about science. Hopefully, their grade won’t be affected by their performance – but who knows.


  1. #1 Dave
    February 11, 2009

    Oh, you’ve really opened up a can of worms with this one. 😉

    I’ve judged a couple of science fairs, so I feel qualified to make some comments (not that I wouldn’t make comments anyway). 🙂

    As for the purpose of a science fair, it’s supposed to be about teaching kids the scientific method. But, I’m not sure it always succeeds. I’m afraid that, all too often, it’s interpreted as a “Do this in this manner.” type of thing.

    As for hypotheses, I think most science fair projects get the cart before the horse here. All too often, it seems that the students pick a project they’re somewhat interested in, and then try to come up with a hypothesis to test. I sometimes wonder if it wouldn’t be better for a hypothesis to be assigned (perhaps based on a student’s interest area, just to make it a bit more interesting for them?), and then allow the student to come up with a way to test the hypothesis (e.g., making a model, gathering data, analyzing the data, etc.). And, after all, most scientists don’t get to choose exactly what they’re going to work on, at least not in the commercial world.

    As for the judging, the two judge approach seems to be common. That’s probably not a bad idea, since it spreads the judges around a bit, which should average out the results somewhat. I wonder if there’s any technique which discounts judge scoring sheets which are predominantly lower (e.g., take the best score sheet for each project?).

    One of the problems with judging science fairs, though, is that not all of the judges are scientists. Unfortunately, based on my somewhat limited experience, most of the judges are teachers or (non-science) parents, while few are engineers (I’m an engineer). Thus, the scoring sheets try to make up for this by specifying specific criteria to be used when judging a project.

    There are a few things that I looked for while judging a science fair. One was originality. Of course, it’s pretty easy to pick up a project from the internet now, so it’s hard for a student to be original (or, to even determine if a student is being original). On the other hand, when you see a half-dozen versions of the same project, well, there goes most of the originality.

    One of the things I like to do, though, is, in addition to offering critique on why I scored the project as I did, to offer some encouragement, and to especially call out any creativity. One of my favourite quotes is:

     "Creativity, whether in hieroglyphics, algorithm, or suggested
     thought, is still creativity and should be recognized as such,
     lest the child be slapped for learning."

    Another thing is the level of enthusiasm expressed by the student during the interview phase. It’s pretty easy to tell which students were genuinely interested in a project, and which were just doing it because it was required. Some of this even comes through on the write-up, since it’s a lot easier to write about a topic you’re enthusiastic about.

    Along with enthusiasm, I really liked it (and gave proportionally higher scores) when a student added a section about possible future work. This, at least to me, showed that the student was really interested in the project, and was thinking of other things to test, or other ways of improving the experimental setup, or of finding/eliminating possible errors in the existing experiment.

    Yet another source of consternation is how much of the project was done by the student, versus how much was done by the parents. Unfortunately, that’s sometimes hard to determine, but it usually comes out in the interview phase. Don’t get me wrong. I’m not totally against parent participation. Actually, I think it’s a good thing. But, the student needs to be the “principal investigator”, rather than the parent (and, by this, I mean that the student needs to be the one designing the experiment, and giving direction to the parent as to what to do, rather than the other way around).

    As for data analysis, I’m not sure that detailed data analysis should be required. Obviously, more than one data point is needed. What I looked for more than detailed data analysis, though, is discussion of the errors in the data (which few students seemed to grasp, but, for those that did, I tended to score them higher). After all, it’s one thing to collect 10 readings and average them. It’s something else (and, quite wonderful) to collect the 10 readings, and then discuss why reading 8 was so far away from the rest of the readings and what potential reasons for that might be [1].

    [1] I like to refer to this as one of those “That’s strange…” moments, which usually lead to important discoveries, or, at least, an improvement in the experimental technique.
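
    That “reading 8” idea can be sketched in a few lines of Python: flag any reading more than k standard deviations from the mean as a candidate for discussion, rather than silently averaging it away. The readings and threshold below are made up for illustration, not from any actual project.

```python
from statistics import mean, stdev

def flag_outliers(readings, k=2.0):
    """Return indices of readings more than k standard deviations
    from the mean -- the "that's strange..." candidates worth
    discussing, not silently averaging away."""
    m, s = mean(readings), stdev(readings)
    return [i for i, x in enumerate(readings) if abs(x - m) > k * s]

# Ten hypothetical readings; reading 8 (index 7) is far from the rest.
readings = [9.8, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1, 14.5, 10.0, 9.9]
print(flag_outliers(readings))  # → [7]
```

    Flagging a reading is the start of the discussion, not the end: the interesting part is the student’s explanation of why that reading might differ.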

    As for the neatness of the presentation, I try not to be overly influenced by glitzy presentations (nor, for that matter, by slightly sloppy presentations, as long as all of the elements are there). Maybe the fact that I’m an engineer, and slightly sloppy, too, factors in there. I’d much rather see the data, in whatever form the student can present it, rather than seeing too much time/effort spent on prettying up the presentation to the detriment of the experiment/data/analysis.

    I’m definitely a stickler for following the rules. Most science fairs are quite strict about the type of experiment and the danger level permitted. Any violation of those rules is an automatic disqualification before the judging even starts. But, once the judging starts, any missing materials cause me to mark a severely lower score.

    As for variables, there are dependent and independent variables. The student should know which to change, and which to hold constant for any particular run. It’s ok (and I rather like it) when they offer other possible experiments concerning changing some of the other variables than what they tested (e.g., for future experiments).

    I do like to see references and acknowledgments. I don’t count off for using a few wikipedia references [2], although I expect to see other, more authoritative references, too. And, I like to see acknowledgments of anyone who may have helped and their role.

    [2] Wikipedia, in my experience, tends to be reasonably accurate, but, as with any source, there can be errors, so it’s always good to cross check the facts with another source. Additionally, since wikipedia is not a “primary” reference, it can be (unintentionally) tainted (in addition to the intentional tainting that some articles experience), which makes cross checking it more difficult. Thus, I prefer that it be used more as a pointer to the authoritative references, rather than being used as the (only?) reference.

    That should be enough to get a discussion going. Feel free to disagree (or agree?)
    with any of my points. 🙂


  2. #2 Rhett
    February 11, 2009


    Thanks for the insightful comment. I agree that it is important (and a great opportunity) to give some useful feedback and encouragement to the students, particularly the creative ones. The problem is still that even if I give such a student high scores, other judges could easily score the same project lower.

  3. #3 Stephanie Chasteen
    March 25, 2009

    One way that the scoring problem was handled well at our science fair years back: in a set of projects (say, 15 projects for middle school physics), there were 4 judges and each visited 6 projects, for example. They arranged it so every project was visited by at least 3 judges (I think), and each judge was responsible for scoring a subset of the projects. We would then get together and come to a consensus on the winner. Even if we hadn’t each seen the projects being discussed, others had seen projects that we had and could argue whether one was better than our top-rated project. We could also talk to each other and determine whether there was one project we should make sure to visit while the kids were at their posters. It worked pretty well: lots of work for the judges, but fair judging.
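
    A rotation like that can be sketched as a simple round-robin assignment. This is a hypothetical reconstruction of the idea, not the actual scheme used at that fair; the numbers are from the example above.

```python
def assign_judges(n_projects, judges, visits_per_project):
    """Round-robin assignment: every project is seen by the same
    number of judges, and the load is spread evenly across judges."""
    assignments = {p: [] for p in range(n_projects)}
    j = 0
    for p in range(n_projects):
        for _ in range(visits_per_project):
            assignments[p].append(judges[j % len(judges)])
            j += 1
    return assignments

# Hypothetical: 15 posters, 4 judges, 3 visits per poster.
schedule = assign_judges(15, ["A", "B", "C", "D"], 3)
# Every poster gets 3 distinct judges, and no judge sees more
# than ceil(15 * 3 / 4) = 12 posters.
```

    As long as the number of visits per project is smaller than the number of judges, consecutive slots in the rotation land on distinct judges, so no poster sees the same judge twice.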

    I’ll never forget the girl who was too nervous to eat breakfast and fainted right there while we started asking her about her project. Heavens.

    As for data analysis for middle schoolers, I know that they are capable of understanding spread in data. Dan Schwartz has done some great work on teaching variance to middle school students by having them invent the idea of variance to understand some data with spread in it. You can see some of his work on my post here — — scroll down to the part about the pitching machine and the green people and the blue people. It’s really genius stuff.
