Science Fair for Mere Mortals

After judging the science fair last week, I would like to revisit my tips for you, the science fair participant.

Warning number 1

Some of the things I say here might go against what your teacher has told you. I am not sure what you should do in this case. Your teacher gives you a grade and I am just some dude on the internet. Proceed at your own risk.

Oh, and maybe you are a teacher. I think it is great that you are seeking more tips for your students. However, note that I have not read any science fair rules. I am merely thinking about science fair projects from a science viewpoint. There, I feel better.

A note about judging

Maybe you really, really want to win this science fair. I applaud your effort. However, there is something you must know. Science fair judging can be (though it is not always) a little bit arbitrary. Sometimes one project gets a higher score than another, just because. It is not personal, it is just arbitrary.

Really, it is difficult to blame this on anyone. Science fairs can just be logistically complicated. It can be difficult to find judges, so sometimes a judge is just some random, responsible-seeming adult (or college student). These judges mean well, but it isn't too uncommon for them to not really understand the basic ideas of science. There can be other problems with the judging. If there are not enough judges, every project may be judged 3 times, but all competing projects might not be judged by the same people. Suppose Judge A thinks a great project is a 7/10 and saves a 10/10 for something above and beyond the ordinary, while Judge B thinks a project that meets the basic requirements deserves a 10/10. Sometimes there are no strong guidelines for scoring, so this can happen.

Now on to the tips for students.

Not Enough Data

This is probably the most common problem I see with science fair projects. All too often students will just collect the data once (or maybe at most 3 runs). Why is this bad? How do you know you even have meaningful data? One run could just be a fluke. Also, it is really important to have multiple trials so that you can see how the data varies. This is extremely important when you want to compare two sets (which I will talk about below). But for now, let me leave it at this - collect more data. Here is a sample graph (that I made) that does not have enough data.


For this made-up example, a 'student' wanted to see which popcorn popped the best. The student popped three different brands of microwave popcorn and counted the number of unpopped kernels. The problem: every bag of popcorn (even from the same brand) is different. You need to repeat this experiment several times to see how this number changes. How many times should you do it? Generally, the more the better. I am going to say that if you could do 8 runs, that would be awesome.
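If you like to tinker, the point about variation is easy to see with a few lines of code. Here is a minimal sketch (with made-up unpopped-kernel counts; none of this is real data) showing how eight runs per brand give you both an average and a spread - and how a single run could easily mislead you:

```python
# Why repeated runs matter: the unpopped-kernel count varies from bag
# to bag, so one run tells you very little about a brand's typical behavior.

def summarize(counts):
    """Return the mean and the spread (max - min) of a list of counts."""
    mean = sum(counts) / len(counts)
    spread = max(counts) - min(counts)
    return mean, spread

# Eight runs per brand (hypothetical data, invented for this example).
brand_a = [12, 18, 9, 22, 15, 11, 19, 14]
brand_b = [16, 13, 20, 12, 17, 15, 11, 18]

for name, counts in [("Brand A", brand_a), ("Brand B", brand_b)]:
    mean, spread = summarize(counts)
    print(f"{name}: mean {mean:.2f} unpopped, spread {spread}")
```

Notice that the two made-up brands have nearly the same mean, yet single bags range from 9 to 22 unpopped kernels. If you had popped just one bag of each, you could have "concluded" almost anything.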

No Control Variable

I am sure teachers focus on this, but maybe it is a difficult idea. Let me explain this with another example.

Example: Do video games affect heart rate?

For this experiment, suppose the student measured the heart rate for some people before and then during playing a video game. Since the student was just happy to get some people to participate, the student let the subjects pick which game to play on a Nintendo DS. Here is the data (I made this up):


Looks great, right? You are sure to say, "hey - see the point above, not enough data!" Yes, that is true. However, there is another problem. What if Juke-daddy (that is not a real person, I made that up) picked Nintendo Solitaire while Jane played Grand Theft Auto 8 - the super hard version? That really wouldn't be the same kind of game, would it?

Also, what if someone's heart rate increases? How do you know it is because of the video game at all? Maybe someone's heart rate naturally increases during that time of the day? Does it increase if they are passively watching TV? Maybe it was something they ate?

My Hypothesis is....

Maybe no one else cares about this - but it drives me batty. Here is an example of my favorite kind:

My hypothesis is that I can use Coke to brush my teeth.

In the end, my hypothesis was correct. I brushed my teeth with Coke.

So, what is wrong with that? What is a hypothesis anyway? Here, not everyone will agree. I will just give you my take on it. Basically, science is about making models. A hypothesis is a prediction the model makes about real life. It is not a "guess".

You can say that you want to build a motor, but this isn't really any type of hypothesis. I know that building things can count as a science fair project, but that (at least by itself) is not an example of science. How about an example of a better hypothesis?

Example project: Suppose I want to see how long different brands of batteries last. My model is that the more expensive the battery, the better (and longer lasting) it will be.

Hypothesis: the more expensive battery will last longer.

I admit that this hypothesis stuff can be confusing. This leads to my next tip.

Talk to people

People are great to bounce ideas off of. If at all possible, talk to someone experienced in science. Your parents probably have good ideas about what you can and cannot test and what kind of hypothesis you should look at.

Looking at data

Depending on your age group, this can be a tough tip. The real question is: how do you know if two things are different? Of course, if you only take one set of data for each trial, you will never know. Here is a sample. Suppose I want to determine if plants grow better with water or Coke (I keep coming back to Coke because that is the kind of stuff I see). Maybe this student does a great job and looks at 5 plants growing with water for 3 weeks and 5 plants growing with Coke. Also, just because, the student has 5 plants growing with plant food. The student then measures the growth of all these plants and gets something like this:


What kind of conclusion could you make from this data? Which is better, Coke or water? Well, it is not quite clear. At the very lowest level, you could calculate the average growth for each group. You could then compare the averages. However, the average doesn't always tell you everything. Maybe the appropriate statistical analysis is too complicated for these projects. But, there is a need to come up with some way of comparing data. I am not going into the details here (because this is already longer than I wanted), but I proposed a data analysis method for science fairs (I call it the box-method). I really think science fair students need something like this.
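To make the idea concrete, here is a hedged sketch (with made-up growth numbers, and a simple quartile helper I wrote for this illustration - this is the spirit of a box comparison, not the actual box-method): instead of comparing only averages, compare the middle 50% of each group and see whether those ranges overlap.

```python
# Compare the "middle box" (lower quartile to upper quartile) of two
# groups. If the boxes barely overlap, the difference is probably real;
# if they overlap a lot, you cannot tell from this data alone.

def quartiles(data):
    """Return (lower quartile, median, upper quartile) by linear interpolation."""
    s = sorted(data)
    def pick(frac):
        idx = frac * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)
    return pick(0.25), pick(0.5), pick(0.75)

# Hypothetical growth in cm for 5 plants per group.
water = [10.2, 11.5, 9.8, 12.1, 10.9]
coke = [6.1, 9.5, 7.2, 11.8, 5.4]

w_lo, w_med, w_hi = quartiles(water)
c_lo, c_med, c_hi = quartiles(coke)

# A negative overlap means the two middle boxes do not touch at all.
overlap = min(w_hi, c_hi) - max(w_lo, c_lo)
print(f"water box: {w_lo:.1f}-{w_hi:.1f} cm, coke box: {c_lo:.1f}-{c_hi:.1f} cm")
print(f"overlap: {overlap:.1f} cm")
```

With these invented numbers the boxes don't overlap, so the student could reasonably argue the groups differ. If the boxes had mostly overlapped, "more data needed" would be the honest conclusion.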

Presentation and making graphs

I don't really have too much to say about the font and colors of the poster and stuff. Some judges may care about those things, but for me it is more about content. Also, I don't really like it if the student just plain out reads everything they put on the poster. I would much rather a very brief overview and then take some time for discussions (I think that can be very useful).

About graphs, the first thing I can say is "why are you making a graph?" At the first level, the graph lets you see the data and trends more easily. Suppose a poster had the following table without a graph:


Which box had the shortest length? You could look at the average, but then you would have the same problem as before. The biggest problem with this table is that it is difficult to even see that some of the data is quite off. If this were a bar chart, it would be easier to see.
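For what it's worth, you don't even need a chart to catch data that is quite off. Here is a small sketch (hypothetical box-length numbers; the helper function is just my illustration) that flags any value sitting far from the median:

```python
# Flag values that are "quite off": anything much farther from the
# median than the typical (median) deviation stands out.

def far_from_median(data, factor=2.0):
    """Return values more than `factor` times the typical deviation from the median."""
    s = sorted(data)
    median = s[len(s) // 2]
    deviations = [abs(x - median) for x in data]
    typical = sorted(deviations)[len(deviations) // 2]  # median deviation
    return [x for x in data if abs(x - median) > factor * typical]

# Hypothetical box lengths in cm; one entry is clearly off.
lengths = [10.1, 9.9, 10.3, 10.0, 17.5, 10.2]
print(far_from_median(lengths))
```

A bar chart would show the odd value instantly, which is the whole point of graphing - but a quick check like this is a decent backstop when a number slips past your eyes in a table.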

Of course I have also seen the opposite of this problem. I have seen some projects that include a graph of something that didn't really need to be graphed (I can't think of an example for this case).

Ok, I think that is enough for now. I know it was a lot, but hopefully it will be some help. If you have any other suggestions, feel free to add them in the comments.


Hypothesis: the more expensive battery will last longer.


Hypothesis: price predicts lifespan in AA batteries.

Speaking of graphs: the axes need to have numbers and divisions that make sense. 11.25, 7.5, and 3.75 kernels - this is not helpful, either for students employing data analysis using graphs, or for presentation of results to those evaluating the study (teachers, judges).

There can be other problems with the judging. If there are not enough judges, every project may be judged 3 times, but all competing projects might not be judged by the same people.

That probably happens a lot, but it needs fixing. If they have enough judges to get three ratings for each project, they have enough to get fair comparisons. A simple rule is that if Alice, Boris, and Doris judged Pat's project and Alice judges Sandy's project, then neither Boris nor Doris should be Sandy's other judges. If you then take the top rated project for each judge and get the judges together to discuss, it should be feasible to get a fair rating.

If you're near an ag school you might be able to get extra "real science" points by getting a proper experimental design from someone in the statistics department. They have to solve this sort of problem in field experiments.


Hypothesis: price predicts lifespan in AA batteries.

The original formulation was in plain English and was more precise, since it specified the direction of association.


Good point. Not a good graph. I was just trying to make something quick and I used Apple's Keynote. Maybe I should add that tip - don't use Keynote for graphs (unless you are in a business meeting).

This is a great post you have here, some excellent advice!

I remember when I used to carry out a single run of an experiment and think all is well. As we all know in science however one run of an experiment will produce an extremely different set of results to the next.

I've learned well though, in fact in most of my engineering labs nowadays, we are expected to give a thorough explanation of the variations in the observations that we have made.

It can be quite irritating and confusing to say the least when you find that you are not getting the results that your textbook otherwise states, but this is all part of science and the applied sciences, I guess.
