RCP 2021-002
14 Comments
I would recommend adding a requirement that the R&D Judging Rubric have space for optional written feedback from the judges and be in an electronically editable format. This update should be made to the exact wording for rule revision and reflected in the proposed rubric.
Providing the judges an opportunity to supply written feedback, in addition to any oral feedback during presentations, could improve the quality of R&D projects. The scoring system without written feedback still has the potential to be vague without improving future projects or reducing miscommunications.
I personally don't have a strong opinion on the proposed scoring system. It is my understanding from judging state science fair projects that the scoring system is intended to normalize scores across a large number of judges who may not judge every entry. Obviously that's not the scenario we have in NAR R&D competition.
I like the structured guidelines and the feedback for the contestants.
I have a couple of concerns. First, the rubric does not address the class of R&D reports that represent historical research. Those have been established as viable R&D projects and should have a rubric as well. Secondly, I think the rubric doesn’t fairly address the A and B Division R&D reports. The requirements for younger ages should be less stringent than that for C Div. and Teams, otherwise you will end up with unduly low scores for projects that are really reflective of the abilities of younger contestants. The result may well be either a more limited spread of scores in A&B divisions or adults “helping” the younger contestants to make sure that “all the boxes are checked.”
As someone who has only done one R&D project for a NARAM, the idea of having some kind of structured scoring system and especially some kind of feedback to the contestant required appeals to me. I have no idea how I scored (other than my placement) and no idea how the scores were arrived at for that one outing.
Also, I want to echo Don Carson's comment about provisions for historical research in the scoring rubric, if only because I'm contemplating such a project in the future.
Unfortunately, there is a need for this ‘guide structure’ in scoring this event. Prior to NARCON 2017, there would have been no need for Matt to write this RCP. But it is a valuable addition to the rules for this event, even when it finally sees the day it no longer has a rule section number in front of it. That day can’t come soon enough…. 🙂
A similar proposal was considered a few years ago. Although there might be a way to do this that is flexible enough to handle all the different kinds of R&D, I see two major problems with the current proposal. First, the point allocation doesn't seem well thought out and doesn't seem flexible enough to handle the potential variety of different kinds of R&D. Second, the most important reason for even having R&D as a competition event is to encourage real science and innovation, so judges have to have a way of rewarding that. In any given year, you might have a project that is weak on some of the conventional categories the proposal identifies, yet provides such stunning and remarkable insights and innovation that it deserves first place regardless of certain flaws. A scoring guide is not in principle a bad idea, but it needs to be more flexible than this and be able to reward the right things. I just think it needs more work. — Patrick Peterson
As a past state and regional science fair judge, I agree a rubric helps foster consistency between judging panels and from one year to the next. Science fairs have as their goal to teach students scientific processes. NAR R&D was not put in place to teach the member how to set up a science fair project but rather to advance the state of the art of model rocketry (art in the sense of technology and innovation). By the definition of NAR R&D in the sporting code, "The purpose of this event is to stimulate new concepts, approaches, and ideas in: Advancing the state-of-the-art of hobby rocketry…" At the very least, the rubric should represent those desires in its judging. IMO there should be categories in the rubric related to the originality of the experiment, the usefulness of the results to the membership, and the relevance of the research to model rocketry. This rubric seems to focus more on the process of conducting a research project than on the value of that research. While I believe the concept of a rubric is a good idea, I do not think I can support the limited scope of this particular rubric for this competition as it is defined.
Good points, Bob, I agree with you.
One other consideration is that this proposal would require an oral presentation for every R&D entry, even those token entries that were submitted just to get some "flight points" (we all know that happens). Currently, only the top contenders (a half dozen or so) are invited to present orally in each division. That tends to fill two evenings as it is. Oral presentations for every entry will present a scheduling issue at a NARAM. Pity the judges.
Hi Don. Can you expand on what exactly you meant by “just to get some ‘flight points’ (we all know that happens).” Thanks!
Hi Chad, I think you are more qualified to expand on that particular topic. Thank you.
I concur that this is a big lift for competitors in A Division. It's one thing to dumb down an event, another to set a bar that is unobtainable by many of this age. The best entries are those managed by the entrant, not the parent.
I am opposed to this, for several reasons.
A well-thought-out scoring rubric constrains contestants to perform to its requirements, assuming they want to do well. If the requirements are narrow and well defined, the rubric may be helpful. But given that the stated purpose of NAR R&D competition is to stimulate new concepts, approaches, and ideas in advancing the state of the art of hobby rocketry, the event may be too broad to impose a rubric like this. The proposed rubric appears to favor engineering projects and experiments over everything else, leaving no room for projects that advance the state of the art of hobby rocketry in other ways, let alone historical preservation.
Even if one were to accept that R&D needs a rubric, one would hope that it would be developed in a principled way. Why, for example, does a problem statement count as much as an engineering solution? Why, for that matter, do the weightings break down the same way for experiments as they do for engineering projects? Shouldn’t analysis count more for a study, and the solution count more for an engineering project? [And if there is disagreement, that demonstrates the point I’m trying to make: We can’t have a rubric without a well-defined set of requirements in a constrained problem domain. If we’re arguing over the weights, let alone the dimensions themselves, we aren’t yet ready to impose a rubric.]
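The weighting question raised above can be made concrete with a small sketch. This is purely illustrative: the dimension names, point scales, and weight profiles below are hypothetical assumptions, not taken from the RCP or the proposed rubric. It simply shows how the same raw scores rank differently when a "study" profile weights analysis more heavily and an "engineering" profile weights the solution more heavily, versus equal weights.

```python
# Hypothetical illustration of rubric weighting. Dimension names,
# 0-10 scales, and all weight values are invented for this sketch.
DIMENSIONS = ["problem_statement", "method", "analysis", "solution"]

WEIGHT_PROFILES = {
    # Equal weights across the four dimensions.
    "equal":       {d: 0.25 for d in DIMENSIONS},
    # A study profile that rewards analysis most heavily.
    "study":       {"problem_statement": 0.15, "method": 0.25,
                    "analysis": 0.45, "solution": 0.15},
    # An engineering profile that rewards the delivered solution.
    "engineering": {"problem_statement": 0.15, "method": 0.20,
                    "analysis": 0.20, "solution": 0.45},
}

def weighted_score(raw_scores, profile):
    """Combine per-dimension raw scores (0-10) into one weighted total."""
    weights = WEIGHT_PROFILES[profile]
    return sum(raw_scores[d] * weights[d] for d in DIMENSIONS)

# A project strong on analysis but weak on its final solution:
scores = {"problem_statement": 7, "method": 8, "analysis": 10, "solution": 4}
for profile in WEIGHT_PROFILES:
    print(profile, round(weighted_score(scores, profile), 2))
```

Under equal weights this entry scores 7.25; the study profile raises it to 8.15 and the engineering profile lowers it to 6.45, which is exactly the disagreement the comment predicts if the weights are fixed without first agreeing on what each project type should reward.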
Perhaps an R&D study is in order to evaluate alternatives for rewarding thoughtful contributions to the state of the art of hobby rocketry!
[As background, I have spent decades developing rubrics and scoring methods and using them on everything from Lego Robotics to student rocketry experiments to TARC presentations to Baldrige Award evaluations to a variety of interesting professional assessment activities. Not one of these used four equally weighted dimensions plus points for presentations.]
Oh man… I don't have a lot of wisdom on this, just apparently some awakened painful memories of high school state science fairs judged by the Virginia Academy of Sciences. In 1975, my science teacher was very upset when she discovered gender bias in the judging and in the media attention given to different presentations, owing to a lack of standardized innovation and technology scoring criteria and a preference for social and media criteria. Incredibly, she intervened before the judges finalized their scores and was able to sway the majority of them to make the award based on scientific innovation and level of difficulty, rather than on a "save the planet," media-driven, gender-biased popularity contest. Without her seeking fairness and a standard, unbiased judging criteria, I never would have won the VJAS Regional Science Fair in Physics, taken overall 1st place, or gone to Virginia Tech AOE engineering by invitation of Professor Fred Lutz. It changed my life. What was most painful, and emotionally damaging for years afterward, is the memory of being forced to stand in front of a 2nd-place project, have all the pictures taken, and see winning credit "assumed" and given by name to another contestant in the newspaper and in the printed event journal. The media attention went to promoting the other student's chemistry project to save the forests, which according to my teacher was, on its face, technically and innovation-wise inferior, and which was given scoring and media preference because of gender bias against mostly one-gender-dominated projects. The demographics of the NAR do not yet seem to favor one gender or race over another in terms of opportunity and reward, but I still wonder why we seem to have few young males interested in contests and clubs, and I admit it could just be my lack of facts.
I do know that social justice "intersectionality" points (a form of golf handicap) are slowly creeping their way into schools and universities to do away with standard grades and standard tests. The only way to prevent judging bias, which we see even in Supreme Court judges making biased decisions to benefit whatever gender or race they advocate for (the well-known advocacy judges), is to have a fair and well-thought-out scoring system that rewards technology and innovation over social bias. What amazes me is that some of the most useful and newest TARC technology and engineering innovation, which would benefit the entire NAR rocketry community, does not score well on the media scoring criteria or the presentation paper criteria, but only on the point-scoring mechanism of altitude and duration (as unbiased and as fair as you can get). I think we do need a very fair and simple scoring system for both Division A and Division C that minimizes any social-factor influence or bias.