Appendix 7: Using a Weighted Objectives Table

 

Why use a Weighted Objectives Table?

When building competition robots, designers are faced with many difficult decisions.  There are often several different solutions to the challenges presented, and there is usually no clear “correct” solution.  Each team must decide what strategy they will use to play the game and how their robot will execute that strategy.  On top of that, there are often a large number of smaller decisions that will also shape the robot design.  This is not an easy process!  To further complicate things, each team must do this in such a way that the individual members all have “buy-in” to the decisions made.  One tool to aid in this decision-making process is a weighted objectives table (also sometimes referred to as a decision matrix).

What is a Weighted Objectives Table?

A weighted objectives table (WOT) is a means of comparing several different alternatives by ranking them against a list of criteria.  The user first weights the importance of each comparison criterion, then rates each design option based on how well it fulfills each criterion.

Using a Weighted Objectives Table

Step 1 – List Alternatives

One of the best ways to understand how a WOT works is to walk through the process of using one.  Consider an example challenge that a team may face: “design an end-effector to manipulate a 0.25 meter diameter ball.”

To understand the process a design team would use to solve this type of problem, refer to Unit 1 – Introduction to Engineering, and the section on the Engineering Design Process.  At some point in this process the design team would brainstorm multiple options to grab the ball as part of the IDEATE phase of their process.  For the purposes of this example, three options would be a “roller-claw”, a “pinching-claw” and a “scoop”.  These different end-effectors could all be used to pick up the 0.25 meter ball. A weighted objectives table can help a designer or team determine which of these options best suits their needs.

Step 2 – Determine & List Comparison Criteria

The next step is to determine the criteria on which each of these options will be compared.  To be successful, one must list all the criteria important to the team.  Some criteria are general and could be used in any number of comparisons. Some examples of general criteria include: Complexity (less is better), Reliability (more is better), and Effectiveness (more is better).  Other criteria are specific to the comparison at hand. For the ball manipulator described above, specific criteria might include: Grip Strength, Required Driver Precision, and Speed of Grab.

The better the job the design team does in coming up with the comparison criteria, the more accurately the WOT can be used to evaluate the design alternatives.  This can refer to both quantity and quality of comparison criteria!

Step 3 – Layout of the Weighted Objectives Table

Once the comparison criteria are determined, the skeleton of the WOT can be constructed.  The beginnings of a sample WOT for the object manipulator example can be seen below.
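As a rough sketch of that layout (using a data structure rather than a drawn table, and with the criteria from the example; the structure itself is just one plausible way to organize a WOT), each criterion carries a weight and each alternative carries one score per criterion:

```python
# Skeleton of a weighted objectives table: rows are comparison criteria,
# columns are design alternatives.  Weights are filled in during Step 4,
# scores during Step 6; None marks the cells not yet filled in.
wot = {
    "criteria": {                      # criterion -> weight
        "Grip Strength": None,
        "Speed of Grab": None,
        "Required Driver Precision": None,
        "Reliability": None,
        "Complexity": None,
    },
    "alternatives": {                  # alternative -> {criterion -> score}
        "Roller Claw": {},
        "Pinching Claw": {},
        "Scoop": {},
    },
}
```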

Step 4 – Weight the Comparison Criteria

This is arguably the most important step in constructing a WOT; it is also one of the most difficult.  In this step the designer (or design team) will weight each of the Comparison Criteria based on how “important” it is.  In some cases, it is a good idea to set a maximum total “cap” for the weights; this cap forces the user to make difficult choices about the importance of each criterion.  In the example below, a cap of 50 was used.  Without this cap, the design team might weight everything artificially high (“everything is weighted at 1-million points!”).
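A minimal sketch of this step follows.  Only the Complexity weight of 5 and the cap of 50 come from the worked example in the text; the other weights are hypothetical stand-ins chosen to match the priorities described (fast, tight grabs valued most):

```python
# Hypothetical weights for the ball-manipulator example.  Only the
# Complexity weight (5) and the cap (50) are taken from the text.
weights = {
    "Grip Strength": 14,
    "Speed of Grab": 13,
    "Required Driver Precision": 10,
    "Reliability": 8,
    "Complexity": 5,
}

WEIGHT_CAP = 50  # forces hard trade-offs about relative importance

# The cap keeps the team from weighting everything artificially high.
assert sum(weights.values()) <= WEIGHT_CAP, "weights exceed the cap"
```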

In the above example you can see that the design team values a mechanism which grabs the ball quickly and holds it tightly more than it values how much the mechanism weighs or how complex the mechanism is.

Step 5 – Gather Information

In order to effectively compare the different design alternatives, the design team must gather information on each of them to learn how well it fulfills each comparison criterion.  In an ideal world, each of the alternatives would be FULLY designed and produced, and then the best design could be chosen. Unfortunately, this is not always an option.  It is possible, however, to learn about each alternative without finishing it.  For instance, to compare each design on the “complexity” criterion, it may be possible to construct a rough bill-of-materials and estimate how many parts would go into each design.  This part count won’t be perfect, but it will likely be close enough to help a designer compare the options.

As discussed in Unit 1, one of the most useful ways to gather information on how these designs perform is through prototyping.  Build prototypes of each design alternative and test their performance.  Good designers will use the lessons learned from these prototype tests as they fill out the WOT.

Step 6 – Score the Design Alternatives

In this step, the designer or design team scores the different design alternatives on how well they meet the comparison criteria.  In the below example, each alternative is rated on a scale of 1 to 10 (1 being the worst score, 10 being the best).  It often works best to score all three alternatives at once, based on a single criterion, so the designer can directly compare the differences between them.

The example above highlights one such set of scored alternatives.  In this case, the design team feels that the Roller Claw and Pinching Claw are of similar, neutral complexity, while the Scoop is very simple and earns a high score on complexity.  Though this is only a hypothetical example, these scores are easy to understand; the roller and pinching claws would have more moving parts than the scoop.  The alternatives could be ranked on the other criteria in a similar manner:

Step 7 – Calculate the Weighted Scores

Once the scores and weights have been determined, it is a simple matter to calculate the weighted scores.  Each weighted score is the alternative’s score multiplied by that comparison criterion’s weight.  For example, the Roller Claw received a score of 5 for Complexity, and Complexity has a weight of 5; this means the Roller Claw has a weighted score of 5 x 5 = 25, as seen below:
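The arithmetic for a single cell of the table is just score times weight, using the Roller Claw / Complexity numbers from the example:

```python
# Weighted score for one cell of the table: score x weight.
score = 5   # Roller Claw's Complexity score (from the example)
weight = 5  # Complexity's weight (from the example)

weighted_score = score * weight
print(weighted_score)  # 25, matching the Roller Claw / Complexity cell
```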

The other weighted scores are calculated in a similar manner: 

Step 8 – Find the Total Weighted Score

This is the last step: it is now a simple matter of summing the weighted scores to find the total weighted score for each design alternative.  In the below example, the roller claw is shown to be the winning design.
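Steps 7 and 8 together can be sketched as follows.  The weights and scores here are hypothetical stand-ins (the example’s actual tables are not reproduced in this text), chosen only so that the Roller Claw comes out ahead, as it does in the example:

```python
# Hypothetical weights (capped at 50) and 1-10 scores; these are NOT the
# actual values from the example tables, which are not reproduced here.
weights = {"Grip Strength": 14, "Speed of Grab": 13,
           "Required Driver Precision": 10, "Reliability": 8, "Complexity": 5}

scores = {
    "Roller Claw":   {"Grip Strength": 8, "Speed of Grab": 8,
                      "Required Driver Precision": 7, "Reliability": 7, "Complexity": 5},
    "Pinching Claw": {"Grip Strength": 7, "Speed of Grab": 5,
                      "Required Driver Precision": 5, "Reliability": 6, "Complexity": 5},
    "Scoop":         {"Grip Strength": 3, "Speed of Grab": 9,
                      "Required Driver Precision": 8, "Reliability": 8, "Complexity": 9},
}

# Step 7: each weighted score is score x weight.
# Step 8: sum the weighted scores for each alternative.
totals = {alt: sum(s[c] * weights[c] for c in weights) for alt, s in scores.items()}

winner = max(totals, key=totals.get)
print(totals)   # per-alternative total weighted scores
print(winner)   # Roller Claw (with these hypothetical numbers)
```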

 

Analyzing the Results

Often the total weighted scores do not match the designer’s preconceptions of which design is “best”.  This is good!  A WOT allows for a real comparison of the options without the designer’s bias towards a single design.  That is part of the “magic” of using a WOT to help with design decisions.  Because each comparison criterion is weighted in advance, the analysis of how well each design alternative fulfills what is most important to the designer is far less biased.  The results rarely lie (except when the weights or scores themselves are “fudged” by designers who were biased early in the process).

Finding Authentic Results

If a designer has a strong preconceived notion about which alternative “should” win, they can rig the process by tweaking some of the weights or scores.  To make this an effective design tool, it is important for all design team members to remain as impartial as possible and to follow the process correctly, without steering it toward a predetermined outcome.

One major way to prevent this from happening is to use quantitative criteria whenever possible.  Quantitative criteria are those which can be measured and directly compared.  For instance, if the design team created a prototype of each concept, they could directly compare how much force is required to remove a ball from each gripper.  This would provide a quantitative measurement of grip strength and would make ranking each concept easy!
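One common way to turn such a measurement into a 1-10 score is a simple linear mapping between a “worst acceptable” and “best expected” measurement.  This is a sketch of that idea, not a prescribed method, and all of the force numbers and anchor points below are invented for illustration:

```python
def force_to_score(force_n, worst=5.0, best=40.0):
    """Linearly map a measured grip force (in newtons) onto a 1-10 score.

    'worst' and 'best' are hypothetical anchor measurements: a force at
    or below 'worst' scores 1, and a force at or above 'best' scores 10.
    """
    clamped = max(worst, min(best, force_n))
    return 1 + 9 * (clamped - worst) / (best - worst)

# Invented prototype measurements: newtons of force to pull the ball free.
measurements = {"Roller Claw": 30.0, "Pinching Claw": 25.0, "Scoop": 12.0}
grip_scores = {alt: round(force_to_score(f), 1) for alt, f in measurements.items()}
```

Because the scores come straight from measurements, there is far less room for a biased team member to nudge them.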

Variations in WOTs

The steps outlined above are only one way to utilize a WOT in a design process.  A WOT can be implemented in many different ways; there is no wrong or right way to use one.  In particular, the scoring numbers can be tweaked in a variety of ways. 

The weights in the example above used an open scale with a maximum total of 50, while the scores used a range of 1-10; these values could be chosen completely differently.  (For example, each weight could be based on a scale of 1-10 and each score on a scale of 1-3.)  Every designer needs to modify the WOT process to make it work for them.