Several years ago I wrote the piece below as a critique of the military's operational assessment doctrine. It was written in two parts: the first aimed at the assessment process itself, while the second cut at how the resulting information gets displayed. The piece isn't my best, pasting it here in multiple chunks will likely introduce more errors, and it presumes a working knowledge of military operational planning and operational assessment. Despite these shortcomings, I think it is adequate for discussing Performance versus Effect and the relationship and differences between the two. That is my goal in hanging it here, as I'm contemplating two pieces in the near future. One will look at Agile and possibly Lean practices with the intent to show these are techniques, not strategies, and that they work technical and tactical problems; as such they cannot be used to develop strategy. Christopher Reeves ran into such a problem when a campaign tried to incorporate Six Sigma. The second piece percolating in my head is the Taylorization of the American Presidency: we expect constant output while leaving no time for reflection, thought, or creativity. I believe one or both pieces could use a reference to the difference between performance and effect, hence my wanting a convenient one. I'm also mulling a potential third piece, which may or may not need such a reference, on the trade-offs between acute risk mitigation and long-term risk management in resilient or chaotic systems. Think of how working out creates a near-term risk of injury and death even though it delays and/or reduces the risk of a heart attack down the road.
If you believe this does not apply to you, I have a story for you. Two programmers were discussing their work teaching an Artificial Intelligence to screen resumes. As the work progressed, one mentioned he had decided to run his own resume against a job description through the AI. It rejected him. The other programmer asked, "What did you do?" The first replied that he went home and updated his resume. The discussion went further. As the work progressed, they would experiment by running batches of resumes through and comparing the AI's picks to the candidates who actually received job offers. That's performance, folks. Effectiveness would have compared the AI's resume picks against those employees' first-year job reviews.
Bottom Line Up Front: Performance reflects how well we did our work, while effectiveness reflects whether our work actually mattered. Performance tells us how well we did our jobs. Effectiveness tells us whether those were the right jobs. We may have done well yet not achieved the intent. Worse, we could have created blowback.
With that, hopefully you can follow the below. Could I clean it up? Sure, but I’m lazy and I believe it meets my intent:
——————————
This piece is being written as a critique of the Operational Assessment Process as described in Appendix G of Naval Warfare Publication (NWP) 5-01, Navy Planning, published DEC 2013, and is thus also a critique of Joint Publication (JP) 5-0, dated 11 AUG 2011. The critique first looks at what is good, though it offers some fine tuning, then looks at one big issue followed by smaller ones. It ends with a discussion of how to think about displaying results, which is completely absent from the doctrine. The big issue to be addressed is that the current process excludes objective and end state when looking at effectiveness, so the loop never gets closed. As a note while reading this paper, intelligence is included in information as a square is included among rectangles. When you see the one, know it speaks to both.
The Good with Fine Tuning: At its essence, assessment doctrine recognizes performance and effect as separate issues, tying performance to how we do our assignments and effect to the validity of those assignments. This is the notion of "Are we doing things right?" (performance) versus "Are we doing the right things?" (effect), and it is paramount to assessment. Assessment occurs at all levels of war and command: strategic, operational, and tactical. Although focused at the operational level of war, the guidance in this appendix is also applicable to the high tactical level. Task group commanders will find the framework defined in this appendix helpful when developing supporting tactical assessments. Understanding not only how the subordinate plan impacts the operational plan but also how the task group assessment impacts the operational-level assessment will aid in a complete nesting of actions and ultimately a higher level of situational awareness across all echelons.
In looking at the NWP 5-01 Navy Planning DEC 2013 Appendix G Operational Assessments (OA), the Lessons Learned blocks are spot on. Additionally, the doctrine describes planning OA from the beginning of planning itself. This is great. OA should be planned in parallel with the planning process. This lets those who will use the data determine who is likely best placed to collect the various data points and then coordinate with those providing the data. That in turn helps ensure the feasibility of measurement and the relevance of the data to its intended purpose, because collectors get an idea of what will be sought and can amend it should it not seem to them to fit the mark. This also helps ensure buy-in by the collectors.
Lack of OA planning throughout the planning process will lead to the problems described in the first lesson learned below. Collectors will not be invested in providing the data sought and will not give their time, as they have other demands on it. Failure with the first lesson in turn leads to failure in two of the remaining three lessons published in the NWP. Continual development and refinement of the assessment framework should be conducted during every step of the Navy Planning Process (NPP).
Having involvement early will allow those who will assist in collecting to help cage the requests so as to be reasonable in quantity and in ease of measurement. Getting their buy-in early helps you keep it simple while helping them help you. Doing so may also reveal measures that are already being collected for other reasons. Reducing work is a good thing.
Lesson Learned
Assessment within the HQ is a staff-wide responsibility, not simply that of the assessment group. Consider assigning staff ownership for the various aspects or lines of operation/lines of efforts that are closely associated with specific staff responsibilities, enabling more comprehensive and qualitative input into the process. This decentralization of assessment activities requires designation of one assessment lead to coordinate assessment actions across the staff.
- NWP 5-01 page G-1
Lesson Learned
There is a need to balance quantitative and qualitative approaches in assessment to reduce the likelihood of skewed conclusions and over-engineered assessment plans. Staffs should strive to avoid committing valuable time and energy to excessive and time-consuming assessment schemes and quantitative collection efforts that may squander valuable resources of the HQ and subordinate commands at the expense of the commander’s and staff’s own experience, intuition, and observations in developing a commander-centric, qualitative assessment.
- NWP 5-01 page G-13
Lesson Learned
Avoid flooding subordinate units and echelons with numerous data requirements. Often, higher HQ’s receive relevant information in normal reporting but struggle to exercise either tactical patience to wait for the report or fail to apply the discipline to look for it among the reports. Establishing a Request for Information (RFI) manager is a good technique to help referee RFI’s and monitor subordinate unit capabilities and task saturation.
- NWP 5-01 page G-13
The fourth lesson learned in the NWP, on page G-5, suggests tying the OA plan closely to Commander's Critical Information Requirements (CCIRs). This mirrors Joint doctrine, though I disagree. The mental process of assessment certainly will impact CCIRs; however, others assess too. Current Operations (COPS) and the Naval Intelligence Operations Center (NIOC) really should be the ones catching CCIRs in their own internal assessments. If it takes getting to the Assessment Group to realize a CCIR has been tripped, then we're off the mark and very late. As for the Assessment Group, their thinking should be trend-focused and long term; working CCIRs could pull them into the day-to-day.
As noted above, integrating OA planning into the planning process is a strong point of the doctrine, as seen in one of its tables. Unfortunately, the writing doesn't really explain how the table folds the two together, and the table has some errors. For example, the table suggests gathering tools and assessing data during Mission Analysis (MA). This makes no sense, as we won't yet know what we need to measure. What would be good, however, is for those who will do this work to participate in MA, gaining a full understanding of what will need to be measured while starting to build a network of potential data collectors. After this, the table does very well.
As the objectives and intent with desired end state come out of MA, it is best to start assessment planning after MA. In this way, the team will have readily available what it needs to determine Measures of Effectiveness (MOEs). After determining these, the team can socialize them to find what is readily measured or feasible and have others check the validity of the measures against the desired effects. In Joint terms, feasible equates to measurable and resourced, while valid is analogous to relevant and responsive. The team should also take a first whack at who should be responsible for collecting specific measures and socialize the measures with their respective collectors. The team could additionally take some time to think about potential undesired effects or unintended consequences that pursuing the objective might create and build MOEs against these.

As the Operational Planning Team (OPT) works Course of Action (COA) development, the assessment team can begin to examine what tasks will be done and start to work Measures of Performance (MOPs). As the NWP suggests, some MOPs may come from the tasked organization itself, as this lower echelon will want to announce its completion of tasks, yet the team may still want some measures of its own. Continuing through COA analysis and COA comparison and decision, MOPs can be finalized. Since tasks can also cause unintended consequences and undesired effects, the team should update its MOEs accordingly at the completion of COA analysis. With an inclusive list, the team may start reducing the number of points upon which to collect. Measures deemed not valid, that is, not measuring what they are intended to determine, should be dropped. Measures deemed infeasible, unable to be measured for lack of tools, capacity, or time, should be either dropped or made dormant. The remaining measures may be prioritized and a reasonable number selected for active measurement and tracking. The others should be held in a passive state should an alternate later be needed. Now measures can be assigned to the proper sources for collection. See figure G-1 and adapt per the above paragraph.
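To make that triage concrete, here is a minimal sketch in Python of sorting candidate measures into active, dormant, and dropped bins. The Measure class, the triage_measures function, and the example entries are illustrative assumptions, not anything drawn from the NWP.

```python
# Minimal sketch of measure triage: drop invalid measures, shelve infeasible ones
# as dormant, and keep a prioritized active set. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    valid: bool       # does it measure the intended effect or task?
    feasible: bool    # can it be measured with available tools, capacity, and time?
    priority: int     # lower number = higher priority for active tracking

def triage_measures(candidates, active_slots):
    """Split candidate measures into active, dormant, and dropped sets."""
    kept = [m for m in candidates if m.valid]
    dropped = [m for m in candidates if not m.valid]
    feasible = sorted([m for m in kept if m.feasible], key=lambda m: m.priority)
    dormant = [m for m in kept if not m.feasible]   # held in reserve, not collected
    active = feasible[:active_slots]
    dormant += feasible[active_slots:]              # passive reserve for later substitution
    return active, dormant, dropped

candidates = [
    Measure("attacks on shipping per week", valid=True, feasible=True, priority=1),
    Measure("smuggler sentiment survey", valid=True, feasible=False, priority=2),
    Measure("sorties flown", valid=False, feasible=True, priority=3),  # performance, not effect
]
active, dormant, dropped = triage_measures(candidates, active_slots=5)
print([m.name for m in active], [m.name for m in dormant], [m.name for m in dropped])
```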
An alternate way to integrate is that given in the Maritime Operational Planners Course (MOPC) seen below.
The Big Problem:
In its current state, with its current definitions, doctrine views effects as the results or consequences of tasks. It then sees the culmination of effects as providing the conditions for achieving an objective. While it is valid to view task completion as causing effects, and valid to see that actions attempting tasks may also cause effects even if the task is never completed, this view causes a problem: the relevance of the objective itself never gets checked. We are objective focused. Therefore, shouldn't we want the assessment to include the objective? On the other hand, should we be objective focused? Would it not be better to focus on the end state? While tasks are ways to objectives, aren't objectives ways to end states?
While effects are results of actions, rather than viewing them individually as such, we should start at the end state. The end state should be viewed as the culmination of desired effects in the absence of undesired effects. In this way, effects are pieces of, or avenues to, the end state. Viewing them this way makes the MOEs inclusive of the objective. Further, with all MOEs taken together, the view becomes inclusive of the end state.
Below shows the doctrinal flow of tasks to effects to objective to end state.
Beyond this is the doctrinal way of providing feedback along this flow. Note the absence of the objective within the measures at any given level. Subordinates have their objectives checked by Higher Headquarters (HHQ), but as we get to the top, note the lack of feedback on objective and end state, which are what should matter most to us.
Even were we working at lower levels, HHQ may be busy elsewhere, so though theoretically they would close the loop on our objectives, they might not get to it.
Now let's consider the alternate view. Start by breaking the flow into two lines, one of action and one of consequence. In this way, we can view the action flow as that coordinated by the OPT: tasks to objectives to end state. The OPT will likely have planned this backward in order to execute forward. Now view the consequence line in the same manner, planned backward to be later measured and assessed forward. Take the end state and break it into effects.
Now look at the effects and tie them to the action line. Which effects come from which objectives? Even if a specific effect ties to a specific task, look to which objective that task enables. Now we will be inclusive.
Little Issues:
Though never addressed in the NWP, supporting publications such as the Maritime Staff Operators Course (MSOC) Staff Officer Reference Guide state on page 59 that MOPs and MOEs are not hierarchical. This is wrong. The MSOC Reference Guide on page 58 has a very nice depiction of the doctrinal assessment planning and execution flow. In particular, it shows backward planning from big to small in a manner very similar to the systems engineering "V": system of systems to system to components to sub-components.
Execution then flows in the opposite direction, small to big: sub-components to components to system to system of systems. This style of thinking works very well for OA. MOPs and MOEs are hierarchical in two senses. First, tasks assigned by HHQ constitute HHQ's basis of performance but become the lower echelon's objectives, while HHQ intent drives the lower echelon's end state. In this way, the lower echelon's MOEs become HHQ's MOPs, while HHQ's MOEs become their own HHQ's MOPs. Thus a hierarchy is established. Our MOEs are the more important, as our HHQ is likely to be more concerned with the status of our MOEs than of our MOPs. This can be seen both in the doctrinal JP 5-0 flow of end state, objectives, effects, and tasks above and in the proposed flow's correlation to JP 5-0 immediately above.
Secondly, within one level, were one to accomplish all effects, thus seeing positive MOEs, then MOPs become irrelevant. MOPs only have importance when either insufficient time has passed to see favorable change in MOEs or the MOEs are not showing a favorable response. It is at this point that we need to know whether we are doing things right, so as to know whether to fix how we do things or try something else. This sort of thinking takes us to a deeper, more detailed, and more focused level of thought; hence the hierarchy, with the MOPs below the MOEs. As an example, imagine we are to secure freedom of navigation in a pirated region. We develop a plan and start to position assets to execute; however, prior to execution, a tsunami hits. Though we may now have different problems to solve, the pirates are wiped out. We have not yet performed in any way, but our desired effect has been obtained. Our assets are now free for the inevitable disaster relief operations coming.
Doctrine describes a measure as a criterion used to assess, while it uses indicators as observable or measurable information requirements that give grounds for measures. Functionally, doctrine's indicators are raw data points while its measures constitute analysis upon them. This creates confusion, both by adding an extra level of process to the doctrine and by using terms differently from how the rest of the world uses them. It would be better to delete the concept of indicator from the doctrine. Instead, the raw data to be collected should be the measures. What doctrine has previously considered measures is really analysis and is what the assessment group should be doing in its back room. It does not need to be spelled out in the doctrine; if it were, it should be called what it is, analysis. Reducing a level of process, and making the terms conform to those used by science and engineering and learned by most naval officers in their basic educations, will make it easier for assessors to understand the process and thus free them to do the actual work of assessing.
- measure of effectiveness (MOE). A criterion used to assess changes in system behavior, capability, or operational environment that is tied to measuring the attainment of an end state, achievement of an objective, or creation of an effect. (JP 1-02. Source: JP 3-0)
- measure of effectiveness indicators (MOEIs). Observable or measurable information requirements that when compiled together, provide evidence of or gives grounds for a measure of effectiveness.
- measure of performance (MOP). A criterion used to assess friendly actions that is tied to measuring task accomplishment. (JP 1-02. Source: JP 3-0)
The doctrinal definitions are above per NWP 5-01 Glossary-13. Proposed new definitions are below with the notion of indicators removed. Again, analysis is the work to be done by the assessment group and is not spelled out.
- measure of effectiveness (MOE). An observable or measurable information requirement used to assess changes in system behavior, capability, or operational environment that is tied to recognizing attainment of an end state, achievement of an objective, or creation of effect. MOEs may apply to desired effects as well as unintended consequences or undesired effects.
- measure of performance (MOP). An observable or measurable datum used to assess friendly task completion.
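As a small sketch of the simplification argued for above, the snippet below treats an MOE or MOP as nothing more than a raw, observable data requirement and leaves "analysis" as a separate back-room step rather than a second doctrinal layer. All class, field, and function names here are invented for illustration.

```python
# A raw measure holds observations; analysis is a separate step done by the
# assessment group, not another layer of doctrinal terminology. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class RawMeasure:
    name: str
    kind: str                      # "MOE" or "MOP"
    collector: str                 # who reports the raw datum
    observations: list = field(default_factory=list)   # (period, value) pairs

    def record(self, period, value):
        self.observations.append((period, value))

def analyze(measure):
    """Back-room analysis: here simply a period-over-period change."""
    if len(measure.observations) < 2:
        return None
    (_, prev), (_, curr) = measure.observations[-2], measure.observations[-1]
    return curr - prev

moe = RawMeasure("merchant transits through strait per week", "MOE", "CTF watch floor")
moe.record("week 1", 42)
moe.record("week 2", 55)
print(analyze(moe))   # +13 transits: raw data in, analysis done separately
```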
We consider decision cycles and their time horizons relative to an opponent, and we want ours to be faster. Faster Observe, Orient, Decide, Act (OODA) loops allow us to drive the show, making us proactive rather than reactive. However, going too fast in OA can be detrimental. The opponent's cycles are not the only factors against which ours compete.
High frequency noise will cause wrong assessments, which in turn will cause inappropriate actions in the next loop. Phase 0 (shape), I (deter), IV (stabilize), and V (enable civil authority) operations are particularly susceptible to high frequency noise. I'll use phases in the discussion here as they have been institutionalized in military operational thinking. I disagree with the idea of phases, however, as "phases" implies regular cyclic events. Since we can skip phases forward and backward and be conducting operations in multiple phases at once, categories would be a better term. (See updates here for some phasing discussion.) To understand high frequency noise, imagine you are a climatologist.
Your goal is to assess the climate trend. On top of this you must contend with seasons, weather systems, and day and night. You may have some random events as well, such as a volcano. All of these constitute high frequency noise. Were you to measure at midnight in winter and then again at noon in summer, you might conclude a trend that would lead to drastic action were you not to discount the noise. For many assessments, we need to widen our time windows, or at least try to understand what may cause noise, and we need to look at more than just the last point assessed versus the current point. Such concerns are not addressed in doctrine.
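A toy numerical version of the climatologist's problem, assuming an invented signal with a slow trend buried under seasonal and daily swings: comparing two raw points gives a wildly exaggerated answer, while averaging over full years recovers something close to the true trend.

```python
# Two raw readings versus year-over-year averages of an invented signal.
import math, statistics

def reading(day):
    trend = 0.002 * day                              # the slow signal we care about
    season = 8 * math.sin(2 * math.pi * day / 365)   # seasonal swing
    daily = -4 * math.cos(2 * math.pi * day)         # day/night swing (noon warm, midnight cold)
    return trend + season + daily

winter_midnight = reading(15.0)     # mid-January, midnight
summer_noon = reading(196.5)        # mid-July, noon
print(f"two raw points: {summer_noon - winter_midnight:+.1f}")   # dominated by noise

# Sample every six hours and average each full year: noise cancels, trend remains.
year1 = statistics.mean(reading(d / 4) for d in range(0, 365 * 4))
year2 = statistics.mean(reading(d / 4) for d in range(365 * 4, 730 * 4))
print(f"year-over-year averages: {year2 - year1:+.2f}")          # close to the true trend
```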
In addition to high frequency noise, delayed responses or lag effects can cause problems for assessments with short reporting periods. Were one to assess too frequently and see no change, one might recommend changing actions, possibly abandoning good actions or inducing bad ones. At a technical level, pilot-induced oscillation is an example of this. Think of a heavy plane descending and a pilot wanting it to level off. The pilot pulls back and adds power; momentum keeps the plane going downward, so the pilot pulls back further. The result is that the plane balloons upward, so the pilot bunts, resulting in another descent and the pilot pulling back again. Similar situations can develop in tactical and operational conditions when we do not allow time for responses to develop, particularly if we are seeking secondary or tertiary effects.
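Below is a toy simulation of that lag problem, with invented gains and delay: the same corrective logic settles when applied patiently but overshoots and oscillates when each reporting period triggers a strong new correction before the last one has taken effect.

```python
# A delayed-feedback toy: corrections take `delay` periods to be felt.
def run(gain, delay=3, steps=40):
    state, target = 0.0, 10.0
    pending = [0.0] * delay            # corrections ordered but not yet felt
    history = []
    for _ in range(steps):
        state += pending.pop(0)        # a past correction finally takes effect
        error = target - state
        pending.append(gain * error)   # order a new correction based on what we see now
        history.append(state)
    return history

patient = run(gain=0.2)    # modest correction, waits out the lag: settles near 10
impatient = run(gain=0.9)  # strong correction every period: overshoots and oscillates
print([round(x, 1) for x in patient[-5:]])
print([round(x, 1) for x in impatient[-5:]])
```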
Methods for updates need discussion. Updates to collection plans should be provided on a regular basis so as to capture any additions, deletions, or changes. Deletions of data collection requirements are important, as they free up resources. Additions or changes may occur when a measure previously thought sufficient to address an effect is not working and something else is needed.
—————
Display of results: Presently, Navy doctrine does not provide examples of how to display results. This is good in that it is not prescriptive. Yet it is bad in that we often don't know how to display information in quickly digestible formats from which the commander can visually and quickly grasp the points being discussed. The result is that personnel tend to stick with what they have seen done, often using gradient sliders and stoplights. There are situations in which such devices work well; however, they often suffer pitfalls. It would be better for the doctrine to provide several examples, not an all-inclusive list, of possible ways to display measures and analysis, showing the strengths and weaknesses of each display, while specifically stating that these are examples only, are not prescriptive, and should not hinder the creativity of the team doing the assessment. There may be unique situations that require unique displays to share data. The two tests of success are: can the reader quickly grasp what is being said from the display each time it is seen, and is it intuitive enough that little time need be spent teaching the reader how to read the display the first time it is seen.
Tip: For any of the displays below, should it be difficult to show your desired concept, see if showing the inverse is easier. In the same vein, should measuring a particular datum be difficult, consider whether its opposite may be easier to measure.
The Slider uses a gradient, typically in color, with a reference point that moves relative to changes in conditions. Usually the slider runs across a bar, though it may be displayed in a speedometer or altimeter fashion. Generally speaking, green is good and red is bad, with transitions through yellow representing the range between. Actual descriptions of what constitutes good or bad may or may not be provided. Positions along the gradient are usually subjective, though one slider may build upon other sliders or stoplights to inform its positioning. Most show where OA considers conditions to be presently and at the previous report. Many include a forecast for the next report. Sliders often show the direction of trend movement. Such devices are usually good for many performance-related measures and often good for phase II (seize the initiative) and III (dominate) effects. If the slider represents a performance measure, adding blue outside the green on the gradient as the color of completion is beneficial. The drawbacks of sliders are that they show only the current point and the previous point in time and thus are very susceptible to high frequency noise. Sliders lack detail that may be critical to proper assessment and to focusing a commander's attention where it needs to be. Sliders can oversimplify. Often they can show the commander that a problem exists but not what the actual problem is. Some of these concerns can be alleviated by providing robust comments with the slider for reference should the reader desire to dig further. Building blocks of sub-level stoplights or sliders will help focus attention and will be discussed shortly as part of the reverse brain map display.
Should the slider be used for performance, the reference position should relate to the time and/or cost planned to completion versus actual progress at the current time and/or cost. In this way, were one complete, the color would be blue; ahead of timeline, blue-green; on timeline, green; behind timeline but with acceptable forward progression, yellow; behind timeline and progressing slowly, orange; behind timeline and stopped or reversing, red. Similar schemes can be used with phase II and III effects, though phase 0, I, IV, and V effects may never have actual completions and therefore never show anything beyond green; such sliders should not include colors outside of red through green. Cost may be thought of in monetary terms or in other terms such as manpower, equipment resources, allied, host-nation, non-governmental organization (NGO), or third party resources.
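A hedged sketch of that coloring rule as a function: pick the slider (or, equally, stoplight) color from the ratio of actual to planned progress. The thresholds are arbitrary placeholders, not doctrinal values.

```python
# Map actual-versus-planned progress to a slider/stoplight color. Thresholds invented.
def progress_color(actual, planned, complete=False, moving_forward=True):
    if complete:
        return "blue"
    ratio = actual / planned if planned else 0.0
    if ratio >= 1.1:
        return "blue-green"          # ahead of timeline
    if ratio >= 0.95:
        return "green"               # on timeline
    if ratio >= 0.75 and moving_forward:
        return "yellow"              # behind but acceptably progressing
    if moving_forward:
        return "orange"              # behind and progressing slowly
    return "red"                     # stopped or reversing

print(progress_color(actual=6, planned=10))                        # orange
print(progress_color(actual=9, planned=10))                        # yellow
print(progress_color(actual=4, planned=10, moving_forward=False))  # red
```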
Stoplights are excellent for showing performance progress. They can be used with phase II and III effects as well as with any effect that can be considered a singular event. They will usually do poorly for phase 0, I, IV, and V effects, as such effects are often enduring. Stoplights show only the present moment in time and thus lack trend, and OA should always be aware of trend. Stoplights have often used three colors, which can be bad for our commanders, as westerners have been conditioned to view yellow with caution; yellow grabs the eye almost as quickly as red does. Such schemes use green for complete, yellow for on time and cost, and red for behind or failed. Better would be to use blue for complete, green for on time and cost, yellow for behind, and red for stalled, reversing, or unable to complete. Such colors rapidly put attention where it needs to be, assuming the stoplights are broken down into tasks and parts of tasks.
Completion Bars or Thermometers are a means of showing progress completed versus total progress required. Think of the Combined Federal Campaign or Navy Relief fund raisers and their completion bars as often seen inside base gates; these are classic examples. Such devices are useful in situations similar to stoplights and sliders. It is good practice to fill the bar for progress made and then add a line showing where progress should be at the time of assessment. Each completion bar should be monochromatic in its fill color. Multi-color completion bars cause confusion, as the line dividing the full side from the empty side could be mistakenly read in a slider context. While keeping the device monochromatic, one could choose a stoplight color correlating actual progress with planned, thus combining the completion bar and a stoplight.
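A minimal matplotlib sketch of such a completion bar, with a single monochromatic fill for actual progress and a line marking where progress was planned to be at the time of assessment; all values are invented.

```python
import matplotlib.pyplot as plt

actual, planned, total = 45, 60, 100   # e.g., percent of task complete

fig, ax = plt.subplots(figsize=(6, 1.5))
ax.barh(0, total, color="lightgray")            # the full requirement
ax.barh(0, actual, color="tab:green")           # progress made, kept monochromatic
ax.axvline(planned, color="black", linewidth=2, label="planned by now")
ax.set_xlim(0, total)
ax.set_yticks([])
ax.set_xlabel("percent complete")
ax.legend(loc="lower right")
plt.tight_layout()
plt.show()
```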
Pie Charts can serve a similar function to completion bars, though because they can have more than two pieces of pie, they can show progress by parts or phases of a task. Pie charts can also be used to show shares across multiple fields, such as certain activities by region, religious affiliation by region, casualties by unit or area, expenditures by unit or area, etc.
Age Outs utilize maps with measures depicted as blots on the map, broken into time windows, such that older windows seem to fade out over time.
These have the advantage of displaying both location and time, giving trends in a readily digestible manner and providing intuitive analysis. The disadvantages are that keeping the maps up to date is labor intensive and the maps can get cluttered, so care must be taken with the scale of the map, the numbers represented by each blot for a particular measure, and the number of measures on a single map. The size of the time windows must also be picked carefully so as to avoid high frequency noise while keeping clutter in mind. The most recent time window may show blots of red on red or green on green. The next time window shows the red or green on yellow, followed by yellow on yellow, then yellow on grey, and lastly grey on grey. Points beyond this last age-out may be dimmed, made more transparent, or removed from the map entirely. To avoid someone actually wasting time counting blots, it is recommended to put totals next to each blot and to describe in the key what each blot shows. Realize some rounding may occur to make blots fit.
The previous fictitious example of an age out shows migrants interdicted traveling from Africa to Europe. The Straits are obviously the busy area, though, setting aside weather or sea state acting as a deterrent, our efforts are showing effectiveness and numbers are decreasing over time. Efforts in the East have also been successful. Southwest of Italy, however, no progress has been made, as similar numbers are being intercepted over time.
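For those who want to mock one up, here is a rough matplotlib sketch of an age-out style display using a plain scatter plot: each reporting window gets the same marker drawn more transparently the older it is, with totals annotated beside each blot. Coordinates and counts are invented, and a real version would be drawn over a chart of the operating area.

```python
import matplotlib.pyplot as plt

# (x, y, count) per reporting window, most recent first
windows = {
    "this period":   [(2.0, 1.0, 30), (5.5, 2.5, 12)],
    "1 period ago":  [(2.2, 1.1, 45), (5.3, 2.4, 13)],
    "2 periods ago": [(2.1, 0.9, 60), (5.6, 2.6, 11)],
}

fig, ax = plt.subplots()
alphas = [1.0, 0.5, 0.2]                      # newer windows drawn more opaquely
for (label, points), alpha in zip(windows.items(), alphas):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    sizes = [p[2] * 10 for p in points]       # blot area scaled by count
    ax.scatter(xs, ys, s=sizes, alpha=alpha, color="tab:red", label=label)
    for x, y, count in points:
        ax.annotate(str(count), (x, y))       # totals beside each blot, per the tip above
ax.legend()
ax.set_title("interdictions by location and reporting window (notional)")
plt.show()
```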
Area Plots are similar to age out maps in that they display measures over a map though they cover less time.
They also don't show quantity as the age-out display does. Examples of area plots may be areas of denial or areas an opponent controls, areas of free maneuver or areas we control, areas governed versus areas in anarchy, areas of wildfire and areas burned, etc. Such displays may contain a large amoeba-like blob of green or red over the map representing the current area within the measure, extended or reduced by a gray or subdued coloring of the area that was within the previous measure but not the current one. Alternately, color gradients may show severity. Comparing the subdued previous area with the current one gives a short-term trend for those including such depictions.
The above area plot depicts a mining town in Poland that is suffering subsidence of the ground due to mining underneath. This is an unintended consequence that is causing buildings to collapse and threatening the town. This particular example uses a color gradient to show severity. Though not designed to show trend, the graphic conveys some trend since the measures run from time A to time B.
XY Scatters are excellent for trends over time.
They cover as much time as you choose to display, so long as you have the measures collected within that range. Do not discount these as valuable displays. Such displays may be combined with column or bar displays, with the XY series residing on top, in a manner that conveys large volumes of data very quickly.
In effect, this is a dual y-axis, single x-axis chart. Note that not all XY scatters need to be over time, though for OA most will be. Color can be used for a z-axis.
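A brief sketch of that combined display, assuming notional numbers: columns for a raw weekly count on one y-axis and a scatter-plus-line trend on a second y-axis sharing the same time x-axis.

```python
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
incidents = [14, 12, 15, 11, 9, 10, 8, 7, 8, 6, 5, 5]                 # raw count per week
merchant_transits = [40, 42, 41, 45, 47, 50, 52, 55, 54, 58, 60, 61]  # the effect we want

fig, ax1 = plt.subplots()
ax1.bar(weeks, incidents, color="tab:red", alpha=0.4)
ax1.set_xlabel("week")
ax1.set_ylabel("incidents", color="tab:red")

ax2 = ax1.twinx()                      # second y-axis, shared x-axis
ax2.scatter(weeks, merchant_transits, color="tab:blue")
ax2.plot(weeks, merchant_transits, color="tab:blue", linestyle="--")
ax2.set_ylabel("merchant transits", color="tab:blue")

plt.title("notional dual-axis trend display")
plt.show()
```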
Brain Maps are an excellent tool for thinking or brainstorming in planning. They also make excellent progress trackers after the fact. Each component branching off the original piece can be colored in a stoplight manner, the small pieces combining as they build back up to the overall big one. Depending upon the aesthetics of a brain map, the display can be cleaned up so that the small pieces build up to bigger ones as columns, such that the overall item is the roof.
The strengths and weaknesses of such displays are similar to those of stoplights with an added advantage of having all the stoplights in one display in a flow that is representative of lines of effort.
Bubble Plots are XY scatter charts that add the size of the blot to convey extra information. Blots may be colored to categorize them, adding more detail. Within OA, each blot's size is typically determined by the weight of importance the OA team assesses it to hold, and often the distance from the origin is based upon some factor of goodness, such that items not going well sit further from the origin.
The commander's attention can be steered by the combination of distance from origin and blot size. These displays may show a small time trend by drawing a dotted blot at the previous assessment position with a line connecting it to the current solid blot. When distance from origin shows goodness, these displays serve a function similar to sliders; otherwise, they serve as enhanced XY scatters. Should color not be used to categorize, it may be used by the assessment team to highlight which points they feel deserve more attention, with red as highest attention required, yellow some, green little, and blue none.
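A short sketch of such a bubble plot with notional items: position conveys how well things are going (the second axis here, time to correct, is my own assumption for illustration), blot size conveys the weight the team assigns, and color conveys how much attention the team thinks each item needs.

```python
import matplotlib.pyplot as plt

items = [
    # (name, x: how poorly it is going, y: how long to correct, weight, attention color)
    ("port security",        1.0, 1.5, 300,  "green"),
    ("counter-smuggling",    3.0, 2.0, 800,  "yellow"),
    ("host-nation training", 4.5, 4.0, 1500, "red"),
]

fig, ax = plt.subplots()
for name, x, y, weight, color in items:
    ax.scatter(x, y, s=weight, color=color, alpha=0.6)
    ax.annotate(name, (x, y))
ax.set_xlabel("further from origin = going less well")
ax.set_ylabel("further from origin = longer to correct")
ax.set_title("notional bubble plot")
plt.show()
```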
The below is shown as an example of the creativity of the assessor. It is a display from the WHO that combines aspects of age out, area of impact, bubble, and column charts into one readily digestible and very informative piece.
Note the subtle changes when zoomed in. Regarding the blots, the inner circle size represents cases within the last 21 days while the outer circle represents all cases over the entire course of the epidemic.
The Assessment Allied Tactical Publication (ATP) 5-0.3 discusses radar/spider web displays. These consist of spokes representing categories of effects, connected by range rings. Range values are typically based upon a qualitative assessment within each category, thus quantifying the qualitative. Caution: don't objectify the subjective.
These displays are generally highly subjective. Good practice is to create standards for assigning range values so there is continuity between different assessors. The intention is either to fill in the entire radar or to shrink it down to nothing. Accordingly, should the scaling of one spoke run in the negative direction, all must be negative so the chart shrinks from all directions; if one is positive, all must be positive so the chart fills. Mixing will cause confusion. A good practice is to show the current assessment with the previous assessment dotted, to show areas of progress and areas of regression. Each spoke could be colored to convey a scale of acceptability within its category: red would emphasize that the present condition is bad, green or blue good, with colors between representing varying degrees in a manner similar to stoplight coding. The advantage of these displays is that they focus discussion; the disadvantages are that they don't in themselves convey data, nor do they show much trend. The trend issue could easily be solved on soft copies by animating several reporting periods, or on hard copies by stacking three displays vertically down a page column, the top showing current and previous, the middle showing current and two periods ago, and the bottom current and three periods ago. Note that the advantage of focusing discussion also carries risk, as the display could be shown elsewhere without the corresponding discussion. Discussion is particularly important when viewing subjective measures.
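A compact matplotlib sketch of such a radar display, with one spoke per category of effect, a solid polygon for the current assessment, and a dotted outline for the previous one; the categories and 0-to-5 scores are invented placeholders.

```python
import math
import matplotlib.pyplot as plt

categories = ["security", "governance", "economy", "information", "infrastructure"]
current = [3, 2, 4, 3, 2]
previous = [2, 2, 3, 4, 2]

angles = [2 * math.pi * i / len(categories) for i in range(len(categories))]
# close each polygon by repeating the first point
angles_c = angles + angles[:1]
current_c = current + current[:1]
previous_c = previous + previous[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_c, current_c, color="tab:blue", label="current")
ax.fill(angles_c, current_c, color="tab:blue", alpha=0.15)
ax.plot(angles_c, previous_c, color="gray", linestyle=":", label="previous")
ax.set_xticks(angles)
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```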
Conclusion: Overall the idea of Operational Assessment is spot on. The importance of seeing a distinction between performance and effect and tracking each to facilitate corrections is paramount. The process needs improvement, however, by making objective and end state inclusive with the tracking of effect. Once this is done, it becomes a simple matter of how to convey the information most readily to those who can make corrections such that they can quickly internalize that information and adapt as needed.
Update: Wednesday, Jan 12, 2022
A thought for you: "Vaccine effectiveness is really a performance measure."
Consider: vaccine effectiveness tells us how well a vaccine does its job, while what we really care about, the desired effect, is a reduction of disease in society.