How can we organize our work and teams to achieve the highest-possible, sustainable output while also
  • shrinking start-to-finish time
  • nailing delivery date predictions
  • and knowing the most profitable hiring bet?

Comfortably maximize your output, profitability, and on-time track record with the PEAK PACE web app: an always-on, "omniscient", global optimizer that finds the best decision to act on next.

Chart of cumulative card deliveries by date

When and how can a single step in a process, or a job done by one person for $36k, massively reduce that year's sales?

(for example, dragging a theoretically possible $12 million down to $8 million)

Answer:  when "Bottleneck-onomics" is in play (and it always is).

Only one kind of action can move the needle on profitability or production in a big way: improving the most-limiting constraint in your personal or business process.
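
As a rough, back-of-the-envelope illustration (with hypothetical numbers that echo the figures above), a few lines of Python show how the single slowest step caps what the whole system can sell:

    # Hypothetical "Bottleneck-onomics": revenue is capped by the slowest stage,
    # no matter how much the other stages could produce.
    stage_capacity_per_year = {
        "sales":    1200,   # deals the sales team could close
        "build":     800,   # deals the one $36k specialist can fulfill  <- constraint
        "delivery": 1500,   # deals operations could ship
    }
    revenue_per_deal = 10_000  # dollars (illustrative)

    possible = stage_capacity_per_year["sales"] * revenue_per_deal        # $12,000,000
    actual   = min(stage_capacity_per_year.values()) * revenue_per_deal   # $8,000,000
    print(f"Theoretically possible: ${possible:,}  /  Capped by the constraint: ${actual:,}")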

(Eli Goldratt, a physicist-turned-consultant, introduced this Theory of Constraints in his bestselling book, The Goal. It has been required reading for Toyota employees and Amazon execs.)

First, you have to know where that constraint is and how much flows through it. That's not obvious without data and proper analysis.

The Peak Pace app helps there by highlighting constrained process stages or actors, which then allows you to use Throughput Accounting, a constraint-based method of internal accounting, to evaluate project and hiring options in a way that will have realistic, maximal impact (on your chosen metric).
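
As a hedged sketch of how Throughput Accounting ranks options (the formula is standard Theory of Constraints accounting; the project names and numbers below are made up), you compute throughput as revenue minus truly variable costs, then compare options by throughput per hour of the constraint's time:

    # Throughput Accounting sketch: Throughput = revenue - truly variable costs.
    # The best option is the one that earns the most per hour of the constrained resource.
    options = {
        "Project A": {"revenue": 50_000, "variable_cost": 10_000, "constraint_hours": 100},
        "Project B": {"revenue": 30_000, "variable_cost":  2_000, "constraint_hours":  40},
    }

    for name, o in options.items():
        throughput = o["revenue"] - o["variable_cost"]
        per_hour = throughput / o["constraint_hours"]
        print(f"{name}: throughput ${throughput:,}, ${per_hour:,.0f} per constraint-hour")
    # Project A: throughput $40,000, $400 per constraint-hour
    # Project B: throughput $28,000, $700 per constraint-hour  <- better use of the bottleneck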

What we humans can do is amazing

Taking a closer look at the way we work, and boiling it down into a process with a distinct set of stages, allows us to leverage a sophisticated tool like the Peak Pace app to keep us both fully productive and creative.

The whole process includes everything, from early stages like

  • discovering customer needs
  • documenting promises, orders, and contracts
  • describing visions and designs
  • and specifying project and task details

All the way to crucial later stages, such as

  • actually building the product
  • doing quality checks
  • and making final delivery or going live

The Peak Pace web app brings focus to each person on the one task that takes the biggest step toward the overall goal

As individuals on our own, we really can't have a global view. Though we might try to do what is best for the organization, our human limitations usually lead us to make choices that are only locally or partially optimal.

When planning and scheduling our work operations, we actually have innumerable ways to prioritize and line up tasks.

To deal with the complexity of so many possible sequences, people often over-simplify the problem.

They use tempting, too-easy heuristics and mental models better suited to a game of checkers.

This is understandable, since efficient operations are hard to get right, since simple approaches often work elsewhere, and since few people can do any better.

We're actually players in a game that's much more complicated than chess.

And we're playing on many boards at the same time.

For a small team of three people, how can the number of simultaneous projects cause steady output to drop?

This simple live simulation video uses pieces from a tabletop game to show the answer.

Three Teams and Effect of Varying WIP Levels on Characteristics

For that small team, how many projects at a time can they work on and still have fast, frequent delivery?

As above, each 3-person team has the following average number of projects in progress: 4+, 3, and 1. Watch the live simulation video, in slow motion, to see where the following data came from. (Which team's results would you want?)

Three Teams and Effect of Varying WIP Levels on Delivery Statistics
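
A back-of-the-envelope sketch (not the tabletop simulation or the Peak Pace simulator) shows the same effect with Little's Law: for a fixed team capacity, average start-to-finish time grows in proportion to the number of projects in progress.

    # Little's Law: average lead time = WIP / throughput.
    team_throughput = 1.0   # hypothetical: the team completes about 1 project per month

    for wip in (1, 3, 5):
        lead_time_months = wip / team_throughput
        print(f"{wip} project(s) in progress -> roughly {lead_time_months:.0f} month(s) each, start to finish")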

Find, achieve, maintain, and increase the true maximum productive potential of your teams.

Productivity gets dragged down by non-optimal decisions involving many unmeasured factors.

Peak Pace optimizes workflows of teams and organizations using algorithm-driven recommendations based on your own data as work moves through each stage of your process.

It takes a tool like this to calibrate the volume and rate of work produced by each upstream stage to the capacity of downstream stages to process that volume.

It's even harder to make the best decisions when people move around to cover multiple stages in a long process, and when they come online and go offline randomly, or in spurts of activity at defined intervals.

Don't leave decisions about what is the "best action, right now" to each actor's subjective intuition based on limited data.

Every time someone has several choices of what to do next, despite ability and good intentions, there's a significant probability that they will make a non-optimal choice for the process as a whole.

When your goal is to optimize entire organizational outcomes, then your process definition and collected data must be global enough to encompass all system variables that impact accuracy; otherwise, your decisions are most likely sub-optimal.

Can you prove it?

Both established science and actual charts of user data show the undeniable, visible improvements caused by simply operating at your target WIP Level (see further below).

Only a hyper-realistic simulation can truly model a real process in practice, so we built one.

Then we built AI tech that searches for the optimal process decisions to meet target outcome goals.

Both the simulator and the AI are constantly improved by continual research involving thousands of simulation runs and statistical measurements of results.

If you click the "Play vs. AI" button, you can try to defeat our technology in several challenges, to see which decision-maker is the safer bet. (As time goes by, we expect our AI to pull even further ahead.)

See our AI in action by playing the Peak Pace Workflow Game

Try to make the best decisions for the process as-a-whole over multiple challenges of increasing complexity.

It's not so easy.

Hire optimally, by knowing which next-hired specialty will have the biggest effect

Based on data, know which personnel shortages are critical

Know when a process or policy change can actually solve the current bottleneck (or constraint / limiting factor) on throughput, without needing to hire

This is possible when the process depends on the few people (or one person) that perform a certain role and they are spread too thin.

In that case, often the answer is to reduce that dependency upon them, where possible, either by training others to share the load or by reducing their responsibilities in other areas.

In Peak Pace, you define skills for each person and process stage, so the system can find the best next action while accounting for insufficient availability of the most-needed skills.

Other apps are designed around direct assignments of a person to a stage or task, but such hard constraints take away the ability to find the optimal work assignments and scheduling for the organization's goal.

Find the current limiting factor, bottleneck, or constraint in your process, to quickly focus improvement efforts

When your data is fed into the Peak Pace app, it can look backwards at the performance of historical process stages to highlight potential bottlenecks or constraints affecting throughput.

Then it looks forward, using those insights as the starting point for thousands of rich-model simulations that project the effects of small changes to the process, in order to confirm where the biggest bottlenecks or limiting factors are and how large their impact on the process metrics is.
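
As a simplified illustration of that backward-looking step (not Peak Pace's actual algorithm, and with made-up stage names and numbers), one common signal of a constraint is a stage where work waits far longer than it is actively processed:

    # Rank stages by how long work queues there relative to how long it is worked on.
    stage_history_hours = {
        # stage: (avg hours waiting in queue, avg hours of active work)
        "Specify": (4, 6),
        "Build":   (30, 20),
        "QA":      (70, 8),    # long queue, little active work -> likely constraint
        "Deliver": (2, 3),
    }

    ranked = sorted(stage_history_hours,
                    key=lambda s: stage_history_hours[s][0] / stage_history_hours[s][1],
                    reverse=True)
    print("Suspected constraints, most likely first:", ranked)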

Efficient Bayesian A/B experiments on how work gets done at each process stage will increase task speed and throughput even further

After taking Peak Pace app advice when choosing tasks, you'll have much better process throughput and start-to-finish metrics, but what else can you do?

At any moment, work on tasks at each stage is done using the current standards, techniques, and technology you have in place. Those can all be measured and improved, to boost your speed and output even further, without increasing team size.

Don't pause movement on backlogs just to evaluate new techniques, practices, or technologies.

  • It's rare, but sometimes it makes sense to stop most or all work to address an urgent process problem
  • However, in general, we can't afford to do complete, up-front research on the perfectly-efficient way to work on a task at a given stage.
  • In the short term, the time spent on that research project would quickly dwarf the time needed to simply do the task in a less-than-perfect way.

Do both at the same time.

  • The secret is to discover the best way to work by amortizing small efforts over time, running randomized A/B experiments in the Peak Pace app.
  • Newer Bayesian A/B testing methods can provide statistically valid, actionable feedback on the best choice much faster, with far fewer data points than the traditional hypothesis-testing approach, whenever a clear winner emerges early.

The Peak Pace web app seamlessly creates Stage-Specific Bayesian A/B Tests for each standard practice that you define

Discover how much time you spend or save, with and without the standard, at the stage and for the whole process.
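
As a minimal sketch of the idea (hypothetical data, a deliberately simplified model, and not the app's actual implementation), you can score each task as "finished within the target time" and compare two practices with Beta posteriors:

    import numpy as np

    # Practice A: 18 of 25 tasks hit the target time; Practice B: 24 of 27 did.
    a_hits, a_misses = 18, 7
    b_hits, b_misses = 24, 3

    rng = np.random.default_rng(0)
    post_a = rng.beta(1 + a_hits, 1 + a_misses, 100_000)   # Beta(1, 1) prior
    post_b = rng.beta(1 + b_hits, 1 + b_misses, 100_000)

    print(f"P(practice B beats practice A) ~= {np.mean(post_b > post_a):.2f}")
    # A high probability lets you adopt the winner early, without a large fixed sample size.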

The composition of incoming work is process destiny

Product feature sets and the projects to implement them can grow arbitrarily into mountains of work by adding more tasks.

Such projects constitute high, built-in WIP. When they are conceived of, chunked, and measured as just a few large features or milestones, they don't provide enough data for sound or timely predictions of completion, and they take a long time.

Such large chunks implicitly devalue their constituent parts by not releasing them earlier.

There are many other ways to conceptualize large feature sets, such as breaking them down into a sequence of very small pieces of functionality.

To make comparisons and predictions, you need a "standard unit of work" for tasks, though that standard will not take the form of a direct, single value (for example, tasks that always take 2 hours).

Instead, that unit-of-work standard is set by using a consistent work-definition policy that leads to tasks of a rough size whose completion time (in terms of effort) falls within a low-to-high range of values.

The best standard is generally one that leads to the smallest time values over a tight low-to-high range.

By limiting tasks and product slices to the smallest possible useful size, the time to complete work will still have variability, but that time will fluctuate within a much smaller range.

That creates more data points and makes it possible to have tighter, more precise predictions of the future.

That means you can make promises you can keep, and quickly deliver them.
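
As a rough sketch of why many small tasks make forecasting easier (hypothetical throughput history, not the Peak Pace forecasting model), you can resample your own weekly delivery counts to project a completion window:

    import numpy as np

    weekly_deliveries = [6, 4, 7, 5, 6, 3, 8, 5]   # small cards finished per week, from history
    remaining_tasks = 40

    rng = np.random.default_rng(1)
    weeks_needed = []
    for _ in range(10_000):
        done, weeks = 0, 0
        while done < remaining_tasks:
            done += rng.choice(weekly_deliveries)   # sample a plausible week from history
            weeks += 1
        weeks_needed.append(weeks)

    lo, hi = np.percentile(weeks_needed, [10, 90])
    print(f"80% forecast window: {lo:.0f} to {hi:.0f} weeks")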

The Peak Pace app is built for prioritizing a business outcome, listing hypothetical product features to impact that outcome, then breaking down features into small slices that have a chance to move the needle on that outcome as each task gets delivered.

A wind-up toy with only one forward move can't be agile. Starting with an expansive vision, then spinning out many short tasks that cover its most important and valuable aspects and directions, is what allows us to step lively and pivot.

Going tiny with small bets means we can move with a more rapid, granular sequence of iterations.

This opens up far more decision points each week, meaning we have a much better chance to get close to decision-outcome optimality.

This avoids the risk of steering the ship too far along a single course we already committed to and are stuck on.

Yes, capping task size to a very small slice of end-to-end functionality or a small step toward a larger project will bump up the average number of delivered cards

Far from being just some artificial "trick", making that transition to a higher count is essential to finding optimal decisions that maximize the organization's goals.

Small tasks are delivered faster, yield data earlier, produce more data, smooth the variability of process flow, and provide a sense of accomplishment rather than burnout, along with all the other benefits of agility mentioned above.

Chart of card effort hours by card delivery date

We do our best work in a deep work flow, but, by its nature, that mental state is unconscious of time.

That means perceived productivity is not necessarily the same as clock time.

We need a reliable external influence to channel our creativity and keep us on track.

The Peak Pace app can smooth the experience of work so everyone can feel more flow, focus, and time for deep work while channeling and allocating their efforts selectively to meet the slight constraints that lead to better outcomes.

The attention should be on the whole system, not individuals, yet optimizing that system requires data about our work day.

To balance those concerns, some data is shown only to you. Other aggregated data is carefully presented in a constructive way that emphasizes training and learning from each other, so the whole team can increase their individual productivity together.

You need to stick to an optimal Work-in-Process (WIP) Limit Level.

What is a WIP Level?

"Work in Process" means the number of unfinished-but-started tasks located at some stage in your process (which could extend beyond the box you've drawn and considered to be your process, up to now).

The concept of WIP can be applied to everything from whole projects to the sub-tasks that make up the scope of a given task, since every element of work costs time.

Why use it? Because Pull is faster than Push.

A WIP Level is a traffic light that determines exactly when to start new work (i.e. "pull" it into the process).

WIP Levels have been the key to fast and highly-profitable operations, ever since the first innovations by Henry Ford and the Toyota family.

Pushing tasks into the process before the WIP Level signals it's ready causes stages and actors with more capacity than others to over-produce. That adds WIP to the process and bogs it down, since the lower-capacity stages and actors can't keep up, producing lower throughput and taking longer to do it.

So once you stop pushing, effective throughput might even increase, wherever process efficiency had been degraded significantly below its maximum capacity.
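
The pull rule itself is simple; a minimal sketch (illustrative only, not the app's scheduling logic) starts new work only when finishing something frees a slot under the WIP Level:

    from collections import deque

    def pull_next(backlog: deque, in_progress: set, wip_limit: int) -> None:
        """Start the next backlog item only if the WIP Level allows it."""
        if backlog and len(in_progress) < wip_limit:
            in_progress.add(backlog.popleft())   # the "pull" signal: capacity is free
        # otherwise, don't push: finish (or help finish) what's already started

    backlog = deque(["task-1", "task-2", "task-3"])
    in_progress = set()
    for _ in range(3):
        pull_next(backlog, in_progress, wip_limit=2)   # third call is a no-op: limit reached
    print(in_progress)   # contains task-1 and task-2; task-3 waits until a slot opens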

What happens if you don't?

Start-to-finish times keep getting longer and longer.

People are overwhelmed unnecessarily and burnt-out, or sometimes they are under-utilized (if you operate below the WIP Level).

Ad hoc starting of new work and multi-tasking both flourish, causing overall throughput to drop below its full potential.

How do we do this, then? The Peak Pace app finds the optimal WIP levels, in two ways:

By running simulations, to estimate levels that are likely optimal, long-term.

In practice, by observing how experimental levels affect actual throughput.

Each stage of a process can have a WIP Level, including work items waiting to start there and items being actively processed.

Slight, tactical, short-term "cheating" with the WIP Level can sometimes be more optimal, so the Peak Pace app does that occasionally.

The app has a much better chance of getting that right than humans do, since it relies on thousands of detailed, realistic simulations.

Cumulative charts show long-term patterns (and meaningful changes) in process output productivity, minimizing noise from natural variability in the current process.

It's still important to address the causes of variability, when cost-effective and advantageous, but reality is variable and random by nature so our process will always have some variability. The Peak Pace internal data model considers that variability.

Chart of historical drop in commit-to-delivery time

Hold on, let's go back to the WIP Level thing:

"Isn't multi-tasking better than waiting to start something, when there's so much to do ?"

Well, those are not your only choices... and, generally, no; it depends.

Multi-tasking is better only when there isn't a more optimal choice for action in the moment (and the Peak Pace app will let you know when multi-tasking is needed and exactly which task is best to work on next).

The urge to multi-task is a signal that something needs to change, to get more optimal results.

To keep the process flow and its metrics stable and optimal (by honoring the WIP Level rather than starting or continuing other work), "inaction" can be an acceptable choice when it avoids the over-producing problem and takes the form of further training, quality improvements, or (best of all) helping others to finish an already-started task faster.

The confusing thing is, some processes are operated so far from the ideal, and everyone is so used to that state of things, that multi-tasking can seem to be the best, most rational response to that non-ideal situation.

However, the real solution is to use something like the Peak Pace app and its best practices to bring the process in line with its true, full potential without resorting to sub-optimal workarounds.

Multi-tasking can be a reasonable temporary response to a non-ideal situation that takes time to change, but it should always prompt an investigation to find and act on ways to prevent it with an action plan, rather than accepting the status quo.

In summary, the Peak Pace web app keeps your team's work on the most productive path

Using a goal-maximizing algorithm backed by deep simulations of a realistic, rich model fed with highly-segmented team process data collected through a simple, focused web experience.

  • Keep your existing collaboration tools, profession-specific work tracking tools, whatever helps you get the work itself done.
  • Get the whole picture of your workflow's timing from the highest altitude to the smallest detail.
  • Inform your mental model of your operations based on a complete, realistic data model rather than on overly-simple, reductionist theories that can deviate greatly from actual practice.
  • Leverage rich, deep machine learning and computation to maximize output in a way that’s humanly possible, rather than sacrificing humanity by trying to fit people into simple, intuitively tempting, but flawed, mechanistic mental models of “the way things should be”.
  • Work within time constraints that achieve a desired work-life balance (sustainable pace) for everyone involved.