In Lean Enterprise, Ron Westrum’s continuum of safety cultures struck a chord. His ideas resonate strongly when applied to teams making digital products, where a strong link exists between the quality of what is made and the culture of the team that made it.
Westrum identifies three types of organisational culture:
The true nature of an organisation reveals itself when things go wrong. Westrum writes (emphasis mine):
When things go wrong, pathological climates encourage finding a scapegoat, bureaucratic organizations seek justice, and the generative organization tries to discover the basic problems with the system.
Pathological culture is prone to overwork. Lean Enterprise describes overwork as wasted effort that does not deliver value to the customer or their clients. HiPPOs are a cause of overwork in pathological setups:
We create an unsustainable “hero culture” that rewards overwork and high utilization (making sure everybody is busy) rather than doing the least possible work to achieve the desired outcomes.
On the flip side, bureaucratic culture is prone to underwork and inefficiency.
But generative culture responds to change as it happens, protecting itself against both over- and underwork by allowing wrong things to be done, provided that learning occurs and the new knowledge is shared and used to change course. ‘Doing the least amount of work possible’ is misleading when viewed outside the lean startup context: doing the least amount of work actually takes a lot of hard work to get right. When executives in pathological and bureaucratic organisations begin to understand this, things may start changing:
a high trust, generative culture is not only important for creating a safe working environment – it is the foundation of creating a high performance organisation.
There is no doubt that automation will continue to free people up from routine jobs, jobs that are dangerous, and jobs that can be done better by machines. Putting people out of work is not a good thing if they don’t have the skills to find new, more fulfilling jobs. But where does the responsibility lie to ensure that people have those skills?
I was surprised to find that the International Federation of Robotics predicts that in the next five years robotics will be a major driver of global job creation. Many of these jobs will be in the robotics sector itself, and many will be knowledge-based jobs requiring new skills. The intersection of robots, digital skills, and jobs raises many questions in a country like South Africa, with its highly politicised and unionised workforce. What is the role of a union in this context? Should it protect the shrinking pool of industrial-era jobs, or equip its members with the information and training to do the jobs of the future?
In Capgemini’s Digital Transformation Review No. 5, Per Vegard Nerseth, the Managing Director of ABB Robotics talks about the current state of the robotics industry and where it’s going. Four quotes stood out for me.
We need to find ways to make robots easier to use so that they do not require a highly skilled workforce to operate.
More robots will require skilled people to maintain and service them. But Nerseth says that we will increasingly see robots that can program themselves. Does this mean that a robot’s dependence on a human – an error-prone creature – is itself a high risk?
A manual paint job for a car usually utilises 20-30% more paint compared to robotized painting.
Nerseth says that a single robot can replace many workers on a production line, work faster, and with greater efficiency, leading to dramatic cost reduction. Another driver to get more robots on production lines is the high cost of replacing people, which includes recruitment and training.
The industry is looking at ways to make robots work more closely with human beings, so that they can actually collaborate.
We will see more robots working alongside humans on production lines and beyond. Due to strict safety laws, robots currently need to be caged in when they work alongside people. The challenge is going to be to design robots that are safe and pleasant to work with.
The market for consumer robots has not taken off in the way it was expected to.
This is not surprising. Tasks like cooking and house cleaning are both subjective and require the ‘human touch’. The areas where service robots are predicted to take off are in medicine and surgery.
The machines are racing ahead, but as a society we are not. Instead, the machines are posing questions that we’re struggling to answer. In 1900 no one would have believed that 69 years later people would fly to the moon. Are we at a similar juncture now, where 50 years from now we’ll be hanging out with, and working alongside, very smart machines – and actually like it, or not? What concerns me is that the technology is moving faster than our ability to think about what it means for us. But that is not what technology wants.
This is my UX SA 2015 talk.
It’s about how to do effective UX work with Agile teams, and support a product manager. It contains:
Google is happy to announce its second UX Masterclass to be held in Cape Town.
Dates: 5th and 6th May 2015
Venue: To be announced
They have divided the course into 2 days. Choose the day that suits your team best.
Day 1: New Product Teams
Day 1 will equip teams who are working on launching a new product with fundamentals of the UX Process and tips on product launch such as:
Day 2: Post-launch Product Teams
Day 2 will focus on helping teams who have publicly launched products to apply the fundamentals of UX to their next product iteration. The sessions will include:
Want to know more? Ready to sign up?
I came across the Conant-Ashby Theorem recently. It states:
Every good regulator of a system must be a model of that system.
It got stuck in my head longer than it logically should have – I guessed that there must be something to it that I was just not seeing. Then we worked on a gamification project, and it started making sense. In plain English it could mean: To tune a system you need a good analytics model of that system.1
During the design phase of a project we rarely think about designing control mechanisms that allow us to fine tune our designs when they are out in the world. Instead, we unleash variables in the world that run wild and soon disappear from sight.
Analytics not only tell us if our designs are doing their jobs – designing with a good analytics model in mind gives us levers to pull when they are not.
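To make this concrete, here is a minimal sketch of designing with an analytics model in mind; the class name, the button variants, and the conversion figures are all hypothetical, invented for illustration. The idea is simply that every design variable we release is logged with the outcome it produced, so the data itself tells us which lever to pull.

```python
from collections import defaultdict

class DesignAnalytics:
    """A toy analytics model: one record per user, per design setting."""

    def __init__(self):
        self.outcomes = defaultdict(list)  # setting -> list of 0/1 outcomes

    def record(self, setting, converted):
        """Log one user's outcome under a given design setting."""
        self.outcomes[setting].append(1 if converted else 0)

    def conversion_rate(self, setting):
        results = self.outcomes[setting]
        return sum(results) / len(results) if results else 0.0

    def best_setting(self):
        """The lever to pull: which variant is doing its job best?"""
        return max(self.outcomes, key=self.conversion_rate)

# Two hypothetical variants of the same design variable, out in the world:
analytics = DesignAnalytics()
for converted in [True, False, True]:
    analytics.record("large-button", converted)
for converted in [False, False, True]:
    analytics.record("small-button", converted)

print(analytics.best_setting())  # -> large-button
```

Nothing about the sketch is sophisticated; the point is that the variables never disappear from sight, because the model of the system travels with the design.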
Good consultants spend more time thinking about their clients’ challenges than the clients themselves – and making models is a big part of this.
It follows that at the start of each project we spend a lot of time making models. It’s the best way to get under the skin of a project. Models are great at focusing a group. We do it to understand situations better, and it helps our teams visualise and think through a design challenge from all angles. Model making sits at the core of product discovery.
The models that I’m referring to here aren’t deliverables. They could be, if we polished them up, but in most cases that’s a waste of valuable time. I see models as necessary discardables: their purpose is to move you along, and without them you’re not going anywhere.
Systems Approaches to Managing Change highlights two interesting things about models that are worth considering.
Models are subjective.
As with any model, viewpoints are inevitably partial in the sense of being both incomplete and of being viewed from a particular or partisan perspective necessarily based on its particular purpose.
Subjectivity is good here: it gives you a place to start when faced with a blank canvas. The sole reason for making models is to change them. Good models change shape as soon as you start showing them to people, because the partisan perspectives break down when more minds start looking at a problem.
Models are wrong.
Constructing a model is a practical way of visualising the key elements of a problem. Statistician George Box once said, ‘Essentially, all models are wrong, but some are useful.’ Models are always wrong in that they don’t serve as detailed illustrations of the problem. This is also why they’re right. The simpler you can make the model, the easier it is to understand a problem.
Models lose their power when they grow too complex, and when they are ‘finished’ they no longer draw ideas out of us.
It turns out that the two weaknesses of models are also their strengths. It takes courage to take the first step and risk being subjective and wrong – but when you start exploring, you can’t start out any other way.
Jared Spool’s Beans and Noses caused quite a stir when it resurfaced and worked its way around our office. It’s the perfect metaphor to describe the consultant’s dilemma when working with big clients:
The idea is blindingly simple, actually. Every so often, you’ll run into someone with beans who has, for no good reason, decided to put them up their own nose. Way up there. In a place where beans should not go.
What do you do?
Lean Enterprise: How High Performance Organizations Innovate at Scale takes a more diplomatic approach, framing it as the planning fallacy:
Due to a cognitive bias known as the planning fallacy, executives tend to “make decisions based on delusional optimism rather than on a rational weighing of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”
The planning fallacy is a means of managing uncertainty by spinning ‘scenarios of success’ at the outset of a project. It’s easy to fall for this delusion because we don’t like thinking about failure, or admitting that we may be acting before having explored different options sufficiently. What can we do about this? The Lean Startup process can help:
Because the Lean Startup process is relatively cheap, in an enterprise context we can pursue multiple possible business models simultaneously using the Principle of Optionality.
The Principle of Optionality simply means that by investing limited amounts of time and money in small experiments we can investigate more ideas simultaneously:
…the principles of constraining time and resources, thus limiting downside, and building a minimum viable product to test your value hypothesis as soon as possible with real customers should be applied at the start of every endeavor.
The Principle of Optionality: building and testing minimum viable products simultaneously. Most will fail, but the probability of a big win increases. 1
All roads lead to a minimum viable product, and testing a minimum viable product with real customers is a rational means of weighing up the gains, losses, and probabilities when things are uncertain. It is the antidote to delusional optimism, and will increase the likelihood that beans end up in the ground.
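The arithmetic behind optionality can be sketched in a few lines; the 10% success probability is an illustrative assumption, not a figure from the book. If experiments are small, cheap, and independent, the chance that at least one succeeds grows quickly with the number of experiments.

```python
def chance_of_a_win(p_success: float, n_experiments: int) -> float:
    """Probability that at least one of n independent experiments succeeds."""
    return 1 - (1 - p_success) ** n_experiments

# One big bet with a 10% chance of success vs. ten small bets:
print(round(chance_of_a_win(0.10, 1), 2))   # -> 0.1
print(round(chance_of_a_win(0.10, 10), 2))  # -> 0.65
```

Most of the ten experiments still fail, but the probability of a big win rises from one in ten to roughly two in three – which is the whole argument of the diagram above.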
Diagram interpreted and redrawn from Lean Enterprise: How High Performance Organizations Innovate at Scale. Originally from Antifragile: Things that gain from Disorder by Nassim Nicolas Taleb. ↩
The epic list of game mechanics at work in Crossy Road, the hit iPad game, might inspire you to gamify whatever you’re working on.
Crossy Road is a simple and wonderful game, based on the arcade classic Frogger. It swept the iOS world in November and December. And it’s still sweeping.
There’s nothing to it – just an infinite sequence of roads and rivers to cross until you die. Why is it so compelling? Here’s how they encourage you to keep playing…
You can tell the makers love gaming and loved making their game. It’s ingeniously stylish to look at, and has a great sense of humour. Those are key: if the game was ugly and joyless, the game mechanics alone wouldn’t stand a chance. But the first couple of persuasion/gaming principles are there:
A good game needs a clear goal. It’s one of the key principles of Flow. There are several layers of goal in Crossy road, which will keep you motivated on the time scale of seconds all the way up to weeks.
Arguably the uber goal of the game is to build a complete collection of avatars. Collection is great because:
In addition, collections multiply the value of each individual gain. Adding one item to your collection makes the whole collection feel new, so you get the feeling of gaining something bigger: an improved collection.
This is the only place where the game attempts to monetise. The authors say that they wanted to take a different approach to monetisation: you can play quite happily without ever paying a penny. But if you’re going to spend it on anything, they think you’ll spend it to complete your collection of avatars.
After a pre-determined time (often 6 hours) you get a gift of a random amount of “cents”. You can spend the cents on a random avatar to add to your collection. So we’ve got…
After a few goes, the game gives you a choice of a few avatars that are not in your collection, and you can try one for three turns. After three turns you have to “give it back”, or choose to buy it for 69p and get an extra 250c thrown in for free. So we’ve got…
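The timed-gift loop described above can be sketched as a few lines of Python. The cooldown matches the ‘often 6 hours’ mentioned earlier, but the gift amounts are guesses, and the class is my own framing rather than anything from the game’s code.

```python
import random

COOLDOWN = 6 * 60 * 60  # "often 6 hours", in seconds

class GiftTimer:
    """Sketch of the appointment + variable-reward loop:
    wait out the cooldown, collect a random amount of cents, repeat."""

    def __init__(self):
        self.cents = 0
        self.last_claim = -COOLDOWN  # a gift is available immediately

    def claim(self, now):
        if now - self.last_claim < COOLDOWN:
            return 0  # come back later: the appointment mechanic
        self.last_claim = now
        gift = random.randint(50, 100)  # variable reward; amounts are guesses
        self.cents += gift
        return gift

timer = GiftTimer()
first = timer.claim(now=0)         # available straight away
second = timer.claim(now=60)       # too soon: nothing
third = timer.claim(now=COOLDOWN)  # cooldown elapsed: another gift
print(first > 0, second == 0, third > 0)  # -> True True True
```

Two mechanics in a handful of lines: the fixed-interval appointment that pulls you back into the app, and the variable reward that makes each return feel like a small slot-machine pull.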
You’re practising a (mostly pointless) skill and there’s that drive to “just try one more time because I was SO CLOSE”, that propelled Angry Birds to the top of the charts. The moment you die you can restart as fast as humanly possible, so that you can stay in the zone and master that skill. It’s that heady mix of dopamine and opiates that keeps us trawling Pinterest or Twitter, looking for info-gems.
In a funny way, each level is a Skinner box: play one more time and you might get lucky and set a new personal best.
Packing this many persuasive design techniques into such a simple app is quite impressive. But I have a sneaking suspicion there may be more hiding in there. What have I missed?
Marty Neumeier’s Metaskills: Five Talents for the Robotic Age is well worth a read if, like me, you often wonder what it is exactly that you do, and how it has changed over the years. It turns out that we are locked in a race to stay ahead of the machines with creativity our competitive advantage… for now. Neumeier conjures up the Robot Curve to illustrate why cultivating creativity is the only way for us to stay ahead of the machines:
The Robot Curve is a waterfall of opportunity that flows endlessly from the creative to the automated.
As work becomes routinised, and then mechanised, its value decreases regardless of how complex the task is for a human to perform.
In Lean Enterprise: How High Performance Organizations Innovate at Scale the authors introduce the concept of friction. The idea first appeared in On War by Carl von Clausewitz. He wrote about the uncertainty faced by actors in rapidly changing environments acting on limited information about the environment as a whole.
Basically, von Clausewitz describes friction as the accumulation of unexpected events that prevents reality from unfolding as we expect it to. It is an excellent metaphor for understanding the behaviour of any human organisation, including ourselves:
Friction is ultimately a consequence of the human condition – the fact that organisations are composed of people with independent wills and limited information. Thus friction cannot be overcome.