It is not surprising that quantity is an often-used measure of productivity. Lean Enterprise describes how the productivity of software engineers is measured:
Individual productivity is most commonly measured by throughput—the time it takes to complete a standardized task under controlled conditions. This approach is premised upon a Taylorist view of work where managers define the tasks to be done and workers try to complete these tasks as rapidly as possible. Thus, old-school metrics such as lines of code per day and number of hours worked are used to measure individual productivity of software engineers.
But the problem with quantity in an information, or knowledge, context is that it adds complexity. In a lean generative culture the quantity metric gets flipped around: the focus shifts to quality, but quality is hard to measure.
The flaws in these measures are obvious if we consider the ideal outcomes: the fewest lines of code possible in order to solve a problem, and the creation of simplified, common processes and customer interactions that reduce complexity in IT systems. Our most productive people are those that find ingenious ways to avoid writing any code at all.
A History of the World in Twelve Maps is a tour de force if you are interested in maps. The final chapter, Information: Google Earth, 2012, is a highlight for me: Jerry Brotton writes about mapmaking in the information age and the effects of Google on cartography. What I find interesting is the distinction between what we generally conceive a map to be vs. a geospatial application, as Google refers to Google Earth and Google Maps.
At the centre of Googlenomics are the company’s geospatial applications. As Adwords allows companies to target their advertisements more effectively, so Google Earth and Maps locate their product in both physical and virtual space.
He quotes from a Michael T. Jones lecture titled The new meaning of maps [PDF] where Jones defines the online map as a ‘place of business’, an ‘application platform’ where businesses trade ‘actionable information’. The alarmists are reminded that this is nothing new:
Google Earth is part of a long and distinguished cartographic tradition of mapping geography onto commerce …
But in terms of mapmaking, geospatial applications represent an important difference with what has gone before:
…for the first time in recorded history, a world view is being constructed according to information which is not publicly and freely available. All prior methods of mapmaking ultimately disclosed their techniques and sources…
Google Maps API allows users to reproduce Google’s maps, but not to understand its code; and like Adwords, by tracking the circulation of its maps, Google can simply extend its database on users’ tastes and habits.
Brotton writes that as the monopolisation of information continues, we, the information sources, may not be sufficiently motivated or organised to resist it. We are unable to see where the new maps, or geospatial applications, are taking us.
We are on the brink of a new geography, but it is one that risks being driven as never before by a single imperative: the accumulation of financial profit through the monopolization of quantifiable information.
In digital we often hear that there is a skills shortage: we are either in it, or it is looming. We are told that schools and universities are not producing people with the skills needed to flourish in the digital/information age, and the burden of righting this wrong is shifted onto companies. But developing people is something few companies get right, which is why companies aim to recruit the smartest people instead. Lean Enterprise quotes Malcolm Gladwell, who calls this the talent myth:
The talent myth assumes that people make organisations smart. More often than not, it’s the other way around…
Following this logic, we assume that companies are smart because the people in them are. There is truth in this, but it could absolve companies of the responsibility, and urgency, to transform into learning organisations. I’m reminded of stories about good people being snapped up by well-known companies, only to disappear, or emerge disillusioned a few years later. W. Edwards Deming put it this way:
A bad system will beat a good person every single time.
If the skills can’t be found, hire people who are hungry to learn, create an environment that supports them, and the skills will follow. In the age of ubiquitous information the responsibility to grow lies with individuals, but equally, it is the responsibility of organisations to start thinking about the relationship between culture and performance:
Thus, organizational culture determines not just the productivity and the performance of the people working in it, but also their ability to gain new skills, their attitude to failure and new challenges, and their goals.
In Lean Enterprise, Ron Westrum’s continuum of safety cultures struck a chord. His ideas resonate strongly when applied to teams making digital products, where a strong link exists between the quality of what is made and the culture of the teams that make it.
Westrum identifies three types of organisational culture: pathological, bureaucratic, and generative.
The true nature of an organisation reveals itself when things go wrong. Westrum writes (emphasis mine):
When things go wrong, pathological climates encourage finding a scapegoat, bureaucratic organizations seek justice, and the generative organization tries to discover the basic problems with the system.
Pathological culture is prone to overwork. Lean Enterprise describes overwork as wasted effort that does not deliver value to the customer or client. HiPPOs (Highest Paid Person’s Opinions) are a cause of overwork in pathological setups:
We create an unsustainable “hero culture” that rewards overwork and high utilization (making sure everybody is busy) rather than doing the least possible work to achieve the desired outcomes.
On the flip side, bureaucratic culture is prone to underwork and inefficiency.
Generative culture, by contrast, responds to change as it happens, protecting itself against both over- and underwork by allowing the wrong things to be done, provided that learning occurs and the new knowledge is shared and used to change course. Taken out of its lean startup context, ‘doing the least possible work’ is misleading: doing the least amount of work actually takes a lot of hard work to get right. When executives in pathological and bureaucratic organisations begin to understand this, things may start changing:
a high trust, generative culture is not only important for creating a safe working environment – it is the foundation of creating a high performance organisation.
There is no doubt that automation will continue to free people up from routine jobs, jobs that are dangerous, and jobs that can be done better by machines. But putting people out of work is not a good thing if they don’t have the skills to find new, more fulfilling jobs. Where does the responsibility lie to ensure that people have those skills?
I was surprised to find that the International Federation of Robotics predicts that in the next five years robotics will be a major driver of global job creation. Many of these jobs will be in the robotics sector itself, and many will be knowledge-based jobs requiring new skills. The intersection of robots, digital skills, and jobs raises many questions in a country like South Africa, with its highly politicised and unionised workforce. What is the role of a union in this context? Should unions protect the shrinking pool of existing industrial-era jobs, or provide their members with the information and training to do the jobs of the future?
In Capgemini’s Digital Transformation Review No. 5, Per Vegard Nerseth, the Managing Director of ABB Robotics talks about the current state of the robotics industry and where it’s going. Four quotes stood out for me.
We need to find ways to make robots easier to use so that they do not require a highly skilled workforce to operate.
More robots will require skilled people to maintain and service them. But Nerseth says that we will increasingly see robots that can program themselves. Does this mean that a robot’s dependence on a human – an error-prone creature – is considered high risk?
A manual paint job for a car usually utilises 20-30% more paint compared to robotized painting.
Nerseth says that a single robot can replace many workers on a production line, work faster, and with greater efficiency, leading to dramatic cost reduction. Another driver to get more robots on production lines is the high cost of replacing people, which includes recruitment and training.
The industry is looking at ways to make robots work more closely with human beings, so that they can actually collaborate.
We will see more robots working alongside humans on production lines and beyond. Due to strict safety laws, robots currently need to be caged when they work alongside people. The challenge will be to design robots that are safe, and pleasant to work with.
The market for consumer robots has not taken off in the way it was expected.
This is not surprising. Tasks like cooking and house cleaning are subjective and require the ‘human touch’. The areas where service robots are predicted to take off are medicine and surgery.
The machines are racing ahead, but as a society we are not. Instead, the machines are posing questions that we’re struggling to answer. In 1900 no one would have believed that 69 years later people would fly to the moon. Are we at a similar juncture now, where 50 years from now we’ll be hanging out, and working, with smart machines and liking it, or not? What concerns me is that the technology is moving faster than our ability to think about what it means for us. But that is not what technology wants.
This is my UX SA 2015 talk.
It’s about how to do effective UX work with Agile teams, and support a product manager. It contains:
Google is happy to announce its second UX Masterclass to be held in Cape Town.
Dates: 5th and 6th May 2015
Venue: To be announced
They have divided the course into 2 days. Choose the day that suits your team best.
Day 1: New Product Teams
Day 1 will equip teams who are working on launching a new product with fundamentals of the UX Process and tips on product launch such as:
Day 2: Post-launch Product Teams
Day 2 will focus on helping teams who have publicly launched products to apply the fundamentals of UX to their next product iteration. The sessions will include:
Want to know more? Ready to sign up?
I came across the Conant-Ashby Theorem recently. It states:
Every good regulator of a system must be a model of that system.
It got stuck in my head longer than it logically should have – I guessed that there must be something to it that I just wasn’t seeing. Then we worked on a gamification project, and it started making sense. In plain English it could mean: to tune a system you need a good analytics model of that system.1
During the design phase of a project we rarely think about designing control mechanisms that allow us to fine tune our designs when they are out in the world. Instead, we unleash variables in the world that run wild and soon disappear from sight.
Analytics not only tell us if our designs are doing their jobs – designing with a good analytics model in mind gives us levers to pull when they are not.
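To make the good regulator idea concrete, here is a minimal sketch in Python. Everything in it is my own illustration, not from the theorem’s paper: a thermostat can only hold a room at its setpoint because it carries an internal model of how the room behaves, which is the same sense in which an analytics model gives you levers to pull.

```python
# Toy illustration of the Conant-Ashby theorem: the regulator below works
# only because it embeds a model of the system it regulates.
# All names and numbers here are illustrative assumptions.

class Room:
    """The system being regulated: loses heat each step, gains heat when heated."""
    def __init__(self, temp=15.0):
        self.temp = temp

    def step(self, heating_on):
        if heating_on:
            self.temp += 1.5   # heater adds 1.5 degrees per step
        self.temp -= 0.5       # constant heat loss of 0.5 degrees per step

class Regulator:
    """A 'good regulator': it predicts the room's next state with an
    internal model of the room, and acts on the prediction, not just
    on the raw reading."""
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def decide(self, current_temp):
        predicted_if_off = current_temp - 0.5   # the regulator's model of the room
        return predicted_if_off < self.setpoint # heat only if we'd drop below target

room, regulator = Room(), Regulator()
for _ in range(20):
    room.step(regulator.decide(room.temp))

print(round(room.temp, 1))  # the room settles near the 20-degree setpoint
```

Remove the regulator’s model (for example, make `decide` return a random choice) and the temperature wanders: the variable runs wild and disappears from sight, exactly as in the paragraph above.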
Good consultants spend more time thinking about their clients’ challenges than the clients themselves – and making models is a big part of this.
It follows that at the start of each project we spend a lot of time making models. It’s the best way to get under the skin of a project. Models are great at focusing a group. We do it to understand situations better, and it helps our teams visualise and think through a design challenge from all angles. Model making sits at the core of product discovery.
The models that I’m referring to here aren’t deliverables – they could be if we polished them up, but in most cases that is a waste of valuable time. I see models as necessary discardables: their purpose is to move you along, and without them you’re not going anywhere.
Systems Approaches to Managing Change highlights two interesting things about models that are worth considering.
Models are subjective.
As with any model, viewpoints are inevitably partial in the sense of being both incomplete and of being viewed from a particular or partisan perspective necessarily based on its particular purpose.
Subjectivity is good here, it gives you a place to start when faced with a blank canvas. The sole reason for making models is to change them. Good models change shape as soon as you start showing them to people because the partisan perspectives start breaking down when more minds start looking at a problem.
Models are wrong.
Constructing a model is a practical way of visualising the key elements of a problem. Statistician George Box once said, ‘Essentially, all models are wrong, but some are useful.’ Models are always wrong in that they don’t serve as detailed illustrations of the problem. This is also why they’re right. The simpler you can make the model, the easier it is to understand a problem.
Models lose their power when they grow too complex, and when they are ‘finished’ they no longer draw ideas out of us.
It turns out that the two weaknesses of models are also their strengths. It takes courage to take the first step and risk being subjective and wrong. But if you think about it, when you start exploring, you can’t start out any other way.
Jared Spool’s Beans and Noses caused quite a stir when it re-surfaced and worked its way around our office. It’s the perfect metaphor to describe the consultant’s dilemma when working with big clients:
The idea is blindingly simple, actually. Every so often, you’ll run into someone with beans who has, for no good reason, decided to put them up their own nose. Way up there. In a place where beans should not go.
What do you do?
Lean Enterprise: How High Performance Organizations Innovate at Scale takes a more diplomatic approach, framing it as the planning fallacy:
Due to a cognitive bias known as the planning fallacy, executives tend to “make decisions based on delusional optimism rather than on a rational weighing of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”
The planning fallacy is a means of managing uncertainty by spinning ‘scenarios of success’ at the outset of a project. It’s easy to fall for this delusion because we don’t like thinking about failure, or admitting that we may be acting before having explored different options sufficiently. What can we do about this? The Lean Startup process can help:
Because the Lean Startup process is relatively cheap, in an enterprise context we can pursue multiple possible business models simultaneously using the Principle of Optionality.
The Principle of Optionality simply means that by investing limited amounts of time and money in small experiments we can investigate more ideas simultaneously:
…the principles of constraining time and resources, thus limiting downside, and building a minimum viable product to test your value hypothesis as soon as possible with real customers should be applied at the start of every endeavor.
The Principle of Optionality: building and testing minimum viable products simultaneously. Most will fail, but the probability of a big win increases.1
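The probability argument behind optionality can be checked with a small simulation. This is a sketch under my own assumptions (the 10% win chance and ten-experiment budget are invented for illustration, not taken from the book): spreading the same budget over many small MVP experiments raises the chance that at least one succeeds.

```python
# Toy Monte Carlo simulation of the Principle of Optionality.
# Assumption (mine, not the book's): each MVP experiment independently
# has a 10% chance of discovering a winning business model.
import random

random.seed(42)
P_WIN = 0.1        # assumed chance that a single MVP finds a winner
TRIALS = 10_000    # number of simulated portfolios

def at_least_one_win(experiments):
    """Did any of the portfolio's experiments succeed?"""
    return any(random.random() < P_WIN for _ in range(experiments))

one_big_bet = sum(at_least_one_win(1) for _ in range(TRIALS)) / TRIALS
ten_small_bets = sum(at_least_one_win(10) for _ in range(TRIALS)) / TRIALS

print(f"one big bet:    ~{one_big_bet:.0%} chance of a win")
print(f"ten small bets: ~{ten_small_bets:.0%} chance of a win")
```

Analytically, ten independent small bets give a win probability of 1 − 0.9¹⁰ ≈ 65% against 10% for the single bet, which is the “most will fail, but the probability of a big win increases” claim in numbers.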
All roads lead to a minimum viable product, and testing a minimum viable product with real customers is a rational means of weighing up the gains, losses, and probabilities when things are uncertain. It is the antidote to delusional optimism, and will increase the likelihood that beans end up in the ground.