Select Committee on the Future of Work and Workers
04/05/2018

NEWTON-THOMAS, Mr James, Private capacity

STROCCHI, Mr Jack, Private capacity

CHAIR: Welcome. I understand that information on parliamentary privilege and the protection of witnesses and evidence has been provided to you. I now invite each of you to make a short opening statement, and we'll have some questions for you after that.

Senator PATRICK: I'm wondering why there's no machine there making the statements for us!

CHAIR: It's coming.

Mr Newton-Thomas : Good morning, all. I have a master's degree in computer science. For the last 30 years I've been involved in automation—more lately, specifically in the mining industry. My friend, Jack, when I first met him about 30 years ago, was—

Mr Strocchi : An economist in the Public Service. Now I have a real job.

Mr Newton-Thomas : He's done a lot of things since then. Jack's a bit of a polymath in social science fields. We are not, unlike the source of most of your other submissions, a special interest group. We have no particular axe to grind. Our motivation is a civic concern for our state: ensuring that the opportunities this onrushing wave of computation will bring are exploited and the threats, where possible, avoided. The submission we gave you has a very simple hypothesis—that is, there is no future of work for humans. Job security is nil, and over the next 30 years, possibly 20, computers will become computationally substitutable for humans in all economic activity. That's not the same as saying they will be substituted for humans, but it is saying they will be computationally substitutable, meaning they will be able to outthink you in any economic activity.

This is probably a fairly hard pill for many people to swallow. Its basis goes all the way back to Alan Turing, who showed that a particular class of machine, which is now named after him, was able to turn any pattern into any other pattern, given a sufficient instruction set. Basically Turing proved in his paper on computable numbers that there is no pattern of activity that cannot be done by a machine. That's the bottom line. There is nothing that cannot be done by a machine. What it needs to do that is computational capacity. That is where up until recently we've been somewhat but not completely lacking. Computational capacity has been exponentially expanding since the 1950s and the first inception of computers—which, incidentally, took over a particular human job that had been around for a few hundred years, called computer. They took that job and the title, and then for a long time did nothing much but sit back and do accounts, economic forecasting et cetera.

However, now things are changing, because this onrushing tidal wave of computation is approaching as we speak. A lot of people think it's behind us. It's not; it's ahead of us. Every month 250 million new nodes—that is, computers—are added to the internet. Each one of those computers not only provides sensing capacity for some domain in the world—whether it be your fridge, your gym, the local traffic lights, or containers moving across the ocean—but also provides computational power, which can now be crowdsourced. We have this massive computational ability out there in the cloud, which can be taken and focused on any given task.

The core problem for us is one of comparison between biology and technology. We, biology, are bounded. We are a fixed point. Our skull size is pretty much the maximum it could be without causing too much damage to our mothers. Nature would've made us smarter if we could be. We can see a clear rapid growth in hominid intelligence related to head size, then suddenly it peters out and stops. This means that we have a fixed point, which is us, and approaching that from underneath is an exponentially increasing wave of computation. Those lines are going to cross quickly. Computation is not aimed at your job, particularly; it is aimed at the tasks you do. There's no necessary economic incentive in doing everything that you do. A lot of the tasks you do, like walking upstairs or opening doors to get to work, are not necessary for machines; therefore they are much more task oriented. Companies increasingly are embracing this, because they do not want to pay for all those traditional things like superannuation; they just want to pay for what gets the job done, produces economic viability and makes them the most profit.

Human computational ability is around 100 petaflops. That estimate, plus or minus one order of magnitude, is currently about what the best supercomputers, mostly in China, are now capable of doing. It means we have a single cluster of computers able to do the same computation as a human brain. It doesn't mean we yet have the software—and I'll come to that shortly—but it does mean we have the ability to do that. We know we can achieve human-level intelligence with such things because it has been done to us through evolutionary algorithms. We are now living in a world very different to the one we evolved in. We evolved in a world of rocks, stones, cliffs and tracking animals, and now here we are in a society very different to that, still functioning well; our intelligence allowed us to adapt to this society. We got to our original society via evolution. I don't think there's any disputing that. We now employ those same evolutionary algorithms that developed us as we develop artificial intelligence systems. You've heard the terms deep learning and artificial neural networks. These all use evolutionary-type algorithms. We know they work. We can use those to get from a state of disorganised neural mass to something resembling us, so long as we have the three basic tenets of intelligence: speed, scale and connectivity.
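The evolutionary-style learning described above—candidate solutions mutating, with the fittest surviving—can be sketched as a toy search in a few lines. Everything here (population size, mutation step, the target task) is invented purely for illustration; it is not production AI code:

```python
import random

def evolve(fitness, dim=5, pop_size=20, generations=200, seed=0):
    """Minimal evolutionary search: keep the fittest half of a population
    of parameter vectors, refill with mutated copies, and repeat."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # best candidates first
        survivors = pop[: pop_size // 2]          # selection
        children = [
            [g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]  # mutation
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children                # elitism: survivors kept
    return max(pop, key=fitness)

# Hypothetical task: evolve a vector toward an all-ones target.
target = [1.0] * 5
best = evolve(lambda v: -sum((a - b) ** 2 for a, b in zip(v, target)))
```

Nothing in the loop is told how to solve the task; selection pressure alone drives the population toward the target, which is the sense in which such systems "learn" rather than being explicitly programmed.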

Before I get much further, let's talk very briefly about the differences between humans and the technology out there. In our paper I described that we can get data from here to the Macquarie Centre in Canberra and back again faster than we can get data from one side of our brain to the other. It's a simple fact. Apart from the impressive speed of modern communication, it also shows us that all the computers in that area between here and Canberra can be concentrated onto a task faster than we as individuals can concentrate ourselves. We're talking about vast areas of computational ability that can be drawn upon. Unlike us, those computers can work on one particular job for 10 milliseconds, do your accounts for the next 37 milliseconds and then play some computer game for a kid in China for the next 30 milliseconds. They can instantly switch between different tasks at any time. They don't sleep. They are constantly focused on the job at hand. They are coming. There is no doubt that they will be here.

When we start to look at teams of people—that is, going from individual humans to groups of humans who communicate via talking—the problem gets even worse, because speech is a very slow means of communication. Our entire submission could be transmitted around the world in fractions of a second. This talk I'm giving to you takes minutes. This simply means all that surplus communications and computing capacity in the world can be focused on tasks anywhere in the world far better than teams of people can do the same job—except where they've learned each other's behaviour, which assists coordination and is why the military does drills and things like that. Where novelty is required, machines will outperform us simply because they have this ability to take vast computational resources, stores of data and experience from around the world and bring it to a task. And, as I say, that is increasing by 250 million more devices every month, and that number will increase until, probably by about 2020, there are going to be something like 50 billion devices out there. And that's not just the major computer centres et cetera; we're talking about smaller devices. All of those are being tied together in a thing eponymously called the Internet of Things. The Internet of Things is, of course, the internet, just like what we have now, except that the computers talk to each other and transact and run the economic show. The Internet of Things, make no mistake about it, is the future of automation in the near term.

There's an irony at play here because, when the Industrial Revolution kicked off 200 years ago, humans, of course, dodged a bullet at the time because we moved into more intellectual pursuits. We were able to get out of the way of digging ditches, unlike horses et cetera, which didn't fare so well. But this time, of course, it's the intellectual pursuits that are now directly under threat, and there is no obvious way to dodge that bullet. And there are going to be some serious economic consequences of that. I'm going to pass over to Jack who will start to talk about that.

Mr Strocchi : Thank you, James. James, I might say, has practised automation in the last 20 years and has, in fact, himself ended work for a lot of people, but he's a nice guy, unlike some of the people I'm going to talk about, which are the tech giants. But we'll get to that. I'm going to cover two areas. James has spoken about the technological doability of ending work at a level of an intelligent machine. I'm going to talk about the commercial inevitability of ending work at the workplace level and the economic consequences as that work automation pervades the entire economy.

Firstly, we're going to say that we are technological alarmists. We're here to scare you. We're scared, and we work at this for our living. James has been in the belly of the beast for a long time, so he knows what he's talking about. You should be scared, and that's what we want you to take back to Canberra. My own profession is full of people who I would call technologically complacent or technological sceptics—the economists. I'll deal with them a little bit at the end of my talk, but don't take any notice of them. They've been coming to their senses recently, for example, at a couple of conferences last year and early this year, but they've been giving you a far too complacent view of things. We're going to talk firstly about the commercial inevitability of it and the tech giants. The name says it all. They're technologists. That is to say that they've got the technology side down pat, particularly the key thing about information. They've got big data and proprietary algorithms. They've got those; that's the tech side. And they've got the giant side. They're big. They've got the economies of scale. Most importantly, they have a business model that finally works. Everyone remembers that, when the internet came out, everyone said, 'Information wants to be free.' They hate that because it's bad for business. What's the point of free? There's no money in it. So they've worked out a way of monetising information. We give them our information for free—information is free from us to them—and then they modify it, monetise it, commodify it and sell it back to us or sell it to their customers in the form of marketing data initially on the demand side. Then on the supply side, they'll sell it back as AI services in the cloud. That's what the tech giants are positioning themselves to be, basically—the future of industry both on the demand side and the supply side.

In a way, what they're doing is going back to the original tech giants. Remember General Motors' vertical integration, if you can remember back that far in your economics classes, when I was falling asleep. Everyone said: 'That's no good. That's not core business.' But they're going back to that. And they can do this now because they have these economies of scale. They have economies of scale in two areas. One is production, obviously. They have big server farms. Everyone's seen those pictures of Google's server farms. No-one else can afford them. They're the size of cities, and there are bigger ones planned in China—whole city-size ones. Mom-and-pop stores are not going to be able to compete with that, and, if they try, they'll be bought out. Jeff Bezos, before he called his company Amazon—a nice, modest-sounding name—wanted to call it 'Relentless', because that's what they're like. They take no prisoners. They've got it on the production side, economies of scale, but they've also got it—and this is really important—on the consumption side, where they have this thing called network externalities, which means basically that you're locked in. It's a bit like a bank. No-one wants to change. I'm in Google now. I don't want to go to Microsoft because it's just too much like hard work. It's the same deal. People get locked in and they sign away all their data, which everyone does all the time. So there is brand lock-in and network externalities.

Once they've got that, they are going to do this thing—or they are doing this thing; they always wanted to do it but now they can do it—of getting vertical integration. It is basically the concept of the platform hub economy, which you're going to hear a lot about in a fairly short space of time, where they get downstream and upstream. Downstream they're already getting—search engines. That's the interface with the consumer—sales and distribution. They can get your data. They can also proactively get a commission when they sell you something. That's what those little targeted ads are in Google and Facebook. So they're getting money that way, but then they are using that to finance the development of their production side of things, which means basically helping to actually physically automate the economy. As James pointed out, when we talk about automation now, everyone thinks about robots like in BMW factories. There is that—that's a big deal—but most of the value in automation is in the supply chain. It's like: 'Why do you rob banks, Harry?' 'Because that's where the money is.' Why are they going to automate the supply chain? Because there is about $10 trillion worth there—you name it. It's where all the money is, and a lot of it is white-collar work, which is going to be fairly straightforward to compute, where it's not already being computed as it is. In a way, it fulfils the two criteria for automation: it's cheap to compute and it's valuable to commodify. There is a lot of meat on the bone there.

How are they doing that now? How are they connecting upstream and downstream? It is through this thing called a virtual assistant, which I'm sure most people here have got and, if they're like me, they swear at them occasionally to try and get the right answer. They're not perfect yet. Alexa, Cortana, Siri—I think everyone's got one now. Those virtual assistants are doing two things. They're getting your data, again for free—the business model. They are getting free information about you and forming an almost perfect picture of every single person on the planet who has a digital device, and so they'll be able to target you and get your money and get kickbacks from the people they make the sales to. They won't call it that, of course.

The more important thing that that's going to do is that it's teaching their machines to talk. This is what James pointed out. Turing invented this machine and he also conceived the Turing test, which is proof of concept, if you like. Obviously we're talking to them now, so we're not far off the Turing test. It's within sight. NLP, natural language processing, is the biggest deal in a lot of these companies. Ray Kurzweil is the guy who's managing that for Google at the moment. It's the gold standard. Once you get that, of course, computers will talk. Talking is the most important task—if you're talking about substituting—that humans do. That's mostly what we do. What do we do now? We don't do much heavy lifting. We wave our arms around and talk, flap our jaws. Computers will do that. The first thing they will do is they'll talk to the past. They'll find out all the information that's ever been there, because they'll be able to figure it out, because they can read it. More importantly, they'll talk to the present. They'll talk to people and find out what you want to do. Most importantly, they'll talk to their owners directly about production plans and business plans, and that means getting rid of IT, which is the biggest headache for automation. Everyone knows it. Management say, 'It's going to save us lots of time and money,' and it ends up being the biggest headache of all time, because you spend your whole life in IT tech support. There will be no more tech support once the Turing test is passed. That's over—no more housekeeping. That's the biggest speed bump for automation.

That's not far off. I don't know. We were talking before about the law which says that, firstly, technology comes off a low base but it's got a high growth rate, and people think, 'It's going to kill everything very quickly,' but, because it's off a low base, it takes a while to get to critical mass. But it's an exponential trend. So everyone gets disappointed, which is what happened with AI—the AI winter. Everyone went, 'Oh, it didn't work. Where's our flying car?' But now we are reaching the inflection point where it is working, and then it goes much faster than people expected. Before, it went slower than people expected. Then it's going to go much faster. That's why you should be scared, because we don't have a lot of time. We liken it to global warming, where the risks are of a comparable nature in terms of damage to human wellbeing. But, unlike with global warming, the scientists are asleep on the job—the economists. I'll get to them in a moment.

The other thing about it is that we can't mitigate it. There's no place to hide from Mr Relentless out there, with his robot dogs and his $100 billion cheque. There's no place to hide, because these companies are transnational. They can go to any jurisdiction. They can go to outer space, which is where they're going. They're going to outer space. Try to regulate that! Then, of course, agencies—government—are just useless. They can't even put a carbon tax on, so it's just useless. It sits on a bull—pardon my French. That's why there's no mitigation, nor is mitigation desirable, because automation can be a great opportunity. It's a catastrophic risk and what I call an anastrophic reward—there's the word for the day! 'Anastrophic' means the opposite of catastrophic. It means there's an amazing possible cornucopia of value out there.

But the problem is that we have to manage the transition. The transition's the hard part. When communism failed, they didn't manage the transition very well. Remember shock therapy. It wasn't good. Greece did not manage the transition to the EU. It's not good. Remote Indigenous communities haven't managed the transition to the end of work very well. This can really hurt people a lot if you don't manage the transition.

Now we'll talk about the economics. Once the tech giants have automated the workplaces, and then every workplace is automated, with the Internet of Things being basically a fully roboticised supply chain, we then come to pervasive automation throughout the whole economy. A lot of people are talking about the fourth industrial revolution. I can't even remember what the first three were, but it's just a continuation of the first industrial revolution. That's really all it is. It's just the same old same old, except now it's from dumb animals to smart machines. It always was. It's the same thing.

They call AI or intelligent machines a general-purpose technology. It means it goes through to everything, like electricity. Every appliance has electricity in it now. But it's more than that. It's more than a GPT; it's an innovation that innovates. It's like The Office. Remember that joke: an app that makes an app. That's really what it is. That's what machine learning is. It's learning stuff. It's discovering knowledge. That's innovation, and it'll only get better.

We come now to the catastrophic risks. I won't focus too much on the anastrophic rewards. Keynes foresaw both sides of it, and we should really honour the man. He was a genius. He talked about two forms of unemployment in his life. In 1929, he talked about technological unemployment. That's on the supply side. That's where a business sees that, with the cost-benefit, it doesn't make sense to employ this worker, because they're just not good enough to compete with this machine. That's going to happen everywhere, so we're going to get technological unemployment and industrial disruption. It's just going to be a lot of little firms and workplaces imploding and getting rid of their workers. That's going to be micro-economic rationalism everywhere. That's already happening. Automation's already eating into a lot of workplaces. We already have lights-out factories. James has a lights-out mine; I've seen it. A lights-out mine means that there's no need for lights, because there are no people there, because computers can see in the dark. That's the industrial disruption side. That's the supply side.

On the demand side, Keynes also saw that. This was not caused by technology; it was caused by banking, as usual, as we can see. That's another problem. When everyone's income dries up, guess what: everyone's expenditure dries up. Labour is 65 per cent of income in society, so expenditure dries up. Then one person's expenditure is another person's income—the income-expenditure cycle. That's the basis of macroeconomics, so you'll get what I call financial unemployment. That's money unemployment, not just work unemployment or technological unemployment. That means that people won't be able to pay their mortgages. They won't be able to pay their bills, so you'll get a liquidity crisis, and that'll turn into a solvency crisis as all the assets just go south, which is what happened in the GFC. That's kind of like a pilot program for the end of work. That's not good. If you go to Greece now, it's a mess.

But it gets worse. There's more. That will hammer not only the output side but the income distribution side. The term that the guys used in this conference that came out late last year was 'distributional carnage'. Economists are not given to using rhetorical terms. 'Distributional carnage' means that, with capital-biased technological change, which is what we're having now, the one per cent will get even more 'one percenty'; it will be like 0.01 per cent—Mr Relentless and his robot dogs. There will be not only a fall in demand for jobs—which means the price goes down, naturally, as any fall in demand causes—but also an increase in the supply of jobseekers piling onto the remaining jobs, which is where the contingent workforce comes in. The future of work is the contingent workforce, such as it is—I mean immediately. Ultimately, there's no future, of course. In the intermediate term, we'll all pile into that gig economy. Wages will go down. They're using the term 'immiseration' now, a term last used by Marx. I'm not saying everyone is immiserated now, but they will be if this isn't stopped.

CHAIR: I'm just conscious of the time. Maybe if we just give you another minute or so then we'll have time for questions.

Mr Strocchi : That's fine. The final thing I want to do is hammer the economists, the technosceptics. In these conferences that we know about, basically, the future of work they offer is as a tech trainer for machines and a stopgap tasker. That's not a good future. Economists are complacent about this. They're sceptics. There are two paradoxes that they've built their scepticism on: the Solow paradox, which is about productivity, and the Polanyi paradox, which is about instructing computers. I'll talk to those when we go to questions, but those two paradoxes have been resolved. That leads to two false ceilings that are really misdirecting us. One is that economists say humans have a much higher ceiling than they actually have, because we can keep accumulating human capital through education. That's not going to work—there are diminishing returns to education. The other is that their machine ceiling is too low. Machines can do non-routine work; they can do creative work. Therefore, the economists are wrong.

Senator PATRICK: The premise of your thesis is that we are moving into a world where AI comes of age. Having been around technology for many years, from programming the 8080 in assembly language at some stage of my life, working through to multisensor data fusion, I've got a lot of experience in that domain. Mr Newton-Thomas, you made the point that, whilst we're talking about processing power and memory and all of those things, the soul of the machine of course is in the programming. Trying to program something that replicates that AI function hasn't, in my observation—having been in the industry—come of age yet. Every AI person I've talked to over time has said that we're on the cusp and it hasn't actually happened. I get the automation thing, where it is easy to get a machine to replace a repetitive task or a dangerous task, but could you give me an example where there is some multisensor data fusion that processes, gives intelligence and can respond to environmental change and make sensible decisions?

Mr Newton-Thomas : Certainly; I'll give you one of my own examples. I had to automate some loaders—large rock-moving equipment—for the mining industry, and we wanted them to be self-driving down the tunnels et cetera. The problem with those is that all the machines are different. They suffer different wear and tear, and the machines change over time. So it is not practical to simply write code and make the machine work. It will not work, because each machine is different and the environment will change. So we developed a system that learnt. As the machine drove around, it learnt how it responded to the world around it. It learnt how its hydraulics were working. It could identify changes in its hydraulics and thus suggest maintenance to itself. It could remember areas of the tunnel surfaces where the machine would slip. This was all done through learning algorithms. The system learnt to become a better driver over time. The programming was relatively simple: we programmed the machine in a way that allows it to learn from its environment. This debate raised—

Senator PATRICK: Where's that system employed and which company is using it now?

Mr Newton-Thomas : It's employed all over the world—for example, BHP—and it is now sold by Caterpillar.

Senator PATRICK: So it effectively replaces a driver function?

Mr Newton-Thomas : Effectively it does. It was done for safety in the mines. About four people a year were being killed in Australia, because they would get off the machines and tele-remote them into dangerous areas. We were able to take the operators up to the surface, and they could control those four machines from there. The operators would still tele-remote the machines through the loading cycle—because the machines at the time didn't have the sensors to do that—but the machines themselves would then do the tramming between the different jobs, which meant that you could technically have one operator running four machines, just tele-remoting into each as needed. That was primarily to save lives. It was originally developed to address a safety issue.

There was also an interesting result, in that the economic reward for such a system actually turned up in maintenance. Whilst we at the time—and I can't speak for the system now—could not compete with a human driver driving flat-out down the tunnel, after a month of operation the autonomous machine required far less maintenance, because it wasn't banging into walls and it was always driving to rules. So there was this terrific twofold economic pay-off. Firstly, you didn't have to spend half an hour taking operators from the surface down to the mine, into the tunnels, which meant an extra half an hour of productivity. Each one of those buckets in a large loader underground has about $5,000 worth of ore in it. So, if you can move another 10 buckets, that's $50,000 of ore moved just by not bringing those people down to the workplace.
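The kind of environment-learning the witness describes—remembering where the machine slips and adapting its driving—can be sketched as a toy controller. The segment names, thresholds and moving-average scheme below are invented for illustration; this is not the actual mine-control software:

```python
class LearningDriver:
    """Toy model of a loader that learns, per tunnel segment, how much
    its wheels slip there, and lowers its target speed accordingly."""

    def __init__(self, base_speed=10.0, slip_threshold=0.2):
        self.base_speed = base_speed
        self.slip_threshold = slip_threshold
        self.slip_estimate = {}          # segment name -> learnt slip level

    def observe(self, segment, slip):
        # Exponential moving average: recent experience dominates,
        # so the model tracks wear and changing tunnel conditions.
        prev = self.slip_estimate.get(segment, 0.0)
        self.slip_estimate[segment] = 0.8 * prev + 0.2 * slip

    def target_speed(self, segment):
        # Slow down in proportion to learnt slip; full speed elsewhere.
        slip = self.slip_estimate.get(segment, 0.0)
        if slip > self.slip_threshold:
            return self.base_speed * self.slip_threshold / slip
        return self.base_speed

driver = LearningDriver()
for _ in range(20):
    driver.observe("wet_section", 0.5)   # hypothetical slippery segment
    driver.observe("dry_section", 0.05)  # hypothetical firm segment
```

No route-specific behaviour is coded in advance; the speed map emerges from observation, which is why the same software can run on machines with different wear and in different tunnels.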

Senator PATRICK: That's an automation system. I've differentiated that from something that is AI, in that I presume there are probably some programmers still updating the code on that and there will be technicians that are doing—

Mr Newton-Thomas : There would be, but I think—

Senator PATRICK: The evidence we've received in the committee is that of course there are always technological changes, and they can replace jobs but they tend to create other jobs.

Mr Newton-Thomas : That's the old Luddite fallacy again. Machines may replace a task. Machines aren't there to take any individual job. They look at a task and say, 'We can do this task better. We can now do it more precisely and we can follow the QC rules,' and therefore that task becomes part of the overall job of the person who used the computer for it as a tool. Increasingly, as machines are brought into the environment, they are brought in to supplement tasks, and a person's job description changes to accommodate that, because of the additional productivity associated with machines. You generally have the same number of people employed. You don't necessarily bring more people in, unless you're more productive, in which case you can afford to expand your business and it makes economic sense to do so. So there's no surprise that bringing computers into the workplace actually increases employment in the short term. It only becomes a problem when machines reach the same computational level as humans—when, with the right software, they can do computationally what we do.

When Turing proposed the Turing test, he basically did so in response to a series of questions that he was given. He was asked, 'How do we know these things are thinking?' Turing basically said, 'If it acts in a way that we can't tell if it's a human or not then we can assume that it's thinking'—and so he devised the Turing test. Typically, most computer scientists, myself included for a long time, thought, 'The Turing test is a bit silly,' because there is no reason to make a computer indistinguishable from a human. There's no advantage in it. But recently what's become apparent is that, if or when in the future we have computers that can speak indistinguishably from humans, the entire IT industry immediately disappears—because that's effectively their role. An IT person basically looks at the business, analyses it, listens to what the domain experts of that business are saying and then converts it into computer language so the computer can now do it. It's software.

If the computer can directly speak to the domain experts and, better still, learn from those domain experts—who may not be good at articulation—by looking at all their past data and evolving a solution itself, then that translation layer disappears. That is how the modern algorithms work. When you run searches on Google these days you are, effectively, talking to an AI that is interpreting your request based on its own learning, based on its observation of millions of other people. So, if you go in there after Donald Trump's done something on television and ask a question pertinent to that, it immediately knows the context of your question, based on what's happening in the world, even though you may not have explicitly spelt it out.

Senator PATRICK: I think that's because there's some program sitting behind there that works out an algorithm that factors into the search engine some new piece of news.

Mr Newton-Thomas : In part. Let me explain. A neural network running on a standard computer is a program. There's no doubt about that; it's just a series of repeated nodes with weighted interconnections between them. But that neural network, which is a model of the world, learns from the world in the same way that our loaders learn to adjust their own hydraulic responses based on their performance, or the way modern aircraft change their flight characteristics based on speed. They learn to do that based on their observed performance.

Sure, there is a small amount of code that wraps around that, and there are certainly other sets of code that stop them doing stupid things, code like safety overrides and interlocks, but the core of the modern computer system is a self-learning system. We used to have programmed algorithms, ID3 et cetera, that were self-learning, but they were nothing like what neural networks are. Neural networks work in the same way the human brain works. They respond to the data that comes into them by reconfiguring themselves so that their output best matches the desired response. So they are constantly adapting themselves to the environment around them, and they can be employed in the same way that we can be employed.
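[The reconfiguration Mr Newton-Thomas describes can be sketched in miniature. The following is an illustrative single artificial neuron, not any production system: its weights are nudged whenever its output disagrees with the observed data, so the behaviour is learnt from examples rather than programmed in.]

```python
# Illustrative only: a single artificial neuron that "reconfigures itself"
# by nudging its weighted interconnections toward the answers the data rewards.
def train_neuron(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1.0 if (w1 * x1 + w2 * x2 + b) > 0 else 0.0
            err = target - out          # mismatch with the environment
            w1 += lr * err * x1         # adapt the interconnections
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1.0 if (w1 * x1 + w2 * x2 + b) > 0 else 0.0

# Learn the AND function purely from examples; no AND rule is ever coded.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_neuron(and_data)
```

[After training, the neuron reproduces the AND behaviour it was never explicitly given, which is the sense in which such systems "evolve a solution" from data.]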

At the moment, however, computers can't bring as many resources to a task as a human being can. We have enormous mental capacity, which we use all the time. At this stage there are physical limitations on the amount of computational resources machines can bring to a task, but that will soon change. Modern computer systems learn, and they learn through a process of evolution. It's not that they learn because we tell them what to do; we give them parameters and they go off and sample the data. So when your Google Home is listening to you speak, it is learning how to speak. It is learning how inflexions are done. Every time it answers one of your questions, or you speak to it and it says, 'Oh, I'm sorry, I don't know what you said,' and then you come back and ask it another question, those algorithms go off and say, 'Okay, this is probably what they meant by that first one.' So they get better at doing that. There's no program involved in that; that's done by the system's self-learning.

The big resurgence in AI came about because Andrew Ng et cetera developed ways to take the graphics processors developed by NVIDIA for games and use them for neural networks. So suddenly we went from the tiny neural networks we had when I was doing my work to the massive, and massively expanding, neural networks we have now. So this self-learning capability is not to be underrated. That's why these companies are so data hungry. We have learnt via experience that to get an entity that can work efficiently in society takes 15 or 20 years of exposure. That's what it takes for us. We grow up, we have massive experience and then we pop out at 15 or 16 as productive members of society. If we want to get to the fringe of human knowledge it might take us 25 years. But that's how long it takes, and that's simply because that's the quantity of data required to get us up to our standard. It's no real shock that one of Google's first projects was to digitise every book ever written; it has done that as a training set for the AIs. It's no surprise that Watson is harvesting as much medical data as it can. It's because those AI algorithms are data-hungry.

CHAIR: I need to cut you off there, Mr Newton-Thomas. Senator Stoker has a question.

Senator STOKER: If you gentlemen are right—and you may well be—that this is what the future holds and it's going to lead to these potentially catastrophic consequences, what do you say needs to be done now to deal with it?

Mr Newton-Thomas : Firstly, people need to become aware of it. You need to get people who know what they're doing into a room to talk about what to do. We made a couple of suggestions in our submission. One is that, given the internet of things is going to be so huge (I would suggest the economy is going to shift into the internet of things), you need to work out some way of transaction-taxing it. That would give you a revenue stream that can come back to the less productive humans on the other side. We suggest blockchain technology as a way of implementing it, not so much for cryptocurrency, which I think is a bit overrated. But it would be possible to develop a kind of invisible transaction tax on the internet of things on a national basis, for transactions within that country, which would then provide a revenue stream. That way you can go full steam ahead and let the economy expand as rapidly as possible. Of course, the larger the economy, the better you can start to implement things like a universal basic income, or something of that nature, and work out how to stop people feeling depressed.

Mr Strocchi : One point I would like to make is that James works in China, and I've always been impressed with the Chinese: they've managed this kind of transition already. They've done this. They managed an ideological transition from communism to state capitalism, and they managed a technological transition from rice paddies, or whatever it was, to building server-farm cities. Deng Xiaoping and people like that have already been here and done that. We could maybe learn from the oldest civilisation in the world, which invented calculating machines and also bureaucracy, instead of being very arrogant and saying: 'We're at the end of ideology. We've got the best system.' As James pointed out, the Chinese don't change their leadership but they can change their system. We can change our leaders but we can't change our system. That's going to be a problem now, because the system will change under its own inheritors.

CHAIR: I'm sorry, but we are out of time and we've got other witnesses who are waiting. We're going to have to finish up. Feel free to add anything additional by way of a further written submission if you'd like to.

Mr Strocchi : We'll do a condensed version of this and we'll upload it.

CHAIR: That'll be fine. Thanks very much for your time.

Mr Newton-Thomas : Thank you.

Mr Strocchi : Thank you.