Very interesting read that, Gulley. I'm not sure I entirely understood it all, but a few bits will stick in my head. It's a frightening thought that we are now building algorithms that could go well beyond our control. The car not being able to discern a woman on her bike with shopping bags is exactly the kind of problem they have: humans are still unpredictable in that way, and the computer in that instance made the wrong assumption.
It is interesting stuff.
I think the problem we face could be described, in a simplified way, like this: over the last 15 years or so there has been a move towards what is called service-oriented computing.
This has been an evolved solution, developed largely to mitigate the IT problems of years gone by, and to take advantage of developments in technology that have allowed the once fanciful dreams of distributed computing to finally become reality and start to come of age.
But it's possibly all becoming a bit Wild West in some respects, and I think that's the gist of the article.
For example, a service (a process offering some clearly defined functionality on a computer, with a standardised way of conversing) was simply something that a client (an end user represented by a process, or another service, often on a different computer) connected to in order to get some required functionality performed.
These things would traditionally be clear-cut business or logic processes (return a current share price, calculate VAT, calculate torsional strength), easily defined in code as 'read this, do this to it, then output the results' type of work. They were also all easily managed and delineated, very predictable and invariant in their behaviour, and, with modern programming languages, very safely coded.
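Just to make that concrete, here's a toy sketch (my own, with a made-up hard-coded 20% rate, not anything from the article) of that older, deterministic kind of service: the same input always produces the same output.

    from decimal import Decimal

    VAT_RATE = Decimal("0.20")  # fixed, illustrative rate; nothing here ever changes by itself

    def calculate_vat(net_amount: Decimal) -> Decimal:
        """A deterministic 'service': read the input, apply a fixed rule, return the result."""
        if net_amount < 0:
            raise ValueError("net amount must be non-negative")
        return (net_amount * VAT_RATE).quantize(Decimal("0.01"))

    # The same call always gives the same answer - completely predictable behaviour.
    assert calculate_vat(Decimal("100.00")) == Decimal("20.00")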
But over the last 10 years, these distributed services, often from third-party external providers, have sometimes independently evolved and become more sophisticated, and can now vary their behaviour and responses through self-learning, thanks to the incredibly powerful machines and cheap processing power we now have at our disposal.
That makes it more difficult to predict, with any reliability, the complete behaviour of a system in, dare I say, a holistic way. It's moving from deterministic to non-deterministic in its predictability.
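To contrast with the VAT sketch above, here's an equally made-up illustration (mine, not the article's) of why that prediction gets harder once a service adapts its own internal state from whatever feedback it happens to receive:

    import random

    class AdaptivePricingService:
        """A toy self-learning 'service': its answers drift as it updates from feedback."""

        def __init__(self) -> None:
            self.weight = 1.0  # internal state that changes as the service 'learns'

        def quote(self, base_price: float) -> float:
            return base_price * self.weight

        def learn(self, feedback: float) -> None:
            # Nudge the internal state towards whatever seemed to work recently.
            self.weight += 0.1 * (feedback - self.weight)

    service = AdaptivePricingService()
    print(service.quote(100.0))            # 100.0 today...
    service.learn(random.uniform(0.5, 2.0))
    print(service.quote(100.0))            # ...something else tomorrow, and hard to say what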
Now possibly 99.999% of the time it will still behave as expected and all will be well, no children will be harmed, but it's that 0.001% that should concern us: the part we just don't know, or have difficulty knowing, as these distributed services, or the behaviour of the components within them, evolve through self-learning algorithms.
As an analogy, I think there are indications we are possibly reaching the stage where these distributed services are behaving, or have been given the ability to behave, almost like independent countries: while their name may remain the same, they can have changes of government and corresponding changes in foreign policy, and thus in their relations with other countries. We know how fraught that can be on the international stage, with misunderstandings of intent.
So I'd go so far as to suggest that examining the foreign policy behaviours and diplomacy of nations might produce the basis of a model for the interactions of these new self-learning systems, though I also believe we should examine the effect that the employment of 'winning' strategies has had on international relations.
You see, I'm of a mind that if we always code 'winning' (ie aggression, or perceived aggressive behaviour) as the ultimate objective of any autonomous learning algorithm, then while such behaviour might be attractive within the financial trading world, where it is already employed, there will be trouble for us all if we don't provide the means of scoping it.
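As a crude illustration of the point (the classic iterated prisoner's dilemma, entirely my own choice of example and not something from the article), an agent coded purely to 'win' every exchange drags everyone, itself included, into a worse outcome than cooperation would have produced:

    # Iterated prisoner's dilemma payoffs: (my points, their points) per round
    PAYOFF = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def play(strategy_a, strategy_b, rounds=100):
        score_a = score_b = 0
        last_a = last_b = "cooperate"  # assume goodwill on the first round
        for _ in range(rounds):
            move_a, move_b = strategy_a(last_b), strategy_b(last_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pa, score_b + pb
            last_a, last_b = move_a, move_b
        return score_a, score_b

    always_win = lambda _: "defect"    # 'winning' coded as the only objective
    tit_for_tat = lambda last: last    # cooperative, but answers aggression in kind

    print(play(always_win, tit_for_tat))   # (104, 99): the 'winner' comes out ahead, but...
    print(play(tit_for_tat, tit_for_tat))  # (300, 300): ...mutual cooperation pays far more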
So we perhaps also need a way to codify such aggression, or winning, or whatever we may call it, but also to test it reliably for out-of-bounds behaviour, so we can home in on that 0.001% and remove it.
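One very rough idea of what that scoping might look like in code (again my own sketch, with invented names, rather than anything the article proposes) is to wrap the learning component in a guard that enforces hard limits no amount of self-learning can override:

    class OutOfBoundsError(Exception):
        pass

    class DriftyLearner:
        """Stand-in for any self-learning component whose outputs can drift over time."""
        weight = 1.0

        def quote(self, base_price: float) -> float:
            return base_price * self.weight

    class GuardedService:
        """Wrap an adaptive component so its output can never leave a fixed envelope."""

        def __init__(self, inner, lower: float, upper: float) -> None:
            self.inner, self.lower, self.upper = inner, lower, upper

        def quote(self, base_price: float) -> float:
            result = self.inner.quote(base_price)
            if not (self.lower <= result <= self.upper):
                # Refuse rather than act: the out-of-bounds case is surfaced, not executed.
                raise OutOfBoundsError(f"{result} outside [{self.lower}, {self.upper}]")
            return result

    guarded = GuardedService(DriftyLearner(), lower=50.0, upper=200.0)
    print(guarded.quote(100.0))  # fine while the learner stays inside its envelope

Of course, the hard part is choosing the envelope in the first place, which is really the codifying problem above.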
But I'm waffling a bit here, so I'll go away and give it some more thought.