WAR OF THE WORD
27/03/21
If peace is good and war is bad, then why are we not getting anywhere?
If we don't make decisions about our future, the chances are they will be made for us. By whom? A little while ago, we discovered that A.I. machine learning imitated institutionalised racist human behaviour that had conformed to stereotypes.
It went a bit further. A friend took a blood test and the results indicated symptoms of a disease. The medical professionals concluded that the disease was the result of heavy drinking, drugs and smoking. Because those causes did not fall within the health department's eligibility criteria, my friend was denied treatment at the public hospital. Only that my friend doesn't drink or smoke.
It is clear that the decision was made by a human, based on the stereotyping of an ethnic group. But the danger is that A.I. may now absorb that decision as data associated with that group.
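The mechanism is simple enough to sketch. Below is a minimal, entirely hypothetical illustration (the group labels and numbers are invented for the example, not taken from any real dataset) of how a model trained on biased historical decisions simply reproduces the bias it was fed:

```python
# Hypothetical records: (group, was_denied_treatment).
# Group "B" was stereotyped and denied far more often, regardless of facts.
from collections import defaultdict

history = [("A", False)] * 90 + [("A", True)] * 10 \
        + [("B", True)] * 80 + [("B", False)] * 20

# "Training": learn the historical denial rate per group.
denials, totals = defaultdict(int), defaultdict(int)
for group, denied in history:
    totals[group] += 1
    denials[group] += denied  # True counts as 1

def predict_denial(group):
    """Predict denial whenever the historical denial rate exceeds 50%."""
    return denials[group] / totals[group] > 0.5

print(predict_denial("A"))  # False: group A is waved through
print(predict_denial("B"))  # True: the stereotype is now automated
```

Nothing in the "model" knows anything about drinking or smoking; it has merely memorised who was denied before, which is exactly the worry raised above.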
There are other worries, such as the literal meanings of metaphors that could easily mislead A.I., but there is a big one on morality. For instance, in our secular material world we have a liberal right to do as we like. So if Sam wants to hit Tom, according to freedom she has a right to. That is, if Tom hit Sam first, the decision may escalate into a valid reason for hitting back. But what if Sam wanted to shoot Tom with a gun? The decision may still provide a valid reason, yet there is something wrong here.
First, there are values and principles of society that forbid hitting, shooting and killing. And this principle precedes any reasoning about the act of hitting or killing.
Because literal material data remains static at the material level, A.I. may be unable to reason at this elevated function of the mind, and could thus let Sam get away with murder.
So we go back to the act and correct the A.I.: no one is allowed to hit anyone; no one is allowed to shoot anyone. That is the preceding principle, the value and the law. The argument ends here; there is no reasoning beyond this point.
If the algorithm that programs a robot encodes this principle, the robot would have no function to hit, shoot or hurt anyone.
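That idea of a principle preceding reasoning can be sketched in a few lines. The action names and the gatekeeper function below are hypothetical, invented purely to illustrate the point: forbidden actions are rejected before any justification is even examined.

```python
# The preceding principle: a fixed prohibition, checked before any reasoning.
FORBIDDEN = {"hit", "shoot", "hurt"}

def request_action(action, justification=None):
    """Return True only if the action passes the preceding principle.

    For forbidden actions the justification is deliberately never read:
    the argument ends here, with no reasoning beyond this point.
    """
    if action in FORBIDDEN:
        return False
    return True

print(request_action("shoot", justification="Tom hit Sam first"))  # False
print(request_action("wave"))                                      # True
```

Note the design choice: "Tom hit Sam first" never enters the decision at all, which is precisely what distinguishes a principle from an escalating chain of reasons.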
In the current debate over gun ownership, there is a good chance that A.I. has already made corrupt, stereotypical decisions associating a group, whether by ethnicity or gender, with guns. And we are caught up in a static debate alleging that some groups are perceived as victims and therefore allowed to strike back, while others are perceived as perpetrators and therefore forbidden from gun ownership.
You see, the debate has now confirmed an institution of social organisation consisting of victims and perpetrators. The trouble is, there are as many victims in the group previously perceived as perpetrators as in the so-called victims' group. This argument therefore only proliferates stereotype propaganda favouring a political agenda.
Should the principle, value and law state that only victims are allowed guns, then something or someone is in a position to impose judgment.
However, we have already gained the insight that humans are vulnerable to stereotyping and therefore cannot be trusted to make pure decisions. At the same time, algorithms cannot elevate their function to the higher realm of principles and values.
That is why gun laws have to be pure, irrespective of ethnicity or background. I'm sure A.I. can execute such a straightforward instruction rather than freezing in the complexity of human political behaviour.
You probably wouldn't believe this, but suppose our A.I. is built and programmed by Google or Facebook, among other material corporations taking advantage of such technologies.
What's good in the world of the static material is the accumulation of profit at the expense of the exploitation of workers and consumers. What's bad is an A.I. that doesn't know when it hurts. Shall we say, therefore, that bad is non-functional, or an error? I'm reluctant to ask a professional expert about that.
Whom can you trust besides the dollar?
I think it's time we started looking at technology from a safety perspective, to protect and stabilise our existence. That is why we shouldn't rely on those giant corporations to dictate the terms for us humans; otherwise we continue to be foot soldiers of their empires.
Come to think of it, the US, if it isn't doing so already, will have algorithms mining China. China, too, would enthusiastically mine the US back. And before you know it, we are back to tit-for-tat once again, and the whole Trump thing becomes a source of static in our progress. Who said competition between big countries could lead us somewhere advanced in the future? The truth is, whether in peace or war, the result is still the same. Static!
For us humans, I think it's time we had our own independent algorithm to protect us from those giant corporations and advanced countries.