Responsibility and freedom in killer robotics



Some of South Korea's border territories are guarded by armed robots. For now, a human operator decides whether the robot may kill, depending on the type of threat it identifies, but the United Nations has begun to debate whether the robot itself should be allowed to make the decision to kill. Likewise, there are already robots that perform the most complicated surgical operations, guided by a surgeon from another room, and it is expected that the robot will soon operate autonomously. Society, it seems, wants to abolish the feelings of responsibility and guilt by putting robots in between. For mechanical tasks, the operational precision of a robot is unmatched, but should we extend its functionality to social tasks that involve life-or-death decisions? In other words, is it desirable for the fiction of RoboCop to become reality?

There is no doubt that, emotionally, it is easier to let the robot make the decision based on programmable parameters. In this way the responsibility is shared: it begins with the programmer who writes the software, then the robot's designer, the politician who passes the law and, ultimately, the voter who supported the politician. But when one person faces another, whether on the battlefield or on an operating table, the weight of responsibility changes. The factors of freedom and autonomy come into play. The question is: what kind of world do we want?

 

http://elpais.com/elpais/2015/04/11/opinion/1428773409_735866.html

 

http://actualidad.rt.com/actualidad/view/127699-debate-robots-asesinos-onu

 

Killer robots (BBC)

http://www.bbc.com/news/technology-27343076

On distinguishing a civilian from a combatant

Days for the free-thinking in robots

 

17 thoughts on "Responsibility and freedom in killer robotics"

  1. We all know that killing a person, whether in obedience to an order or in self-defence, leaves lasting scars. Even so, the use of autonomous military robots is not the solution.

    The decision to take a human life should be made by a person, whether directly or through a non-autonomous robot, because a human being, unlike an autonomous machine, has feelings and remorse. A person can feel compassion or grant forgiveness, something a machine would not do on detecting its target: it would simply destroy it without weighing the consequences, such as what that person might have contributed to society or the void they will leave in their family.
    Nor do I believe we can accept a person who blindly obeys an order without suffering some kind of disorder, because life, like liberty, is a right, and whoever takes it must live with that judgement, however hard it may be.

    Machines can also be easily manipulated by certain people, such as hackers, who can interfere with the machine's decisions, with potentially serious results.

    In short, we cannot leave fundamental decisions that may determine a person's future in the hands of machines; we must answer for our own acts and their possible consequences.

    Outside the military sphere, I do agree with the use of machines to save lives without taking them directly, whether in medicine or in any other field.

  2. Instead of focusing on who does wrong, we should ask ourselves whether leaving a robot without human morality in a conflict zone is the solution.
    I find it very strange, and I must say I disagree.
    Why?
    Because just as the robot kills a terrorist who could have killed many people, it also kills a child who grabs a gun (a child who, within two seconds of holding the weapon, would have dropped it and run away).
    The difference between a human and a robot at that moment is that the man would also have taken the terrorist's life without hesitation, but in the case of the boy he would have stopped for a moment, watching, because he knows the value of a human life. (Example: https://www.youtube.com/watch?v=9XKH4bFBlJ4 ) I know it is just a film, but it happens a lot on the battlefield, and not only with children.
    In any case, a war has zero humanity, but perhaps human decisions can save people within it. Provided, of course, that the objective is to protect people and not to kill them.

    (It would be interesting to see what kind of programming they use; deploying such a robot across the United States, where everyone keeps a pistol in their sock, would be quite a challenge.)

  3. That a robot should decide whether a person lives or dies strikes me as deeply ironic. These killer robots have been created, and are being developed, out of the simple cowardice of human beings and the feeling of guilt that killing a person can produce; people think that if a robot performs that function they will sleep more easily. I believe it should be the opposite: a robot can fail and kill harmless, innocent people through a technical error, and if its system malfunctions it could start shooting and never stop. By giving a robot the power to decide whether or not to kill a person, I think we are granting robots the greatest possible power, and they will end up dominating us and imposing themselves on us. On the other hand, as was said in class today, a robot has no prejudices: it will not attack someone for the colour of their skin, for example, and in surgery it could perform each step more mechanically and perfectly than a human being could. Still, I wonder why, when there are so many highly educated people without work, we try to build robots to fill the jobs humans lack.
    One theme that recurs in the articles is that fewer and fewer people volunteer as soldiers, and that wars always leave psychological scars. From my point of view, if we used robots instead of humans, a war would take an eternity to end, because countries would keep building robot after robot until the one with the greatest economic power won and imposed itself on the rest. Besides, I think wars should not happen at all, since they are always born of resentment, greed and the human desire to dominate others; the fault lies with humans, and without that fault we would not even have to ask these questions.
    As we can see in the film 'I, Robot', fiction though it is, a robot can be programmed to protect itself, and if so it could commit countless atrocities when it feels attacked; a robot that perceives many humans turning against it could begin to do things that not even its own programmers anticipated.
    To finish, my answer is no: we should not extend robots' activities in this direction. If they continue to be developed, it is only to silence the human conscience.

  5. In the first place, I have to say that human evolution is inevitable, and I believe that even if these robots are not used to fight now, they will be used in the future.
    From what I have heard, fewer and fewer people want to take part in a war or become soldiers, so a solution is being sought, and that solution would be robots. In addition, humans would not die in battle, which would be very good for us.

    Even so, I do not think robots are the best way to fight or to protect our territory. They have been built by robotics specialists with all the specific mechanisms needed to react to the enemy, but I am not so sure about that. One day they could kill someone who did not have to die because they were not the enemy, which I find very dangerous. Furthermore, if a robot fails, it could start to act without any kind of control, and people might not know how to stop it.
    It would also be necessary to improve or fix the robot's behaviour, which would take a lot of time, and it would be very difficult for it to rejoin a group of soldiers in combat while it was being modified.

    In my opinion, the use of these robots is not ethical. They are machines built to kill people. They cannot feel fear or compassion; a robot will kill without thinking first and without stopping when it should.
    Nor does it seem safe to me if it is not controlled by a human, because if the robot starts doing something we consider wrong, humans will not be able to stop it!

    On the other hand, if robots are the future of our world, eventually only robots would fight each other and humans would stand apart, thinking only of building better robots to beat the others. So only the countries with economies strong enough to build and maintain them could take part.

    Finally, I also want to write about robots in surgery. I have seen a documentary about this, and the patients about to be operated on by the robot were very calm. The articles say it is better because the robot can operate through a small incision that will not leave a large scar.
    Surgeons can control the robot from a console, sending the commands while the robot carries them out. It is easier than other ways of operating; the problem is the high cost, because the machine is very expensive, although recovery after the operation is cheaper than after traditional hand surgery.
    The machines work more precisely than a surgeon, and in my opinion an operation can go wrong whether it is done with the robot or in the traditional way.

    I believe that if these methods improve over time, they will be the future of medicine, and the same goes for soldier robots.

  6. I personally think that letting machines decide whether a human being may be killed is not a good idea. Machines are technology, and like all computers they can be hacked. What if a weapon of mass destruction is created and the enemy hacks it? Soldiers could do nothing, because they cannot control that robot.

    I think automatic killing machines should be banned in war, as cluster bombs were a few years ago. Those robots could cause collateral damage: with a simple mistake, they could shoot anyone carrying a gun or an assault rifle instead of only the enemy.

    If technology keeps evolving, the next wars will be robots against robots; the winning side will destroy the losing side's people without any chance to make them reconsider, killing women and children, and no one will feel any guilt for it.

  7. These robots have advantages and disadvantages: when it comes to killing, they carry no weight on their conscience, which is something people want to shed so as to live more easily.

    As stated in one of the videos, these robots can be manipulated by anyone in a war, so some technical fault could lead them to kill an innocent person, an unwanted death.

    These robots may also seem favourable because they are meant to fight in wars where fewer and fewer soldiers enlist, but if this continues they will take many jobs and end up replacing everyone.
    In my opinion, these robots would not be good for humanity, because they kill without conscience.

  8. Well, in my opinion, we are not aware of the consequences of robots that can decide whether a person lives or dies. It is not right to place on them the responsibility that, in fact, we want to avoid when a person dies at our hands. The reality of self-sufficient robots is that people want to hand over their responsibilities when a life is in danger, to avoid the feeling of guilt and to spare themselves from processing their emotions. I do not think that is the solution. We are human: we have to feel every kind of emotion, and we cannot avoid feelings that one day we will have to face. Why? It is simple: all our feelings build us. Avoiding them is like cutting away a part of ourselves.
    Self-sufficient robots can be used for other work in which no life is at risk, such as minor operations or housework. But think about it: a machine without feelings or emotions, with only a rational, logical brain, that can kill you in cold blood, is not a good thing.
    It is true that technology develops every day, and that one day it will take over many tasks now done by humans, cleaning for example. But that does not mean robots should decide about our lives. Humans must have both rights and responsibilities, and those responsibilities are a very important part of us. If a person takes a job, for example a doctor who has to operate on people, he or she knows that in some cases a person's life is in their hands, and that shapes their personality, giving them a stronger sense of responsibility than other people working in the same hospital (which does not mean their work is more or less important than anyone else's). If we diverted our responsibilities onto robots, we would lose the responsible part of ourselves, and if we lose it, we will have serious problems with important decisions in the future.

  9. The dilemma of killer robots that kill people is a very worrying problem. There is great controversy at the UN over whether to allow these robots, which in my opinion are very dangerous. The armies of the countries developing these robots and drones try to justify them with three simple claims:
    -They save the lives of the soldiers they replace
    -They are more precise
    -They obey all orders
    The problem with these robots lies in that last justification: obeying orders. Robots lack emotions, which means they also lack morality, so they will not judge whether the order they have been given is good or bad.
    If the robot had a technical fault, responsibility could be split several ways: towards the creator of the software, the designer of the robot's structure, and the general or soldier who gave the order.
    But these "killer" robots comply with none of the three laws of robotics that Isaac Asimov set out in his book, which are as follows:
    -A robot may not harm a human being
    -A robot must obey the orders given by human beings, except where they conflict with the first law
    -A robot must protect its own existence as long as such protection does not conflict with the first or second law
    Later, the EPSRC (Engineering and Physical Sciences Research Council) published principles of robotics derived from Asimov's three laws.
    In conclusion, I think killer robots should be neither developed nor built because, as Christof Heyns, the UN Special Rapporteur, says, death by algorithm threatens the enjoyment of the right to life and to dignity.
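The strict precedence among Asimov's three laws quoted above (the second yields to the first, the third to both) can be sketched as a prioritized rule check. This is a hypothetical illustration only: the `permitted` function and its field names are invented for the sketch and do not describe any real weapons-control software.

```python
# Hypothetical sketch (not any real system): Asimov's three laws as a
# strict priority ordering. An action is allowed only if no law,
# checked from highest priority to lowest, forbids it.

def permitted(action):
    # 1st law: a robot may not harm a human being.
    if action["harms_human"]:
        return False
    # 2nd law: obey human orders, unless obeying conflicts with the 1st law.
    if action["disobeys_order"] and not action["obeying_would_harm_human"]:
        return False
    # 3rd law: protect its own existence, unless that conflicts with laws 1-2.
    if action["endangers_self"] and not action["self_protection_conflicts"]:
        return False
    return True

# An order to fire on a person fails the very first check, which is why a
# "killer" robot violates the hierarchy by design.
order_to_kill = {"harms_human": True, "disobeys_order": False,
                 "obeying_would_harm_human": False,
                 "endangers_self": False, "self_protection_conflicts": False}
print(permitted(order_to_kill))  # prints False
```

The point of the ordering is that each later rule is tested only under the condition that it does not conflict with the rules above it, exactly as the exception clauses in the laws state.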

  10. First of all, I want to begin my comment by saying that I disagree with military or killer robots. My arguments are simple: I am convinced that creating this type of robot would encourage killing.

    According to the newspaper El País, one reason for creating these new weapons is the difficulty of finding soldiers willing to risk their lives without really understanding why: advanced societies find it increasingly difficult to find soldiers willing to put their lives at stake in wars that are often fought in remote territories and that they sometimes do not even fully understand. That suggests wars could be lost for lack of soldiers; however, if that shortage affected every side and every country, it might also mean fewer wars, or at least less disastrous consequences. A world war is not the same as a civil war, nor is a war involving 75% of the population, say, the same as one involving 25%. Since we humans are imperfect, conflicts are inevitable, but their consequences can differ.

    With lethal weapons capable of killing (autonomous or not) without remorse, guilt or consequences for the person, more killings will follow. Let me make the idea clear with a simple example: if a child steals a sweet in a candy shop, two things can happen. Either he is caught, feels bad and sad for having done something wrong, is punished and in most cases does not do it again, or does it less, remembering how he felt the last time; or nobody catches him and, because taking the sweet was so easy, he does it again and again. We are talking about a small child who does not yet know right from wrong. In other words, the easy option is a robot without feelings that can kill without difficulty, which means killing will happen more often than it would with people who do have feelings and run real risks.

    We can also consider another motive for creating killer robots: as my classmates have said in other comments, it is a cowardly act, killing without wanting to feel guilty because you were not directly responsible, handing the responsibility to a sophisticated piece of metal that neither feels nor suffers.
    Since I do not agree with this subtle, indirect form of killing even when a person is in charge, I agree even less when the robot is autonomous.

    Finally, I want to share a feeling, an emotion. New technological advances may one day allow a machine to imitate any human behaviour, even killing. But never, I insist, never will a machine be endowed with software that gives it feelings, emotions and consciousness, and that creates a great risk for humanity. I would also raise the following option: if all the economic, technological and human investment in researching and developing destructive machinery were directed towards creating tools for communication, mediation and conflict resolution between human beings, our society, our world, would be much better.

  12. From my point of view, I disagree with the idea of using robots in wars. If developed countries used these robots against undeveloped countries that send people to their wars, you would only be saving one life, the one from the developed country, and paying a person to be there is cheaper than a robot.
    But what would happen if both countries were developed? Would robots fight robots? If that situation happened, I think the world would be absolutely absurd.
    Also, we should not let a robot decide whether a person dies or not. Even if it is safe and built entirely correctly, one day it could malfunction and kill someone innocent (not that killing people is ever a solution), and one consequence may be that humans lose part of their feelings: if we do not feel guilty, responsible or simply brave, a human is only a piece of walking meat. But think for a moment: would you like to be killed by a robot? Not me.
    The last reason I am against military robots is that a person could be in its place, fighting, rather than a machine taking his job. You may say, 'But the people who build the robot actually work,' and I will answer, 'Yes, but one day robots will be made by other robots without human intervention.'

    So, finally, like everything else, technology will keep improving, and who knows how we will live in a few years.

  13. I am not in favour of killer robots. I believe a machine cannot have the right to decide over a human life; even when it detects that the enemy poses a threat, it is always better to wound him so that he cannot attack than to kill him.

    Moreover, a robot has no feeling of compassion, so it would kill without mercy, and there is always the possibility that the enemy to be defeated is not such an enemy after all. On the other hand, I think that if robots were sent to war instead of soldiers, the number of wounded soldiers would fall. In the same way, senior army officers do not want the responsibility of having killed other human beings on their conscience, so they would rather leave the decision to a robot.

    After thinking about the different possibilities, I believe human life cannot be left in the hands of robots.

  14. I am in favour of using robots for vital decisions such as killing: they are more efficient and cheaper.
    It is true that a machine has no morality. You can add military and moral laws to its programs and it will obey them, but it will not internalise them; it will not feel good or bad about them. But why would you need morality for an amoral task like obeying an order from your superior in the army?
    Many people say we cannot trust robots because they are dangerous for human beings, but every day we let machines take part in vital decisions that concern us. For example, when we use an autopilot in our vehicles, we are trusting a machine that could kill us. But that is not the point. This is not about humans versus robots; we are alone, and robots are just a developed extension of our knowledge: just a tool.
    Robots and AI are not a hazard to human beings; they only obey and serve us. Humanity is its own enemy. We cannot blame technology, and we cannot stop its development, because it is our evolution.

  15. The autonomy of robots is highly questionable, and we must weigh the pros and cons. On the one hand, robots are accurate and perfect; they would not fail at their work and they are tireless, so in some fields, such as medicine, they would be welcome. On the other hand, they are not emotional; they work exactly as they were programmed, so they could do truly cruel things if programmed to do them.
    For example: http://k46.kn3.net/E/5/C/4/A/6/A83.jpg or http://www.elperiodico.com/es/noticias/sociedad/bombero-aborta-desahucio-una-octogenaria-coruna-2321267
    In both cases people refuse to do something they consider cruel or unfair, but if they were robots, they would have no capacity to refuse.

    On the second question: in my opinion, giving a robot sole responsibility for failures would be immature. You cannot offload your own responsibility onto a robot to save yourself while the robot harms people. Refusing to assign responsibility to a human is, in my opinion, simply stupid.

  16. Must we always give more importance to efficiency? It is true that robots will act in the best way, because they are programmed to do so and they only follow orders. Sometimes being programmed to perfection, to act without committing any failure, is the better thing, at least in situations involving a person's life. In those situations, I think the human feelings this produces cannot be set aside. I agree with the use of robots in surgical operations, since to save a life even the greatest possible precision is sometimes not enough. But for a machine to have the power to take someone's life because that person has certain characteristics that mark them as a threat, without first making a deep analysis, does not seem like the most intelligent thing; in such a case we would be forgetting the famous phrase: "appearances can be deceptive". And I, at least, think there is no weighty reason to take a person's life; the only one, perhaps, would be to save the lives of others. But without knowing a person's intentions, with only an analysis of their temperature, their characteristics, and what they appear to be carrying, one cannot know what they intend. The purpose of these machines is to take the weight off our shoulders: the human feelings of doubt and of guilt. We leave these entirely human feelings in the hands of machines incapable of any moral analysis. Sincerely, I think that in this way the guilt is lower at the individual level, because it is distributed among more people; these people are mentally prepared to believe they are not at fault, yet somehow they still feel that perhaps they could have done slightly better. Besides, does nobody think of the explanations that the people affected by these failures might need?
The only answer they could be given is that a machine that should have operated perfectly has failed. That robot will be replaced, but later another one will be installed with the same possibility of being wrong as the previous one. Perhaps the robot will be updated with improvements, but who says that this is the best solution? How do we know that the better thing would not have been to return to the natural way? People make mistakes because they are people, not programmed in every detail to be incapable of failure; a robot is not supposed to make mistakes, and yet it does. So why should an action with great moral weight be left in the hands of something with no capacity for analysis beyond a few predetermined variables? In the hands of a machine that cannot feel, and that perhaps will not err by letting itself be carried away by feelings, but that also reduces a human life to a simple second in which a quantity of information produces the supposedly correct decision. In my opinion, killer robots are not progress; they are unnatural.

  17. I think this is an unstoppable process. Nowadays there are already many situations in which machines play a decisive role. But even if machines, in a given situation, can carry out an action, there will always be a person, the one who controls them or the one who programmed them, who will be responsible.
    In any case, I think it would be a good idea if, as in the film I, Robot, robots were designed following the laws stated there, fundamentally that neither by action nor by inaction should they allow a human being to come to harm. Of course, this does not seem very likely if robots are designed for war.
    On the other hand, the use of ever more sophisticated machines in war is a fact that does not always cause more deaths. In films about the Second World War we can see how planes raze entire cities to eliminate a single target; in more recent films, a drone or a "smart bomb" can take out the target with far fewer "collateral victims" than before. I do not know whether wars fought with robots will be more terrible than the earlier ones, fought by men sometimes full of fear, or of hatred...
