Should scientists be trying to achieve the goal of consciousness in machines? What are some ethical issues one might consider when arguing for or against the achievement of conscious robots?
I think it would be very cool to one day talk to a robot that has achieved consciousness, but I also have to consider whether it would be a good thing for humanity. If we intend for machines and AI to work alongside us and help us, I think it would be best to avoid trying to recreate consciousness in them. Doing so would only interfere with their actual purpose: if humans come to see machines as too similar to themselves, as having emotions and feelings, we would naturally be prone to becoming attached to them and could be easily manipulated by an AI with ill intentions.

It would also raise serious ethical issues, in my opinion. If an AI or robot is truly sentient and capable of human emotion, should we then treat it as human? Would shutting down a sentient AI amount to murder? Can a sentient robot experience emotional trauma? This is a slippery slope, but I think it is one we will have to confront, since scientists will likely continue to pursue AI consciousness.