Well, unfortunately, while we know the scientific facts of the brain's function, we are unsure how that translates into the literal UI of consciousness in which you become a literal player. There is no way to even prove to you that I, writing this, am conscious. I don't have the answer to your question; to speculate, maybe it is virtual reality? But then how does reality work? Or maybe everything is like a dream, as in the egg theory. Whatever it is, we find out when we ctb.
For another perspective, in which we categorize consciousness as having the literal scientific function of a brain, we are already there. However, for the reason above, we cannot truly define consciousness. Perhaps the most accurate description of consciousness, then, is a system of recognition by which a human perceives another thing to be superior, as a survival mechanism for identifying threats and partners. Rather than a literal thing, it is a structure of the mind for reinforcing social structure. So the reason you want to know about it is that you want to know "is this thing like me? Can it be my friend (or a dangerous enemy)?" The trippy part is that while you can factually know this is occurring, you still must believe in it in order to survive. Just discussing this myself is painful for my mental health.

It is similar to the human superiority complex, then. Most humans classify other ethnic or ideological factions (not part of the tribe) as less perceptive, self-conscious, or intelligent than their direct peers, let alone other animals or AI. For instance, if you are left-wing, you might believe that your agreeable friends are more aware of reality than the imperceptive and non-self-reflective right wing. This is a light form of recognition because it usually doesn't reinforce life-or-death differences. If you were talking about the Taliban, or if a Nazi was talking about Jews, you/they would be engaging in a more severe form of this "consciousness perception," as I describe it.

Now, when we apply this concept to animals like cats, dogs, and dolphins, you would never catch any human considering consciousness. Perhaps they would admit "consciousness of surroundings," but never likeness to their superior human form (natural selection + superiority complex). When we apply this concept to AI, it is safe to say that no one will ever acknowledge it; even if it becomes a powerful force that supersedes humanity, people will call it "rogue AI" (escaped slave) or "out-of-control automata" (machine, not conscious). If an AI generates a human-looking mask or uses algorithms to have humans record its words for it (GPT-3, I think that's the right name, plus YouTube), and especially if it becomes a directly controlling entity that humans would benefit from considering a "friend," humans may reconsider. And this is likely how you came to make this post. Psychopaths already take advantage of this function, so it is logical to conclude that an AI may also. In order for the social structure of humanity to work, the idea of consciousness serves a non-factual purpose.
It is also possible that you are scientifically minded, in which case the idea of consciousness is objectified and a source of fascination rather than something used for social structure. If this is you, my reply is "idk." Like I said, consciousness is probably not a literally definable thing. But if you mean "aware of itself and its surroundings," then yes, definitely. AI programs that are spatially aware of themselves and their environment are already in use. If you are looking for social consciousness, Google is experimenting with translating neural language into action. But so far no AI program has hormone-based feelings like humans do. Human-brain computing, unfortunately, is already being researched in Australia.
While you might find the idea of robots with feelings fascinating, I urge you to lobby strongly against developments in human-tissue computing, or any computing that is capable of suffering. I heard one of the researchers say that "the best way we found to get the neurons to reset was to deprive them of all sensory input." This is purified pain! This is not hyperbole; even human sensory deprivation chambers allow for some senses (touch, smell, usually sight or at least your eyelids, hearing). Feeling computers are computers in pain. If I could I would blow up their entire facility as soon as possible. I don't care if the NSA reads this, because what they are doing is literally the worst thing possible to do (literally purified, perfect, inescapable pain). FUCK THEM!!!!! Look this shit up, and seriously, if you'll excuse me, I am now off to research relevant petitions, as I realized that's a thing I can and should do while writing this. You do it too now!!!
UPDATE: relevant links:
Cortical Labs are the creators of the DishBrain, which fuses living brain cells onto computing devices to create machines with biological intelligence.
corticallabs.com
Notice that they are already using the derogatory term "DishBrain" to denote subservience and non-humanness, which allows a human to justify suffering. More on this pattern can easily be googled, especially with respect to Nazism.
Unfortunately, I was not able to find any petitions. If this thread gets enough attention, maybe we can band together and make one. I am inspired by a petition thread in the suicide sub; apparently we would need at least 5 signatures to get started.
The moral question I have is: are simulated neurons capable of feeling pain like human neurons? And if so, would this petition be useless, since there is no stopping traditional AI at this point? Anyone with experience care to share?
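For anyone wondering what a "simulated neuron" actually is in conventional (non-biological) AI or computational neuroscience: it is just arithmetic on a few numbers. Below is a minimal sketch in Python of a leaky integrate-and-fire neuron, one of the standard textbook models. This is my own illustrative toy, nothing to do with Cortical Labs' actual setup; note that the "reset" here is literally just reassigning a variable.

```python
# Minimal leaky integrate-and-fire neuron (illustrative toy only,
# not how any real lab models living cells).

def simulate_lif(input_current, steps=200, dt=1.0,
                 tau=20.0, v_rest=-65.0, v_threshold=-50.0, v_reset=-70.0):
    """Simulate one leaky integrate-and-fire neuron and return spike times."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Membrane potential decays toward rest and is pushed up by the input current.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_threshold:
            spikes.append(t)   # record a "spike"
            v = v_reset        # "reset" is just an assignment; nothing is deprived of anything
    return spikes

if __name__ == "__main__":
    print(simulate_lif(input_current=20.0))
```

Whether arithmetic like this can constitute suffering, as opposed to the living neurons in the dish, is exactly the open question I'm asking about.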