Yes, but no.
What I am looking for is a compendium. This (to me) isn't a subject that's in your face; it only surfaces if you go looking for it.
I was thinking you were looking at 'bionics' and AI.
Bionics have gone from the 'Six Million Dollar Man' to 'The Borg' in pop culture, but yes, there are people working on developing everything from prosthetics run by nerve or neural impulses to complete human-like robots with the ability to think for themselves.
This isn't the relative grunt work of the average computer or surfing the web; it's a different field, affecting everyone from accident victims to wounded veterans to people who work in hazardous environments.
Then there is the question of whether a fully functional, independently thinking robot would have 'rights', too.
So there is a pretty broad spectrum of issues that go beyond mere technology.
F'r instance: a firefighter/rescue humanoid robot is developed, with AI sufficient to assess dangers and hazards, find victims, and transport them to safety. Two immediate scenarios come to mind.
Structural fire, fully involved, but units are sent in to fight the fire and look for survivors (it is possible that there are people in the building, still alive, despite how it looks from the outside). One of the robots finds a victim but, out of (for want of a better word) fear of its own destruction, does not recover the victim and retreats. Would it be branded a "coward" and dismantled or reprogrammed?
Another rescue unit, seeking to rescue someone, is caught in the collapse of part of the structure and damaged to the point it is not repairable. A human would have a hero's funeral, a medal, a pension for the widow. What about the robot? To the scrapyard? How would other rescue robots feel about that? (They'd have to have some programmed empathy in order to be effective rescue units.) Would they go on strike? Would it affect their efficacy, 'knowing' that if they were destroyed no one would care? Can that be programmed out? Or would a machine capable of learning from its experiences develop resentment?
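To make the trade-off in these two scenarios concrete, here is a minimal, purely hypothetical sketch of how a rescue unit's "fear" might be expressed as an expected-value calculation. Every name, weight, and threshold here is invented for illustration; nothing about real rescue-robot software is implied.

```python
# Hypothetical sketch: a rescue unit weighing the chance of a
# successful rescue against its own probability of destruction.
# All values and weights are invented for illustration only.

def should_attempt_rescue(p_success: float,
                          p_self_destruction: float,
                          self_preservation_weight: float = 0.3) -> bool:
    """Return True if the expected benefit of attempting the rescue
    outweighs the weighted risk to the unit itself."""
    benefit = p_success  # value of a life saved, normalized to 1
    cost = self_preservation_weight * p_self_destruction
    return benefit > cost

# A unit facing good odds commits; one facing near-certain
# destruction with little chance of success retreats.
print(should_attempt_rescue(0.8, 0.2))    # True  (commits)
print(should_attempt_rescue(0.05, 0.95))  # False (retreats, the "coward")
```

In this toy framing, "programming the fear out" is just setting `self_preservation_weight` to zero, at which point the unit always commits; whether a learning machine would quietly re-learn that weight from experience is exactly the open question raised above.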
Human intelligence (or what we decry as the lack thereof) is complex enough, with motivational factors which cover a full spectrum of emotions and logic. We aren't programming Little Miss Sunshine, here, but something which has to function in the real world without either taking a crowbar to all around it, or having all around it willing to do the same.
And that raises a lot of issues...
None of which fall under the nuts and bolts heading of just "computers".