Vance Pravat

Coming Soon

MASTER OF GOLEMS

Red Earth Book 1

Dante of Newhaven belongs to the Psionicists’ Order, a shadowy people from far Antark who control terrifying machines with their minds. On an earth decimated by rising temperatures, the psionicists have carved out an empire—a realm held together by the iron rule of an emperor who is even more mysterious than the so-called sorcerers who serve him.

But Dante is no magician. Host to memories of people long dead, perpetually eclipsed by a talented sibling whose ambition knows no bounds, and unable to work the golems that ensure his Order’s survival, he is practically an outcast. Now, on the eve of his apprenticeship, his imperfect-but-familiar reality is about to be shattered. War is coming; old enemies are mobilizing, seeking to retake what is left of the dying planet. And the siblings may well hold the key to their success.

Caught in a web of deceit and murder, Dante must go to extraordinary lengths to save himself and the ones he loves from certain doom. But will it be enough? Is the doom what he imagines it to be, or is it something far more sinister than mere men—a secret that lurks in the ancient heart of the Order… and perhaps his own fragmented self?

Deciphering the memories of his predecessors and unraveling the tangled thread of the past may be the hardest thing he will have to do.


ZEROGLYPH

A near-future technothriller

Zeroglyph aims to be more than just a good yarn. In the book, I try to answer the all-too-urgent question of whether we can create a truly intelligent being that understands the difference between good and bad.

Can a sufficiently advanced AI derive morality on its own, or do we need to program our own ethical values into it? If so, what kind of morality would this be? Should we teach our AI some version of Asimov’s laws—some deontology of fixed rules that, while amenable to codification, is prone to quirks of interpretation? Or should we make a utilitarian AI that looks first to the greater good, for whom only consequences matter, not the intentions behind an act? Or is there a better alternative?

Furthermore, if an AI can be taught morality, can we then conclude that it is a person, deserving of all the rights and privileges reserved for our own kind? What would a courtroom battle over AI rights look like? What sort of arguments would be put forth? Is it even advisable to give such beings rights paralleling our own?

All this, and more.
