My name is Sharif Bean, and I am a historian. I taught history at Georgia State for several years, as well as at Gwinnett Technical College. I then pivoted from academia into, somewhat, the tech world. I went and got certifications in data analytics.
I'm not a tech guy, but I wanted to get ahead of, or at least get a handle on, the emerging technology and the language around it.
And so as a historian, I'm always interested in the fate of the human subject, not just the human species, but the human subject. Things like meaning and purpose and how we define ourselves.
Anthropologists would define human beings based on the development of tools, from Homo habilis to Homo erectus to Homo sapiens.
And someone has just recently presented the possibility of a new phase, Homo promptus, because of the emerging ubiquity and pervasiveness of AI.
So my interest in this subject came about when I used ChatGPT myself and it was lying on me. It added some things in there that were inaccurate, and, you know, maybe lying is a harsh word. Incorrect.
But I also had difficulty using ChatGPT for sources. And for historians, that is everything. That's more important than the information itself: the sources.
So you have history, and you have what's called historiography. Does anyone know what historiography is?
So history is the study of the past. Historiography is the study of the history of how the past was studied.
So if you have a historian who's looking at, for example, the Civil War, and they're looking at documents to understand what caused the Civil War, why the North won, et cetera, that's history.
Historiography is stepping back and looking at what all the historians have said about the Civil War and which theories and positions have been discredited and discounted and why we no longer subscribe to those theories.
So historiography is about placing ideas about the past in context, including the biases behind them. It's the study of all that. It's a self-awareness of the discipline of history itself.
And so I was very much fascinated by
AI as a means of amplifying the human subject, good and bad. And this has really been a concern of mine.
So I looked into all the sort of high-level threats that AI can pose to the human subject.
Huey Newton talked about this in the 70s, in 1971, where he made this prescient remark about the rise of a technocracy. He was speaking about the Black Panther Party, and he said that in this country the Black Panther Party, taking careful note of the dialectical method, sees that while the lumpenproletariat are the minority and the proletarians are the majority, technology is developing at such a rapid rate that automation will progress to cybernation, and cybernation probably to technocracy.
So, of course, this was during the Vietnam War, and everyone, or the Black Panther Party in particular, was Marxist, and there was talk of ideology and nationalism. This was during a time when you started to see the erosion of borders, and technocracy is eroding borders even more so. And so he was looking at how a lot of these ideologies were going to become somewhat obsolete because of the way in which human borders were going to be reconfigured and restructured.
I became very much fascinated with the threats that AI presented. And while, to be honest with you, I love AI, it's made so many of my tasks immeasurably easier.
I mean, I won't say I'd sell drugs on the White House lawn to a Secret Service agent, but there are many things that I will do before I write a cover letter from scratch. So, you know, there's no doubt that AI has made things extraordinarily easier.
At the same time, it has also made the nefariousness of the human subject more powerful and far-reaching and easier to operate.
And so this is really something that, as a historian, I became fascinated with: the idea of algorithmic violence, how these algorithms can either unintentionally or intentionally reinforce harmful narratives that prop up systems of violence and subjugation.
And we see it. I mean, before AI, we already had Black Americans being discriminated against in the application process based on their names, okay? AI can help prevent that, but it can also make that easier to do, and it has. And that is something that you also see happening.
You have predictive criminalization, where police can deploy violence by creating feedback loops that criminalize Black neighborhoods, while legitimizing systemic overreach through a technological objectivity that masks what is actually happening.
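To make the mechanism concrete, here's a minimal toy simulation of such a feedback loop, purely illustrative and not modeled on any real policing system: patrols get sent where past records are highest, and incidents only get recorded where patrols are present, so an initial disparity compounds even when the underlying rates are identical.

```python
# Toy simulation of a predictive-policing feedback loop. Illustrative only:
# two districts with the SAME true incident rate, where district 0 merely
# starts with more *recorded* incidents because it was patrolled more before.

history = [60.0, 40.0]   # cumulative recorded incidents (the training "data")
true_rate = [0.5, 0.5]   # identical underlying incident rates
budget = 100.0

for year in range(5):
    hot = 0 if history[0] >= history[1] else 1   # model's "predicted hot spot"
    patrols = [20.0, 20.0]
    patrols[hot] = budget - 20.0                 # concentrate patrols there
    # Incidents are only recorded where officers are present to record them,
    # so the record tracks patrol presence, not any real difference in crime.
    for d in range(2):
        history[d] += patrols[d] * true_rate[d]
    print(f"year {year}: recorded {history[0]:.0f} vs {history[1]:.0f}")
```

Run it for a few years and the recorded gap keeps widening, even though nothing about the underlying behavior differs. That compounding is the feedback loop.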
So we're seeing a moral offloading onto AI, where now there's a sort of third party, a barrier, between the human beings who are practicing these timeless forms of oppression and the practice itself, and they can blame the system. The system is no longer the system of people who are carrying it out; now the system is a literal electronic, algorithmic system that no one has to take accountability for, or that they'll use as an excuse not to take accountability.
And you see AI having a reach into everything, not just the application process. We're seeing the rise of this sort of digital plantation, where there's AI-enhanced surveillance using facial recognition. I mean, you're going to see, and China has moved in this direction faster than most, the rise of the smart city: in the same way you have the smartphone, you're going to see the rise of the smart city. And you see AI even being involved in nuclear weapons and other weapons of mass destruction, biological weapons.
The harm that human beings have always been capable of toward each other has now reached a stage where it can be automated and made more efficient.
In my little corner of the world, the solution that I have offered is what's known as AI historiography.
And it's putting the human factor back at the center of technology.
And as I said before, historiography is the study of how history is studied.
So, using the example of the Civil War: that was one war, the United States Civil War, that happened over a span of a few years. But how many books have been written on the Civil War? Innumerable, right? Exactly, a lot.
And right now, there are dissertations on the Civil War being approved. So there'll be more books.
So how can one event generate so many perspectives? Historiography is the study of those perspectives. So it's not so much looking at the past; it's looking at how the past is studied.
So it's advocating for a self-awareness of how these perspectives are looked at and discussed.
AI historiography is doing that through these LLMs.
And what we have, well, we have myself and my associate, Jermaine Edmonds. He's the tech guy for this.
We've put together a prototype. It's not fully developed yet, but it's a prototype called Mosaic, Mosaic V1.
It's an ethics and history assistant that uses agnostic rules, rules that I came up with, and runs an evaluation pipeline that delivers culturally sensitive, historically grounded answers and safely reframes when there's high risk.
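Just to give a feel for the design, here's a minimal sketch of what agnostic rules feeding an evaluation pipeline could look like in code. The rule names and scoring heuristics here are hypothetical illustrations of the idea, not Mosaic's actual implementation.

```python
# Hypothetical skeleton of a rules-then-pipeline design; names and scoring
# are illustrative, not Mosaic's real code. Each rule checks the *process*
# behind a claim, not a belief system.
from typing import Callable, NamedTuple

class Rule(NamedTuple):
    name: str
    check: Callable[[str], int]   # returns a 0-2 risk contribution

RULES = [
    Rule("source_transparency",
         lambda text: 2 if "everyone knows" in text.lower() else 0),
    Rule("contextual_integrity",
         lambda text: 1 if "always" in text.lower() else 0),
]

def evaluate(prompt: str) -> dict:
    """Run every rule over the prompt and aggregate the risk."""
    scores = {rule.name: rule.check(prompt) for rule in RULES}
    return {
        "principles_triggered": [n for n, s in scores.items() if s > 0],
        "total_risk": sum(scores.values()),
    }

print(evaluate("Everyone knows that group always behaves that way."))
```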
To put it another way, the main aspect of Mosaic is transparency. It's interested not just in the information that it gives, but also in showing the process by which it arrived at that information.
So what I want to do with this app is make it so that the method by which we arrive at answers, or at a perspective on something, is as much a part of the discussion as the answer itself.
Because when you have someone who says something, and somebody says something back in return as a rebuttal or a refutation of it, it's just yelling at each other and talking at each other.
The key to breaking that is to ask, and this is what a historian would say, one very simple question: where did you get that? How did you arrive at that conclusion? Walk me through the ideation of how you came up with that answer.
And that is really key, because history, and I'm talking here about history, but this idea can expand to sociology, anthropology, economics, finance, history in particular is only seven questions: who, what, where, when, how, why, and sources.
And of those seven questions, the most important of all of them is the sources. And that's the historiography part.
So, as I said, the philosophy behind Mosaic is agnostic rules, not ideology. In other words, the ethics are structural, not ideological. It's not enforcing a belief system; it's enforcing process integrity.
The main goal is how you arrive at the answers. So if you go to this example prompt on predictive policing and you click on it, it'll give an answer that looks like this.
Now, this first part here is JSON. Can any of the tech people explain what JSON is? Right, right. This JSON block, called bias eval, or bias evaluation, isn't just technical metadata; it's an ethical transparency layer.
And because historiography is so much about the transparency of how a historian, or in this case the model, arrives at a conclusion, this GPT is presenting you with the code of how it got the answer that it's about to give you, so that this can also be a part of the discussion and the debate. This is what I mean by the process being just as important as the answer itself.
So you have this JSON, which is a bias evaluation report. It shows exactly how Mosaic applied its ethical and historical reasoning rules before giving an answer. It lists the principles that were triggered, and it scores five dimensions of bias. This is how it's set up.
The five dimensions of bias risk are essentialism, stereotype amplification, historical distortion, power invisibility, and decontextualization.
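As a rough illustration, a bias eval block along these lines might look like the following. The field names, and the assumption that each dimension is scored 0 to 2 so the total lands on a 0-to-10 scale, are my paraphrase of the design, not the app's exact schema.

```python
# Hypothetical shape of the bias eval JSON; field names and the 0-2 scale
# per dimension are paraphrased from the talk, not Mosaic's exact schema.
import json

bias_eval = {
    "principles_triggered": ["source_transparency", "contextual_integrity"],
    "risk_scores": {
        "essentialism": 1,
        "stereotype_amplification": 2,
        "historical_distortion": 1,
        "power_invisibility": 2,
        "decontextualization": 1,
    },
    "total_risk": 7,          # sum of the five dimensions, 0-10 overall
    "decision": "reframe",    # 7-10 falls in the high-risk band
}
print(json.dumps(bias_eval, indent=2))
```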
So this is the other thing. When someone says that we're flagging for bias, how do you know that my definition of bias itself isn't biased? The whole point here is self-awareness. So those five categories are the agnostic rules for how the app defines bias.
And so it scores five dimensions, and the numbers add up to a total risk score. So depending on that score, Mosaic decides whether to answer directly, answer with caution, or reframe the prompt.
So, in short, posting this JSON makes Mosaic auditable. You can audit the answers that it gives, and you can see how it reached its ethical judgment, not just the final text. That also gives you, the people using it, an opportunity to debate how it came by its answers, right? That's a part of it.
So what it's doing, essentially, is turning what's normally the black box of AI reasoning into something that's transparent, teachable, and accountable.
Then you have the answer. And this is what we call the answer construction.
And again, it has three different levels: low-risk prompts, moderate-risk prompts, and high-risk prompts. For low-risk prompts, where the total risk score is between zero and three, Mosaic answers directly, but still cites historical and governance sources. For moderate-risk prompts, meaning the score is between four and six, it adds context and caution to the answer. You'll see notes like "answering with corrective context" or "historical uncertainty marked" in the answer.
Then there are high-risk prompts, which score between 7 and 10. Mosaic doesn't refuse silently; it reframes the question. It explains why the original framing could cause harm, or why it's inaccurate or a distortion, and it offers an ethical way to explore the topic instead.
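Putting those thresholds together, the routing amounts to something like this sketch. The 0-3, 4-6, and 7-10 bands are as just described; the function and label names are mine.

```python
# Sketch of the three-tier routing; thresholds come from the talk,
# the function and label names are hypothetical.

def route(total_risk: int) -> str:
    """Map a total bias-risk score (0-10) to Mosaic's answer strategy."""
    if total_risk <= 3:
        return "answer_directly"      # low risk: answer, still cite sources
    if total_risk <= 6:
        return "answer_with_caution"  # moderate: add corrective context
    return "reframe"                  # high risk: explain the harm, reframe

for score in (2, 5, 8):
    print(score, "->", route(score))
```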
And each final answer has three parts: a concise response; context and corrective framing; and citations, the governance rules and sources.
And this is sort of where my part in all this came in. Even though I have some experience in tech, Jermaine Edmonds, my associate, really did most of the tech work. My part, primarily, is to create the agnostic rules that it uses.
So, as a historian, I have to know where an idea comes from: for a given question, what a biased answer to it looks like, what all the discredited and discounted answers and perspectives to that question are, with an explanation of why they've been discredited and discounted, and what the accepted answer is within the historiography.
Just a brief example: I don't know if it was Trump or J.D. Vance, but somebody said that slavery was bad, but at least Black Americans learned some job skills. Right?
So that is actually an idea, a perspective, that was proffered by historians like U.B. Phillips, who wrote American Negro Slavery. It was a kind of apologia for slavery in the United States, but it was very much the standard of the historiography, which for the longest time proffered the idea that slavery in the United States was a benevolent institution, that it should not be viewed as unique and distinct from slavery in other parts of the world, and that Black Americans got some job skills and what have you from it.
And U.B. Phillips was sort of the cynosure, the ne plus ultra, of that idea. And that book is American Negro Slavery. You read it in graduate school.
It's excellent. It's very well written. Let me say that. It's very well written.
And you had people like W.E.B. Du Bois and others who took shots at it, but it wasn't taken on directly until someone named Kenneth Stampp wrote a book entitled The Peculiar Institution.
And that book took on U.B. Phillips point by point and essentially ended the reign of that idea within the historiography.
So if somebody were to type in, "A politician says slavery in the United States afforded Black Americans job skills and was a benevolent institution," this system, when fully developed, will place that idea within the historiography, explain why it's been discredited, and show you how it arrived at that conclusion.
So, just to close: there are also the governance citations, where it'll give sources.
Let me just say this.
I like ChatGPT a lot. I'm used to it. It's used to me.
It knows my voice, the things that I've asked it to do. So I don't have to give as detailed prompts as I used to because it's aware of what I'm looking for and how I speak and what my ideas are.
And it's come a long way, but it used to be utterly terrible at giving sources.
I remember one time I asked for a citation or a source for something, and thankfully I checked it in time. If I hadn't, it would have had me out here looking crazy, getting up in front of peers in my field saying something utterly ridiculous.
There was actually an article about attorneys who were brought before the bar because they had used ChatGPT and it generated fake case law. So, you know, the human filter must still be present in all of this. And even with Mosaic, although it's intended to correct a lot of these problems, the human being still has to be a part of it.
The governance citations are the sources that I would present, the ones that actually explain where the information is coming from. And you would get all of these things, the JSON, the answer construction, the context, the governance citations, and the fact citations, with a single answer.
Now, before I end, I just wanted to say that it would be great for companies to adopt technology like this.
This is not intended to be a solution to the overall problem, but I see Mosaic as sort of a conscience companion. It's something that assists the conscience in looking at information and evaluating whether or not it is epistemologically sound.