Roman Ishchenko explains how an AI-driven recruiting system can understand context, use advanced data models, and still preserve a human-centered, fair approach to evaluating candidates.
As artificial intelligence becomes deeply embedded in everyday work processes, one question grows more pressing: how do we preserve the human side of decision-making? Nowhere is this tension sharper than in hiring, where an algorithm's mistake can directly affect someone's career. Automation can analyze vast amounts of data, uncover hidden patterns, and streamline workflows, but its purpose is not to replace human judgment. Its purpose is to strengthen that judgment, making decisions more informed, fair, and context-aware.
That is the challenge Raised AI is built to solve. The company develops an intelligent hiring engine that helps organizations identify truly relevant, high-performing candidates by modeling roles, enriching fragmented data, and reducing the biases that often distort traditional recruitment.
We spoke with Roman Ishchenko, the founder of Raised AI and a technical and mathematical expert who designed the core architecture of the platform, from the structure of the matching process to the data models and the interaction between AI components. In this interview, Roman explains how to build AI that doesn't lose sight of what matters most, the person, and why this balance between technology and values is shaping the future of hiring.
Roman, why did you choose recruiting as the industry in which to apply advanced AI? There are so many fields where AI could be used; why this one?
It actually happened quite organically. I started with a simple observation: hiring is one of the most information-heavy processes inside any company, and yet it has almost no tooling capable of understanding the complexity of that information.
A recruiter sees a résumé and a job description. But beneath that, there's a whole world: the skill stack the candidate likely used, the industry they worked in, how teams at their previous company are structured, whether that company recently changed direction, whether certain roles tend to succeed in certain environments, how career trajectories typically evolve. And all of this changes constantly.
It's almost impossible for a human to mentally hold and process all these signals. But for AI, especially modern models, this is exactly the type of problem they're good at: messy, unstructured, high-context data with lots of missing pieces. Once I realized that, recruiting became a very natural domain to focus on.
Your system goes beyond reading resumes. What other information do you gather? What kind of data needs to be included for accurate matching?
A lot of what we do is contextual understanding. We don't rely solely on what a candidate writes. We combine that with information about the companies they worked for, the technologies those companies use, their products, their industry dynamics, and recent events that might influence candidate behavior.
For example, if we know a company is building a mobile app using a specific stack, and a candidate was part of the mobile team there, the model can reasonably infer what technologies they likely worked with, even if it wasn't spelled out. Similarly, if there are reliable reports that a company went through a reorganization or layoffs, the system can treat that as a signal of potential job-seeking activity.
A recruiter might be aware of a few such things. AI can process thousands of these signals simultaneously. That's where the real value is: enriching the incomplete picture that candidates and companies naturally provide.
How does this actually work at the technical level? What's happening inside?
We use a layered approach. There's a data foundation that keeps expanding: résumés, job descriptions, client feedback, recruiter notes, previous hiring outcomes, and a lot of company-level intelligence that we continuously enrich.
On top of that, we run models that break each job down into elementary criteria such as responsibilities, scope, seniority, technical skills, domain context, and so on. For each of those criteria, we fine-tune AI components that score candidates individually. These scores then feed into a ranking pipeline that produces an overall match score with explanations.
Over time, the system keeps improving: as recruiters accept or reject profiles, the model adjusts. After a few iterations, it starts to understand the nuances of a particular role or a recruiter's preferences.
So it's not one model; it's an ecosystem of models, each doing a specific part of the reasoning.
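The scoring-then-ranking idea Roman describes can be illustrated with a minimal sketch. The criterion names, weights, and scores below are all hypothetical; in the real system each criterion is scored by a separately fine-tuned AI component rather than supplied by hand.

```python
# Minimal sketch: per-criterion scores combined into a weighted overall
# match score with short explanations. Weights and names are made up.

CRITERIA_WEIGHTS = {
    "responsibilities": 0.25,
    "seniority": 0.20,
    "technical_skills": 0.35,
    "domain_context": 0.20,
}

def overall_match(criterion_scores: dict) -> tuple:
    """Combine per-criterion scores (0..1) into one weighted match score,
    returning a per-criterion explanation alongside the number."""
    total = 0.0
    explanations = {}
    for criterion, weight in CRITERIA_WEIGHTS.items():
        score = criterion_scores.get(criterion, 0.0)
        total += weight * score
        explanations[criterion] = f"scored {score:.2f} with weight {weight:.2f}"
    return total, explanations

score, why = overall_match({
    "responsibilities": 0.8,
    "seniority": 0.9,
    "technical_skills": 0.7,
    "domain_context": 0.6,
})
print(f"{score:.3f}")  # prints 0.745
```

A fixed weighted sum is the simplest possible ranking stage; the point is only that each criterion is scored independently before anything is aggregated, which is what makes per-criterion explanations possible.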
You have a PhD in a very technical field, and your academic research focused on graph theory and complex systems. How did that background help you build an AI-driven recruiting system?
It helped more than I expected; graph theory turns out to be surprisingly relevant to hiring. A hiring ecosystem can naturally be viewed as a graph: a candidate connected to skills, skills connected to technologies, technologies connected to companies, and companies connected to industries. When you're missing one part, the structure around it often tells you what's likely true.
So in a funny way, the foundations I worked on during my PhD ended up mapping very naturally onto how we think about the hiring ecosystem today. It gave me a way of seeing hiring not as a static résumé-versus-job-description process, but as a dynamic network of signals you can analyze, reconstruct, and understand at scale.
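The graph view above, where missing information is filled in from neighboring nodes, can be sketched in a few lines. Every candidate, company, and technology name here is invented for illustration; a real implementation would work over a much larger, continuously enriched graph.

```python
# Toy illustration of the graph idea: when a résumé omits technologies,
# edges through the candidate's former company suggest what is likely
# true. All nodes and edges below are hypothetical.

# candidate -> companies they worked at
worked_at = {"candidate_a": ["acme_mobile"]}
# company -> technologies its teams are known to use
company_stack = {"acme_mobile": ["kotlin", "swift", "graphql"]}

def infer_likely_skills(candidate: str, stated_skills: set) -> set:
    """Return technologies the candidate plausibly used but did not
    list, inferred from the stacks of companies they worked at."""
    inferred = set()
    for company in worked_at.get(candidate, []):
        inferred.update(company_stack.get(company, []))
    return inferred - stated_skills

print(sorted(infer_likely_skills("candidate_a", {"kotlin"})))
# prints ['graphql', 'swift']
```

This is the same inference pattern as the mobile-team example from earlier in the interview: the candidate only listed one technology, but their company's known stack implies two more.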
When we talk about evaluating people with AI, fairness becomes a crucial concern. How do you ensure that it doesn't introduce bias?
We take a very strict approach. The models never see information that could trigger bias: no names, no photos, no gender markers, no age hints, no addresses, no dates that could imply age. All of that is removed before the data reaches any scoring model.
The system looks only at professional criteria: skills, responsibilities, level of ownership, technologies, scope of past roles. This already removes much of the bias that exists in traditional hiring. Humans, even unintentionally, can be influenced by irrelevant signals. AI can be structurally prevented from seeing them.
We also run fairness evaluations regularly. If we ever see differences in outcomes between groups with equal professional profiles, we retrain with constraints. Fairness isn't a marketing line for us; it's an operational requirement.
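The redaction step described above, stripping bias-triggering fields before a profile reaches any scoring model, can be sketched as a simple filter. The field names here are hypothetical, and a production system would also need to scrub free text, where dates and locations can hide inside descriptions.

```python
# Hedged sketch of pre-scoring redaction: remove fields that could
# trigger bias, keep only professional criteria. Field names are
# illustrative, not the platform's actual schema.

PROTECTED_FIELDS = {
    "name", "photo_url", "gender", "birth_date",
    "address", "graduation_year",
}

def redact_profile(profile: dict) -> dict:
    """Return a copy of the profile with protected fields removed."""
    return {k: v for k, v in profile.items() if k not in PROTECTED_FIELDS}

raw = {
    "name": "Jane Doe",
    "birth_date": "1990-04-01",
    "skills": ["python", "sql"],
    "responsibilities": "led a data team of five",
}
print(redact_profile(raw))
# only "skills" and "responsibilities" survive
```

Structural prevention of this kind is exactly what distinguishes the approach from asking a model to "ignore" sensitive attributes: the attributes simply never reach it.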
What makes Raised AI's data foundation unique?
A lot of AI tools rely solely on public information. That's useful, but shallow. We combine public data with a very large proprietary layer: historical placements, recruiter decisions, client feedback, interaction patterns, anonymized communication data, interview summaries, and the outcomes of past hiring cycles.
This gives the system a much deeper understanding of what "success" looks like in different contexts. Over time, the model learns not just to identify skilled candidates, but to identify candidates who thrive in certain environments. That's something you only get when you close the loop between matching, outcomes, and learning.
And beyond matching, does Raised AI also automate operational tasks?
Yes, we built a communication layer that adapts messaging to candidates, drafts outreach, creates follow-ups, summarizes meetings, and even schedules interviews automatically once availability is confirmed. The goal is to let recruiters focus on judgment and relationships, not repetitive tasks.
You can think of it this way: the AI handles the execution; the human handles the decisions.
What's your broader vision for all this? What direction do you see your company going?
The long-term goal isn't only to automate tasks or make hiring faster. It's to fundamentally raise the level of intelligence in the process: to make it more informed, more fair, more context-aware, and far more precise than it has ever been.
Hiring today is still dominated by intuition. And intuition is valuable, but it shouldn't carry the entire weight of the decision. My aim isn't to replace that human judgment, but to support it with an AI layer that understands real-world context, connects all the hidden dots, and brings structure to something that has historically been very unstructured.
Ultimately, I see this platform becoming a leading hiring engine for companies: a core layer in how organizations understand talent, make decisions, and build teams. We're trying to define what recruiting should look like in the AI era: efficient, fair, deeply informed, and built around humans doing the things only humans can do. If we can pave that path, I think we can help reshape how hiring works globally.