AI Projects Domain | NRIC 2026

Submit a focused AI project with clear evidence and governance.

NRIC 2026 accepts applied AI projects in healthcare, biomedical sciences, and medical education. Coding is not required for every track, but each submission must define a specific medical problem, a substantive AI pathway, and declared-track execution evidence. Submission alone does not guarantee presentation at the conference.

Max projects per individual: 1
Declared track model: A / B / C
Word limit (incl. spaces): 500
Submission deadline: TBA (to be announced)
Eligibility Requirements

Projects must meet these criteria before scoring.

One project per individual

Each individual may submit one AI project through the conference profile portal. The submitter is treated as the presenting author.

Eligibility and authorship are checked

Presenting author must be an undergraduate MBBS student. Technical co-authors are allowed, but all members must be declared at submission time.

Evidence and integrity screening

All submissions undergo double-blinded review and screening for AI-generated scientific content and plagiarism. Required summary sections, declared-track evidence, and integrity disclosures are mandatory.

Track Declaration

Declare the track that matches your evidence.

Track A - Concept

Coding is not required, but the submission must provide a fully specified workflow with clear inputs, outputs, AI logic, and a credible validation and deployment plan.

Narrative ideas without workflow specificity do not meet the Track A standard.

Track B - Prototype

For no-code, low-code, or UI/UX builds. Submit functional evidence: clickable prototype, no-code build, or demo video up to 1 minute.

Static wireframes and screenshots without functional context do not meet the Track B standard.

Track C - Implementation

For programming or ML teams. Submit a functional model or tested pipeline with documented performance metrics and a demo video up to 1 minute.

Untested or undocumented code does not meet the Track C standard.

Scientific Governance Model

Framework for Abstract Triage, Evaluation and Harmonisation (FATEH).

AI projects enter the same FATEH governance pathway, so routing, scoring, and committee adjudication follow a consistent scientific standard across the conference cycle.

Triage to Evaluation to Harmonisation
Triage

Declared-track integrity, eligibility, required sections, and governance disclosures are validated before scoring begins.

Evaluation

Reviewers assess clinical value, AI methodology, execution evidence, and policy alignment using the scientific rubric.

Harmonisation

The committee reconciles reviewer outputs, resolves variance, and confirms final poster communication for the programme.

Mandatory Summary Structure

Every project summary must cover all 11 headings.

a) Problem Statement

Define a specific unmet clinical or healthcare workflow problem.

b) Target User and Deployment Context

State who uses it, where, and under what constraints.

c) Justification for AI

Explain why AI is appropriate compared with simpler tools.

d) AI Methodology

Describe framework/architecture with technical specificity.

e) Data Source and Input-Output

Specify data origin and precise system inputs/outputs.

f) Execution Evidence

Provide track-aligned evidence: blueprint, prototype, or metrics.

g) Validation Plan or Results

Tracks A/B require robust proposed validation; Track C requires empirical results.

h) Real-World Applicability

Address workflow integration, scalability, and infrastructure.

i) Limitations

Provide specific technical, clinical, and practical limitations.

j) Ethics and Data Governance

Cover privacy, consent, bias, misuse risk, and human oversight; Track C requires an approved ethical statement where applicable.

k) Originality and AI-Use Disclosure

Disclose originality and any generative AI tool usage accurately.

Scoring Model

Scoring follows a structured review model.

Layer 1

Peer review score out of 30 across six criteria (1-5 each), judged against your declared track standard.

Layer 2

Execution bonus out of 10: Track A (+0), Track B (+5), and Track C (+10), granted only when execution evidence is valid for the declared track.

Track integrity check

If the declared track and the submitted evidence do not match, the Scientific Committee may reclassify the project before scoring (for example, a Track C submission with no functional model may be moved to Track A and lose the execution bonus).
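The two-layer scoring model above is simple arithmetic: six criteria scored 1 to 5 each (maximum 30), plus a track-based execution bonus granted only when evidence is valid. A minimal sketch of that calculation, with invented example scores purely for illustration:

```python
# Illustrative sketch of the two-layer scoring model (not official committee code).
# Structure comes from the published rules: six criteria scored 1-5 (Layer 1, max 30)
# plus an execution bonus of +0/+5/+10 by track (Layer 2), granted only when the
# execution evidence is valid for the declared track.

EXECUTION_BONUS = {"A": 0, "B": 5, "C": 10}

def total_score(criteria_scores, track, evidence_valid):
    """Return Layer 1 (sum of six 1-5 criterion scores) plus Layer 2 (track bonus)."""
    assert len(criteria_scores) == 6, "exactly six criteria are scored"
    assert all(1 <= s <= 5 for s in criteria_scores), "each criterion is scored 1-5"
    layer1 = sum(criteria_scores)                       # peer review score, max 30
    layer2 = EXECUTION_BONUS[track] if evidence_valid else 0
    return layer1 + layer2

# Hypothetical example: a Track B project scoring 4 on every criterion,
# with valid execution evidence.
print(total_score([4, 4, 4, 4, 4, 4], "B", True))  # 24 + 5 = 29
```

Note that reclassification (for instance, Track C moved to Track A) is equivalent here to scoring with the new track's bonus, which is why an invalid Track C declaration forfeits up to 10 points.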

Submission Protocol

Review these requirements before upload.

Summary body limit: 500 words including spaces.

One optional figure is allowed and does not count toward the word limit.

Submit via NRIC Conference Profile Portal only; email submissions are not accepted.

Abbreviations are allowed in text only when defined at first mention; do not use abbreviations in title.

Accepted file format: Microsoft Word (.doc or .docx) only.

Title must reflect the AI methodology and the clinical application.

Presenting author must be an undergraduate MBBS student; technical co-authors are permitted.

Presenting author does not need to be first-listed, but must be bold and underlined in the submission document.

All team members must be declared at submission; post-submission author list changes are not accepted.

Include all author names and institutional affiliations in the submitted document.

Use Times New Roman, size 12-14, with 1.5 line spacing.

Filename format: AIProject-[Track]-[FirstAuthorLastname]-[Shorttitle]-[PresentingAuthorLastname].

Presenting author must complete separate conference registration regardless of project acceptance status.

Integrity and Disqualification

Misrepresentation may result in disqualification.

Generative AI policy

Language editing, literature discovery, and code scaffolding are allowed.

Intellectual content cannot be AI-generated in place of team reasoning. False or incomplete disclosures are treated as academic misconduct.

Grounds for immediate disqualification

Detection of substantially AI-generated scientific content.

Plagiarism of text, methodology, or conceptual framing.

Near-duplicate submissions with substantially similar concepts across teams.

Missing mandatory submission components.

Falsified authorship, affiliation, or AI-use declarations.

Submission through unauthorised channels.

Inability to explain or justify the submission during live evaluation.

Deliberate misrepresentation of declared track versus execution evidence.

Final Submission Route

Submit through the portal and prepare for live evaluation.

All projects are judged through NRIC’s Framework for Abstract Triage, Evaluation and Harmonisation (FATEH). Selected projects are assigned poster presentation by the Scientific Committee, and selected presenting authors are informed about poster printing fee instructions separately.

Operational reminder

Late submissions are not accepted under any circumstances.

Approval from all co-authors is required, and the presenting author is responsible for resolving any authorship conflicts.