What the G.U.I.D.E. Framework Actually Is — And Why Ethics Needed a System
A practitioner's guide to the methodology behind the G.U.I.D.E. platform and why its philosophical foundations matter in practice
IntegrityWrx Technology, LLC
4/1/2026 | 6 min read


Most governance frameworks are built backwards. They start with the regulation — a list of requirements an organization must satisfy — and work backward toward documentation and controls. They answer the question: what do we need to produce to demonstrate compliance?
The G.U.I.D.E. Framework was built around a different question: what does a leader need to think through in order to make a genuinely ethical decision about an AI or data system — and how do we operationalize that thinking so it happens consistently, not just when someone remembers to ask?
G.U.I.D.E. stands for Gauge, Understand, Incorporate, Decide, Execute. It is a five-phase ethical decision-making methodology developed by Terynn Hill, founder of IntegrityWrx Technology, LLC, and the operational foundation of the G.U.I.D.E. platform at guidesecurity.tech. This article explains what each phase means, why it is grounded in established ethical theory, and how the platform brings the framework to life for security teams, compliance officers, and executives managing AI and data deployments responsibly.
Why AI Governance Needed an Ethical Framework — Not Just a Checklist
The NIST AI Risk Management Framework (AI RMF 1.0), released in January 2023, represents the most comprehensive U.S. guidance on AI risk management to date. It is rigorous, well-structured, and maps risk management across four core functions: Govern, Map, Measure, and Manage. It is also, by design, non-prescriptive on ethics. It describes what organizations should achieve — trustworthy, fair, transparent, accountable AI — without providing a structured methodology for the ethical reasoning that gets you there (NIST, 2023).
The EU AI Act, which entered into force in August 2024, creates binding obligations around high-risk AI systems — including requirements for risk management, human oversight, transparency, and bias mitigation (European Parliament, 2024). Again, the obligation is described, but the reasoning process — how a leadership team actually works through the ethical dimensions of a specific AI deployment — is left to the organization.
This is the gap G.U.I.D.E. fills. It is the ethical reasoning layer that sits beneath compliance documentation and above technical implementation. It gives leaders a structured, repeatable process for asking the right questions — and for documenting that they asked them. The G.U.I.D.E. platform at guidesecurity.tech operationalizes each phase of the framework into assessments, scoring, and reporting that connect directly to the major regulatory frameworks organizations are required to satisfy.
Phase 1: Gauge — Identifying the Ethical Landscape
The first phase of the G.U.I.D.E. framework asks organizations to identify the ethical dimensions of an AI system or data practice before any other analysis begins. Gauge is the context-setting phase — and it is the phase most organizations skip entirely.
Gauging means asking: who is affected by this system, and how? What are the potential failure modes? Where does the data come from, and what does it encode? What happens to people if the system produces a biased or incorrect output? Is this system making consequential decisions about individuals — in hiring, healthcare, credit, education, law enforcement — where the stakes of error are high?
This phase draws on the virtue ethics tradition, which asks not just 'what should I do?' but 'what kind of institution should we be, and what does this decision say about our character?' (Aristotle, Nicomachean Ethics). The NIST AI RMF's Map function similarly requires organizations to identify AI risk context before any measurement or management takes place — including the social, legal, and ethical dimensions of deployment (NIST, 2023). On the G.U.I.D.E. platform, Gauge is operationalized through an initial project registration and ethical risk identification assessment that surfaces these dimensions systematically.
Phase 2: Understand — Evaluating the Ethical Implications
Once the ethical landscape is mapped, the Understand phase applies the three major ethical lenses — deontology, contractarianism, and virtue ethics — to evaluate the implications of proceeding as planned.
Deontological analysis (rooted in Kant's categorical imperative) asks: are there design choices here that are impermissible regardless of their outcomes? Does this system treat any group of people as a means to an organizational end rather than as an end in themselves? The EU AI Act's prohibition on AI systems that manipulate users through subliminal techniques reflects deontological reasoning in its purest regulatory form (European Parliament, 2024).
Contractarian analysis (rooted in Rawls's veil of ignorance) asks: would the people affected by this system's outputs agree that its design is fair, if they did not know in advance whether they would be advantaged or disadvantaged by it? This is a practical fairness test, not a theoretical one. ISO/IEC 42001:2023 requires organizations to assess the fairness implications of AI systems across affected stakeholder groups — a direct parallel to this contractarian reasoning (ISO, 2023).
Virtue-ethics analysis, introduced during the Gauge phase, continues here with a forward-looking question: is this the decision an institution of good character would make, and would the organization defend it publicly? The G.U.I.D.E. platform's ethical assessment module walks compliance officers and security teams through structured questions across all three ethical dimensions, generating a scored ethical profile for each AI project.
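The platform's actual scoring model is not public. As a purely illustrative sketch, a scored ethical profile could be built as a per-lens average of assessor responses, here assuming scores on a 0-to-5 scale and lens names chosen for this example:

```python
from statistics import mean

# Hypothetical lens names for illustration; not the platform's schema.
LENSES = ("deontology", "contractarianism", "virtue_ethics")

def ethical_profile(responses):
    """Aggregate assessor scores into a per-lens profile plus an overall mean.

    responses: dict mapping each lens to a list of 0-5 assessor scores.
    """
    profile = {lens: round(mean(responses[lens]), 2) for lens in LENSES}
    profile["overall"] = round(mean(profile[lens] for lens in LENSES), 2)
    return profile

p = ethical_profile({
    "deontology": [4, 5, 3],
    "contractarianism": [2, 3, 3],
    "virtue_ethics": [4, 4, 5],
})
# A low score on any single lens (here, contractarian fairness) is visible
# in the profile rather than being averaged away silently.
```

The design point this sketch makes is that the profile stays disaggregated by lens: a fairness problem should surface on its own line, not disappear into a single composite number.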
Phase 3: Incorporate — Connecting Ethics to Regulation and Trust
Understand identifies the ethical dimensions. Incorporate asks: what do we do about them? This phase is where ethical analysis connects to regulatory compliance, organizational policy, and the practical question of whether proceeding with the proposed design — or a modified version of it — is defensible.
Incorporate examines whether choices build organizational trust, align with applicable regulatory frameworks, and prioritize moral obligations over purely financial considerations. This is the phase where the NIST AI RMF's Measure function becomes relevant — where organizations assess their AI systems against quantitative and qualitative standards for trustworthiness, fairness, and security (NIST, 2023).
On the G.U.I.D.E. platform, Incorporate maps assessment outputs against SOC 2, HIPAA, CCPA, FedRAMP, the EU AI Act, NIST CSF 2.0, ISO 27001, and GDPR — giving compliance officers a clear view of which regulatory obligations are implicated by specific ethical findings, and what remediation looks like in regulatory terms.
Phase 4: Decide — Applying Constraints, Making the Call
Decide is the accountability phase. After the ethical landscape has been gauged, the implications understood, and the regulatory context incorporated, someone must make a decision — and that decision must be documented, defensible, and owned by a named individual.
The NIST AI RMF is explicit that AI risk management requires named organizational accountability — not diffuse committee ownership, but identifiable humans who are responsible for specific AI systems and their outcomes (NIST, 2023). ISO/IEC 42001 adds a management system layer, requiring that decisions about AI systems be made within a documented governance structure with executive oversight (ISO, 2023).
Decide applies regulatory constraints and ensures that decisions uphold privacy and security as fundamental rights — not as compliance minimums. On the G.U.I.D.E. platform, this phase produces a structured decision record: what was assessed, what risks were identified, what constraints apply, what decision was made, and who made it. This documentation is the foundation of the audit trail regulators and boards increasingly expect.
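The shape of such a decision record can be sketched as a simple data structure. The field names, example values, and constraint labels below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Illustrative sketch of a structured AI governance decision record."""
    system_name: str
    assessed: list            # what was assessed
    risks_identified: list    # risks surfaced during Gauge/Understand
    constraints: list         # applicable regulatory constraints
    decision: str             # e.g. "approve", "approve with remediation", "reject"
    decision_owner: str       # the named, accountable individual
    decided_on: date = field(default_factory=date.today)

record = DecisionRecord(
    system_name="resume-screening-model-v2",
    assessed=["training data provenance", "disparate impact testing"],
    risks_identified=["age-correlated feature in training data"],
    constraints=["EU AI Act (high-risk: employment)", "internal AI policy"],
    decision="approve with remediation",
    decision_owner="Jane Doe, VP of Engineering",
)
audit_entry = asdict(record)  # serializable form for the audit trail
```

Note that `decision_owner` is a named individual, not a committee: that mirrors the accountability requirement described above.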
Phase 5: Execute — Implementation With Integrity
The final phase is where decisions become systems — and where governance disciplines must continue, not conclude. Execute is about implementing remediation with integrity, ensuring compliance obligations are actively maintained, and establishing the ongoing monitoring that responsible AI deployment requires.
The NIST AI RMF's Manage function recognizes that AI governance is not a point-in-time event. Models drift. Data distributions shift. Regulatory environments change. New failure modes emerge post-deployment that were not visible in testing. Ongoing monitoring — with documented processes for identifying and responding to new risks — is a core governance obligation (NIST, 2023).
On the G.U.I.D.E. platform, Execute is supported by the regulatory intelligence module, which provides AI-powered monitoring of regulatory changes across global frameworks, and by the executive reporting module, which generates board-ready reports with continuity planning recommendations. This closes the governance loop — connecting the initial ethical assessment to ongoing accountability.
Why the Platform Matters
The G.U.I.D.E. framework is a methodology. The G.U.I.D.E. platform at guidesecurity.tech is what makes that methodology operational at scale. Without a structured platform, even well-intentioned governance produces inconsistent results — some projects get thorough ethical review, others get a checkbox, and there is no audit trail to demonstrate due diligence when regulators or boards ask.
The platform integrates ethical AI assessment, network vulnerability scanning, risk quantification (SLE, ALE, and unified risk scores), compliance management across eight major frameworks, and executive reporting — all within the G.U.I.D.E. methodology. It is designed for the security teams, compliance officers, and executives who need to demonstrate not just that their AI systems work, but that they were built and deployed responsibly.
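The platform's unified risk score is proprietary, but the standard quantitative risk formulas it builds on are public: single loss expectancy (SLE) is asset value times exposure factor, and annualized loss expectancy (ALE) is SLE times the annualized rate of occurrence. A minimal sketch of that arithmetic, with made-up example figures:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: expected loss from a single occurrence of a risk event.

    exposure_factor is the fraction of asset value lost per incident (0-1).
    """
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    """ALE: expected loss per year, given how often the event occurs annually."""
    return sle * annual_rate_of_occurrence

# Example: a $200,000 asset, 25% of value lost per incident, 2 incidents/year.
sle = single_loss_expectancy(200_000, 0.25)   # $50,000 per incident
ale = annualized_loss_expectancy(sle, 2.0)    # $100,000 per year
```

The ALE figure is what lets a security team compare the annual cost of a risk against the annual cost of the control that would mitigate it.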
Because in the governance environment of 2026 and beyond, that demonstration is not optional. The EU AI Act is law. The NIST AI RMF is the U.S. standard of practice. ISO/IEC 42001 is the international management system benchmark. And the organizations that can show a structured, documented, ethics-first approach to AI governance are the ones that will navigate the regulatory landscape — and the trust landscape — with confidence.
References
1. Aristotle. (circa 350 BCE). Nicomachean Ethics. (T. Irwin, Trans., 1999). Hackett Publishing.
2. European Parliament. (2024). Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the European Union.
3. International Organization for Standardization. (2023). ISO/IEC 42001:2023 — Information Technology — Artificial Intelligence — Management System. ISO.
4. Kant, I. (1785). Groundwork of the Metaphysics of Morals. (M. Gregor, Trans., 1997). Cambridge University Press.
5. National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1
6. National Institute of Standards and Technology. (2024). The NIST Cybersecurity Framework (CSF) 2.0. U.S. Department of Commerce. https://doi.org/10.6028/NIST.CSWP.29
7. Rawls, J. (1971). A Theory of Justice. Harvard University Press.
© 2026 IntegrityWrx Technology, LLC | G.U.I.D.E. Framework™ created by Terynn Hill | guidesecurity.tech | CyberScope
