North Dakota Universities Pass State Version of FERPA

North Dakota higher education leaders have passed a new policy intended to protect student data. The State Board of Higher Education has implemented policy 503.2, the "Student Data Privacy and Security Bill of Rights." The text was initially created by the North Dakota Student Association (NDSA), which is calling the policy the "first of its kind" in the United States.

The goal of the new policy is to put a state spin on federal guidelines set under the Family Educational Rights and Privacy Act. FERPA establishes the right of students 18 and older, and of parents of students under 18, to access the information maintained in education records. FERPA also allows students to challenge the accuracy and completeness of that data and to be notified about the use of personal information in directories.

The policy also includes language on how students' personally identifiable information (PII) can be used by education technology companies and other organizations. The aim is to ensure PII isn't compromised through vendor agreements or through the use of free software in academic settings that captures more than a name or campus-issued e-mail address.

The resolution passed in North Dakota outlined all of those rights. "This Policy reflects the reality that students are the owners of their PII and should control access to and distribution of their PII to the greatest extent possible, but many [North Dakota University Systems] programs and technologies require student PII to function for the students' benefit," the text stated. "This policy outlines student rights related to the privacy and security of their educational and personal data."

Among the stipulations is guidance to faculty: if they choose to use software that doesn't meet the terms set for vendors regarding disclosure of student information, they must provide an alternative at no additional cost that won't require students to disclose their data. In addition, no institution may sell or disclose directory information about students for commercial or advertising purposes.

"The policy was supported by all of the appropriate councils and committees and passed with the support of the full board," said Vice Chancellor Lisa Johnson, in a statement. "I believe this reflects the board and the system's support of our students' needs to address concerns about protecting their privacy of data."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
