Arkansas State, Credo Bring Information Literacy Resources to High School Students

Arkansas State University is partnering with information skills solutions provider Credo to deliver information literacy resources to Arkansas students in grades 7 through 12.

The "Literati High School Partnership," formally unveiled yesterday, will "provide research instruction to high school students to help prepare them for college success, even before they arrive on a campus," according to Credo. Under the terms of the arrangement, Arkansas students will receive access to Credo's School Core Content Collection. That includes "more than 400 e-book titles and millions of reference entries. The collection offers students in grades 7 to 12 a wealth of insightful content covering topics related to secondary school studies, such as biology, physics, mathematics, social sciences, world cultures, world history and more."

"This is the perfect way for the librarians at the Dean B. Ellis Library to expand our work with area high schools," said Jeff Bailey, Library Director at Arkansas State University's Dean B. Ellis Library, in am prepared statement. "At the high school level, the teachers and students are gaining access to all of the great Credo content and support that will help students earn higher grades, graduate and get accepted into college. At the college level, Arkansas State University will have incoming freshmen from those high schools who are better prepared, require less remediation, and are more likely to graduate. Plus, those students already will be familiar with using Literati to help them complete their assignments."

"Programs such as this one will help students hit the ground running once they arrive on campus so they're not overwhelmed by the college academic experience," said Ian Singer, Credo's chief content officer, also in a prepared statement. "We are excited about the potential here, plus we're looking forward to working with Jeff and the team at Arkansas State University. Of great significance, we will be interested in tracking how the students who had access to Literati performed versus those students who didn't."

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

