Resources
A curated collection of AI tools, policies, guidance, and training available to the Emerson community.
AI Tools & Software
Enterprise-licensed tools available to all Emerson students, faculty, and staff.
Google Gemini & NotebookLM
Google’s AI assistant for text generation, brainstorming, summarizing, and more. NotebookLM lets you upload documents and ask questions about them. Both are covered under Emerson’s Google Workspace privacy protections.
Adobe AI in Creative Cloud
Generative Fill, text-to-image, speech enhancement, AI rotoscoping, and more across Photoshop, Premiere Pro, Illustrator, and other Creative Cloud apps. Standard features are included in Emerson’s license.
Zoom AI Companion Meeting Summaries
AI-generated meeting recaps available to all Emerson Zoom users. Host-enabled, audio-only analysis, with privacy-focused defaults.
Standards & Policies
Guidance for safe and responsible AI use at Emerson.
AI Use Standards & Safety Guide
Practical rules for using AI at Emerson: what’s encouraged, what’s not allowed, privacy expectations, and disclosure requirements.
AI Tools Used at Emerson
A comprehensive table of AI tools reviewed by IT, including support status, access method, training data policies, and accessibility compliance.
Artificial Intelligence in the Classroom
Faculty guidance on managing AI use in coursework, including the AI Assessment Scale and syllabus policy recommendations.
Training
AI literacy resources for the Emerson community.
Emerson College Library: AI Literacy Guide
A curated library guide covering AI fundamentals, research tools, and critical evaluation of AI-generated content.
Video Series
A video training series dedicated to Emerson College’s AI principles, software, resources, and guidelines.
Chapter 1: What AI is and How it Works
Let’s talk about what AI actually is, because the term gets used loosely and it helps to know what’s going on under the hood. When people say AI today, they’re usually talking about large language models, or LLMs, a form of generative AI. These are tools like Google Gemini, ChatGPT, and Claude. You give them a prompt (a question, a task, an instruction) and they generate a response in natural language. They’re trained on massive amounts of text, such as books, websites, and code, and they work by predicting the next most likely word in a sequence based on your prompt and that training. That’s why they can sound very confident but still be wrong: they’re generating text that’s statistically likely, not text that’s verified to be true.

In that sense, LLMs are not search engines, even if some can perform searches. If you need a specific, verifiable fact, like Emerson’s commencement date or a policy number, use a search engine, or make sure the LLM provides a link or citation you can verify. If you need help writing, brainstorming, summarizing, or organizing ideas, that’s where LLMs shine. They’re not great at retrieving real-time data, and they can hallucinate, meaning they can generate information or citations that sound real but aren’t. So the takeaway: think of AI as a conversation partner, not a search engine. You drive the conversation, you verify the output, and you stay in control.
Chapter 2: Guiding Principles for AI at Emerson
Emerson has adopted five guiding principles for AI. These are intended to organize the way we approach these tools as a creative institution.

Principle one: story comes first. Human imagination should remain the origin of all creative work. We will strive to choose technologies, emerging and legacy alike, based on their suitability to advance the human creator’s vision. Decisions, discernment, and accountability remain ours. We accept full responsibility for the integrity of the final work regardless of the tools used.

Principle two: critical engagement. While AI content generation is comparatively faster and easier, the value of human judgment remains paramount. High-value output requires skepticism and rigor, including verifying sources, questioning biases, addressing ethical concerns, and refining output that is otherwise generic or incorrect. We commit to providing students the necessary context to engage in the debates surrounding AI’s development, use, and governance.

Principle three: transparency and integrity. We are transparent about the use of AI in our creative processes. We do not present AI-generated creative content as entirely human-produced work.

Principle four: career readiness. Preparation for the job market increasingly requires familiarity and engagement with AI and emerging technologies. We commit to providing every student with the opportunity to develop AI literacy tailored to their respective professional fields.

Principle five: protect privacy. We prohibit the input of sensitive, confidential, or proprietary college data into public or unauthorized AI systems. College data is classified in the data governance policy.

These principles are not static. We will review and update them to remain aligned with technological and professional shifts and ethical concerns. Questions and comments may be sent to a emerson.edu.
Chapter 3: Writing A Good Prompt
People using AI for the first time often have the same experience: they open a chat and don’t quite know what to say or how to say it. Let’s talk about how to write a good prompt.

One of the most common mistakes people make is treating AI like a search engine. You type, “Write me a paragraph about our program,” and you get back something generic that sounds like it was written by nobody, for nobody. Think of it this way: if you gave that same vague request to a colleague with zero context, you would also get a bland result. Give it a starting place and a clear goal. Instead of “write me a paragraph about our program,” try “I’m writing a description for our website. Here’s what the program does. Here’s who it’s for. And here’s a paragraph from last year’s version that needs to be updated. Give me a revised draft in a professional but approachable tone.” You can upload screenshots, past drafts, meeting notes, past emails you like the tone of, rubrics, style guides, anything that shows it what good looks like for your specific situation. The more relevant context you provide, the less generic the output. AI performs dramatically better when you give it something to build on. Just remember the privacy rules from our other training: no confidential data, no student records, and no proprietary information in public tools.

Let’s run through some quick dos and don’ts. While these are powerful tools, don’t overestimate their abilities. For example, say you had two spreadsheets and wanted to compare them. An LLM will happily try, but the likelihood that it gets it exactly right is very low. You’d be better off asking it how to do that comparison in Excel and then doing it yourself with a formula that’s reliable and repeatable. AI is great at teaching you how to use your existing tools more effectively. Don’t accept the first response and move on. Push back. Say, “That’s too formal,” or, “Cut it in half,” or, “Here’s what you got wrong.” The best results almost always come from the second or third exchange, not the first. Do give it a role when it helps: “You’re an admissions counselor writing to a prospective family” works better than “write something welcoming.” Do tell it what you don’t want; constraints are just as useful as instructions. Give it context: who you are, what this is for, who the audience is. Give it a clear goal: what you want back and what format it should be in. And then treat the output as a starting point, not a finished product. The people getting the most out of AI know that good results require determination and detailed critique.
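To make the spreadsheet example concrete: a deterministic, repeatable comparison is a few lines of code or a formula, and you can ask an AI to help you write it rather than having it guess at the data. Here is a minimal sketch using Python’s standard csv module; the file names and the “id” key column are hypothetical examples, not real Emerson files.

```python
import csv

def diff_csvs(path_a, path_b, key="id"):
    """Return {key: (row_a, row_b)} for rows that differ between two CSV exports.

    Rows present in only one file appear with None on the missing side.
    """
    def load(path):
        # Index each file by its key column for a reliable row-by-row match.
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    a, b = load(path_a), load(path_b)
    # Union of keys catches rows added or removed, not just rows edited.
    return {k: (a.get(k), b.get(k))
            for k in sorted(a.keys() | b.keys())
            if a.get(k) != b.get(k)}
```

Point it at two exports of the same sheet and it reports exactly which rows changed, every time, which is the kind of reliability an LLM eyeballing pasted data cannot promise.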
Chapter 4: Thinking Critically About AI Output
AI can produce impressive-looking results, but looking impressive and being accurate are two different things. Let’s talk about the ways AI can go wrong and how to protect yourself.

Hallucinations. LLMs will sometimes fabricate facts, citations, or data that sound completely real. They’ll cite a paper that doesn’t exist, give you a statistic with no source, or describe an event that never happened. The models are improving over time, but hallucination is an inherent byproduct of how the technology works. You always need to verify.

Sycophancy. Research shows that AI models have a strong tendency to agree with whatever you say, even when you’re wrong. In one study, when people described clearly problematic behavior, LLMs told them they were in the right about 86% of the time, compared to only 39% from human reviewers. The models want to please you, which means they’ll reinforce your assumptions instead of challenging them.

Writing with AI. Unedited AI writing tends to hedge, pad, and over-format. It uses vague adverbs and forced parallelism, avoids taking a position, recaps what it just said, leans on phrases like “it’s important to note,” and treats every topic like a paradigm shift. If you’re using AI to draft something, read it critically: does it actually say anything specific, or does it just sound like it does?

AI output is a starting point. You produce the final product: verify facts independently, push back when the model agrees too easily, and edit aggressively. That’s what critical engagement actually looks like in practice.
Chapter 5: AI Tools at Emerson: Supported and Unsupported
Let’s talk about which AI tools are available at Emerson, and which are supported versus unsupported. Emerson maintains a list of AI tools that have been reviewed. You can find it on our support site, but not everything on that list is officially supported or licensed by the college. The tools we officially support, meaning they’re licensed under enterprise agreements with privacy protections, are Google Gemini, Google NotebookLM, Adobe Firefly, and Zoom AI Companion. These are available to all Emerson users through your institutional login, and your data in these tools is not used to train AI models.

Other tools, like ChatGPT, Claude, and Grammarly, are listed because people use them and we want you to know how to use them safely, but they’re not institutionally supported. That means there’s no enterprise agreement, and your data may be handled differently. If you use them, be extra cautious not to submit any confidential or sensitive data, and check each tool’s privacy settings. For example, in ChatGPT, make sure you disable “improve the model” in your settings.

The key distinction: supported tools have enterprise-level privacy protections; unsupported tools can still be useful, but you need to exercise more caution with what data you put in. When in doubt, use the supported option. Check the AI tools page on our support site for the full list, privacy details, and links to each tool’s documentation.
Chapter 6: Google Gemini & NotebookLM
Emerson provides two Google AI tools for everyone: Gemini and NotebookLM. Both are covered under our Google Workspace for Education agreement, which means your conversations are private and not used to train Google’s models. Let’s look at what each one does.

Gemini is Google’s general-purpose AI assistant. Go to gemini.google.com and make sure you’re signed in with your Emerson account. You can use it to generate text, brainstorm ideas, summarize information, draft emails, and more. You’ll see several available models: Fast for everyday questions, Thinking for more in-depth or complex tasks, and Pro for advanced math and coding. You will also see options to generate music and images. Video generation and other Gemini Pro features, such as Google Drive and Gmail integrations, are not currently available under the campus-wide license. Gemini conversations are retained for three months and then automatically deleted; per Google’s policy, you can’t manually delete them before that window.

NotebookLM is different. It’s a personal research assistant: you upload your own documents, PDFs, notes, and reports, and then ask questions about them. It’ll generate summaries, study guides, outlines, and FAQs based on the content you provide. It’s especially useful for working through long documents or preparing for a project. If you need help getting started, check our support article on Gemini and NotebookLM or email the help desk.
Chapter 7: Zoom AI Companion: Automated Meeting Summaries
Zoom AI Companion meeting summaries may be used by all Emerson Zoom users. This feature can generate a written recap of your meetings automatically, but it’s optional and under your control. Here’s how it works. When you host a meeting, you’ll see a prompt asking if you want to enable AI Companion. If you turn it on, the AI will generate a summary of the meeting based on the audio transcript, covering topics discussed, key points, and action items. The summary is delivered to you after the meeting through Zoom’s web portal, and you decide whether to share it with participants.

We’ve configured this with privacy in mind. The AI only listens to spoken content; it does not analyze screen shares, chat messages, or files shared during the meeting. Live AI Companion questions, where participants could ask the AI questions during the meeting, are turned off, and summaries are automatically deleted after 30 days. You can adjust your personal settings at emerson.zoom.us under Settings, then AI Companion. From there you can turn off AI Companion entirely, disable the reminder prompts, choose whether summaries are automatically shared, and pick a summary template format. Zoom does not use your meeting content to train its AI models. Take care with meetings involving sensitive topics, where summaries may not be appropriate. But for regular meetings, it’s a great assistant.
Chapter 8: Adobe Firefly and AI Features in Creative Cloud
Adobe has built AI capabilities directly into Creative Cloud, and Emerson’s campus-wide license provides a strong set of generative features. Let’s look at what’s available across Photoshop, Illustrator, Premiere Pro, After Effects, Lightroom, InDesign, Firefly, and Adobe Express. You have access to tools like Generative Fill, Generative Expand, text-to-image, speech enhancement, AI rotoscoping, and more. They’re included in our institutional license, and everyone may use 1,250 credits monthly. A credit is used each time you use a generative feature; for instance, generating one image consumes one credit. There are premium features that do not come with the institutional license, including video generation, partner AI models from Google and OpenAI, and audio translation. If you want the newest features, install the beta versions of Photoshop, Premiere Pro, and Illustrator through Creative Cloud. The support article has a clear table showing exactly what’s included and what’s not.

One important detail: Adobe’s generative models are trained on licensed Adobe Stock content, public domain materials, and openly licensed data. They do not use Emerson user content to train their models. Your files, images, and projects in Creative Cloud remain private. Check the Adobe AI tools article on our support site for the full breakdown and installation instructions.
Chapter 9: AI Use Standards & Safety
Let’s talk about the practical rules and safety guidelines for using AI at Emerson. These generally apply to staff using AI in their day-to-day business work, but the principles are relevant to everyone. AI can help you brainstorm, summarize, draft, and analyze, but it shouldn’t do your job for you. That means you should not let AI fully draft, approve, and send communications without your input and oversight, you should not let AI attend meetings you weren’t in, and you should not use it as a substitute for job duties that require your judgment.

Do not enter personally identifiable information, confidential college data, or sensitive information such as passwords into any AI tool, even the supported ones. For Emerson-managed tools like Gemini and NotebookLM, your conversations are private and not visible to Emerson administrators, but that still doesn’t mean you should submit restricted data.

If AI contributed substantial original language or ideas, in other words, content you wouldn’t have been able to produce without it, disclose it. If you used AI the way you’d bounce ideas off a colleague, such as for brainstorming or refining your own writing, no disclosure is needed. Finally, use reputable tools: stick to vetted platforms. If you or your department wants to adopt a new AI tool for broader use, please submit it to IT security and procurement for review, just like any other application. For more, review the full standards guide on our support site.
Chapter 10: AI in the Classroom
At Emerson, faculty manage the role of AI in their classrooms. One of the most common questions we hear is, “How do I set clear expectations for students?” The AI Assessment Scale gives you a framework for doing exactly that.

The scale, developed by Perkins and Furze, has five levels. Level one is No AI: students do the work entirely on their own. Level two is AI Planning: students can use AI for brainstorming or outlining, but the submitted work is theirs. Level three is AI Collaboration: students can use AI for drafting and feedback, but they must critically evaluate and modify what it produces. Level four is Full AI: students can use AI extensively, and the focus shifts to how well they direct it. Level five is AI Exploration: students use AI creatively to solve problems and generate novel ideas.

You can apply this at the course level or assignment by assignment. A reflective essay, for example, might be level one. A research summary where students can use AI to brainstorm might be level two. A prompt engineering assignment where the grade is based on process and critique could be level five. The key is being explicit on your syllabus or in each assignment description so students know the expectations. The college encourages pedagogical integration of AI through its career readiness and innovation goals but does not require it; every discipline approaches this differently. Resources are available through the Office of Academic Assessment, the Iwasaki Library, ITG, the Teaching Hub, and IT. Our classroom AI guidance page has the full scale with student-friendly language for each level. This video training was produced with the assistance of AI tools.
