Puerto de la Cruz, Tenerife — Spain
A practical guide for youth workers and young people navigating the digital world
Eight days of learning by participants from five countries, distilled into one practical guide. Every chapter covers a topic that matters for anyone working with young people in digital spaces — or navigating those spaces themselves.
In today's rapidly evolving digital landscape, youth workers are increasingly required to guide and support young people as they navigate the complexities of online spaces. Social media platforms, online forums, and digital communities are not just venues for interaction — they are critical spaces where young people's identities, beliefs, and behaviours are formed.
Yet these spaces also present significant challenges. Cyberbullying, misinformation, and unethical online practices can have profound effects on young people's wellbeing. Many youth workers currently lack the skills and strategic understanding needed to moderate these spaces and foster positive digital environments.
This toolkit emerged from an Erasmus+ KA1 mobility project that brought together youth workers and young people from five countries — Malta, Spain, Latvia, Türkiye, and Romania — for eight days of intensive work in Puerto de la Cruz, Spain. The focus was practical: how to navigate digital spaces more responsibly, how to protect young people online, how to build communities that are genuinely inclusive, and how to create digital campaigns that actually change things.
What follows is not a record of those eight days. It is a standalone educational resource — structured so that anyone, whether connected to this project or not, can pick it up and apply it directly in their youth work, their organisation, or their own digital life.
Teaching the rights and responsibilities of online participation, promoting ethical communication, and guiding young people toward safe and responsible digital behaviour.
Developing the capacity to model and teach ethical online conduct — addressing cyberbullying, misinformation, and what it takes to maintain positive digital spaces.
Practical training in social media management, digital campaign development, and the responsible use of AI and digital platforms.
Strategies for conflict resolution, positive communication, and designing online spaces where every participant feels safe, included, and heard.
Forward-looking training on AI, algorithmic influence, and the ethical implications of emerging technologies — so youth workers remain effective guides in a fast-changing landscape.
Citizenship is one of the oldest ideas in political thought. To be a citizen is to belong to a community, to hold rights within it, and to carry responsibilities toward its other members. For most of history, that community was defined by geography. Today, most of us are also citizens of an enormous, borderless community: the internet.
Digital citizenship is the framework for understanding what it means to participate in that community well — with rights, responsibilities, and an awareness of how our behaviour affects others. It is not simply about following rules. It is about developing a conscious, active relationship with the digital world.
A common error is to think about digital rights without their corresponding responsibilities. For every right you hold online, there is a mirror-image responsibility.
Freedom of Expression — You have the right to express your views and contribute to public conversations.
↔ Responsibility: Not to use that freedom to spread hate speech, deliberate misinformation, or content designed to harm.
Privacy — You have the right to control information about yourself and limit who can access your personal data.
↔ Responsibility: To respect others' privacy equally — not sharing their information or images without consent.
Access and Participation — You have the right to access information and participate in online communities without discrimination.
↔ Responsibility: To make those communities welcoming for others, not to create barriers through exclusion or gatekeeping.
Security — You have the right to be protected from online harm, fraud, and exploitation.
↔ Responsibility: Not to engage in behaviours that create those harms for others — phishing, harassment, impersonation.
These four scenarios illustrate the patterns that define good and poor digital citizenship in practice:
A young person sees a peer being mocked in a group chat. Instead of staying silent, they send a private message to the target offering support, then report the behaviour to a moderator.
Upstander behaviour: it costs relatively little but reduces real harm and signals community standards.
Someone shares a viral post with a fabricated quote because it confirms what they already believe — without verifying the source first.
Even without malicious intent, this spreads disinformation. Impact matters as much as intention.
A youth worker running a Facebook group enforces the code of conduct consistently and transparently when a member posts exclusionary comments — for all members, not selectively.
Consistent, transparent moderation creates trust and sets clear expectations for the whole community.
A teenager uses an anonymous account to post abusive comments, reasoning that anonymity means no consequences.
Anonymity does not remove real harm — and does not always protect identity. It also normalises harmful behaviour for everyone watching.
A responsible digital citizen is not just someone who follows rules — they actively shape a better online environment. Every click, share, comment, and silence is a choice. Make it count.
Digital citizenship is not a checklist. It is an ongoing practice of conscious, responsible participation — and it starts with recognising that the internet is a shared space we all have the power, and the responsibility, to make better.
Read each statement honestly. This is not a test — it is a starting point for reflection.
Online safety is often taught as a list of prohibitions: don't share your password, don't talk to strangers, don't click suspicious links. These rules are not wrong — but they miss the deeper point. Real safety online comes from understanding the environment you operate in, knowing what threats exist, how they work, and how to respond confidently when you encounter them.
Use a different, strong password for every account that matters — at minimum, email, banking, and primary social media. A password manager (Bitwarden, 1Password, or built-in browser managers) generates and stores these securely. A strong password is long (15+ characters), random, and varied in character types.
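To show what "long, random, and varied" means in practice, here is a minimal sketch using Python's standard secrets module; the function name and character pool are our own choices, not any specific manager's method:

```python
import secrets
import string

# Character pool: letters, digits, and a few widely accepted symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    """Generate a random password from a cryptographically secure source."""
    if length < 15:
        raise ValueError("Use at least 15 characters for accounts that matter.")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'q7!Rf_2vX#mL9aGt' -- different every run
```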
2FA adds a second verification step after your password — usually a code from an app or SMS. Enable it on every account that offers it. Even if an attacker has your password, they cannot access the account without the second factor. This single step blocks the vast majority of automated account takeovers.
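Those app-generated codes are not magic: they follow an open standard (TOTP, RFC 6238), and seeing the arithmetic demystifies them. A minimal sketch, assuming a base32 "setup key" like the one an authenticator app shows during enrolment (the secret below is a placeholder for illustration):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute the current TOTP code (RFC 6238) for a base32 setup key."""
    key = base64.b32decode(secret_b32.upper())   # the shared secret
    counter = int(time.time()) // period         # completed 30-second steps
    # HMAC-SHA1 over the big-endian counter (RFC 4226 / RFC 6238)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Placeholder secret for illustration only -- use your own app's setup key.
print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code valid for ~30 seconds
```

Because both your device and the server derive the code from the same secret and the current time, a stolen password alone is useless to an attacker.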
Platforms default to sharing more than most users want. Audit yours at least every three months — platforms update their defaults frequently. Key questions: Who can see my profile and posts? Who can find me by email or phone? What am I sharing with third-party apps? What is the platform doing with my location?
Train yourself to pause when a message creates urgency, fear, or excitement before asking you to click or provide information. Legitimate organisations do not ask for passwords. Unexpected prizes are almost always scams. If something feels wrong, verify through official channels before acting.
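One habit worth demonstrating: the familiar words at the start of a link mean nothing, because only the end of the hostname identifies who controls the site. A deliberately naive sketch (it ignores multi-part suffixes such as .co.uk, and the example URL is invented) that pulls out that controlling domain:

```python
from urllib.parse import urlparse

def controlling_domain(link: str) -> str:
    """Return the last two labels of a URL's hostname -- a rough guide to
    who actually controls the site (naive: ignores suffixes like .co.uk)."""
    host = urlparse(link).hostname or ""
    return ".".join(host.split(".")[-2:])

# The link *looks* like a bank login, but the controlling domain is the attacker's.
print(controlling_domain("https://mybank.example.secure-login.attacker.net/verify"))
# -> attacker.net
```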
Cyberbullying is any repeated, intentional harm inflicted through digital devices. What makes it uniquely damaging: it follows the target home, operates 24 hours a day, can spread to thousands instantly, and leaves a permanent record. The psychological impact — anxiety, social withdrawal, depression — is well documented and serious.
Harassment: Repeated hostile messages, comments, or posts targeting a specific person.
Public Humiliation: Sharing embarrassing images, videos, or information about someone without their consent.
Exclusion: Deliberately and repeatedly leaving someone out of online groups or conversations.
Impersonation: Creating fake profiles to pretend to be someone else and damage their reputation.
Most people who witness harmful online behaviour do nothing — not because they approve, but because they don't know what to do, or fear becoming a target. Online, where audiences are large and content spreads fast, silence reads as approval. An upstander chooses to act: messaging the target privately to offer support; reporting the content; adding a supportive comment; or involving a trusted adult when the situation is serious.
When we look out for each other online, we make the whole environment safer. The upstander choice costs relatively little — and means everything to the person on the receiving end of harm.
Online safety is not just a personal responsibility — it is a collective one. Technical tools protect your accounts; awareness protects your judgment; and the choice to be an upstander protects your community.
Ethics is not confined to philosophy seminars. In digital spaces, ethical questions arise dozens of times a day — every time you decide whether to share something, how to respond to someone, or what to do with information you have encountered. The digital world amplifies everything: the reach of truth and the reach of lies alike.
Misinformation is false information spread without intent to deceive — shared in good faith but still harmful. Disinformation is false information spread deliberately to mislead. Both cause damage, but they require different responses.
Why does false content spread so effectively? Research consistently shows that emotionally engaging content — content that makes us angry, frightened, surprised, or amused — spreads faster than accurate but neutral information. Algorithms optimise for engagement, and outrage is enormously engaging. Understanding this mechanism is the first step toward resisting it.
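To make that mechanism tangible in a session, a toy model helps. The one below is entirely our own construction, not any platform's actual algorithm: it ranks posts purely by a crude predicted-engagement score, and accuracy never enters the calculation.

```python
# Toy feed ranker: our own illustration, NOT any platform's real algorithm.
posts = [
    {"text": "Careful 3-source explainer of the new policy", "accuracy": 0.95, "outrage": 0.1},
    {"text": "You won't BELIEVE what they're hiding!!",      "accuracy": 0.20, "outrage": 0.9},
    {"text": "Mildly interesting local news item",           "accuracy": 0.90, "outrage": 0.2},
]

def predicted_engagement(post: dict) -> float:
    # Engagement tracks emotion, not truth: accuracy never enters the score.
    return 0.3 + 0.7 * post["outrage"]

for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f'{predicted_engagement(post):.2f}  {post["text"]}')
# The least accurate, most outrage-inducing post ranks first.
```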
Ask yourself: do I actually know this is true, or does it just feel true? Emotional resonance is not the same as accuracy. Slow down before acting on the impulse to share.
Most misinformation is shared many steps removed from its origin. Is the source a credible organisation, a named journalist, a peer-reviewed study — or an anonymous account or a screenshot of a screenshot?
If something significant is true, multiple credible, independent outlets will be reporting it. Use dedicated fact-checkers: Snopes, Full Fact, AFP Fact Check, or your national fact-checker.
Real images, quotes, and statistics are frequently stripped of context to mean something different. Is this the full picture, or a carefully chosen fragment designed to create a specific impression?
Everything you do online leaves a trace. Every search, like, comment, purchase, and location check-in contributes to your digital footprint — a comprehensive profile of who you are, what you believe, and what you do. This is collected by platforms, sold to advertisers, analysed by algorithms, and in some contexts accessed by employers or institutions.
Your footprint has two parts. Your active footprint is what you deliberately create and share: posts, comments, profile information. Your passive footprint is everything platforms record without you noticing: what you linger on, what you search for, where you are. The passive footprint is often far more detailed — and almost entirely invisible to most users. Long-term consequences are real: offensive comments made at 17 can affect job applications at 27. Screenshots persist long after posts are deleted.
Should social media platforms have the power to remove content and ban users? Who holds those platforms accountable for their decisions?
Is it ethical for algorithms to personalise your information environment, effectively deciding what version of reality you see? Who benefits from this?
If you share misinformation without knowing it is false, are you still ethically responsible for the harm it causes? Does intent change the impact?
Where is the line between a public figure's public role and their right to a private life? Does choosing a public role reduce your claim to privacy?
Ethical digital behaviour is not about perfection — it is about intention, consistency, and accountability. The commitment is not to never make mistakes, but to keep asking hard questions and to correct course honestly when you cause harm.
Positive online communities do not happen by accident. They are built deliberately, consistently, by the people who inhabit them. Whether you are moderating a youth group chat, running an NGO's social media, or simply showing up as a member of an online space, your choices shape the culture of that space.
Inclusion is frequently described as a value. It is more useful to think of it as a practice — something you do, not just something you believe in. Research and practice in online community management consistently point to the same persistent barriers, and to the remedies that work.
The most effective remedies: co-creating community guidelines with members rather than imposing them; regularly checking in with quieter voices; actively elevating underrepresented perspectives; and responding consistently and transparently to violations.
Conflict handled in public turns into performance. Comment sections have audiences, and audiences change behaviour. Take the conversation into a direct message, where both parties can engage without performing for others.
You can disagree strongly with what someone is saying while still treating them with basic dignity. Treating disagreement with an idea as an attack on the person holding it is one of the most common drivers of online conflict escalation.
Even in heated disagreements, there is usually a shared value underneath. Finding and naming it explicitly changes the dynamic: from two people fighting each other, to two people who care about the same thing and disagree on how to address it.
Not every online argument is worth pursuing. Sometimes the most constructive choice is to withdraw, let things cool, and return — or not. Disengaging is not losing; it is a valid conflict management strategy.
Address problematic behaviour early. Apply rules consistently to all members, and explain your decisions. Arbitrary moderation destroys community trust faster than almost anything else.
You don't need to be an admin or a moderator to improve an online space. Every interaction is an opportunity to model the culture you want to see. Small, consistent choices add up to community norms.
Positive online communities are built through consistent, deliberate practice — not good intentions alone. Inclusion is something you do. Communication is a skill you develop. Conflict is something you can learn to navigate rather than avoid or inflame.
Five commitments to bring into every digital space you inhabit.
Online campaigns are among the most accessible advocacy tools available to youth workers and young people today. A well-designed campaign can shift public opinion, build communities around a cause, and hold power to account — without significant financial resources. What it requires is clarity of purpose, strategic thinking, and consistent execution.
"Raise awareness" is where campaigns start — but it is not an objective. Awareness is an intermediate goal. The real question: awareness leading to what? What specific change do you want to see in the world?
"Young people" is not an audience. "16–19 year olds in secondary school who primarily use TikTok and are comfortable in both their national language and English" is an audience — you can design content, choose platforms, and craft messages specifically for them. For each audience segment, think about: Where do they spend time online? What do they already believe about your topic? What barriers might prevent them from taking the action you're asking for?
Instagram / TikTok: Visual storytelling, short video, strong among 13–25. High organic reach for compelling content. Best for: awareness, emotional connection, campaigns with strong visual assets.
Facebook / Groups: Strongest among 25+ and community contexts. Good for: community building, event promotion, reaching parents, professionals, and NGO networks.
LinkedIn: Professional audiences. Good for: campaigns aimed at institutions, employers, or policy audiences. Less suited for youth-facing content.
YouTube: Long-form video. Strong for educational content and campaigns that benefit from depth. Good for reaching people already searching for information on your topic.
The most effective campaigns come from people who are genuinely close to the issue they are addressing. You already have that. Pair it with a clear strategy and you have everything you need to make a real impact.
A campaign is a strategy, not just a post. Objective, audience, message, platform, content, and measurement — six elements that work together. Get them right, and your budget becomes far less important than the clarity of your purpose.
Complete this before creating a single piece of content. Each field is a decision that shapes everything else (a digital version of the canvas is sketched in code after the table).
Objective: Specific, measurable, time-bound
Audience: Age, platform, existing attitudes
Key message: One sentence. Ruthlessly edit.
Call to action: One clear action per post
Platform: Where your audience already is
Content format: Video, graphic, story, text, live…
Timeline: Launch date, key moments, end
Measurement: Must connect directly to your goal
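Teams that plan digitally can mirror the canvas in a simple data structure. The sketch below is our own illustration; the field names follow the canvas above, and the example values are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class CampaignCanvas:
    # Field names mirror the canvas above; every value is a deliberate decision.
    objective: str        # specific, measurable, time-bound
    audience: str         # age, platform, existing attitudes
    key_message: str      # one sentence, ruthlessly edited
    call_to_action: str   # one clear action per post
    platforms: list[str]  # where your audience already is
    formats: list[str]    # video, graphic, story, text, live...
    timeline: str         # launch date, key moments, end
    measurement: str      # must connect directly to the objective

plan = CampaignCanvas(
    objective="200 students sign the pledge by 30 June",
    audience="16-19 year olds on TikTok, national language + English",
    key_message="Your feed is built for outrage: check before you share.",
    call_to_action="Sign the pledge (link in bio)",
    platforms=["TikTok", "Instagram"],
    formats=["short video", "story"],
    timeline="Launch 1 May; mid-point push 1 June; end 30 June",
    measurement="Pledge signatures, not likes",
)
print(plan.objective)
```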
Key terms from this toolkit, defined simply.
A curated directory of free tools, platforms, and organisations to support your work in digital youth spaces — organised by topic.
Have I Been Pwned: The go-to free tool for checking whether your email or password has appeared in a known data breach. Instantly shows which breaches exposed your data and what type of information was leaked — results feel personal and immediate.
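For technically inclined groups, the password check can even be demonstrated from code. The Pwned Passwords range API is built around k-anonymity: only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves the machine. A minimal sketch using only Python's standard library:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how often a password appears in known breaches (0 = not found).

    Only the first 5 hex characters of the SHA-1 hash are sent to the API
    (k-anonymity); the password itself never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a famously breached password; expect a large count
```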
Mozilla Monitor: A free breach monitoring service from Mozilla that alerts you when your personal information appears in new data leaks, and scans for your data on broker sites that sell personal information without your consent.
Google Password Checkup: Scans your saved Google passwords against known breach databases and flags weak, reused, or compromised ones. Takes under a minute — a practical starting point for anyone who uses Google to manage passwords.
Google Privacy Checkup: A guided walkthrough of your Google account's privacy and security settings — location history, ad personalisation, data sharing, and app access. Takes about 10 minutes and often surfaces settings people didn't know existed.
Bitwarden: A free, open-source password manager that generates and securely stores strong, unique passwords across all your devices. Its code is publicly auditable, making it a more trustworthy recommendation than closed commercial alternatives.
Facebook / Instagram Security Checkup: Reviews your login activity, two-factor authentication status, and connected third-party apps. Often reveals forgotten app permissions granted years ago — a useful exercise in awareness.
The simplest and most impactful exercise: search your own name, username, and email address and see what is publicly visible. Try image search too. Often more revealing than expected, and sparks strong discussion about self-presentation and data exposure.
Deseat.me: Connects to your Gmail and scans for every service you have ever signed up to, then helps you delete old accounts systematically. Reducing your account footprint directly reduces exposure in future breaches — most people discover accounts they completely forgot existed.
*Privacy Not Included: Mozilla's research-backed guide that rates the privacy practices of popular apps, platforms, and devices — from dating apps to smart speakers. Each product is assessed on data collection, encryption, and potential for misuse.
JustDeleteMe: A directory that rates how easy or hard it is to delete your account from hundreds of websites, colour-coded from easy to impossible. Highlights how many platforms deliberately make it difficult to leave — a great conversation starter about digital rights and the GDPR.
InVID-WeVerify: A free browser extension for verifying online videos and images — checks metadata, reverse-searches frames, and detects manipulation. Developed as part of an EU-funded research project and widely used by journalists across Europe.
Google Reverse Image Search: Drag any image into Google Images to instantly find its original source, earlier versions, and where else it appears online. One of the most transferable verification skills you can pass on — it takes seconds and can debunk misleading posts on the spot.
Snopes: One of the oldest and most reliable fact-checking sites, covering viral claims, urban legends, and political misinformation with clearly sourced verdicts. A solid reference for showing what rigorous, transparent fact-checking looks like.
Full Fact: An independent European fact-checking organisation that also publishes practical guides on how to verify claims yourself. More relevant to European political and media contexts than most US-focused alternatives, and its free resources translate well into workshop activities.
Claude: A capable AI assistant well suited to creating workshop materials, discussion scenarios, netiquette case studies, and facilitation guides. It handles nuanced and sensitive topics around online ethics thoughtfully — a strong first AI tool to introduce to youth workers new to AI.
ChatGPT: The most widely known AI assistant, useful for drafting session plans, quiz questions, icebreakers, and role-play scenarios. Its widespread adoption among young adults makes it a natural tool to explore critically — participants are very likely already using it.
Perplexity: An AI-powered research assistant that answers questions with cited, linked sources rather than generating unverifiable content. Particularly valuable for demonstrating the difference between AI that references real information and AI that fabricates it.
An AI tool that adapts any topic or article to a chosen reading level or language — extremely useful for making complex cybersecurity and privacy content accessible to diverse groups. A practical tool for youth workers dealing with multilingual or mixed-background participants.
Canva Magic Write: Combines AI writing assistance with Canva's design tools to quickly produce posters, infographics, and presentations on netiquette and online safety topics. The free tier is generous and the interface is fast to learn.
Better Internet for Kids: The EU's official hub for online safety, co-funded by the European Commission and available in all EU languages — a natural fit for Erasmus+ projects. Contains substantial resources for educators, youth workers, and professionals working with young adults.
Safer Internet Centres (Insafe network): A network of national centres across Europe providing local helplines, hotlines, and country-specific resources on online safety. Each centre operates independently and offers support contacts relevant to its national context.
StaySafeOnline: Run by the National Cybersecurity Alliance, this site offers plain-language guides, tip sheets, and infographics on a wide range of online safety topics. Materials are free to download and share — particularly useful for creating handouts for workshops.
Privacy Guides: A community-maintained directory of privacy-respecting alternatives to mainstream tools — browsers, email, VPNs, messaging apps, and more. Useful when participants ask what they should actually switch to after becoming aware of privacy issues. Honest about trade-offs.
Electronic Frontier Foundation (EFF): A leading digital rights organisation that publishes accessible guides on surveillance, privacy, encryption, and online freedoms. Its Surveillance Self-Defense guide (ssd.eff.org) is particularly strong — it adds a rights dimension that pairs well with practical safety tools.
CISA: The US Cybersecurity and Infrastructure Security Agency publishes a large bank of free awareness materials, videos, and tip cards on phishing, ransomware, and account security. While US-focused in places, most of the content is universally applicable.
Youthpass: The Erasmus+ tool for documenting, reflecting on, and formally recognising non-formal and informal learning in youth projects. Essential for participants completing this programme.
SALTO-Youth: The European resource centre for youth work and youth workers, offering toolkits, training calendars, and networking opportunities for Erasmus+ professionals across Europe.

