Honored to Be a 2026 Omnissa Tech Insider (Year Two!)

I’m incredibly grateful to share that I’ve been selected once again as part of the 2026 Omnissa Tech Insiders, my second year in this inspiring community.

This year’s cohort brings together an exceptional group of professionals with deep experience across AI, cloud, security, developer tools, and beyond. The diversity of perspectives, real-world impact, and accomplishments across the group truly impressed me.

Being part of this community has been both energizing and humbling—learning from peers, exchanging ideas, and contributing to conversations that are shaping the future of technology. I’m proud to stand alongside such talented individuals and excited about what lies ahead.

A huge thank you to the Omnissa team and to everyone in this cohort. Congratulations to all the 2026 Tech Insiders—I’m looking forward to another great year of collaboration and growth.

👏

👉 View the full announcement here: https://lnkd.in/etxzrcVS

Creating an AWS Linux System and Using Amazon Polly (CLI, Python, and GUI)


Amazon Polly makes it easy to convert text into natural-sounding speech using AI-powered voices. Whether you prefer clicking through a web interface or automating everything on a Linux server, Polly has you covered.

In this guide, we’ll:

  • Launch an Amazon Linux EC2 instance
  • Use Amazon Polly from the AWS Console (GUI)
  • Generate speech using the AWS CLI
  • Create audio files programmatically with Python

What Is Amazon Polly?

Amazon Polly is a managed text-to-speech service that:

  • Converts text into lifelike speech
  • Supports multiple languages and neural voices
  • Outputs MP3, OGG, and PCM audio formats
  • Requires no infrastructure management

Prerequisites

You’ll need:

  • An AWS account
  • An EC2 key pair
  • Basic Linux knowledge
  • An IAM user or role with Polly permissions

Step 1: Launch an Amazon Linux EC2 Instance

  1. Go to AWS EC2 Console
  2. Click Launch Instance
  3. Choose Amazon Linux 2
  4. Select t2.micro or t3.micro
  5. Allow SSH (port 22) in the security group
  6. Launch the instance

Copy the public IP address once the instance is running.


Step 2: Connect to the EC2 Instance

ssh -i your-key.pem ec2-user@<EC2_PUBLIC_IP>

You are now logged into your Amazon Linux server.


Step 3: Using Amazon Polly via the AWS Console (GUI)

Before touching the command line, let’s explore Polly using the AWS Management Console — this is the fastest way to experiment.

Accessing the Polly Console

  1. Log in to the AWS Management Console
  2. Search for Polly
  3. Click Amazon Polly
  4. Open the Text-to-Speech page

No EC2 instance is required for this step.


Generating Speech in the GUI

  1. In the Text-to-Speech editor, enter your text:
Welcome to Amazon Polly. This audio was created using the AWS Console.

  2. Choose a voice (e.g., Joanna, Matthew)
  3. Select Engine:
    • Standard
    • Neural (more natural, recommended)
  4. Choose Language
  5. Click Listen ▶️

You’ll hear the generated speech instantly.


Downloading the Audio File

  1. Select Output format (MP3 or OGG)
  2. Click Download
  3. Save the file locally

This is perfect for:

  • Testing voices
  • Demos and presentations
  • Content creation workflows

Using SSML in the GUI (Optional)

Enable SSML to control speech:

<speak>
  Welcome to <emphasis level="strong">Amazon Polly</emphasis>.
  <break time="1s"/>
  This is an example using SSML.
</speak>

SSML allows:

  • Pauses
  • Emphasis
  • Speaking rate control
  • Pronunciation tuning

Step 4: Configure AWS Credentials on Linux

Recommended: IAM Role

Attach an IAM role to the EC2 instance with:

  • AmazonPollyFullAccess

No credentials required on the server.

Alternative: AWS CLI Credentials

aws configure

Enter:

  • Access key
  • Secret key
  • Region (e.g., us-east-1)
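
Whichever option you use, it’s worth confirming that the instance can actually reach AWS before moving on:

aws sts get-caller-identity

If this returns an account ID and ARN, your credentials are working.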

Step 5: Using Amazon Polly from the AWS CLI

Generate speech directly from Linux:

aws polly synthesize-speech \
  --voice-id Joanna \
  --output-format mp3 \
  --text "This audio was generated from the AWS CLI" \
  cli-output.mp3

Install an audio player:

sudo yum install -y mpg123
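
If yum can’t find the package, mpg123 typically comes from the EPEL repository on Amazon Linux 2, which you can enable first:

sudo amazon-linux-extras install epel -y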

Play the file:

mpg123 cli-output.mp3
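
Not sure which voice to pick? The CLI can also list the voices available in your region; the engine and language filters below are optional:

aws polly describe-voices \
  --engine neural \
  --language-code en-US \
  --query "Voices[].{Name:Name,Gender:Gender}" \
  --output table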

Step 6: Using Amazon Polly with Python

Install Dependencies

sudo yum install -y python3 python3-pip
pip3 install boto3

Python Script Example

Create the script:

nano polly_tts.py

Add:

import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly using Python on Amazon Linux",
    OutputFormat="mp3",
    VoiceId="Matthew"
)

with open("python-output.mp3", "wb") as file:
    file.write(response["AudioStream"].read())

print("Audio file created: python-output.mp3")

Run it:

python3 polly_tts.py
mpg123 python-output.mp3
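
The SSML shown earlier in the GUI section works from the command line and the SDK as well; you just declare the text type. A quick CLI sketch:

aws polly synthesize-speech \
  --text-type ssml \
  --text "<speak>Hello <break time='500ms'/> from SSML</speak>" \
  --voice-id Joanna \
  --output-format mp3 \
  ssml-output.mp3

In the Python script, the equivalent is passing TextType="ssml" to synthesize_speech.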

Comparing GUI vs CLI vs Code

Method               Best For
AWS Console (GUI)    Voice testing, demos, learning
AWS CLI              Automation, scripting
Python / SDK         Application integration

Security Best Practices

  • Prefer IAM roles over access keys
  • Use least-privilege IAM policies (see the example after this list)
  • Monitor usage with CloudWatch
  • Avoid committing credentials to Git
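
As an example of least privilege, instead of AmazonPollyFullAccess you could scope a policy down to just the calls used in this guide. Run this from a machine with IAM permissions; the policy name is only a suggestion:

aws iam create-policy \
  --policy-name polly-synthesize-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["polly:SynthesizeSpeech", "polly:DescribeVoices"],
      "Resource": "*"
    }]
  }'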

Conclusion

Amazon Polly is flexible enough for beginners and powerful enough for production systems. Whether you use the AWS Console GUI, CLI, or Python SDK, Polly lets you bring natural-sounding speech to your applications quickly and securely.

Once you’re comfortable, you can combine Polly with:

  • S3 for audio storage (quick example below)
  • Lambda for serverless processing
  • Transcribe for full speech workflows
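
For example, pushing the MP3 you just generated into S3 is a one-liner; the bucket name here is a placeholder:

aws s3 cp python-output.mp3 s3://your-audio-bucket/python-output.mp3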

Happy building—and enjoy giving your apps a voice 🔊🚀


Creating an AWS Linux System Running Docker and Managing It with Portainer

Running containers in the cloud doesn’t have to be complicated. In this guide, we’ll walk through creating an AWS EC2 Linux instance, installing Docker, and setting up Portainer to manage containers visually and effortlessly.

By the end, you’ll have a lightweight, production-ready Docker host you can control from your browser.


Prerequisites

Before we begin, make sure you have:

  • An AWS account
  • Basic familiarity with Linux and SSH
  • An EC2 key pair for secure access
  • A local machine with SSH installed

Step 1: Launch an AWS EC2 Linux Instance

  1. Log in to the AWS Management Console
  2. Navigate to EC2 → Launch Instance
  3. Choose an AMI:
    • Select Amazon Linux 2 (recommended for stability and AWS compatibility)
  4. Choose an instance type:
    • t2.micro or t3.micro (free tier eligible)
  5. Configure key settings:
    • Attach your key pair
    • Allow SSH (port 22) in the security group
    • Add port 9000 (Portainer UI) and port 80 if you plan to run web apps
  6. Launch the instance 🚀

Once running, copy the public IPv4 address.


Step 2: Connect to the EC2 Instance

From your local terminal:

ssh -i your-key.pem ec2-user@<EC2_PUBLIC_IP>

If successful, you’ll be logged into your Amazon Linux server.


Step 3: Install Docker on Amazon Linux

Update the system:

sudo yum update -y

Install Docker:

sudo amazon-linux-extras install docker -y

Start and enable Docker:

sudo systemctl start docker
sudo systemctl enable docker

(Optional) Allow your user to run Docker without sudo:

sudo usermod -aG docker ec2-user
exit

Reconnect to apply the changes.

Verify installation:

docker --version

Step 4: Run Docker Containers

Test Docker by running a container:

docker run hello-world

If you see the success message, Docker is working correctly 🎉


Step 5: Install Portainer

Portainer gives you a clean web UI to manage containers, images, networks, and volumes.

Create a Docker volume for Portainer

docker volume create portainer_data

Run Portainer

docker run -d \
  -p 9000:9000 \
  --name portainer \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce

Check that it’s running:

docker ps

Step 6: Access the Portainer Dashboard

Open your browser and go to:

http://<EC2_PUBLIC_IP>:9000

On first launch:

  1. Create an admin password
  2. Select Docker (local) as the environment
  3. Click Connect

You now have full visual control over your Docker host 🎯
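
If you’d like something more interesting than hello-world to look at in the dashboard, start a quick test web server from the host (nginx here is just an example image):

docker run -d --name test-web -p 80:80 nginx

It should show up under Containers in Portainer within a few seconds; open port 80 in your security group if you want to browse to it.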


What You Can Do with Portainer

With Portainer, you can:

  • Deploy containers using forms or Docker Compose
  • Monitor container health and logs
  • Manage volumes, networks, and images
  • Stop, start, or scale services
  • Secure access with user roles

It’s perfect for:

  • Small production workloads
  • Learning Docker visually
  • Managing remote servers with ease

Security Best Practices

Before using this in production, consider:

  • Restricting port 9000 to your IP only (see the example below)
  • Enabling HTTPS with a reverse proxy
  • Using IAM roles instead of access keys
  • Regularly updating Docker and the OS
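
For the first point, if you manage the security group from the AWS CLI, restricting the Portainer port to a single address might look like this (the group ID and IP are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9000 \
  --cidr 203.0.113.10/32

Remember to remove any existing 0.0.0.0/0 rule for port 9000 with revoke-security-group-ingress afterwards.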

Conclusion

By combining AWS EC2, Amazon Linux, Docker, and Portainer, you get a powerful yet simple container platform that scales with your needs. Whether you’re deploying side projects or learning container orchestration, this setup is an excellent foundation.

Happy containerizing 🐳🚀


I Miss vCenter — So I’m Building My Own (in AWS)

I’ve been living in AWS long enough that I’m supposed to have moved on.

I can design multi-account landing zones, argue about Transit Gateways vs. VPC peering, and recite IAM best practices in my sleep. I understand why cloud-native patterns exist. I even agree with most of them.

But if I’m being honest?

I miss vCenter.

The Comfort of a Single Pane of Glass

Back in the vSphere days, vCenter was home base. One UI. One mental model. One place where I could:

  • See all my workloads
  • Understand capacity at a glance
  • Migrate compute without rewriting the world
  • Apply policies consistently
  • Fix problems visually instead of spelunking through APIs

Yes, it was centralized. Yes, it had limitations. Yes, it could be fragile.

But it was coherent.

In AWS, coherence is… optional.

AWS Is Powerful — But Fragmented

Don’t get me wrong: AWS is incredible. The primitives are flexible, scalable, and battle-tested. But as an operator, the experience is scattered:

  • EC2 over here
  • ASGs over there
  • Load balancers somewhere else
  • Metrics in CloudWatch
  • Config in tags (maybe)
  • Inventory split across accounts and regions

The AWS Console isn’t lying to you — but it also isn’t telling you the whole story in one place.

Instead of operating infrastructure, I often feel like I’m assembling context.

What vCenter Got Right

vCenter wasn’t just a hypervisor manager. It was an operations platform:

  • Strong inventory model
  • Clear parent/child relationships
  • First-class lifecycle concepts
  • Human-readable abstractions
  • Predictable workflows

You didn’t need five services and a wiki page just to answer:

“What’s running where, and why?”

So… I’m Building My Own vCenter (Sort Of)

I’m not trying to recreate vSphere in the cloud. That would miss the point.

What I am doing is building a control plane on top of AWS, using its APIs, that gives me back what I miss:

  • A unified inventory across accounts and regions
  • Opinionated metadata instead of tag chaos
  • Clear ownership and lifecycle states
  • Capacity and cost visibility that makes sense to humans
  • Operational workflows that don’t start with “open three consoles”

Think less “hypervisor replacement” and more operator experience layer.

AWS provides the raw materials. I’m just putting a dashboard, model, and brain on top of them.
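
To make that concrete, here’s a rough sketch of the kind of raw material I mean: a flat instance inventory across every region, pulled with nothing but the CLI (the output fields are just the ones I happen to care about):

#!/bin/bash
# Walk every region and print a small, human-readable EC2 inventory.
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
  echo "== $region =="
  aws ec2 describe-instances \
    --region "$region" \
    --query "Reservations[].Instances[].{Id:InstanceId,Type:InstanceType,State:State.Name,Name:Tags[?Key=='Name']|[0].Value}" \
    --output table
done

The real version feeds this into a data model with ownership and lifecycle metadata instead of printing tables, but the point stands: the APIs are all there; the coherent view isn’t.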

Cloud-Native Doesn’t Have to Mean Operator-Hostile

Somewhere along the way, “cloud-native” became synonymous with:

  • More YAML
  • More dashboards
  • More glue code
  • More tribal knowledge

But abstraction isn’t the enemy. Bad abstraction is.

vCenter succeeded because it respected how humans think about systems. AWS succeeds because it gives you freedom. The gap between the two is where a lot of operator pain lives.

That gap is exactly what I’m trying to close.

This Is Not Nostalgia — It’s a Design Problem

I don’t miss vCenter because it was old.

I miss it because it solved real operational problems well.

If we can acknowledge that, we can stop pretending the current state is perfect — and start building better tools on top of the cloud we actually run.

So yes, I’m an AWS admin now.

And yes, I miss vCenter.

That’s why I’m building my own. More to come.

From UAG 23.12 + Tunnel Client 24.05 to “current”: a practical compatibility + migration playbook

If you’re sitting on Omnissa Unified Access Gateway (UAG) 23.12 with Workspace ONE Tunnel clients 24.05, you’re in a common (and totally reasonable) place: stable, known-good, and old enough that newer security defaults, platform support boundaries, and client behaviors can surprise you during an upgrade.

This post walks through:
• what actually changes across newer UAG generations,
• where compatibility issues tend to show up (spoiler: not always where you expect),
• and a clean upgrade path from 23.12 → latest UAG plus 24.05 → latest Tunnel clients with minimal user pain.

The moving pieces you’re upgrading

In a typical Workspace ONE Tunnel deployment with UAG, you’re juggling:
1. UAG appliance version (your “edge”)
2. Tunnel service configuration on UAG (and any related auth/cert/TLS posture)
3. Tunnel client versions (Windows/macOS/iOS/Android/ChromeOS/Linux) distributed via UEM
4. Profiles / payloads (per-app vs full-device, proxy rules, domains, certs, etc.)

When you change (1), you often implicitly change (2) — and that’s where upgrade “compatibility” breaks tend to live.

What’s “latest” right now (and why it matters)

Omnissa’s UAG release notes catalog shows newer trains beyond 23.12, including 24.12, 25.03, 25.06, and 25.06.1. 

On the Tunnel side, Omnissa’s documentation hub continues to publish frequent client updates across platforms (with 2025 updates for several clients). 

So your upgrade isn’t just “one hop.” You’re effectively moving across multiple release trains, which is why a staged approach works best.

Key compatibility “gotchas” when moving off UAG 23.12

1) UAG 23.12 introduced security posture changes that can surface during upgrades

Omnissa called out security enhancements “in UAG 2312 and beyond,” along with remediation guidance for older settings/configurations. 

Why it matters: if your 23.12 deployment was tuned around older TLS/cipher assumptions or legacy settings, later releases can tighten defaults further—turning what used to be “fine” into handshake/auth issues.

Practical takeaway: plan a validation pass specifically for:
• TLS/cert chain correctness (full chain, intermediates; quick check below)
• any custom SSL profiles / ciphers
• SAML/IdP flows (especially with modern browser policies)
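
For the cert chain item, a quick external look at what the UAG actually presents can save a lot of guessing (hostname is a placeholder):

openssl s_client -connect uag.example.com:443 -showcerts </dev/null

Each certificate the appliance sends is printed; if only the leaf shows up, an intermediate is missing from the chain.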

2) Platform support boundaries can force infra upgrades first (vSphere/ESXi)

A notable line that has tripped people up: UAG 24.12 release notes indicate it supports vSphere 7.x and later only, tied to an OS change in the appliance. 

Practical takeaway: before you pick a UAG target version, confirm your hypervisor version. If you’re still on vSphere 6.7, you’ll need to address that first (or choose an older UAG ceiling intentionally).

What changes when you move from Tunnel client 24.05 forward

iOS: 24.05 introduced Full-Device Tunnel mode (MDM enrolled)

Starting with Workspace ONE Tunnel for iOS 24.05, Omnissa introduced Full-Device Tunnel mode on MDM-enrolled devices. 

Even if you don’t enable it, that release boundary is important because it signals a more modern split between:
• per-app tunneling behaviors, and
• device-wide tunneling use cases

Windows: later 25.x updates include “action required” style changes

Omnissa published guidance noting that beginning with Tunnel for Windows client 25.08, Rapid DTR becomes enabled by default and a one-time in-app sync may be required. 

Practical takeaway: for Windows fleets, plan user communications and staged rollout rings, because a “one-time sync” prompt is the kind of thing that spikes helpdesk volume if everyone hits it on the same morning.

A sane upgrade strategy (that avoids the classic outage pattern)

Guiding principles
• Separate appliance upgrades from client upgrades (don’t change everything in one maintenance window).
• Prefer parallel build + cutover over in-place upgrades for major train jumps.
• Treat certificates + TLS settings as first-class migration objects, not “we’ll see if it works.”

Recommended upgrade path: UAG 23.12 → latest UAG (25.06/25.06.1) + Tunnel clients → latest

Phase 0 — Preflight checklist (do this before touching anything)
1. Confirm platform compatibility
• Hypervisor: if you want to go to newer UAG trains, verify you meet the vSphere requirements highlighted for later versions (e.g., UAG 24.12+ requiring vSphere 7+). 
2. Inventory your “edge contract”
• External URL/FQDNs, VIP/LB behavior, ports
• Cert chain + renewal process
• Auth method(s): SAML, RSA, RADIUS, cert auth, etc.
• Tunnel use cases: per-app vs (any) full-device, platform coverage
3. Document current Tunnel profile behaviors
• Split tunnel rules, domains, proxy PAC, bypass lists
• Any app-specific exceptions users rely on

Phase 1 — Get to a modern UAG without changing the client fleet yet

Goal: Stand up the target UAG version in parallel and prove it can serve your existing clients.
1. Select target UAG train
• “Latest” currently includes 25.03 / 25.06 / 25.06.1 listed by Omnissa’s UAG release notes hub. 
• If you have strict change control, pick the newest patch in the train you’re standardizing on.
2. Deploy new UAG(s) in parallel
• Same sizing, same network zones, same LB pattern
• Import/mirror config via your standard method (PowerShell/REST/UI), but do not re-use old mistakes blindly—this is where you clean up old TLS/cert shortcuts.
3. Connectivity validation with current clients (24.05)
• Start with a small pilot group on each platform
• Validate:
• app reachability
• auth flow consistency
• idle/reconnect behavior
• DNS resolution through the tunnel
4. Cutover
• Prefer DNS/LB cutover over changing each device
• Roll back by flipping VIP/DNS back if needed

Phase 2 — Upgrade Tunnel clients in rings (now that UAG is stable)

Goal: Move from 24.05 to current clients with predictable user impact.
1. Define rollout rings
• Ring 0: IT + a few power users
• Ring 1: one department / one region
• Ring 2: broad rollout
2. Platform-specific “watch items”
• iOS: decide whether you’ll use Full-Device Tunnel (introduced in 24.05) or stay per-app; ensure your profiles match that intent. 
• Windows: plan comms around potential one-time in-app sync / Rapid DTR behavior in newer versions (noted for 25.08). 
3. Update profiles only when needed
• If you keep the same tunneling mode and routing rules, you can often upgrade the app without reauthoring the profile.
• If you switch modes (per-app → full-device) treat it like a mini-project: pilot, measure, expand.
4. Observe and iterate
• Look for: auth retries, DNS oddities, app-specific failures, battery/perf complaints (mobile)

Migration “quality of life” tips that prevent 2am surprises
• Don’t skimp on cert chain correctness. Most mysterious “handshake” incidents are just missing intermediates or mismatched cert/key pairs.
• Keep one “known-good” UAG 23.12 instance temporarily (powered off but ready) if your environment allows it—rollback becomes far simpler.
• Upgrade your monitoring along with the edge. If your log parsing assumes old formats/paths, you’ll feel blind right when you need visibility.
• Stagger Windows updates more slowly than mobile. Windows client changes tend to have the most “interaction required” edge cases.


🛡 How to Turn an Alpine Linux Server into a Tailscale Gateway for Your LAN

Why a Tailscale Gateway?

Tailscale normally requires each device to run the Tailscale client. That works fine for laptops, phones, and servers, but what about devices like printers, cameras, or NAS boxes?

With a Subnet Router, a single Tailscale-connected server can act as a bridge to your entire LAN — so any device on your Tailscale network can reach those local-only devices securely.


What You’ll Need

  • A small Alpine Linux server (VM, bare metal, or Raspberry Pi)
  • An active Tailscale account
  • Access to your LAN network (e.g., 192.168.1.0/24)
  • Your Tailscale auth key (from the Tailscale admin panel)

Step 1: Update & Install Tailscale

First, update Alpine and install Tailscale:

apk update && apk upgrade
apk add tailscale tailscale-openrc

Step 2: Enable IP Forwarding

This allows the Alpine box to forward traffic between your Tailscale network and LAN.

Edit /etc/sysctl.conf:

nano /etc/sysctl.conf

Add:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Apply:

sysctl -p

Step 3: Start Tailscale & Advertise Routes

Start the Tailscale service:

rc-update add tailscaled default
rc-service tailscaled start

Now bring Tailscale online, advertising your LAN subnet:

tailscale up \
  --auth-key=tskey-auth-XXXXXXX \
  --advertise-routes=192.168.1.0/24 \
  --accept-routes
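
Before heading to the admin console, you can confirm the node is up from the box itself:

tailscale status
tailscale ip -4

tailscale status lists your connected peers, and tailscale ip -4 prints this node’s Tailscale address.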

Step 4: Approve Routes in Tailscale Admin

Log in to the Tailscale admin console and approve the advertised route for 192.168.1.0/24.


Step 5: (Optional) Adjust Firewall Rules

If Alpine’s firewall is active, you’ll need to allow forwarding:

apk add iptables
iptables -A FORWARD -i tailscale0 -j ACCEPT
iptables -A FORWARD -o tailscale0 -j ACCEPT
/etc/init.d/iptables save

Done! 🎉

Now, any device on your Tailscale network can securely reach devices on your LAN without needing a VPN client installed.

Example:

  • From your laptop on Tailscale, you can hit http://192.168.1.50 to access your NAS dashboard — even from across the world.

Why This Rocks

  • Zero Trust Security — Every connection is authenticated via your Tailscale identity provider.
  • No Port Forwarding — Works through NAT and firewalls.
  • Cross-Platform — Works for Windows, macOS, Linux, iOS, Android, and even cloud VMs.

💡 Pro tip: Combine this with Tailscale ACLs to restrict who can access which LAN devices.
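
As a rough sketch, a policy that only lets one group reach the NAS from the earlier example might look like this (the group name, user, and address are placeholders):

{
  "groups": {
    "group:homelab-admins": ["alice@example.com"]
  },
  "acls": [
    { "action": "accept", "src": ["group:homelab-admins"], "dst": ["192.168.1.50:*"] }
  ]
}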

CrowdStrike MacOS Workspace One Setup

After weeks of troubleshooting, I thought it would be good to document a working process. Hopefully this helps you avoid some of the pain I went through due to a few quirks.

I use a few profiles to enable CrowdStrike with no user interaction needed.

macOS - CrowdStrike - Content Filter

Filter Name: falcon
Identifier: com.crowdstrike.falcon.App
Organization: CrowdStrike, Inc.
Filter Socket Traffic: Enabled
Socket Filter Bundle ID: com.crowdstrike.falcon.Agent
Socket Requirement: identifier "com.crowdstrike.falcon.Agent" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] and certificate leaf[field.1.2.840.113635.100.6.1.13] and certificate leaf[subject.OU] = X9E956P446
Filter Grade: Inspector

macOS – CrowdStrike – Login and Background Items

Rule Type: BundleIdentifier
Rule Value: com.crowdstrike.falcon.UserAgent
Team Identifier: X9E956P446

macOS - CrowdStrike - Notification Settings

App Bundle ID: com.crowdstrike.falcon.UserAgent
Allow notifications: Enable
Show in Notification Center: Enable
Show in Lock Screen: Enable
Allow badging: Enable
Allow sounds: Enable
Allow critical alert notifications: Enable
Alert Type: Temporary Banner 

macOS – CrowdStrike – System Extension

Allowed System Extension Types
Team Identifier: X9E956P446
Endpoint Security & Network: Enable

Allowed System Extensions
Team Identifier: X9E956P446
Bundle Identifier: com.crowdstrike.falcon.Agent

Now, this is what gave me (and so many others) issues. I don’t know if this is a bug or an undocumented requirement between Workspace ONE and the CrowdStrike profile.

Create a macOS – CrowdStrike – Privacy Preferences profile with the entries in this order:

Identifier: com.crowdstrike.falcon.Agent
Identifier Type: Bundle ID
Code Requirement: identifier "com.crowdstrike.falcon.Agent" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = X9E956P446
Comment: agent
System Policy All Files: Allow
System Policy Sys Admin Files: Allow

Now add a second preference entry inside the same profile for the Falcon app:

Identifier: com.crowdstrike.falcon.App
Identifier Type: Bundle ID
Code Requirement: identifier "com.crowdstrike.falcon.App" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = X9E956P446
Comment: app
System Policy All Files: Allow
System Policy Sys Admin Files: Allow

I hope this helps save you time. The big issue was not having anything in the Comment fields; once that was added, everything should go green (in some cases I needed to reboot).

Bonus: Install Script

Post Install Script
#!/bin/bash
sudo /Applications/Falcon.app/Contents/Resources/falconctl license "Your Key"
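
To confirm the sensor took the license and loaded cleanly, a quick check from Terminal (assuming the default install path) is:

sudo /Applications/Falcon.app/Contents/Resources/falconctl stats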

Omnissa Pass

Omnissa Pass: Elevating Enterprise Authentication with Passwordless Security

In today’s digital landscape, traditional passwords have become a significant vulnerability, often leading to security breaches and user frustration. Recognizing this challenge, Omnissa introduces Omnissa Pass, a cutting-edge multi-factor authentication (MFA) solution designed to enhance security while simplifying the user experience.


🔐 What is Omnissa Pass?

Omnissa Pass is a mobile application that provides secure, passwordless authentication for enterprise applications and services. By leveraging FIDO2 passkeys, it offers a modern approach to authentication, eliminating the need for passwords and reducing the risk of credential theft. Users can authenticate using biometric methods or device-based credentials, ensuring both security and convenience.


🚀 Key Features

  • Passwordless Authentication: Utilizes FIDO2 passkeys to enable secure, password-free logins.
  • Multi-Factor Authentication (MFA): Combines device-based credentials with biometric verification for enhanced security.
  • Device Compliance Checks: Integrates with Omnissa Access to ensure that only compliant devices can authenticate, enforcing organizational security policies. 
  • Seamless Integration: Works across various platforms and integrates with existing enterprise systems, facilitating a smooth transition to passwordless authentication.

📱 Availability

Omnissa Pass is available for download on major mobile platforms.


🛡️ Enhancing Security with Omnissa Access

When paired with Omnissa Access, organizations can enforce strict access controls based on device compliance and user authentication. This integration ensures that only authorized users on compliant devices can access sensitive corporate resources, aligning with Zero Trust security principles. 


🌐 Embracing the Future of Authentication

By adopting Omnissa Pass, enterprises can:

  • Reduce Security Risks: Eliminate vulnerabilities associated with traditional passwords.
  • Improve User Experience: Offer a seamless and intuitive authentication process.
  • Ensure Compliance: Meet regulatory requirements with robust security measures.

Transitioning to passwordless authentication with Omnissa Pass not only strengthens security but also enhances overall user satisfaction.


For more information and to explore how Omnissa Pass can benefit your organization, visit the Omnissa Tech Zone.

Fetch – Windows Application Lifecycle Tool for Workspace ONE UEM Omnissa

Fetch Review: Simplifying Windows Application Management

Hi there folks!

After spending some time with Fetch, I’m excited to share my review of this innovative tool that addresses one of the biggest challenges in Windows Desktop management—Application Management.


The Challenge of Application Management

Workspace ONE Administrators know how complex and time-consuming it can be to make applications available on managed devices. Traditionally, the process involves manually downloading installers, preparing binaries, and creating detailed application entries within Workspace ONE UEM. This often leads to delays and inconsistencies in deployments.


What is Fetch?

Fetch is a Windows application designed to streamline and automate the deployment of native Windows applications within Workspace ONE. By automating the process of downloading installers, uploading binaries, and creating Native Windows Application entries complete with all required metadata, Fetch drastically reduces the manual workload and potential for errors.

With a robust database boasting more than 7,000 unique applications and a staggering 62,000+ application versions, Fetch offers an extensive resource that simplifies the deployment process.



Key Workflows Offered by Fetch

Fetch enhances the application management process with four main workflows:

1. Application Search and Creation:

• Simply search for an application by name and automatically generate its corresponding Native App entry in Workspace ONE UEM.

2. Software Asset Management Integration:

• Upload a Software Asset Management or Application Report (like the Installed Apps report from Workspace ONE Intelligence, the Software Deployment Report from SCCM, or a PowerShell report of network devices). Fetch checks its extensive database for matching applications, then assists in creating the corresponding Native App in UEM.

3. Application Version Management:

• Interrogate your current Workspace ONE UEM environment to discover if updated versions of applications are available. Fetch then enables you to upload and create the updated application version seamlessly.

4. Manifest-Based Deployment:

• Upload a manifest (template) containing details of your organization’s existing Native Windows Applications along with your installer files. Fill in the necessary metadata, and Fetch processes the manifest to upload the installers and create the apps in UEM accordingly.


The Verdict

As a reviewer, I found that Fetch effectively addresses many of the hurdles traditionally faced by Workspace ONE Administrators. Its automation of repetitive tasks not only saves time but also reduces the likelihood of manual errors, ensuring that application deployments are both consistent and efficient. The extensive database is a clear highlight, providing a strong foundation that supports a wide array of applications and versions.

If you’re looking for a tool that simplifies and accelerates Windows application management, I highly recommend giving Fetch a try. For more detailed instructions and to download the tool, check out the documentation and download Fetch.

Happy managing!