Unlocking the Power of Multimodal AI: A Deep Dive into LLaVA and LLaMA 3 – Demo in LM Studio

23 May

In my earlier post we explored uncensored LLMs like Dolphin. Today, we shall look into the intersection of visual and language understanding: what happens when a vision model is paired with an LLM. One such innovation is LLaVA (Large Language and Vision Assistant), an open-source generative AI model that combines the strengths of vision encoders and large language models to create a powerful tool for general-purpose visual and language understanding. In this blog post, we’ll delve into the details of LLaVA, its underlying models, and how you can harness its capabilities using LMStudio.

What is LLaVA?

🖼️ LLaVA is a novel, end-to-end trained large multimodal model that integrates a pre-trained CLIP ViT-L/14 visual encoder with the Vicuna large language model. The integration is achieved through a projection matrix, enabling seamless interaction between visual and language data. LLaVA is designed to excel in both daily user-oriented applications and specialized domains such as science, offering a versatile tool for multimodal reasoning and instruction-following tasks.

What is LLaMA 3?

🧠 LLaMA 3 is the third iteration of the Large Language Model from Meta AI, known for its remarkable language understanding and generation capabilities. LLaMA 3 builds upon its predecessors with improved architecture, enhanced training techniques, and a broader dataset, making it one of the most advanced language models available. In the context of LLaVA, LLaMA 3 serves as the foundation for the language model component, providing robust support for complex conversational and reasoning tasks.

How to Run the Model Locally Using LMStudio

💻 Running LLaVA locally using LMStudio is a straightforward process that allows you to leverage the model’s capabilities on your own hardware. Here’s a step-by-step guide to get you started:

  • Setup Your Environment
    • Install LMStudio: The software is available on Windows, Mac & Linux. It lets you manage and deploy local LLMs without having to set up Python, machine learning, or Transformers libraries yourself. Link to download the Windows bits – LM Studio – Discover, download, and run local LLMs
  • Download the Model and Dependencies
    • The best place to keep track of models is Hugging Face – Models – Hugging Face. You can follow model releases and updates there.
    • Copy the model name from Hugging Face – xtuner/llava-llama-3-8b-v1_1-gguf
    • Paste this name into LM Studio and it will list all the available quantized variants
    • In my case, given my hardware configuration, I selected the int4 model. Note that the lower the quantization, the less accurate the model becomes.
    • Obtain the LLaVA model files, including the quantized GGUF version and MMProj files, from the official repository.
    • Downloading the model will take time depending on your internet connection.
  • Prepare the Model for Running:
    • Within LMStudio click on the Chat interface to configure model settings.
    • Select the model from the drop-down list – llava llama 3 v int4 GGUF
    • You can run it with stock settings, but I like to adjust the Advanced Configuration options
    • Adjust the model settings to match your hardware capabilities and specific requirements.
    • Based on your system, set the GPU offload to 50/50 or max. I have set it to max.
    • Click Reload model to apply the configuration
  • Run Inference: Start the model and begin running inference tasks, whether for visual chat, science QA, or other applications.
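The steps above can also be driven from code: LM Studio can expose a local OpenAI-compatible server (default port 1234). Below is a minimal stdlib-only sketch of sending an image to the loaded LLaVA model; the port, endpoint, and the model name `llava-llama-3-8b` are assumptions, so match them to what LM Studio shows on your machine.

```python
import base64
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default server port

def build_vision_payload(image_bytes: bytes, prompt: str,
                         model: str = "llava-llama-3-8b") -> dict:
    """Build an OpenAI-style chat payload with the image inlined as base64."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # placeholder name; use the identifier LM Studio displays
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        "max_tokens": 256,
    }

def caption_image(path: str) -> str:
    """Call this with LM Studio's local server running and a model loaded."""
    with open(path, "rb") as f:
        payload = build_vision_payload(f.read(), "Describe this image in detail.")
    req = urllib.request.Request(LMSTUDIO_URL, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the server started in LM Studio, `caption_image("photo.jpg")` returns the model’s description of the image.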

Note – If there is enough interest, I can also do an extended blog post on a Dockerized version of this model. Leave comments down below.

What are MMProj Files?

📂 MMProj files are a key component in the LLaVA ecosystem, representing multimodal projection matrices that facilitate the alignment between visual and language features. These files are crucial for the seamless integration of visual encoders and language models, enabling LLaVA to effectively interpret and generate content that spans both modalities. MMProj files are fine-tuned during the model’s training process to ensure optimal performance in various applications.
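To make the projection idea concrete, here is a toy numpy sketch. The dimensions match CLIP ViT-L/14 at 336px (576 patches of 1024 features) and an 8B LLM (4096-dim embeddings), but the random matrix is only a stand-in for the learned MMProj weights; the original LLaVA used a single linear layer, while later versions use a small MLP.

```python
import numpy as np

# Toy dimensions: CLIP ViT-L/14 patch features (1024-d) mapped into
# the token-embedding space of an 8B LLM (4096-d).
VISION_DIM, LLM_DIM = 1024, 4096

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(VISION_DIM, LLM_DIM))  # stand-in for the trained MMProj weights

def project_visual_tokens(patch_features: np.ndarray) -> np.ndarray:
    """Map a sequence of vision-encoder features into LLM embedding space."""
    return patch_features @ W

patches = rng.normal(size=(576, VISION_DIM))  # 24x24 patches from a 336px image
tokens = project_visual_tokens(patches)
print(tokens.shape)  # (576, 4096): the image patches now look like LLM input embeddings
```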

What is the Quantized GGUF Version of LLaVA?

💾 The quantized GGUF (GPT-Generated Unified Format) version of LLaVA represents a compressed and optimized variant of the model, enabling efficient deployment on consumer-grade hardware. Quantization reduces the precision of the model’s weights, significantly decreasing the memory footprint and computational requirements while maintaining a high level of performance. This makes the quantized GGUF version ideal for applications where resource constraints are a concern.
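As an illustration of the idea (not the exact block-wise scheme GGUF files use), here is a minimal symmetric int8 quantization round-trip showing the memory saving and the bounded rounding error:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: int8 weights plus one float scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)  # 0.25: int8 uses a quarter of float32's memory
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-6)  # True: error is bounded by half a step
```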

Testing the Model

🧪 Testing showcases the strength of the LLaVA model; look at the level of detail it provides for the example images below.

Example 1

Example 2

Through rigorous testing and validation, LLaVA continues to demonstrate its potential as a versatile and powerful multimodal model.

Reference Links

Following is a list of helpful links:

Description – Link
LLaVA GitHub Page – LLaVA (llava-vl.github.io)
Microsoft Research Paper – LLaVA: Large Language and Vision Assistant – Microsoft Research
Hugging Face GGUF model – xtuner/llava-llama-3-8b-v1_1-gguf · Hugging Face
Visual Instruction Tuning (arXiv) – [2304.08485] Visual Instruction Tuning (arxiv.org)

🌐 LLaVA represents a significant advancement in the field of multimodal AI, combining powerful visual and language understanding capabilities in a single, efficient model. By leveraging the strengths of LLaMA 3 and innovative techniques like quantization and multimodal projection, LLaVA offers a robust tool for a wide range of applications. Whether you’re a researcher, developer, or enthusiast, exploring the potential of LLaVA can open up new possibilities in the realm of AI-driven interaction and understanding.

By following the steps outlined in this post, you can get started with LLaVA and begin harnessing its capabilities for your own projects. Please let me know if I’ve missed any steps or details, and I’ll be happy to update the post.

Thanks,
Aresh Sarkari

Easily Upgrade Your Windows 365 Cloud PC Licenses with Step-up Licensing

3 May

Are you a Windows 365 Enterprise admin looking to upgrade your users to a higher configuration license without the full cost of two separate licenses? Step-up licensing makes this process simple and cost-effective.

What are Step-up Licenses?

Step-up licenses allow admins with a direct Enterprise Agreement to migrate users from a lower-configuration Windows 365 license to a higher-configuration one. These are available for compute (RAM/CPU) and storage upgrades.

The great thing about step-up licenses is you only pay the difference in cost between the lower and higher tier licenses, rather than the full price of two standalone licenses. This can mean significant savings when you need to upgrade multiple users.
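The arithmetic is simple: the step-up price is the delta between tiers. A quick sketch with purely illustrative prices (the figures below are NOT actual Microsoft pricing; check your Enterprise Agreement for real numbers):

```python
def step_up_monthly_cost(base_price: float, target_price: float, users: int) -> float:
    """Step-up licensing bills only the difference between the two tiers."""
    return (target_price - base_price) * users

# Illustrative per-user/month list prices, not actual Microsoft pricing
base, target = 41.00, 66.00  # e.g. 2vCPU/4GB/128GB stepping up to 4vCPU/16GB/128GB
print(step_up_monthly_cost(base, target, 100))  # 2500.0 per month with step-up
print((base + target) * 100)                    # 10700.0 if you held both licenses outright
```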

How to Resize Cloud PCs with Step-up Licenses

Let’s walk through an example. Say you purchased step-up licenses to upgrade from Windows 365 Enterprise 2vCPU/4GB/128GB to 4vCPU/16GB/128GB. Here’s how to bulk resize the Cloud PCs to the new higher spec while preserving user data:

  1. In the Microsoft Admin Center, go to the “Your Products” page. You’ll see your new stepped-up 4vCPU licenses added and an equal number of 2vCPU licenses removed.
  2. Follow the bulk resize process, selecting the 2vCPU as the base license and 4vCPU as the target license. This will migrate the users and their data to the higher spec Cloud PCs.
     • Important: You have 90 days to complete the migration before users lose access to their old 2vCPU Cloud PCs. So don’t wait too long!

Key Takeaways

  • Step-up licenses make it easy and affordable to upgrade to higher configs
  • You can bulk resize to migrate users while keeping their data intact
  • You have 90 days to complete the switch before the old licenses expire

Reference Microsoft Link – Resize a Cloud PC | Microsoft Learn

Hopefully this helps clarify how to take advantage of step-up licensing to give your Windows 365 users an upgraded experience. Please let me know if I’ve missed any steps or details, and I’ll be happy to update the post.

Thanks,
Aresh Sarkari

Exploring Uncensored LLM Model – Dolphin 2.9 on Llama-3-8b

2 May

I’ve been diving deep into the world of Large Language Models (LLMs) like ChatGPT, Gemini, Claude, and LLAMA. But recently, I stumbled upon something that completely blew my mind: uncensored LLMs! 🤯

As someone who loves pushing the boundaries of AI and exploring new frontiers, I couldn’t resist the temptation to try out an uncensored LLM for myself. And let me tell you, the experience was nothing short of mind-blowing! 🎆 After setting up and running an uncensored LLM locally for the first time, I was amazed by the raw, unfiltered outputs it generated. It gave me a whole new perspective on the potential of such LLMs and why having an uncensored variant is so important for certain perspectives and society in general.

In this blog post, I’ll be sharing my journey with uncensored LLMs, diving into the nitty-gritty details of what they are, how they differ from regular LLMs, and why they exist. I’ll also be sharing my hands-on experience with setting up and running an uncensored LLM locally, so you can try it out for yourself! 💻

🤖 Introduction: Uncensored LLM vs Regular LLM

Large Language Models (LLMs) are AI systems trained on vast amounts of text data to understand and generate human-like text based on input prompts. There are two main types of LLMs: regular and uncensored.

Regular LLMs, such as those created by major organizations like OpenAI, Anthropic, and Google, are designed with specific safety and ethical guidelines, often reflecting societal norms and legal standards. These models avoid generating harmful or inappropriate content. (Click on each link to read their AI principles.)

Uncensored LLMs, on the other hand, are models that do not have these built-in restrictions. They are designed to generate outputs based on the input without ethical filtering, which can be useful for certain applications but also pose risks.

📊 Table of Comparison

Feature | Regular LLM | Uncensored LLM
Content Filtering | Yes (aligned to avoid harmful content) | No (generates responses as is)
Use Cases | General purpose, safer for public use | Specialized tasks needing raw output
Cultural Alignment | Often aligned with Western norms | No specific alignment
Risk of Harmful Output | Lower | Higher
Flexibility | Restricted by ethical guidelines | Higher flexibility in responses

🐬 What is the Dolphin 2.9 Latest Model?

🐬Dolphin 2.9 is a project by Eric Hartford @ Cognitive Computations aimed at creating an open-source, uncensored, and commercially licensed dataset and series of instruct-tuned language models. This initiative is based on Microsoft’s Orca paper and seeks to provide a foundation for building customized models without the typical content restrictions found in conventional LLMs. The model uses a dataset that removes biases, alignment, or any form of censorship, aiming to create a purely instructional tool that can be layered with user-specific alignments.

🐬 The Dolphin 2.9 Dataset

Following are the details of the datasets used to train the Dolphin model (note the base model is Llama-3-8b):

Dataset Details – Links
cognitivecomputations/dolphin – This dataset is an attempt to replicate the results of Microsoft’s Orca – cognitivecomputations/dolphin · Datasets at Hugging Face
HuggingFaceH4/ultrachat_200k – HuggingFaceH4/ultrachat_200k · Datasets at Hugging Face
teknium/OpenHermes-2.5 – This is the dataset that made the OpenHermes 2.5 and Nous Hermes 2 series of models – teknium/OpenHermes-2.5 · Datasets at Hugging Face
microsoft/orca-math-word-problems-200k – This dataset contains ~200K grade school math word problems – microsoft/orca-math-word-problems-200k · Datasets at Hugging Face

💻 How to Run the Model Locally Using LMStudio

To run Dolphin or any similar uncensored model locally, you typically need to follow these steps, assuming you are using a system like LMStudio for managing your AI models:

  • Setup Your Environment:
    • Install LM Studio; it’s available on Windows, Mac & Linux. This software lets you manage and deploy local LLMs without having to set up Python, machine learning, or Transformers libraries yourself.
    • Link to download the Windows bits – LM Studio – Discover, download, and run local LLMs
    • My laptop config: 11th Gen Intel processor, 64 GB RAM, Nvidia RTX 3080 with 8 GB VRAM, and 3 TB storage.
  • Download the Model and Dependencies:
    • The best place to keep track of models is Hugging Face – Models – Hugging Face. You can follow model releases and updates there.
    • Copy the model name from Hugging Face – cognitivecomputations/dolphin-2.9-llama3-8b
    • Paste this name into LM Studio and it will list all the available quantized variants
    • In my case, given my hardware configuration, I selected the 8-bit model. Note that the lower the quantization, the less accurate the model becomes.
    • Downloading the model will take time depending on your internet connection.
  • Prepare the Model for Running:
    • Within LMStudio click on the Chat interface to configure model settings.
    • Select the model from the drop-down list – dolphin 2.9 llama3
    • You can run it with stock settings, but I like to adjust the Advanced Configuration options
    • Based on your system, set the GPU offload to 50/50 or max. I have set it to max.
    • Click Reload model to apply the configuration
  • Run the Model:
    • Use LMStudio to load and run the model.
    • Within the User Prompt enter what you want to ask the Dolphin model
    • Monitor the model’s performance and adjust settings as needed.
  • Testing and Usage:
    • Once the model is running, you can begin to input prompts and receive outputs.
    • Test the model with various inputs to ensure it functions as expected and adjust configurations as needed.
    • Note: below was a fun test prompt run across ChatGPT, Claude & Dolphin. You can clearly see the winner being Dolphin 🤗
  • Eject and Close the Model:
    • Once you’re done with the session, select Eject Model
    • This releases the VRAM/RAM and returns CPU utilization to normal
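Beyond the chat UI, the same workflow can be scripted once LM Studio’s local OpenAI-compatible server is enabled (default port 1234). The sketch below uses only the standard library; the model name `dolphin-2.9-llama3-8b` is an assumption, so use whatever identifier LM Studio displays for the loaded model.

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default server port

def build_chat_payload(prompt: str,
                       system: str = "You are Dolphin, a helpful assistant.",
                       model: str = "dolphin-2.9-llama3-8b") -> dict:
    """OpenAI-style chat payload; the model name is whatever LM Studio displays."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

def ask_dolphin(prompt: str) -> str:
    """Call this with LM Studio's local server running and the Dolphin model loaded."""
    req = urllib.request.Request(LMSTUDIO_URL,
                                 data=json.dumps(build_chat_payload(prompt)).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```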

💻 Quantized & GGUF Model

Home systems usually won’t have the GPU needed to run LLMs natively, so quantized models are what make them practical on consumer-grade hardware. A quantized model is a compressed version of a neural network where the weights and activations are represented with lower-precision data types, such as int8 or uint8, instead of the typical float32. This reduces the model’s size and computational requirements while maintaining acceptable performance.

GGUF stands for “GPT-Generated Unified Format”. It is a file format for packaging large language models so they can be run efficiently for a wide range of natural language processing tasks without requiring expensive GPU hardware for inference.

The Dolphin 2.9 GGUF models are:

Model Name | Quantization | Model Size | CPU | GPU | VRAM | RAM
dolphin-2.9-llama3-8b-q3_K_M.gguf | 3-bit (q3) | 4.02 GB | Compatible with most CPUs | Not required for inference | Not required for inference | ~4.02 GB
dolphin-2.9-llama3-8b-q4_K_M.gguf | 4-bit (q4) | 4.92 GB | Compatible with most CPUs | Not required for inference | Not required for inference | ~4.92 GB
dolphin-2.9-llama3-8b-q5_K_M.gguf | 5-bit (q5) | 5.73 GB | Compatible with most CPUs | Not required for inference | Not required for inference | ~5.73 GB
dolphin-2.9-llama3-8b-q6_K.gguf | 6-bit (q6) | 6.6 GB | Compatible with most CPUs | Not required for inference | Not required for inference | ~6.6 GB
dolphin-2.9-llama3-8b-q8_0.gguf | 8-bit (q8) | 8.54 GB | Compatible with most CPUs | Not required for inference | Not required for inference | ~8.54 GB
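The sizes above follow almost directly from bits per weight. A back-of-envelope estimator, assuming the nominal bit-width is adjusted for block scales (for example, q8_0 stores one fp16 scale per 32-weight block, so roughly 8 + 16/32 = 8.5 effective bits):

```python
def gguf_size_gb(n_params_billion: float, effective_bits: float) -> float:
    """Rough GGUF file size in GB: params (billions) x effective bits per weight / 8."""
    return n_params_billion * effective_bits / 8

# q8_0: 8-bit weights plus one fp16 scale per 32-weight block = ~8.5 effective bits
print(gguf_size_gb(8, 8.5))  # 8.5, close to the 8.54 GB q8_0 row above
```

The K_M quants mix precisions across tensor types, which is why the q4_K_M file (4.92 GB) runs larger than a naive 4-bit estimate would suggest.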

Reference Links

Following is a list of helpful links:

Description – Link
Details and background about the Dolphin Model – Dolphin 🐬 (erichartford.com)
What are uncensored models? – Uncensored Models (erichartford.com)
Various Dolphin Models on various base LLMs – cognitivecomputations (Cognitive Computations) (huggingface.co)
Dolphin Llama 3 8B GGUF model I used on LMStudio – cognitivecomputations/dolphin-2.9-llama3-8b-gguf · Hugging Face
LM Studio – LM Studio – Discover, download, and run local LLMs
Model Memory Estimator Utility – Model Memory Utility – a Hugging Face Space by hf-accelerate

By following these steps, you can deploy and utilize an uncensored LLM like Dolphin 2.9 for research, development, or any specialized application where conventional content restrictions are not desirable. I hope you’ll find this insightful on your journey with LLMs. Please let me know if I’ve missed any steps or details, and I’ll be happy to update the post.

Thanks,
Aresh Sarkari

Building an Image Captioning API with FastAPI and Hugging Face Transformers packaged with Docker

17 Apr

In this blog post, we’ll embark on an exciting journey of building an Image Captioning API using FastAPI and Hugging Face Transformers. Image captioning is a fascinating task that involves generating textual descriptions for given images. By leveraging the power of deep learning and natural language processing, we can create a system that automatically understands the content of an image and generates human-like captions. In the example below, I input an image of a rider on a bike in a garage, and the caption provides the exact details of the image.

Project Overview

👨‍💻 GitHub: https://github.com/askaresh/blip-image-captioning-api

The goal of this project is to develop a RESTful API that accepts an image as input and returns a generated caption describing the image. We’ll be using FastAPI, a modern and fast web framework for building APIs, along with Hugging Face Transformers, a popular library for natural language processing tasks.

The key components of our project include:

  1. FastAPI: A web framework for building efficient and scalable APIs in Python.
  2. Hugging Face Transformers: A library that provides state-of-the-art pre-trained models for various NLP tasks, including image captioning.
  3. Docker: A containerization platform that allows us to package our application and its dependencies into a portable and reproducible environment.

Implementation Details

To build our Image Captioning API, we started by setting up a FastAPI project and defining the necessary endpoints. The main endpoint accepts an image file and an optional text input for conditional image captioning.

We utilized the pre-trained BLIP (Bootstrapping Language-Image Pre-training) model from Hugging Face Transformers for image captioning. BLIP is a powerful model that has been trained on a large dataset of image-caption pairs and achieves impressive results in generating accurate and coherent captions.

To ensure a smooth development experience and the ability to run on any cloud, I containerized the application using Docker. This allowed us to encapsulate all the dependencies, including Python libraries and the pre-trained model, into a portable and reproducible environment.

HF-IT-DOCKER/

├── app/
│ ├── config.py
│ ├── main.py
│ ├── model.py
│ └── utils.py

├── .dockerignore
├── .gitignore
├── compose.yaml
├── Dockerfile
├── logging.conf
├── README.Docker.md
└── requirements.txt

Detailed description of each file:

  • app/config.py:
    • This file contains the configuration settings for the application.
    • It defines a Settings class using the pydantic_settings library to store and manage application-specific settings.
    • The blip_model_name setting specifies the name of the BLIP model to be used for image captioning.
  • app/main.py:
    • This is the main entry point of the FastAPI application.
    • It sets up the FastAPI app, loads the BLIP model, and configures logging.
    • It defines the API endpoints, including the root path (“/”) and the image captioning endpoint (“/caption”).
    • The “/caption” endpoint accepts an image file and an optional text input, processes the image, generates a caption using the BLIP model, and returns the generated caption.
  • app/model.py:
    • This file contains the functions related to loading and using the BLIP model for image captioning.
    • The load_model function loads the pre-trained BLIP model and processor based on the specified model name.
    • The generate_caption function takes an image and optional text input, preprocesses the inputs, and generates a caption using the loaded BLIP model.
  • app/utils.py:
    • This file contains utility functions used in the project.
    • The load_image_from_file function reads an image file and converts it to the appropriate format (RGB) using the PIL library.
  • .dockerignore:
    • This file specifies the files and directories that should be excluded when building the Docker image.
    • It helps to reduce the size of the Docker image by excluding unnecessary files and directories.
  • .gitignore:
    • This file specifies the files and directories that should be ignored by Git version control.
    • It helps to keep the repository clean by excluding files that are not necessary to track, such as generated files, cache files, and environment-specific files.
  • compose.yaml:
    • This file contains the configuration for Docker Compose, which is used to define and run multi-container Docker applications.
    • It defines the services, including the FastAPI server, and specifies the build context, ports, and any necessary dependencies.
  • Dockerfile:
    • This file contains the instructions for building the Docker image for the FastAPI application.
    • It specifies the base image, sets up the working directory, installs dependencies, copies the application code, and defines the entry point for running the application.
  • logging.conf:
    • This file contains the configuration for the Python logging system.
    • It defines the loggers, handlers, formatters, and their respective settings.
    • It specifies the log levels, log file paths, and log message formats.
  • README.Docker.md:
    • This file provides documentation and instructions specific to running the application using Docker.
    • It may include information on how to build the Docker image, run the container, and any other Docker-related details.
  • requirements.txt:
    • This file lists the Python dependencies required by the application.
    • It includes the necessary libraries and their versions, such as FastAPI, Hugging Face Transformers, PIL, and others.
    • It is used by pip to install the required packages when building the Docker image or setting up the development environment.
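To give a feel for how these pieces fit together, here is an illustrative Dockerfile for the layout above; the repository’s actual file may differ in details such as the base image and user setup.

```dockerfile
# Illustrative sketch; see the repository's Dockerfile for the real version
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Run as a non-root user (see the permission lessons in the next section)
RUN useradd --create-home appuser
USER appuser

COPY app/ ./app/
COPY logging.conf .

EXPOSE 8004
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8004"]
```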

Lessons Learned and Debugging

Throughout the development process, I encountered several challenges and learned valuable lessons:

  1. Dependency Management: Managing dependencies can be tricky, especially when working with large pre-trained models. We learned the importance of properly specifying dependencies in our requirements file and using Docker to ensure consistent environments across different systems.
  2. Debugging Permission Issues: We encountered permission-related issues when running our application inside a Docker container. Through debugging, we learned the significance of properly setting file and directory permissions and running the container as a non-root user to enhance security.
  3. Logging Configuration: Proper logging is crucial for understanding the behavior of our application and troubleshooting issues. I learned how to configure logging using a configuration file and ensure that log files are written to directories with appropriate permissions.
  4. Testing and Error Handling: Comprehensive testing and error handling are essential for building a robust API. We implemented thorough error handling to provide meaningful error messages to API users and conducted extensive testing to ensure the reliability of our image captioning functionality.

Validation of the API

After the container is up and running, go to http://localhost:8004/docs, select the POST method, and click “Try it out”. Upload any image of your choice, optionally enter the text, and click Execute. The generated caption will appear below as the output.
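The same validation can be scripted. The stdlib-only sketch below posts an image to the endpoint; the form field name "file" is an assumption on my part, so match it to the parameter name the API actually declares.

```python
import urllib.request
import uuid

def caption_url(host: str = "localhost", port: int = 8004) -> str:
    """Build the endpoint URL the container exposes."""
    return f"http://{host}:{port}/caption"

def build_multipart(image_bytes: bytes, filename: str = "photo.jpg") -> tuple:
    """Encode the image as multipart/form-data by hand (no third-party libraries)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def request_caption(path: str) -> str:
    """Call this with the container running on localhost:8004."""
    with open(path, "rb") as f:
        body, content_type = build_multipart(f.read())
    req = urllib.request.Request(caption_url(), data=body,
                                 headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```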

Conclusion

Building an Image Captioning API with FastAPI and Hugging Face Transformers has been an incredible learning experience. By leveraging the power of pre-trained models and containerization, I created a scalable and efficient solution for generating image captions automatically.

Through this project, I gained valuable insights into dependency management, debugging permission issues, logging configuration, and the importance of testing and error handling. These lessons will undoubtedly be applicable to future projects and contribute to our growth as developers.

I hope that this blog post has provided you with a comprehensive overview of our Image Captioning API project and inspired you to explore the fascinating world of image captioning and natural language processing. Feel free to reach out with any questions or suggestions, and happy captioning!

Thanks,
Aresh Sarkari

Windows Intune Settings Catalog Policy to Disable Windows Copilot – Windows 365 Cloud PC & Windows 11 + Bonus PowerShell

28 Feb

In the latest update of Microsoft Intune, a new method has been introduced to disable Windows Copilot through the settings catalog policy. Previously, the technique revolved around CSP and the registry, but now admins can conveniently manage this setting directly within the settings catalog. There are plenty of blog posts showing how to do it via CSP and the registry, so we won’t cover that here.

What is Windows Copilot?

For those unfamiliar, Windows Copilot (formerly Bing Chat) is a built-in AI-powered intelligent assistant that helps you get answers and inspirations from across the web, supports creativity and collaboration, and enables you to focus on the task at hand.

How to Disable Windows Copilot via Settings Catalog Policy

The process to disable Windows Copilot through the settings catalog policy is simple and straightforward. Here’s a step-by-step guide:

  • Log in to the Microsoft Intune admin center.
  • Create a configuration profile for Windows 10 and later devices with the Settings catalog profile type and select Create
  • Enter the profile Name “DisableCopilot” and Description “Settings to disable Copilot” and select Next
  • Under the Setting Picker select category as Windows AI and select the setting “Turn off Copilot in Windows (user).”
  • Next, within the settings, select the slider to “Disable the Copilot settings”
  • Assign the Policy to a Windows 365 or Windows 11 Device group

By following these steps, administrators can effectively manage the Windows Copilot setting for their organization’s devices.

Validation

Now, after some time, log in to your Windows 365 Cloud PC or Windows 11 device and the Copilot icon will have disappeared from the taskbar.

Bonus (PowerShell)

If you want to create the above policy using PowerShell and MS Graph you can run the below code:

# Import necessary modules
Import-Module Microsoft.Graph.Beta.Groups
Import-Module Microsoft.Graph.Beta.DeviceManagement

# Define parameters for the new device management configuration policy
$params = @{
    name            = "DisableCopilot"
    description     = "Disable AI copilot"
    platforms       = "windows10"
    technologies    = "mdm"
    roleScopeTagIds = @(
        "0"
    )
    settings        = @(
        @{
            "@odata.type"   = "#microsoft.graph.deviceManagementConfigurationSetting"
            settingInstance = @{
                "@odata.type"       = "#microsoft.graph.deviceManagementConfigurationChoiceSettingInstance"
                settingDefinitionId = "user_vendor_msft_policy_config_windowsai_turnoffwindowscopilot"
                choiceSettingValue  = @{
                    "@odata.type" = "#microsoft.graph.deviceManagementConfigurationChoiceSettingValue"
                    value         = "user_vendor_msft_policy_config_windowsai_turnoffwindowscopilot_1"
                    children      = @()
                }
            }
        }
    )
}

# Create a new device management configuration policy with the specified parameters
New-MgBetaDeviceManagementConfigurationPolicy -BodyParameter $params

Check out my other blog post that outlines how to use MS Graph and PowerShell to execute the above code.

I hope you’ll find this insightful for easily disabling Copilot across your Windows 11 physical and Windows 365 Cloud PC fleet of devices. Please let me know if I’ve missed any steps or details, and I’ll be happy to update the post.

Thanks,
Aresh Sarkari

Customizing Branding for Windows 365 Boot Sign-in Screen

20 Feb

If you’re looking to customize the branding displayed at the top of the sign-in screen for Windows 365 Boot, you’re in the right place. In a previous workflow within the Intune Portal, I deployed the Windows 365 Boot and all I needed was to incorporate the company logo, text, and lock screen wallpaper.

Image

Key Steps for Customization

Pre-created Configuration Policy(Windows 365 Boot)

I decided to modify the existing Windows 365 configuration profile that was originally deployed during the W365 Boot deployment.

  • Policy Name – Whatever Prefix(W365Boot) Windows 365 Boot Shared PC Device Configuration Policy

Configuration Policy Update (Company Name, Logo and Lock Screen Image)

To customize the login screen on the Windows 365 Boot PC, follow these detailed steps:

  1. Log in to the Microsoft Intune admin center.
  2. Create or edit a configuration profile for Windows 10 and later devices with the Settings catalog profile type, specifically for the “W365Boot Windows 365 Boot Shared PC Device Configuration Policy”.
  3. Click on edit on “Configuration Settings” and select “Add.”
  4. Enter the details within the table for the OMA-URI and values:
    • Upload your company logo in the specified format and size.
    • Enter your company name in the designated field.
    • Save your changes and ensure they are applied to the login screen.
Name | OMA-URI | Format | Value
CompanyName | ./Vendor/MSFT/Personalization/CompanyName | string | AskAresh.com
CompanyLogoUrl | ./Vendor/MSFT/Personalization/CompanyLogoUrl | string | https://i0.wp.com/askaresh.com/wp-content/uploads/2020/05/askaresh-logo_200x200.png
LockScreenImageUrl | ./Vendor/MSFT/Personalization/LockScreenImageUrl | string | https://itsupportla.com/files/2022/08/Windows-3652.jpg
Image

This straightforward process allows you to personalize the login experience for your Windows 365 users, enhancing your company’s branding and identity.

Company Name and Logo

See how the Name and Logo appear on the Windows 365 Boot Physical PC.

Image

LockScreen Logo

Check out how the Lock Screen appears on the Windows 365 Boot Physical PC.

Image

I trust you’ll find this insightful for customizing the end-user experience on the physical Windows 365 Boot device. Please let me know if I’ve missed any steps or details, and I’ll be happy to update the post.

Thanks,
Aresh Sarkari

Windows 365 – Report – Cloud PC actions + PowerShell Report Download

18 Dec

In the December 4th, 2023 update for Windows 365 Enterprise, reports for Cloud PC actions were announced. In today’s post, I will showcase how to access and make sense of the new report available within Microsoft Intune.

What does the report offer?

The Cloud PC Actions Report, currently in public preview, is a powerful report within the Windows 365 ecosystem. It provides detailed information on various actions taken by administrators on the Cloud PCs. Imagine you have multiple teams and admins within your organisation. This report can help you track the actions along with the Status and Date initiated, which can come in handy for troubleshooting and audit purposes.

Accessing the report – Cloud PC Actions

To view the report in Microsoft Intune portal, you can follow these steps:

What Actions are displayed?

The following actions are included in the report:

Action | Description
Create Snapshot | This action allows administrators to capture the current state of a Cloud PC. It’s useful for backup purposes or before making significant changes, ensuring that there’s a point to revert back to if needed.
Move Region | This feature enables the relocation of a Cloud PC to a different geographic region. It’s particularly beneficial for organizations with a global presence, ensuring that Cloud PCs are hosted closer to where the users are located, potentially improving performance and compliance with regional data laws.
Place Under Review | This action is used to flag a Cloud PC for further examination. It could be due to performance issues, security concerns, or compliance checks. Placing a PC under review may restrict certain functionalities until the review is completed.
Power On/Off (W365 Frontline only) | Specific to Windows 365 Frontline, this action allows administrators to remotely power on or off a Cloud PC. This is crucial for managing devices in a frontline environment, where PCs might need to be controlled outside of regular working hours.
Reprovision | Reprovisioning a Cloud PC involves resetting it to its initial state. This action is useful when a PC needs to be reassigned to a different user or if it’s experiencing significant issues that can’t be resolved through regular troubleshooting.
Resize | This action refers to changing the size/specifications of a Cloud PC, such as adjusting its CPU, RAM, or storage. It’s essential for adapting to changing workload requirements or optimizing resource allocation.
Restart | Administrators can remotely restart a Cloud PC. This is a basic but critical action for applying updates, implementing configuration changes, or resolving minor issues.
Restore | This action allows the restoration of a Cloud PC to a previous state using a saved snapshot. It’s a vital feature for recovery scenarios, such as after a failed update or when dealing with software issues.
Troubleshoot | This is a general action that encompasses various diagnostic and repair tasks to resolve issues with a Cloud PC. It might include running automated diagnostics, checking logs, or applying fixes.

How This Report Benefits You

  • Enhanced Troubleshooting: Quickly identify failed actions and understand potential reasons for failure.
  • Efficient Management: Monitor ongoing actions and ensure smooth operation of Cloud PCs.
  • Actionable Insights: Make informed decisions based on the status and details of actions taken.

If you have a failed action, you can select it and click Retry, and the service will attempt to perform the action again on your behalf.
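Before retrying in the portal, you can surface just the failed actions using the same Graph report endpoint shown in the Bonus section. A hedged sketch from an authenticated Graph session — the filter property name and OData syntax are assumptions, so verify them against the Graph beta report schema:

```powershell
# Sketch: narrow the Cloud PC actions report to failed actions only.
# The "ActionState eq 'failed'" filter is an assumption based on the
# report's column names - confirm against the Graph beta documentation.
$failedParams = @{
    top    = 50
    skip   = 0
    search = ""
    filter = "ActionState eq 'failed'"
    select = @(
        "CloudPcDeviceDisplayName"
        "Action"
        "ActionState"
        "RequestDateTime"
    )
    orderBy = @("RequestDateTime desc")
}

Get-MgBetaDeviceManagementVirtualEndpointReportActionStatusReport -BodyParameter $failedParams
```

This keeps the triage list short when you have many admins issuing actions across the estate.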

Bonus – PowerShell Access to Cloud PC Actions Report

To get the CSV download of the report via Microsoft Graph, follow these steps:

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Beta Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the scopes and specify which API permissions you wish to authenticate with. For read-only operations such as downloading this report, “CloudPC.Read.All” is sufficient; if you also want to perform write operations (such as retrying a failed action), change the scope to “CloudPC.ReadWrite.All”.

#Read-only

PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All" -NoWelcome

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All" -NoWelcome

Step 3 – Check the User account by running the following beta command.

#Beta APIs

PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Download the csv Report

You will pass the following $params variable with all the fields within the report. GitHub link – avdwin365mem/cloudpcactionreport at main · askaresh/avdwin365mem (github.com)

$params = @{
    top     = 50
    skip    = 0
    search  = ""
    filter  = ""
    select  = @(
        "Id"
        "CloudPcId"
        "CloudPcDeviceDisplayName"
        "DeviceOwnerUserPrincipalName"
        "Action"
        "ActionState"
        "InitiatedByUserPrincipalName"
        "RequestDateTime"
        "LastUpdatedDateTime"
    )
    orderBy = @(
        "LastUpdatedDateTime desc"
    )
}

Get-MgBetaDeviceManagementVirtualEndpointReportActionStatusReport -BodyParameter $params
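To actually land the CSV on disk, the stream-returning Graph report cmdlets expose an -OutFile parameter. A hedged sketch — confirm the parameter exists on your installed Microsoft.Graph.Beta version before relying on it:

```powershell
# Sketch: write the report response straight to a local CSV file.
# -OutFile is assumed to be available on this report cmdlet; verify
# with Get-Help Get-MgBetaDeviceManagementVirtualEndpointReportActionStatusReport.
Get-MgBetaDeviceManagementVirtualEndpointReportActionStatusReport `
    -BodyParameter $params `
    -OutFile .\CloudPCActionsReport.csv
```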

The Cloud PC Actions Report is a significant addition to Windows 365, offering a level of transparency and control that administrators have long sought. I hope you will find this helpful information for tracking the Cloud PC actions via this report. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

App Volumes for Azure Virtual Desktop and Windows 365 Cloud PC

4 Dec

Recently, Apps on Demand was released; essentially, this is an Azure Marketplace offering from VMware (now Broadcom). You can deploy an App Volumes Manager virtual machine that runs Windows Server 2022 with App Volumes Manager pre-installed, along with an Azure file share and database configuration.

What is an AppStack

An AppStack in VMware App Volumes is a virtual container that contains a set of applications packaged together. It’s used in virtual desktop environments such as (AVD and Windows 365) to dynamically deliver these applications to users. AppStacks are read-only, can be attached to user sessions transparently, and appear as if the applications are natively installed. They offer efficient application management, allowing for easy updates and maintenance without impacting the underlying system. This approach simplifies application lifecycle management and enhances user experience in virtualized desktop scenarios.

Prerequisites

You’ll need the following things ready before you can rollout App Volumes:

  • Azure Subscription: Have an active Microsoft Azure subscription. Azure Documentation
  • Azure Permissions: Ensure permissions to create virtual machines for App Volumes Manager, storage accounts, and file shares on Azure.
  • Access to On-Prem Active Directory: Required for identity and access control.
  • Azure Virtual Desktop Access: For remote connectivity.
  • Azure AD Connect: Install Azure AD Connect on the Active Directory Domain Controller.
  • Active Directory Configurations:
  • App Volumes License File: Download and install the App Volumes license file. App Volumes in Azure follows a Bring Your Own License model. VMware App Volumes Product Download Page
  • Download App Volumes Agent:
  • System Requirements Awareness: Familiarize yourself with the system requirements for using App Volumes. VMware App Volumes 4 Installation Guide
  • SQL Server Database for Production: For a production environment, ensure the availability of an existing SQL server database, as the default SQL Server Express Database is not recommended for production use.

My party came crashing down on the requirement of an Active Directory (AD) Domain Controller. Across my entire Azure, Azure Virtual Desktop, Windows 365 Cloud PC and Microsoft Intune estate, I have made a deliberate choice to stick with modern management/authentication and follow the Microsoft best practice guidance. Though I am not able to complete the entire configuration due to AD, I decided to share whatever configuration I managed to perform to deploy the AV Manager appliance within my Azure subscription for this blog post.

Starting in the Azure Portal

  • Access the Marketplace: Begin by navigating to the Microsoft Azure portal and clicking on ‘Marketplace’.
  • Find VMware App Volumes: Search for the ‘VMware App Volumes’ offer and click on it.
  • Select App Volumes Version: In the VMware App Volumes Azure Application page, choose the desired version from the Plan drop-down box.
  • Click ‘Create’

Deploy the Virtual Machine Appliance

  • Details: Select your Subscription and Resource group. You can also create a new resource group if needed.
  • Instance Details: Choose the Region and enter the name for your virtual machine. Set up the credentials for the local administrator. (#TIP – Ensure you deploy the Appliance where the session host pools are located)
  • Public IP Address: A new public IP address is created by default, but you can opt for an existing one or none at all. (#TIP – This allows accessing the App Volumes Manager over the internet )
  • DNS Prefix: If using a public IP address, consider creating a DNS prefix for easier access. (#TIP – Pick a friendly name I decided to leave it as defaults)
  • By default, a new virtual network and subnet are created. However, in my situation, I want to leverage the existing VNET and Subnet.
  • Database Selection: Choose between a Local SQL Server Express Database (not recommended for production) or a Remote SQL Server database. Enter the necessary database connection details. (#TIP – For production use, evaluate Azure SQL Database, as it is a managed SQL instance and you will not have the overhead of managing and maintaining a SQL Server)
  • File Share Configuration: Decide if you want Azure Marketplace to automatically provision storage with file shares or use existing storage. Configure accordingly. (#TIP – Likewise, if you have Azure NetApp Files or similar storage, you can leverage that, or use an Azure storage account)
  • Tags: Add any desired tags. (#TIP – Ensure you add the tags as it will help you later during Billing and Automation)
  • Review and Launch Deployment: Review your settings and click ‘Create’ to start the deployment process.
  • Navigate to App Volumes Manager Admin UI: This is where you’ll configure Active Directory and other settings.
  • Configure Storage: If you opted out of automatic storage provisioning, manually configure the storage in the App Volumes Manager admin UI.
  • When you navigate to Storage, the newly created SMB shares are listed for templates and AppStack storage.

I will come back to this blog post once I learn more about the “No AD” mode as this will enable me to further integrate App Volumes into my Azure Virtual Desktop and Windows 365 Cloud PCs.

Resources deployed within the Azure Resource Group

The following resources are deployed within your Azure Resource Group for App Volumes:

Wishlist

I would like to see the following enhancements at a later date:

  • VMware (now Broadcom) should evaluate a complete SaaS offering, taking it to the next level instead of requiring you to deploy an appliance and perform a lot of configuration. Just give me “as a Service”, where the only responsibility is to create/update and manage the applications with Entra ID groups.
  • Microsoft Entra ID integration is a must

I hope you will find this helpful information for getting started with App Volumes for Windows 365 Cloud PC and Azure Virtual Desktop. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

Windows 365 Cloud PC Audit Logs with Azure Log Analytics & Graph API using PowerShell

3 Nov

Are you looking to keep a vigilant eye on your Windows 365 environment? Good news! You can now send Windows 365 audit events to Azure Log Analytics, Splunk, or any other SIEM system that supports it.

Understanding the Scope of Windows 365 Audit Logs

When it comes to monitoring your Cloud PC environment, Windows 365 audit logs are an indispensable resource. These logs provide a comprehensive chronicle of significant activities that result in modifications within your Cloud PC setup (https://intune.microsoft.com/). Here’s what gets captured:

  • Creation Events: Every time a Cloud PC is provisioned, it’s meticulously logged.
  • Update Events: Any alterations or configurations changes made to an existing Cloud PC are recorded.
  • Deletion Events: If a Cloud PC is decommissioned, this action is also captured in the logs.
  • Assignment Events: The process of assigning Cloud PCs to users doesn’t go unnoticed; it’s all in the logs.
  • Remote Actions: Activities such as remote sign-outs or restarts are tracked for administrative oversight.

These audit events encompass most actions executed via the Microsoft Graph API, ensuring that administrators have visibility into the operations that affect their Cloud PC infrastructure. It’s important to note that audit logging is an always-on feature for Windows 365 customers. This means that from the moment you start using Cloud PCs, every eligible action is automatically logged without any additional configuration.

Windows 365 and Azure Log Analytics

Windows 365 has made it easier than ever to integrate with Azure Log Analytics. With a few simple PowerShell commands, you can create a diagnostic setting to send your logs directly to your Azure Log Analytics workspace.

  • Sign in to the Microsoft Intune admin center, select Reports > Diagnostic settings (under Azure Monitor) > Add diagnostic setting.
  • Under Logs, select Windows365AuditLogs.
  • Under Destination details, select the Azure Log Analytics and choose the Subscription & Workspace.
  • Select Save.
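The portal steps above can also be scripted with the Az.Monitor module. A minimal sketch — the tenant-level Intune resource ID, the setting name, and the workspace path placeholders are all assumptions to validate against your environment:

```powershell
# Sketch, assuming Az.Monitor 3.x+ cmdlets.
# "/providers/Microsoft.Intune" as the diagnostics scope is an assumption;
# replace the <sub-id>/<rg>/<workspace> placeholders with your own values.
$log = New-AzDiagnosticSettingLogSettingsObject -Category "Windows365AuditLogs" -Enabled $true

New-AzDiagnosticSetting -Name "W365-Audit-to-LA" `
    -ResourceId "/providers/Microsoft.Intune" `
    -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" `
    -Log $log
```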

Query the Azure Log Analytics

Once your logs are safely stored in Azure Log Analytics, retrieving them is a breeze. You can use Kusto Query Language (KQL) to extract and analyze the data. Here’s a basic example of how you might query the logs:

  • Sign in to the Microsoft Intune admin center, select Reports > Log analytics (under Azure Monitor) > New Query
  • Paste the query below and click Run
  • Optionally, save the query so you can reuse it in the future.
Windows365AuditLogs
| where TimeGenerated > ago(7d)
| extend ParsedApplicationId = tostring(parse_json(ApplicationId)[0].Identity)
| extend ParsedUserPrincipalName = tostring(parse_json(UserPrincipalName)[0].Identity)
| extend ParsedUserId = tostring(parse_json(UserId)[0].Identity)
| project TenantId, TimeGenerated, OperationName, Result, 
          ParsedApplicationId, 
          ParsedUserPrincipalName, 
          ParsedUserId
| sort by TimeGenerated desc
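Building on the query above, a quick follow-up is to see which operations occur (and fail) most often. A minimal sketch, assuming the same Windows365AuditLogs table and columns used above:

```kusto
Windows365AuditLogs
| where TimeGenerated > ago(30d)
// count audit events per operation and outcome
| summarize EventCount = count() by OperationName, Result
| sort by EventCount desc
```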

Leverage Graph API to retrieve Windows 365 audit events

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Beta Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the scopes and specify which API permissions you wish to authenticate with. For read-only operations such as retrieving audit events, “CloudPC.Read.All” is sufficient; if you also want to perform write operations, change the scope to “CloudPC.ReadWrite.All”.

#Read-only
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All" -NoWelcome

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All" -NoWelcome

Step 3 –  Check the User account by running the following beta command.

#Beta APIs
PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Get entire list of audit events, including the audit actor

To get the entire list of audit events including the actor (person who performed the action), use the following command:

Get-MgBetaDeviceManagementVirtualEndpointAuditEvent | Select-Object -Property Actor,ActivityDateTime,ActivityType,ActivityResult -ExpandProperty Actor | Format-Table UserId, UserPrincipalName, ActivityType, ActivityDateTime, ActivityResult


Get a list of audit events

To get a list of audit events without the audit actor, use the following command:

Get-MgBetaDeviceManagementVirtualEndpointAuditEvent -All -Top 100
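Server-side filtering can trim the result set before it reaches PowerShell. A hedged sketch — the activityDateTime property name and the OData ge filter syntax are assumptions based on common Graph beta audit schemas, so verify before use:

```powershell
# Sketch: retrieve only audit events from the last 7 days (filter syntax assumed)
$since = (Get-Date).AddDays(-7).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
Get-MgBetaDeviceManagementVirtualEndpointAuditEvent -Filter "activityDateTime ge $since" -All
```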

Integrating Windows 365 with Azure Log Analytics is a smart move for any organization looking to bolster its security and compliance posture. With the added flexibility of forwarding to multiple endpoints, you’re well-equipped to handle whatever audit challenges come your way.

I hope you will find this helpful information for enabling and querying Windows 365 Audit Logs in Azure Log Analytics or using the Graph API with PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

Azure Virtual Desktop – Terraform – Create a Scaling Plan for Pooled Host Pools (Part 4)

27 Oct

In today’s digital age, managing cloud resources efficiently is paramount, not just for operational efficacy but also for cost management. Enter Azure Virtual Desktop (AVD) Scaling Plans – Microsoft’s answer to dynamic and intelligent scaling of your virtual desktop infrastructure. No longer do organizations need to overprovision resources or let them sit idle; with AVD Scaling Plans, you get a responsive environment tailored to your usage patterns. In this blog post, we’ll create the scaling plans using Terraform.

In the previous blog post, we delved into the distinctions between the Personal Desktop (1:1 mapping), Pooled Desktop (1:Many mapping) and Remote App configurations, providing a comprehensive guide on their creation via Terraform. The series continues as we further explore how to create the AVD Scaling Plan for Pooled Host Pool.

Pre-requisites

The following are the prerequisites before you begin:

  • An Azure subscription
  • The Terraform CLI
  • The Azure CLI
  • Permissions within the Azure Subscription for using Terraform

Terraform – Authenticating via Service Principal & Client Secret

Before running any Terraform code, we will execute the following PowerShell (Run as administrator) and store the credentials as environment variables. If we do this via environment variables, we don’t have to keep the information below within the providers.tf file. There are better ways to store these details, and I hope to showcase them in a future blog post:

# PowerShell
$env:ARM_CLIENT_ID = "9e453b62-0000-0000-0000-00000006e1ac"
$env:ARM_CLIENT_SECRET = "Z318Q~00000000000000000000000000000000_"
$env:ARM_TENANT_ID = "a02e602c-0000-000-0000-0e0000008bba61"
$env:ARM_SUBSCRIPTION_ID = "7b051460-00000-00000-00000-000000ecb1"

  • Azure Subscription ID – copy the ID from the Azure Portal Subscriptions page
  • Client ID – from the above step you will have the details
  • Client Secret – from the above step you will have the details
  • Tenant ID – from creating the Enterprise App in Microsoft Entra ID you will have the details

Terraform Folder Structure

The following is the folder structure for the Terraform code:

Azure Virtual Desktop Scaling Plan – Create a directory in which the below Terraform code will be published (providers.tf, main.tf, variables.tf and output.tf)

+---Config-AVD-ScalingPlans
|   |   main.tf
|   |   output.tf
|   |   providers.tf
|   |   variables.tf

Configure AVD – ScalingPlans – Providers.tf

Create a file named providers.tf and insert the following code:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.76.0"
    }
    azuread = {
      source = "hashicorp/azuread"
    }
  }
}

provider "azurerm" {
  features {}
}

Configure AVD – ScalingPlans – main.tf

Create a file named main.tf and insert the following code. Let me explain what we are attempting to accomplish here:

  • Leverage an existing Resource Group
  • Leverage an existing Host Pool
  • Create a custom role AVD AutoScale and assign it to the Resource Group
    • This is a prerequisite for ensuring the scaling plan can increase and decrease the resources in your resource group.
  • Assign the role – AVD AutoScale to the service principal (Azure Virtual Desktop)
  • Create a scaling plan with a production-grade schedule
  • Associate the scaling plan with the host pool

# Generate a random UUID for role assignment
resource "random_uuid" "example" {}

# Fetch details of the existing Azure Resource Group
data "azurerm_resource_group" "example" {
  name = var.resource_group_name
}

# Fetch details of the existing Azure Virtual Desktop Host Pool
data "azurerm_virtual_desktop_host_pool" "existing" {
  name                = var.existing_host_pool_name
  resource_group_name = var.resource_group_name
}

# Define the Azure Role Definition for AVD AutoScale
resource "azurerm_role_definition" "example" {
  name        = "AVD-AutoScale"
  scope       = data.azurerm_resource_group.example.id
  description = "AVD AutoScale Role"
  # Define the permissions for this role
  permissions {
    actions = [
      # List of required permissions.
      "Microsoft.Insights/eventtypes/values/read",
      "Microsoft.Compute/virtualMachines/deallocate/action",
      "Microsoft.Compute/virtualMachines/restart/action",
      "Microsoft.Compute/virtualMachines/powerOff/action",
      "Microsoft.Compute/virtualMachines/start/action",
      "Microsoft.Compute/virtualMachines/read",
      "Microsoft.DesktopVirtualization/hostpools/read",
      "Microsoft.DesktopVirtualization/hostpools/write",
      "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read",
      "Microsoft.DesktopVirtualization/hostpools/sessionhosts/write",
      "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete",
      "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read",
      "Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action"
    ]
    not_actions = []
  }
  assignable_scopes = [
    data.azurerm_resource_group.example.id,
  ]
}

# Fetch the Azure AD Service Principal for Windows Virtual Desktop
data "azuread_service_principal" "example" {
  display_name = "Azure Virtual Desktop"
}

# Assign the role to the service principal
resource "azurerm_role_assignment" "example" {
  name                   = random_uuid.example.result
  scope                  = data.azurerm_resource_group.example.id
  role_definition_id     = azurerm_role_definition.example.role_definition_resource_id
  principal_id           = data.azuread_service_principal.example.object_id
  skip_service_principal_aad_check = true
}

# Define the Azure Virtual Desktop Scaling Plan
resource "azurerm_virtual_desktop_scaling_plan" "example" {
  name                = var.scaling_plan_name
  location            = var.location
  resource_group_name = var.resource_group_name
  friendly_name       = var.friendly_name
  description         = var.scaling_plan_description
  time_zone           = var.timezone
  tags = var.tags

  dynamic "schedule" {
    for_each = var.schedules
    content {
      name                              = schedule.value.name
      days_of_week                      = schedule.value.days_of_week
      ramp_up_start_time                = schedule.value.ramp_up_start_time
      ramp_up_load_balancing_algorithm  = schedule.value.ramp_up_load_balancing_algorithm
      ramp_up_minimum_hosts_percent     = schedule.value.ramp_up_minimum_hosts_percent
      ramp_up_capacity_threshold_percent= schedule.value.ramp_up_capacity_threshold_pct
      peak_start_time                   = schedule.value.peak_start_time
      peak_load_balancing_algorithm     = schedule.value.peak_load_balancing_algorithm
      ramp_down_start_time              = schedule.value.ramp_down_start_time
      ramp_down_load_balancing_algorithm= schedule.value.ramp_down_load_balancing_algorithm
      ramp_down_minimum_hosts_percent   = schedule.value.ramp_down_minimum_hosts_percent
      ramp_down_force_logoff_users      = schedule.value.ramp_down_force_logoff_users
      ramp_down_wait_time_minutes       = schedule.value.ramp_down_wait_time_minutes
      ramp_down_notification_message    = schedule.value.ramp_down_notification_message
      ramp_down_capacity_threshold_percent = schedule.value.ramp_down_capacity_threshold_pct
      ramp_down_stop_hosts_when         = schedule.value.ramp_down_stop_hosts_when
      off_peak_start_time               = schedule.value.off_peak_start_time
      off_peak_load_balancing_algorithm = schedule.value.off_peak_load_balancing_algorithm
    }
  }
  
  # Associate the scaling plan with the host pool
  host_pool {
    hostpool_id          = data.azurerm_virtual_desktop_host_pool.existing.id
    scaling_plan_enabled = true
  }
}

Configure AVD – ScalingPlans – variables.tf

Create a file named variables.tf and insert the following code. The place where we define existing or new variables:

# Define the resource group of the Azure Virtual Desktop Scaling Plan
variable "resource_group_name" {
  description = "The name of the resource group."
  type        = string
  default     = "AE-DEV-AVD-01-PO-D-RG"
}

# Define the attributes of the Azure Virtual Desktop Scaling Plan
variable "scaling_plan_name" {
  description = "The name of the Scaling plan to be created."
  type        = string
  default     = "AVD-RA-HP-01-SP-01"
}

# Define the description of the scaling plan
variable "scaling_plan_description" {
  description = "The description of the Scaling plan to be created."
  type        = string
  default     = "AVD Host Pool Scaling plan"
}

# Define the timezone of the Azure Virtual Desktop Scaling Plan
variable "timezone" {
  description = "Scaling plan autoscaling triggers and Start/Stop actions will execute in the time zone selected."
  type        = string
  default     = "AUS Eastern Standard Time"
}

# Define the friendly name of the Azure Virtual Desktop Scaling Plan
variable "friendly_name" {
  description = "The friendly name of the Scaling plan to be created."
  type        = string
  default     = "AVD-RA-HP-SP-01"
}

# Define the host pool type (Pooled or Personal) of the Azure Virtual Desktop Scaling Plan
variable "host_pool_type" {
  description = "The host pool type of the Scaling plan to be created."
  type        = string
  default     = "Pooled"
}

# Define the details of the scaling plan schedule
variable "schedules" {
  description = "The schedules of the Scaling plan to be created."
  type        = list(object({
    name                          = string
    days_of_week                  = list(string)
    ramp_up_start_time            = string
    ramp_up_load_balancing_algorithm = string
    ramp_up_minimum_hosts_percent = number
    ramp_up_capacity_threshold_pct = number
    peak_start_time               = string
    peak_load_balancing_algorithm = string
    ramp_down_start_time          = string
    ramp_down_load_balancing_algorithm = string
    ramp_down_minimum_hosts_percent = number
    ramp_down_capacity_threshold_pct = number
    ramp_down_wait_time_minutes   = number
    ramp_down_stop_hosts_when     = string
    ramp_down_notification_message = string
    off_peak_start_time           = string
    off_peak_load_balancing_algorithm = string
    ramp_down_force_logoff_users  = bool
  }))
  default = [
    {
      name = "weekdays_schedule"
      days_of_week = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
      ramp_up_start_time = "08:00"
      ramp_up_load_balancing_algorithm = "BreadthFirst"
      ramp_up_minimum_hosts_percent = 20
      ramp_up_capacity_threshold_pct = 60
      peak_start_time = "09:00"
      peak_load_balancing_algorithm = "DepthFirst"
      ramp_down_start_time = "18:00"
      ramp_down_load_balancing_algorithm = "DepthFirst"
      ramp_down_minimum_hosts_percent = 10
      ramp_down_capacity_threshold_pct = 90
      ramp_down_wait_time_minutes = 30
      ramp_down_stop_hosts_when = "ZeroActiveSessions"
      ramp_down_notification_message = "You will be logged off in 30 min. Make sure to save your work."
      off_peak_start_time = "20:00"
      off_peak_load_balancing_algorithm = "DepthFirst"
      ramp_down_force_logoff_users = false
    }
  ]
}

# Define the location of the Azure Virtual Desktop Scaling Plan
variable "location" {
  description = "The location where the resources will be deployed."
  type        = string
  default     = "australiaeast"
}

# Define the tags of the Azure Virtual Desktop Scaling Plan
variable "tags" {
  description = "The tags to be assigned to the Scaling plan."
  type        = map(string)
  default     = {
    "Billing" = "IT"
    "Department" = "IT"
    "Location" = "AUS-East"
  }
}

# Define the name of the Azure Virtual Desktop Host Pool
variable "existing_host_pool_name" {
  description = "The name of the existing Azure Virtual Desktop Host Pool."
  type        = string
  default     = "AE-DEV-AVD-01-PO-D-HP"
}

Configure AVD – ScalingPlans – output.tf

Create a file named output.tf and insert the following code. This will show in the console what has been deployed, in the form of an output.

# Output the ID of the Azure Virtual Desktop Scaling Plan
output "scaling_plan_id" {
  description = "The ID of the Virtual Desktop Scaling Plan."
  value       = azurerm_virtual_desktop_scaling_plan.example.id
}
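
If you also want the role assignment surfaced after terraform apply, here is an optional addition to output.tf (the output name is my own; the resource reference follows the main.tf resources above):

```terraform
# Optional: output the ID of the AVD AutoScale role assignment
output "autoscale_role_assignment_id" {
  description = "The ID of the AVD AutoScale role assignment."
  value       = azurerm_role_assignment.example.id
}
```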

Initialize Terraform – AVD – ScalingPlans

Run terraform init to initialize the Terraform deployment. This command downloads the Azure providers required to manage your Azure resources (it pulls in AzureRM and AzureAD).

terraform init -upgrade

Create Terraform Execution Plan – AVD – ScalingPlans

Run terraform plan to create an execution plan.

terraform plan -out scaleplan.tfplan

Apply Terraform Execution Plan – AVD – ScalingPlans

Run terraform apply to apply the execution plan to your cloud infrastructure.

terraform apply "scaleplan.tfplan"

Validate the Output in Azure Portal

Go to the Azure portal, Select Azure Virtual Desktop and Select Scaling Plans and validate all the details such as Host Pool Assignment and Schedule:

Clean-up the above resources (Optional)

If you want to delete all the above resources, you can use the following commands to destroy them. Run terraform plan with the destroy flag, then apply the resulting plan.

terraform plan -destroy -out scaleplan.destroy.tfplan
terraform apply "scaleplan.destroy.tfplan"

The following resources will help you get quickly started with Terraform on the Azure Virtual Desktop solution:

  • Create an autoscale scaling plan for Azure Virtual Desktop – Create an autoscale scaling plan for Azure Virtual Desktop | Microsoft Learn
  • Setting up your computer to get started with Terraform using PowerShell – Install Terraform on Windows with Azure PowerShell
  • Configure Azure Virtual Desktop with Terraform – https://learn.microsoft.com/en-us/azure/developer/terraform/configure-azure-virtual-desktop
  • Terraform Learning – https://youtube.com/playlist?list=PLLc2nQDXYMHowSZ4Lkq2jnZ0gsJL3ArAw

I hope you will find this helpful information for getting started with Terraform to deploy the Azure Virtual Desktop – Scaling Plans. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari