Building a Comprehensive Image Analysis API with Microsoft Florence-2-large, Chainlit and Docker

8 Jul

In this blog post, we'll embark on an exciting journey of building a comprehensive Image Analysis API using Microsoft Florence-2-large, Chainlit, and Docker. Image analysis is a fascinating field that involves extracting meaningful information from images using advanced AI techniques. By leveraging the power of Microsoft's Florence-2-large model, we can create a system that automatically understands the content of an image and performs various analysis tasks such as captioning, object detection, referring expression segmentation, and OCR.

My Florence-2 code repository – askaresh/MS-Florence2 (github.com)

Note – In the past, I have written a blog article on Image Captioning; you can read more here – Building an Image Captioning API with FastAPI and Hugging Face Transformers packaged with Docker | AskAresh

Model Overview

Hugging Face Link – microsoft/Florence-2-large · Hugging Face

The Microsoft Florence-2-large model is a powerful pre-trained model designed for various image analysis tasks. Developed by Microsoft, this model is part of the Florence family, which is known for its robust performance in computer vision applications. The Florence-2-large model leverages extensive training on a vast dataset of images, enabling it to excel in tasks such as image captioning, object detection, and optical character recognition (OCR).

Key Features of Florence-2-large

  • Multitask Capabilities: The model can perform a wide range of image analysis tasks, including generating captions, detecting objects, segmenting regions, and recognizing text within images.
  • High Accuracy: Trained on diverse and extensive datasets, the Florence-2-large model achieves high accuracy in understanding and analyzing image content.
  • Scalability: Its architecture is designed to scale effectively, making it suitable for integration into various applications and systems.

Why Florence-2-large?

We chose the Florence-2-large model for our Image Analysis API due to its versatility and performance. The model’s ability to handle multiple tasks with high precision makes it an ideal choice for building a comprehensive image analysis system. By leveraging this model, we can ensure that our API delivers accurate and reliable results across different types of image analysis tasks.

Implementation Details

To build our Image Analysis API, we started by setting up a Chainlit project and defining the necessary message handlers. The main handler accepts an image file and processes it through various analysis tasks.

We utilized the pre-trained Florence-2-large model from Hugging Face Transformers for image analysis. This powerful model has been trained on a vast dataset of images and can perform multiple tasks such as image captioning, object detection, and OCR.
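
As a rough illustration of that step, here is a minimal sketch of loading Florence-2-large with the Transformers library. The variable names (device, dtype) are my own; the repository's actual loading lives inside the ModelManager class in app/model.py. The essential detail is trust_remote_code=True, because the model ships custom modelling code.

# Minimal sketch: load Florence-2-large with Hugging Face Transformers
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# trust_remote_code=True is required because Florence-2 ships custom model code
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", torch_dtype=dtype, trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
)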

To ensure a smooth development experience and the ability to run on any cloud, we containerized our application using Docker. This allowed us to encapsulate all the dependencies, including Python libraries and the pre-trained model, into a portable and reproducible environment.

Choosing NVIDIA Docker Image

We specifically chose the NVIDIA CUDA-based Docker image (nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04) for our containerization. This choice was driven by the need to leverage GPU acceleration for the Florence-2-large model, which significantly enhances the performance of image processing tasks. The CUDA-based image ensures compatibility with GPU drivers and provides pre-installed libraries necessary for efficient model execution.

Our project structure looks like this:

MS-FLORENCE2/
│
├── app/
│   ├── __init__.py
│   ├── config.py
│   ├── model.py
│   └── utils.py
│
├── Dockerfile
├── docker-compose.yml
├── .env
├── .gitignore
├── chainlit_app.py
├── requirements.txt
└── logging_config.py

Let’s break down the key components:

  1. chainlit_app.py: This is the heart of our Chainlit application. It defines the message handler that processes uploaded images and generates responses using the Florence model (a minimal handler sketch follows this list).
  2. app/model.py: This file contains the ModelManager class, which is responsible for loading and managing the Florence-2-large model.
  3. app/utils.py: This file contains utility functions for drawing plot boxes, polygons, and OCR boxes on images.
  4. logging_config.py: This file configures detailed logging for the entire project and its various modules.
  5. Dockerfile: This file defines how our application is containerized, ensuring all dependencies are properly installed and the environment is consistent. The use of the NVIDIA CUDA-based Docker image ensures compatibility and performance optimization.
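
To make item 1 above concrete, here is a minimal Chainlit handler sketch, assuming a helper called run_florence_task (a hypothetical stand-in for the project's ModelManager call; a sketch of such a helper appears under Task Prompts below) and a fixed <CAPTION> prompt. It illustrates the flow rather than reproducing the repository's exact chainlit_app.py.

# chainlit_app.py - minimal sketch of an image-handling message handler
import chainlit as cl
from PIL import Image

@cl.on_message
async def on_message(message: cl.Message):
    # Collect any image attachments from the incoming message
    images = [f for f in (message.elements or []) if "image" in (f.mime or "")]
    if not images:
        await cl.Message(content="Please attach an image to analyse.").send()
        return

    image = Image.open(images[0].path).convert("RGB")

    # run_florence_task is a hypothetical helper standing in for the ModelManager call
    result = run_florence_task(image, task_prompt="<CAPTION>")
    await cl.Message(content=f"Result: {result}").send()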

Task Prompts and Their Functions

Let’s break down the task prompts used in the Florence-2-large model and explain what each of them does:

  • <CAPTION>
    • Purpose: Generates a simple, concise caption for the image.
    • Output: A brief description of the main elements in the image.
    • Example: “A credit card bill with a price tag on it”
  • <DETAILED_CAPTION>
    • Purpose: Provides a more detailed description of the image.
    • Output: A comprehensive description including more elements and details from the image.
    • Example: “The image shows a credit card bill with a black background. The bill is printed on a white sheet of paper with a blue border and a blue header. The header reads ‘Credit Card Bill’ in bold black font. The bottom of the bill has a space for the customer’s name, address, and contact information.”
  • <OD> Object Detection
    • Purpose: Detects and locates objects within the image.
    • Output: A list of detected objects with their bounding box coordinates and labels.
    • Example: [{‘bboxes’: [[x1, y1, x2, y2], …], ‘labels’: [‘credit card’, ‘price tag’, …]}]
  • <OCR>
    • Purpose: Performs Optical Character Recognition on the image.
    • Output: Extracted text from the image.
    • Example: “Credit Card Bill\nName: John Doe\nAddress: 123 Main St…”
  • <CAPTION_TO_PHRASE_GROUNDING>
    • Purpose: Locates specific phrases or objects mentioned in the caption within the image.
    • Input: Requires a caption (usually the output from <CAPTION>) as additional text input.
    • Output: Bounding boxes and labels for phrases/objects from the caption found in the image.
    • Example: [{‘bboxes’: [[x1, y1, x2, y2], …], ‘labels’: [‘credit card’, ‘price tag’, …]}]
  • <DENSE_REGION_CAPTION>
    • Purpose: Generates captions for specific regions within the image.
    • Output: A list of regions with their bounding boxes and corresponding captions.
    • Example: [{‘bboxes’: [[x1, y1, x2, y2], …], ‘labels’: [‘Header with Credit Card Bill text’, ‘Customer information section’, …]}]
  • <REGION_PROPOSAL>
    • Purpose: Suggests regions of interest within the image without labeling them.
    • Output: A list of bounding boxes for potentially important regions in the image.
    • Example: {‘bboxes’: [[x1, y1, x2, y2], …], ‘labels’: [”, ”, …]}
  • <MORE_DETAILED_CAPTION>
    • Purpose: Generates an even more comprehensive description of the image than <DETAILED_CAPTION>.
    • Output: A very detailed narrative of the image, often including subtle details and potential interpretations.
    • Example: “The image displays a credit card bill document against a stark black background. The bill itself is printed on crisp white paper, framed by a professional-looking blue border. At the top, a bold blue header prominently declares ‘Credit Card Bill’ in a large, easy-to-read font. Below this, the document is structured into clear sections, likely detailing transactions, fees, and payment information. At the bottom of the bill, there’s a designated area for customer details, including name, address, and possibly account information. The contrast between the white document and black background gives the image a formal, official appearance, emphasizing the importance of the financial information presented.”
  • <REFERRING_EXPRESSION_SEGMENTATION>
    • Purpose: Segments the image based on a textual description of a specific object or region.
    • Input: Requires a textual description as additional input.
    • Output: A segmentation mask for the described object or region.
  • <REGION_TO_SEGMENTATION>
    • Purpose: Generates a segmentation mask for a specified region in the image.
    • Input: Requires coordinates of the region of interest.
    • Output: A segmentation mask for the specified region.
  • <OPEN_VOCABULARY_DETECTION>
    • Purpose: Detects objects in the image based on user-specified categories.
    • Input: Can accept a list of categories to look for.
    • Output: Bounding boxes and labels for detected objects matching the specified categories.
  • <REGION_TO_CATEGORY>
    • Purpose: Classifies a specific region of the image into a category.
    • Input: Requires coordinates of the region of interest.
    • Output: A category label for the specified region.
  • <REGION_TO_DESCRIPTION>
    • Purpose: Generates a detailed description of a specific region in the image.
    • Input: Requires coordinates of the region of interest.
    • Output: A textual description of the contents of the specified region.
  • <OCR_WITH_REGION>
    • Purpose: Performs OCR on specific regions of the image.
    • Output: Extracted text along with the corresponding regions (bounding boxes) where the text was found.

These task prompts allow us to leverage the Florence-2-large model’s capabilities for various image analysis tasks. By combining these prompts, we can create a comprehensive analysis of an image, from basic captioning to detailed object detection and text recognition. Understanding and effectively utilizing these task prompts was crucial in maximizing the potential of the Florence-2-large model in our project.
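
The Hugging Face model card describes a common pattern for running any of these prompts; below is a minimal sketch of such a helper, reusing the model, processor, device, and dtype names from the loading sketch earlier. Prompts like <CAPTION_TO_PHRASE_GROUNDING> pass their extra text via text_input. This is an illustration rather than the repository's exact code.

# Minimal sketch: run a single Florence-2 task prompt against a PIL image
def run_florence_task(image, task_prompt, text_input=None, max_new_tokens=1024):
    prompt = task_prompt if text_input is None else task_prompt + text_input

    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=max_new_tokens,
        num_beams=3,
        do_sample=False,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

    # post_process_generation converts the raw text into task-specific output
    # (a caption string, bounding boxes and labels, OCR regions, and so on)
    return processor.post_process_generation(
        generated_text, task=task_prompt, image_size=(image.width, image.height)
    )

# Example: parsed = run_florence_task(image, "<OD>")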

Lessons Learned and Debugging

Throughout the development of our Florence Image Analysis project, I encountered several challenges and learned valuable lessons:

  • Flash Attention Challenges: One of the most significant hurdles we faced was integrating flash-attn into our project. Initially, we encountered installation issues and compatibility problems with our CUDA setup. We learned that:
    • Flash-attn requires specific CUDA versions and can be sensitive to the exact configuration of the environment.
    • Note: we moved to an NVIDIA-based Docker image to take care of all the prerequisites specific to CUDA/flash-attn and the interoperability of versions, which helped tremendously.
    • Building flash-attn from source was often necessary to ensure compatibility with our specific environment.
    • Using the --no-build-isolation flag during installation helped resolve some dependency conflicts. Solution: We ended up creating a custom build process in our Dockerfile, ensuring all dependencies were correctly installed before attempting to install flash-attn.
  • Segmentation and OCR with Region Iterations: Implementing effective OCR, especially with region detection, proved to be an iterative process:
    • Initially, we tried using the Florence model for general OCR, but found it lacking in accuracy for structured documents.
    • We experimented with pre-processing steps to detect distinct regions in documents (headers, body, footer) before applying OCR.
    • Balancing between processing speed and accuracy was a constant challenge. Solution: We implemented a custom region detection algorithm that identifies potential text blocks before applying OCR (see the sketch after this list). This improved both accuracy and processing speed.
  • Error Handling and Logging: As the project grew more complex, we realized the importance of robust error handling and comprehensive logging:
    • Initially, errors in model processing would crash the entire application.
    • Debugging was challenging without detailed logs. Solution: We implemented try-except blocks throughout the code, added detailed logging, and created a system to gracefully handle and report errors to users.
  • Optimizing for Different Document Types: We found that the performance of our system varied significantly depending on the type of document being processed:
    • Handwritten documents required different preprocessing than printed text.
    • Certain document layouts (e.g., tables, multi-column text) posed unique challenges. Solution: We implemented a document type detection step and adjusted our processing pipeline based on the detected type.
  • Balancing Between Flexibility and Specialization: While I aimed to create a general-purpose image analysis tool, we found that specializing for certain tasks greatly improved performance:
    • We created separate processing paths for tasks like receipt OCR, business card analysis, and general document processing. Solution: We implemented a modular architecture that allows for easy addition of specialized processing pipelines while maintaining a common core.
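
The post does not include the region-detection code itself, so the following is a purely illustrative sketch (not the repository's implementation) of one common approach: Otsu thresholding plus dilation and contours in OpenCV to find candidate text blocks before OCR. The kernel size and min_area threshold are arbitrary assumptions.

# Illustrative only: find candidate text blocks with OpenCV before running OCR
import cv2
import numpy as np

def candidate_text_regions(pil_image, min_area=500):
    gray = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2GRAY)
    # Invert-threshold so text is white on black, then dilate so characters
    # belonging to the same block merge into one connected region
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    dilated = cv2.dilate(binary, kernel, iterations=2)

    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]        # (x, y, w, h)
    boxes = [b for b in boxes if b[2] * b[3] >= min_area]  # drop tiny specks
    return sorted(boxes, key=lambda b: b[1])               # top-to-bottom reading order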

These lessons significantly improved the robustness and effectiveness of our Florence Image Analysis project.

Validation of the API with real Examples

After the container is up and running, users can access the Chainlit interface at http://localhost:8010. Here’s an example of how to use the API:

Example – <CAPTION>

Example – <MORE_DETAILED_CAPTION>

Example – <OCR>

Example – <OCR_WITH_REGION>

Model GPU – VRAM Consumption

Following is a list of helpful links:

  • Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks – Microsoft Research
  • Sample Google Colab Notebook Florence2 – sample_inference.ipynb · microsoft/Florence-2-large at main (huggingface.co)
  • Research paper (direct link) – [2311.06242] Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks (arxiv.org)
  • Florence 2 Inference Chainlit – Mervin Praison

Conclusion

Building this comprehensive Image Analysis API with Florence-2-large, Chainlit, and Docker has been an incredible learning experience. I must have spent at least a week getting all the features and functionality working within a Docker image. By leveraging the power of advanced AI models and containerization, we created a scalable and efficient solution for performing various image analysis tasks automatically. Through this project, we gained valuable insights into model management, error handling, GPU utilization in containerized environments, and designing interactive UIs for AI applications.

I hope that this blog post has provided you with a comprehensive overview of our Image Analysis API project and inspired you to explore the fascinating world of computer vision. Feel free to check out our GitHub repository, try out the API, and let me know if you have any questions or suggestions!

Thanks,
Aresh Sarkari

Building an Image Captioning API with FastAPI and Hugging Face Transformers packaged with Docker

17 Apr

In this blog post, we'll embark on an exciting journey of building an Image Captioning API using FastAPI and Hugging Face Transformers. Image captioning is a fascinating task that involves generating textual descriptions for given images. By leveraging the power of deep learning and natural language processing, we can create a system that automatically understands the content of an image and generates human-like captions. In the example below, I input an image of a rider on a bike in a garage, and the generated caption describes the image in exact detail.

Project Overview

👨‍💻 GitHub: https://github.com/askaresh/blip-image-captioning-api

The goal of this project is to develop a RESTful API that accepts an image as input and returns a generated caption describing the image. We’ll be using FastAPI, a modern and fast web framework for building APIs, along with Hugging Face Transformers, a popular library for natural language processing tasks.

The key components of our project include:

  1. FastAPI: A web framework for building efficient and scalable APIs in Python.
  2. Hugging Face Transformers: A library that provides state-of-the-art pre-trained models for various NLP tasks, including image captioning.
  3. Docker: A containerization platform that allows us to package our application and its dependencies into a portable and reproducible environment.

Implementation Details

To build our Image Captioning API, we started by setting up a FastAPI project and defining the necessary endpoints. The main endpoint accepts an image file and an optional text input for conditional image captioning.

We utilized the pre-trained BLIP (Bootstrapping Language-Image Pre-training) model from Hugging Face Transformers for image captioning. BLIP is a powerful model that has been trained on a large dataset of image-caption pairs and achieves impressive results in generating accurate and coherent captions.
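
The actual loading and generation code lives in app/model.py (described below); as a rough sketch of how BLIP is typically used with Transformers, and assuming the base captioning checkpoint Salesforce/blip-image-captioning-base (the real model name comes from app/config.py), it looks roughly like this:

# Sketch of app/model.py-style helpers: load BLIP and generate a caption
from typing import Optional

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

def load_model(model_name: str = "Salesforce/blip-image-captioning-base"):
    processor = BlipProcessor.from_pretrained(model_name)
    model = BlipForConditionalGeneration.from_pretrained(model_name)
    return processor, model

def generate_caption(processor, model, image: Image.Image, text: Optional[str] = None) -> str:
    # With text, BLIP performs conditional captioning (it continues the prompt)
    inputs = processor(image, text, return_tensors="pt") if text else processor(image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    return processor.decode(output_ids[0], skip_special_tokens=True)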

To ensure a smooth development experience and the ability to run on any cloud, I containerized the application using Docker. This allowed us to encapsulate all the dependencies, including Python libraries and the pre-trained model, into a portable and reproducible environment.

HF-IT-DOCKER/
│
├── app/
│   ├── config.py
│   ├── main.py
│   ├── model.py
│   └── utils.py
│
├── .dockerignore
├── .gitignore
├── compose.yaml
├── Dockerfile
├── logging.conf
├── README.Docker.md
└── requirements.txt

Detailed description of each file:

  • app/config.py:
    • This file contains the configuration settings for the application.
    • It defines a Settings class using the pydantic_settings library to store and manage application-specific settings.
    • The blip_model_name setting specifies the name of the BLIP model to be used for image captioning.
  • app/main.py:
    • This is the main entry point of the FastAPI application.
    • It sets up the FastAPI app, loads the BLIP model, and configures logging.
    • It defines the API endpoints, including the root path (“/”) and the image captioning endpoint (“/caption”).
    • The "/caption" endpoint accepts an image file and an optional text input, processes the image, generates a caption using the BLIP model, and returns the generated caption (a minimal sketch of this endpoint appears after this list).
  • app/model.py:
    • This file contains the functions related to loading and using the BLIP model for image captioning.
    • The load_model function loads the pre-trained BLIP model and processor based on the specified model name.
    • The generate_caption function takes an image and optional text input, preprocesses the inputs, and generates a caption using the loaded BLIP model.
  • app/utils.py:
    • This file contains utility functions used in the project.
    • The load_image_from_file function reads an image file and converts it to the appropriate format (RGB) using the PIL library.
  • .dockerignore:
    • This file specifies the files and directories that should be excluded when building the Docker image.
    • It helps to reduce the size of the Docker image by excluding unnecessary files and directories.
  • .gitignore:
    • This file specifies the files and directories that should be ignored by Git version control.
    • It helps to keep the repository clean by excluding files that are not necessary to track, such as generated files, cache files, and environment-specific files.
  • compose.yaml:
    • This file contains the configuration for Docker Compose, which is used to define and run multi-container Docker applications.
    • It defines the services, including the FastAPI server, and specifies the build context, ports, and any necessary dependencies.
  • Dockerfile:
    • This file contains the instructions for building the Docker image for the FastAPI application.
    • It specifies the base image, sets up the working directory, installs dependencies, copies the application code, and defines the entry point for running the application.
  • logging.conf:
    • This file contains the configuration for the Python logging system.
    • It defines the loggers, handlers, formatters, and their respective settings.
    • It specifies the log levels, log file paths, and log message formats.
  • README.Docker.md:
    • This file provides documentation and instructions specific to running the application using Docker.
    • It may include information on how to build the Docker image, run the container, and any other Docker-related details.
  • requirements.txt:
    • This file lists the Python dependencies required by the application.
    • It includes the necessary libraries and their versions, such as FastAPI, Hugging Face Transformers, PIL, and others.
    • It is used by pip to install the required packages when building the Docker image or setting up the development environment.
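
Tying the files above together, here is a minimal sketch of what the /caption endpoint in app/main.py might look like, reusing the load_model and generate_caption helpers sketched earlier. It is an illustration rather than the repository's exact code (error handling and logging are omitted).

# Minimal app/main.py-style sketch of the captioning endpoint
import io
from typing import Optional

from fastapi import FastAPI, File, Form, UploadFile
from PIL import Image

app = FastAPI(title="Image Captioning API")
processor, model = load_model()  # helpers sketched in the model section above

@app.get("/")
def root():
    return {"message": "Image Captioning API is running"}

@app.post("/caption")
async def caption(file: UploadFile = File(...), text: Optional[str] = Form(None)):
    # Read the upload and convert it to RGB, as app/utils.py does
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    return {"caption": generate_caption(processor, model, image, text)}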

Lessons Learned and Debugging

Throughout the development process, I encountered several challenges and learned valuable lessons:

  1. Dependency Management: Managing dependencies can be tricky, especially when working with large pre-trained models. We learned the importance of properly specifying dependencies in our requirements file and using Docker to ensure consistent environments across different systems.
  2. Debugging Permission Issues: We encountered permission-related issues when running our application inside a Docker container. Through debugging, we learned the significance of properly setting file and directory permissions and running the container as a non-root user to enhance security.
  3. Logging Configuration: Proper logging is crucial for understanding the behavior of our application and troubleshooting issues. I learned how to configure logging using a configuration file and ensure that log files are written to directories with appropriate permissions.
  4. Testing and Error Handling: Comprehensive testing and error handling are essential for building a robust API. We implemented thorough error handling to provide meaningful error messages to API users and conducted extensive testing to ensure the reliability of our image captioning functionality.

Validation of the API

After the container is up and running, go to http://localhost:8004/docs, select the POST method, and click "Try it out". Upload any image of your choice, enter the optional text, and click Execute. The generated caption will appear as the output below.
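
If you prefer to script the call instead of using the Swagger UI, a short example with the requests library looks like the following; the file and text field names match the sketch above and may differ slightly from the actual API.

# Call the captioning endpoint directly instead of via the Swagger UI
import requests

with open("bike_rider.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8004/caption",
        files={"file": f},
        data={"text": "a photography of"},  # optional conditional prompt
    )
print(response.json())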

Conclusion

Building an Image Captioning API with FastAPI and Hugging Face Transformers has been an incredible learning experience. By leveraging the power of pre-trained models and containerization, I created a scalable and efficient solution for generating image captions automatically.

Through this project, I gained valuable insights into dependency management, debugging permission issues, logging configuration, and the importance of testing and error handling. These lessons will undoubtedly be applicable to future projects and contribute to our growth as developers.

I hope that this blog post has provided you with a comprehensive overview of our Image Captioning API project and inspired you to explore the fascinating world of image captioning and natural language processing. Feel free to reach out with any questions or suggestions, and happy captioning!

Thanks,
Aresh Sarkari

PowerShell – Frontline Workers – Create Windows 365 Cloud PC Provisioning Policy

23 May

I have previously written a blog post about creating a Windows 365 Cloud PC Provisioning Policy using PowerShell. In this blog post, I will demonstrate how to create the provisioning policy using PowerShell and the MS Graph API beta modules for Windows 365 Cloud PC – Frontline Workers.

Windows 365 Frontline Worker

Introduction

I will not attempt to explain Frontline, but the best explanation is here: What is Windows 365 Frontline? | Microsoft Learn.

Example – Each Windows 365 Frontline license can be shared with up to three employees. This means that if you have 30 employees, you only need to purchase 10 licenses to provision Cloud PCs for all 30 employees with access throughout the day. However, note that you are buying Frontline licenses based on active sessions: if you have more than 10 active workers in a shift, you must purchase licenses accordingly.

What happens when licenses are exhausted?

In my demo tenant, I have two licenses for Frontline workers. When I try to log in to a third session (note that I already have two active sessions running), I get the following message.

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Beta Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the required scopes to specify which APIs you wish to authenticate against. If you are only doing read-only operations, I suggest you connect with "CloudPC.Read.All"; in our case, we are creating the policy, so we need to change the scope to "CloudPC.ReadWrite.All".

#Read-only
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All" -NoWelcome

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All" -NoWelcome
Permissions for MS Graph API

Step 3 –  Check the User account by running the following beta command.

#Beta APIs
PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Create Provisioning Policy (Frontline Worker)

We are creating a provisioning policy that involves the following: avdwin365mem/win365frontlineCreateProvPolicy at main · askaresh/avdwin365mem · GitHub

  • Azure AD Joined Cloud PC desktops
  • The region for deployment – Australia East
  • Image Name – Windows 11 Enterprise + Microsoft 365 Apps 22H2 (from the Gallery)
  • Language & Region – English (United States)
  • Network – Microsoft Managed
  • Cloud PC Naming format – FLW-%USERNAME:5%-%RAND:5% (FLW – Frontline Worker)
$params = @{
	displayName = "Demo-FrontLine"
	description = "Front Line Workers Prov Policy"
	provisioningType = "shared"
	managedBy = "windows365"
	imageId = "MicrosoftWindowsDesktop_windows-ent-cpc_win11-22h2-ent-cpc-m365"
	imageDisplayName = "Windows 11 Enterprise + Microsoft 365 Apps 22H2"
	imageType = "gallery"
	microsoftManagedDesktop = @{
		type = "starterManaged"
		profile = $null
	}
	enableSingleSignOn = $true
	domainJoinConfigurations = @(
		@{
			type = "azureADJoin"
			regionGroup = "australia"
			regionName = "automatic"
		}
	)
	windowsSettings = @{
		language = "en-US"
	}
	cloudPcNamingTemplate = "FLW-%USERNAME:5%-%RAND:5%"
}

New-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy -BodyParameter $params

Note – Post provisioning, you need to add the assignment of an AAD group consisting of all the frontline users. In the future, I can demonstrate the API call for assignments. You can also use Andrew Taylor's post on using Graph to create the Windows 365 group – Creating Windows 365 Groups and assigning licenses using Graph and PowerShell

Powershell Output

Policy will show up in the MEM Portal

Optional Properties

If you are doing on-premises network integration (Azure Network Connection), then the following additional property and value are required. In my lab, I am leveraging the Microsoft Managed Network, so this is not required.

OnPremisesConnectionId = "4e47d0f6-6f77-44f0-8893-c0fe1701ffff"

I hope you will find this helpful information for creating a frontline worker provisioning policy using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

Consolidated Scripts – All configurational task via PowerShell for Windows 365 Cloud PC under Microsoft Intune Portal (MEM)

18 Jan

I have written various individual blog posts on PowerShell creation of all configurational task for Windows 365 Cloud PC under Microsoft Endpoint Portal (MEM).

Based on public demand, I want to create a consolidated post with all the scripts and configuration items that can get you started with Windows 365 Cloud PC using PowerShell. (Of course, all the features below can also be configured using the UI; however, the guidance below is strictly PowerShell-based.)

PowerShell links to my blog post

Following are the links to my blog post for each and individual task:

PowerShell – Create Windows 365 Cloud PC Provisioning Policy https://askaresh.com/2022/10/11/powershell-create-windows-365-cloud-pc-provisioning-policy/

PowerShell – Assign a AAD group to the Windows 365 Cloud PC Provisioning Policy
https://askaresh.com/2022/10/12/powershell-assign-a-aad-group-to-the-windows-365-cloud-pc-provisioning-policy/

PowerShell – Unassign/Delete the Windows 365 Cloud PC Provisioning Policy
https://askaresh.com/2022/10/14/powershell-unassign-delete-the-windows-365-cloud-pc-provisioning-policy/

PowerShell – Create a custom Windows 11 Enterprise (22H2) + Microsoft 365 Apps golden image for Windows 365 Cloud PC using Marketplace Image
https://askaresh.com/2022/12/01/powershell-create-a-custom-windows-11-enterprise-22h2-microsoft-365-apps-golden-image-for-windows-365-cloud-pc-using-marketplace-image/

PowerShell – Create Azure Network Connection (ANC) for Windows 365 Cloud PC
https://askaresh.com/2023/01/16/powershell-create-azure-network-connection-anc-for-windows-365-cloud-pc/

PowerShell – Create and Assign Windows 365 Cloud PC – User Settings
https://askaresh.com/2022/11/08/powershell-create-and-assign-windows-365-cloud-pc-user-settings/

PowerShell – Report – Get Cloud PC Windows 365 with low utilization
https://askaresh.com/2022/11/24/powershell-report-get-cloud-pc-windows-365-with-low-utilization/

I promise you once you have done the hard work, you can get up and running in a few hours using all the above PowerShell scripts with Windows 365 Cloud PC.

Here is the repo with all the scripts and more – askaresh/avdwin365mem (github.com). A big thanks to Andrew Taylor for collaborating and updating the provisioning policy script with the SSO details that were released in late November 2022.

I hope you will find this helpful information for all things PowerShell w.r.t Windows 365 Cloud PC. I will update the post if I publish or update more information.

Thanks,
Aresh Sarkari

PowerShell – Create Azure Network Connection (ANC) for Windows 365 Cloud PC

16 Jan

If you want to establish an Azure Network Connection (ANC) that allows communication between Windows 365 Cloud PCs and your existing Azure Virtual Network, keep following this post. Today, I will demonstrate the PowerShell method of creating the Azure Network Connection (ANC). Note that we need information from the Azure Portal, so make sure you have all the necessary details handy and/or involve the teams who can provide you with the Azure networking information.

Overview

  • Create the ANC first, before creating the Win365 Cloud Provisioning Policy (CPP)
  • If the ANC is pre-created, then during provisioning the Cloud PC desktops will be created on the Azure VNET in your desired subnet
  • Make sure you have a working DNS configured on the VNET that can communicate with your on-premises network using ExpressRoute or other Azure VNETs
  • Open the necessary firewall ports, based on your requirements, on the NSG or Azure Firewall for communication to your on-premises network using ExpressRoute or other Azure VNETs
  • Permissions
    • Intune Administrator in Azure AD
    • Cloud PC Administrator
    • Global Administrator
  • If you decide to alter or change the ANC, you will have to reprovision the Cloud PC, and it's a destructive activity. Make sure you architect it properly
  • You can delete your ANC; however, you will have to update your cloud provisioning policy with a new ANC first, and then you can delete the existing ANC.

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the required scopes to specify which APIs you want to authenticate against. If you are only doing read-only operations, I suggest you connect with "CloudPC.Read.All"; in our case, we are creating the ANC, so we need to change the scope to "CloudPC.ReadWrite.All".

#Read-only
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All" -NoWelcome

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All" -NoWelcome


Step 3 – Check the User account by running the following beta command.

#Beta User
PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Connect to Azure & Grab Details (Variable Region)

We are logging into Azure to grab all the details regarding the Resource Group, Subscription ID/Name, VNET, and subnets.

  • Connect to the Azure Portal using the necessary credentials
  • Select the Azure Subscription that holds all the networking information
  • A display name for the Azure Network Connection (ANC) – ANC-W365-Sub01
  • The join type of the ANC – azureADJoin
  • Resource Group ID of the existing resource group. You will have to enter the resource group name (W365-AVD-RG01), and it will get us the ID we need.
  • Name of the existing subnet within the VNET (W365Workload-Sub01), and it will get us the ID we need.
  • Name of the existing VNET used for the connection. You will have to enter the VNET name (W365-AVD-VNET01), and it will get us the ID we need.
  • Connect to the MS Graph API and ensure you have the necessary write permissions.
  • We are using the beta API for Cloud PC
# Connect to the Azure Subscription
Connect-AzAccount

# Get existing context
$currentAzContext = Get-AzContext

# Your subscription. This command gets your current subscription
$subscriptionID = $currentAzContext.Subscription.Id

# Your subscription. This command gets your current subscription name
$subscriptionName = $currentAzContext.Subscription.Name

# ANC Display Name
$ancdname = "ANC-W365-Sub01"

# Join type for the Azure Network Connection
# Two types, Azure AD and Hybrid: "azureADJoin" or "hybridAzureADJoin"
$ancjointype = "azureADJoin"

# Get your Win365 Resource Group id for RG Name - W365-AVD-RG01
# Put your RG Name
$win365RGID = Get-AzResourceGroup -Name "W365-AVD-RG01" | Select-Object -ExpandProperty ResourceId

# Get your Azure VNET id used for Windows 365 Cloud PC
# Put your VNET Name
$win365VNETID = Get-AzVirtualNetwork -Name "W365-AVD-VNET01" | Select-Object -ExpandProperty Id

# Get your Subnet ID within the Azure VNET for Windows 365 Cloud PC
# Put your VNET Name
$win365VNET = Get-AzVirtualNetwork -Name "W365-AVD-VNET01"

# Enter your Subnet Name
$win365SubID = Get-AzVirtualNetworkSubnetConfig -Name "W365Workload-Sub01" -VirtualNetwork $win365VNET | Select-Object -ExpandProperty Id

# Connect to MS Graph for Cloud PC W365
Connect-MgGraph -Scopes "CloudPC.ReadWrite.All"

We shall pass the above variables into the final ANC creation.

Create the Azure Network Connection

We are creating an Azure Network Connection that includes the following:

  • Display Name of the network – $ancdname
  • Azure Subscription ID – $subscriptionID
  • Azure Subscription Name – $subscriptionName
  • Type – There are two types; we are selecting Azure AD join – azureADJoin
  • Resource Group ID – The resource group within Azure – $win365RGID
  • Virtual Network ID – The VNET within Azure – $win365VNETID
  • Subnet ID – The subnet for W365 within VNET – $win365SubID
# Create the ANC for Windows 365 with AAD join type
try
{
write-host "Create the ANC for Windows 365 with AAD join type"
$params = @{
    displayName = "$ancdname"
    subscriptionId = "$subscriptionID"
    type = "$ancjointype"
    subscriptionName = "$subscriptionName"
    resourceGroupId = "$win365RGID"
    virtualNetworkId = "$win365VNETID"
    subnetId = "$win365SubID"
}

New-MgBetaDeviceManagementVirtualEndpointOnPremiseConnection -BodyParameter $params -Debug
}
catch
{
    Write-Host $_.Exception.Message -ForegroundColor Yellow
}

Final Script

Here I will paste the entire script block for seamless execution in a single run. Following is the link to my GitHub for this script – avdwin365mem/win365CreateANC at main · askaresh/avdwin365mem (github.com)

# Import module Az and MS Graph
Import-Module Az.Accounts
Install-Module Microsoft.Graph.Beta

# Connect to the Azure Subscription
Connect-AzAccount

# Get existing context
$currentAzContext = Get-AzContext

# Your subscription. This command gets your current subscription
$subscriptionID = $currentAzContext.Subscription.Id

# Your subscription. This command gets your current subscription name
$subscriptionName = $currentAzContext.Subscription.Name

# ANC Display Name
$ancdname = "ANC-W365-Sub01"

# Join type for the Azure Network Connection
# Two types, Azure AD and Hybrid: "azureADJoin" or "hybridAzureADJoin"
$ancjointype = "azureADJoin"

# Get your Win365 Resource Group id for RG Name - W365-AVD-RG01
# Put your RG Name
$win365RGID = Get-AzResourceGroup -Name "W365-AVD-RG01" | Select-Object -ExpandProperty ResourceId

# Get your Azure VNET id used for Windows 365 Cloud PC
# Put your VNET Name
$win365VNETID = Get-AzVirtualNetwork -Name "W365-AVD-VNET01" | Select-Object -ExpandProperty Id

# Get your Subnet ID within the Azure VNET for Windows 365 Cloud PC
# Put your VNET Name
$win365VNET = Get-AzVirtualNetwork -Name "W365-AVD-VNET01"

# Enter your Subnet Name
$win365SubID = Get-AzVirtualNetworkSubnetConfig -Name "W365Workload-Sub01" -VirtualNetwork $win365VNET | Select-Object -ExpandProperty Id

# Connect to MS Graph for Cloud PC W365
Connect-MgGraph -Scopes "CloudPC.ReadWrite.All"

# Create the ANC for Windows 365 with AAD join type
try
{
write-host "Create the ANC for Windows 365 with AAD join type"
$params = @{
    displayName = "$ancdname"
    subscriptionId = "$subscriptionID"
    type = "$ancjointype"
    subscriptionName = "$subscriptionName"
    resourceGroupId = "$win365RGID"
    virtualNetworkId = "$win365VNETID"
    subnetId = "$win365SubID"
}

New-MgBetaDeviceManagementVirtualEndpointOnPremiseConnection -BodyParameter $params -Debug
}
catch
{
    Write-Host $_.Exception.Message -ForegroundColor Yellow
}

I hope you will find this helpful information for creating Azure Network Connection using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

PowerShell – Create a Windows 11 Multi-session golden image for Azure Virtual Desktop using Marketplace Image

28 Nov

Do you want to deploy Azure Virtual Desktop host pools quickly and need a starting point for a golden image? Look no further than this blog post, where I will show you how to create a golden image using PowerShell in no more than 10 minutes.

I will break down the code block into smaller chunks first to explain the critical bits, and in the end, I will post the entire code block that can be run all at once. In this way, explaining block by block becomes easier than pasting one single block.

Pre-requisites

Following are the pre-requisites before you begin

  • PowerShell 5.1 and above
  • Azure Subscription
  • Permissions within the Azure Subscription for Azure Compute
  • Assumption
    • You have an existing Resource Group (RG)
    • You have an existing Azure Virtual Network (VNET)
    • You have an existing workload subnet within the VNET
    • Identify the VM Size you will be using for the golden image
  • Azure PowerShell Modules

Sign in to Azure

To start working with Azure PowerShell, sign in with your Azure credentials.

Connect-AzAccount

Identify the Windows 11 Multi-session (Marketplace Image)

There are many different versions of Windows 11 marketplace images from Microsoft. Let’s identify what is available within the gallery.

Get-AzVMImageSku -Location australiaeast -PublisherName MicrosoftWindowsDesktop -Offer windows-11

#Bonus Information

If you want the Multi-session gallery image with Office, then use the following command

Get-AzVMImageSku -Location australiaeast -PublisherName MicrosoftWindowsDesktop -Offer office-365

We are going to use the Windows 11 22H2 Multi-session image – win11-22h2-avd – within this script

Variable Region

Declare all the variables within this section. Let's take a look at what we are declaring within the script:

  • Existing Resource Group within the Azure Subscription (AZ104-RG)
  • A location where you are deploying this virtual machine (Australia East)
  • Name of the golden image virtual machine (VM03)
  • NIC Interface name for the virtual machine (VM03-nic)
  • RG of the VNET (In my case they are the same, AZ104-RG; they can be separate too, hence an independent variable)
  • Name of the existing subnet within the vNET (AZ104-VDI-Workload-L1)
  • Name of the existing VNET (AZ104-RG-vnet)
  • Mapping of the existing VNET
  • Mapping of the existing subnet
  • T-shirt size of the golden image we are deploying (Standard_D2s_v3)
  • Gallery details of the image
    • Publisher – MicrosoftWindowsDesktop
    • Offer – windows-11
    • SKU – win11-22h2-avd
    • Version – latest, of course
  • Get credentials – A local admin account is created on the golden image (an input box captures the username and password)
# Existing Resource Group to deploy the VM
$rgName = "AZ104-RG"

# Geo Location to deploy the VM
$location = "Australia East"

# Image template name
$vmName = "VM03"

# Network Interface Name for the VM
$nicName = "$vmName-nic"

# Resource Group for VNET
$vnetrgName = "AZ104-RG"

# Existing Subnet Name
$Existsubnetname = "AZ104-VDI-Workload-L1"

# Existing VNET Name
$Existvnetname = "AZ104-RG-vnet"

# Existing VNET where we are deploying this Virtual Machine
$vnet = Get-AzVirtualNetwork -Name $Existvnetname -ResourceGroupName $vnetrgName

# Existing Subnet within the VNET for this virtual machine
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $Existsubnetname -VirtualNetwork $vnet

# T-shirt size of the VM
$vmSize = "Standard_D2s_v3"

# Gallery Publisher of the Image - Microsoft
$publisher = "MicrosoftWindowsDesktop"

# Version of Windows 10/11
$offer = "windows-11"

# The SKUs ending with -avd are the multi-session images
$sku = "win11-22h2-avd"

# Choosing the latest version
$version = "latest"

# Setting up the Local Admin on the VM
$cred = Get-Credential `
   -Message "Enter a username and password for the virtual machine."

Execution block

The execution code block is within this section. Let's take a look at what we are executing within the script:

  • First, it creates the network interface for the virtual machine (VM03)
  • Next, under the variable $vm, all the virtual machine configurations:
    • T-shirt size of the virtual machine
    • Credentials for the local admin (username/password)
    • The network interface assignment along with the delete option (Note: the delete option is essential, or else deleting the VM will not delete the network interface)
    • The gallery image, SKU, and offer from the Microsoft Marketplace gallery
    • The OS disk assignment along with the delete option (Note: the delete option is essential, or else deleting the VM will not delete the disk)
    • The configuration for Trusted Launch, enabling vTPM and Secure Boot
    • The final command to create the virtual machine with all the above configurations
# Create New network interface for the virtual machine
$NIC = New-AzNetworkInterface -Name $nicName -ResourceGroupName $vnetrgName -Location $location -Subnet $subnet

# Creation of the new virtual machine with delete option for Disk/NIC together
$vm = New-AzVMConfig -VMName $vmName -VMSize $vmSize 

$vm = Set-AzVMOperatingSystem `
   -VM $vm -Windows `
   -ComputerName $vmName `
   -Credential $cred `
   -ProvisionVMAgent `
   -EnableAutoUpdate 

# Delete option for NIC
$vm = Add-AzVMNetworkInterface -VM $vm `
   -Id $NIC.Id `
   -DeleteOption "Delete"

$vm = Set-AzVMSourceImage -VM $vm `
   -PublisherName $publisher `
   -Offer $offer `
   -Skus $sku `
   -Version $version 

# Delete option for Disk
$vm = Set-AzVMOSDisk -VM $vm `
   -StorageAccountType "StandardSSD_LRS" `
   -CreateOption "FromImage" `
   -DeleteOption "Delete"

# The sauce around enabling the Trusted Platform
$vm = Set-AzVmSecurityProfile -VM $vm `
   -SecurityType "TrustedLaunch" 

# The sauce around enabling TPM and Secure Boot
$vm = Set-AzVmUefi -VM $vm `
   -EnableVtpm $true `
   -EnableSecureBoot $true 

New-AzVM -ResourceGroupName $rgName -Location $location -VM $vm

Final Script

Here I will paste the entire script block for seamless execution in a single run. Following is the link to my GitHub for this script – Create Virtual Machine with Trusted Platform and Delete disk/nic options.

# Step 1: Import module
#Import-Module Az.Accounts

# Connect to the Azure Subscription
#Connect-AzAccount

# Get existing context
$currentAzContext = Get-AzContext

# Your subscription. This command gets your current subscription
$subscriptionID=$currentAzContext.Subscription.Id

# Command to get the Multi-session Image in Gallery
# Details from this command will help in filling out variables below on Gallery Image
# Get-AzVMImageSku -Location australiaeast -PublisherName MicrosoftWindowsDesktop -Offer windows-11

# Existing Resource Group to deploy the VM
$rgName = "AZ104-RG"

# Geo Location to deploy the VM
$location = "Australia East"

# Image template name
$vmName = "VM03"

# Network Interface Name for the VM
$nicName = "$vmName-nic"

# Resource Group for VNET
$vnetrgName = "AZ104-RG"

# Existing Subnet Name
$Existsubnetname = "AZ104-VDI-Workload-L1"

# Existing VNET Name
$Existvnetname = "AZ104-RG-vnet"

# Existing VNET where we are deploying this Virtual Machine
$vnet = Get-AzVirtualNetwork -Name $Existvnetname -ResourceGroupName $vnetrgName

# Existing Subnet within the VNET for this virtual machine
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $Existsubnetname -VirtualNetwork $vnet

# T-shirt size of the VM
$vmSize = "Standard_D2s_v3"

# Gallery Publisher of the Image - Microsoft
$publisher = "MicrosoftWindowsDesktop"

# Version of Windows 10/11
$offer = "windows-11"

# The SKUs ending with -avd are the multi-session images
$sku = "win11-22h2-avd"

# Choosing the latest version
$version = "latest"

# Setting up the Local Admin on the VM
$cred = Get-Credential `
   -Message "Enter a username and password for the virtual machine."

# Create New network interface for the virtual machine
$NIC = New-AzNetworkInterface -Name $nicName -ResourceGroupName $vnetrgName -Location $location -Subnet $subnet

# Creation of the new virtual machine with delete option for Disk/NIC together
$vm = New-AzVMConfig -VMName $vmName -VMSize $vmSize 

$vm = Set-AzVMOperatingSystem `
   -VM $vm -Windows `
   -ComputerName $vmName `
   -Credential $cred `
   -ProvisionVMAgent `
   -EnableAutoUpdate 

# Delete option for NIC
$vm = Add-AzVMNetworkInterface -VM $vm `
   -Id $NIC.Id `
   -DeleteOption "Delete"

$vm = Set-AzVMSourceImage -VM $vm `
   -PublisherName $publisher `
   -Offer $offer `
   -Skus $sku `
   -Version $version 

# Delete option for Disk
$vm = Set-AzVMOSDisk -VM $vm `
   -StorageAccountType "StandardSSD_LRS" `
   -CreateOption "FromImage" `
   -DeleteOption "Delete"

# The sauce around enabling the Trusted Platform
$vm = Set-AzVmSecurityProfile -VM $vm `
   -SecurityType "TrustedLaunch" 

# The sauce around enabling TPM and Secure Boot
$vm = Set-AzVmUefi -VM $vm `
   -EnableVtpm $true `
   -EnableSecureBoot $true 

New-AzVM -ResourceGroupName $rgName -Location $location -VM $vm

Note – It will give you a pop-up box for entering the username and password for the local account, and in under 10 minutes you will see your virtual machine within the Azure portal.

Next Steps on Golden Image

Now that the virtual machine is ready following are the next steps involved:

  • Use the Azure Bastion console to install all the required applications
  • Generalize and sysprep the image, then shut it down
  • Capture the image to the Azure Compute Gallery
  • Deploy within the Azure Virtual Desktop

I hope you will find this information helpful for deploying a golden image virtual machine within Azure for your Azure Virtual Desktop host pools. If you want to see a PowerShell version of the host pool activities, leave me a comment below or on my socials. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

PowerShell – Report – Get Cloud PC Windows 365 with low utilization

24 Nov

In my previous post, I had demonstrated the new reports (in-preview) Windows 365 Cloud PC – New Reports – Connection quality & Low Utilization. Today, I will showcase how to generate the report of “Cloud PCs with low utilization” using PowerShell and MS Graph API with beta modules on Windows 365 Cloud PC.

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the required scopes to specify which APIs you want to authenticate against. If you are only doing read-only operations, I suggest you connect with "CloudPC.Read.All"; for generating this report, read access is sufficient, but you can use "CloudPC.ReadWrite.All" if you also plan to make changes.

#Read-only
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All"
Welcome To Microsoft Graph!

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All"
Welcome To Microsoft Graph!

Step 3 – Check the User account by running the following beta command.

#Beta User
PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Generate the report – Low Utilization

We are generating a report that will showcase the low utilization of the Cloud PC within your environment. This can help you decide to decommission the Cloud PC or send a notification to the end-user etc. – https://github.com/askaresh/avdwin365mem/blob/main/report-lowutilz-cloudpc

  • Building the bodyparameters:
    • Top – How many records you want to return (in the current example, it's 25)
    • Skip – Number of records to skip (in the current example, it's 0)
  • Filter
    • In my example, as it's a demo tenant, I am using the following filter to generate the report – TotalUsageInHour le 40 (usage of 40 hours or less)
  • It will provide the details of the Cloud PC Name, UPN, Total time connected and Days since last sign-in.
$params = @{
	Top = 25
	Skip = 0
	Filter = "(TotalUsageInHour le 40)"
	Select = @(
		"CloudPcId"
		"ManagedDeviceName"
		"UserPrincipalName"
		"TotalUsageInHour"
		"DaysSinceLastSignIn"
	)
}

Get-MgBetaDeviceManagementVirtualEndpointReportTotalAggregatedRemoteConnectionReport -BodyParameter $params

Note – You will have to enter the OutFile path where you want to save the report; in my example, C:\Temp\abc.csv

The actual report in the Intune Portal looks like the following – the same result is now available within the Value section of the CSV. (Note – The formatting of the output is rough; some Excel work will be required to format the data properly.)

I hope you will find this helpful information for generating low utilization report for Cloud PC using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

PowerShell – Create and Assign Windows 365 Cloud PC – User Settings

8 Nov

There are numerous posts that talk about creating the Windows 365 Cloud PC – User Settings. In this blog post, I will demonstrate how to create user settings using PowerShell and MS Graph API with beta modules on Windows 365 Cloud PC.

Connect to MS Graph API

Step 1 – Install the MS Graph Powershell Module

#Install Microsoft Graph Module
PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to the required scopes to specify which APIs you want to authenticate against. If you are only doing read-only operations, I suggest you connect with "CloudPC.Read.All"; in our case, we are creating the user settings, so we need to change the scope to "CloudPC.ReadWrite.All".

#Read-only
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All"
Welcome To Microsoft Graph!

OR

#Read-Write
PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All"
Welcome To Microsoft Graph!
Permissions for MS Graph API

Step 3 –  Check the User account by running the following beta command.

#Beta User Check
PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

Create User Settings

We are creating a user settings policy that involves the following: (avdwin365mem/win365CreateUsrSetting at main · askaresh/avdwin365mem (github.com))

  • Display Name of the setting – CPC-UserSettings02
  • Local Admin – No (#Highly recommend not to enable local admin on Cloud PCs)
  • Allow user to initiate restore service – Yes (#This will allow them to restore from the Windows 365 app/browser)
  • Frequency of backup – 6 hours (#Set whatever your requirements call for)
  • Note – Post creation of the user settings, you need to add the assignment of an AAD group
$params = @{
	"@odata.type" = "#microsoft.graph.cloudPcUserSetting"
	DisplayName = "CPC-UserSettings02"
	SelfServiceEnabled = $false
	LocalAdminEnabled = $false
	RestorePointSetting = @{
		FrequencyInHours = 6
		UserRestoreEnabled = $true
	}
}

New-MgBetaDeviceManagementVirtualEndpointUserSetting -BodyParameter $params

Powershell Output

Settings will show up in the MEM/Intune Portal

Assign User Settings

Now that we have the User Settings created, it's time to assign them to an AAD group. We need to follow this procedure:

AAD Group (Copy – Object ID)

I have an existing AAD (Azure Active Directory) group called "Win365-Users", and I plan to use this group for assignment to these User Settings. The important step here is to make a note of the "Object ID" of the AAD group you are planning to assign. Please make sure you copy this ID.

User Settings (Copy ID)

Copy the ID of the previously created User Settings. We need this ID for the assignment. Use the command – Get-MgDeviceManagementVirtualEndpointUserSetting | FT. Note: if there are multiple Cloud PC user settings, select the relevant ID.

Assign the AAD Group to the User Settings

We are assigning the provisioning policy that involves the following: (avdwin365mem/win365AssignUsrSetting at main · askaresh/avdwin365mem (github.com))

  • ID – The existing Cloud PC User Settings ID
  • GroupID – The Azure AD group which has the end-users/license to be assigned to the policy
  • Within the variable, enter the value of User Settings ID $cloudPcUserSettingId
$cloudPcUserSettingId = "ed7271e3-8844-XXXX-XXXX-9bc8bd70da4c"

$params = @{
	Assignments = @(
		@{
			Id = "ed7271e3-8844-XXXX-XXXX-9bc8bd70da4c"
			Target = @{
				"@odata.type" = "microsoft.graph.cloudPcManagementGroupAssignmentTarget"
				GroupId = "01eecc64-c3bb-XXXX-XXXX-bafb18feef12"
			}
		}
	)
}

Set-MgBetaDeviceManagementVirtualEndpointUserSetting -CloudPcUserSettingId $cloudPcUserSettingId -BodyParameter $params

AAD group assigned within MEM Portal

I hope you will find this helpful information for creating/assigning the user settings using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

PowerShell – Unassign/Delete the Windows 365 Cloud PC Provisioning Policy

14 Oct

Please check out my earlier blog post on PowerShell – Create Windows 365 Cloud PC Provisioning Policy and PowerShell – Assign a AAD group to the Windows 365 Cloud PC Provisioning Policy. This is the last part in the series where we will delete the Windows 365 Cloud PC Provisioning Policy via PowerShell.

A safety feature within the MEM Portal for Windows 365 Cloud PC Provisioning Policies is that, within the UI, the delete option is greyed out when you try to delete the policy. The only way to delete the policy is to remove the Assignment Group (the AAD group assigned to the policy) and then delete the provisioning policy within the UI. The motive of this blog series is PowerShell actions, so we will perform both actions using that method.

Provisioning Policy (Copy ID)

We need the Windows 365 Provisioning Policy ID to perform the AAD (Azure Active Directory) group un-assignment and delete operations, so we need to copy this ID. Simply use the cmdlet – Get-MgDeviceManagementVirtualEndpointProvisioningPolicy. Note: if there are multiple Cloud PC policies, select the ID that is relevant for deletion.

Un-assign AAD Group from the Provisioning Policy

The only way to delete the CPC – Provisioning policy is to remove the AAD group assignment, and it involves the following: avdwin365mem/win365DeleteProvPolicy at main · askaresh/avdwin365mem (github.com)

  • ID – The existing Cloud PC Provisioning Policy ID
  • Load the $params variable first before running the Set-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy cmdlet
  • Copy/Paste the Prov policy ID within -CloudPcProvisioningPolicyId
$params = @{
	"@odata.type" = "#microsoft.graph.cloudPcProvisioningPolicyAssignment"
	Assignments = @(
		@{
			Id = "6d54435b-74cd-XXXX-XXXX-7d9b5cc0a78d"
		}
	)
}
Set-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy -CloudPcProvisioningPolicyId "6d54435b-74cd-XXXX-XXXX-7d9b5cc0a78d" -BodyParameter $params

Delete the Provisioning Policy

Now that the AAD Group has been un-assigned, it's time to delete the Cloud PC Provisioning Policy.

Remove-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy -CloudPcProvisioningPolicyId "6d54435b-74cd-4722-9ab7-7d9b5cc0a78d"

I hope you will find this helpful information for the un-assignment & deletion of the CloudPC provisioning policy using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

PowerShell – Assign a AAD group to the Windows 365 Cloud PC Provisioning Policy

12 Oct

If you haven’t looked at my previous blog on PowerShell – Create Windows 365 Cloud PC Provisioning Policy, please check that out first. After creating the Cloud PC provisioning policy, the next step is to assign the Azure AD Group, which has the end-users and Windows 365 license assigned.

AAD Group (Copy – Object ID)

I have an AAD (Azure Active Directory) group called “Win365-Users” and assigned the Windows 365 Cloud PC Enterprise license. The important step here is to make a note of the “Object ID” of the AAD group you are planning to assign. Please make sure you copy this ID.

AAD Group

Provisioning Policy (Copy ID)

In the previous blog, when we created the Cloud PC provisioning policy, Azure assigned it an ID. We need to copy this ID for the assignment. Simply use the cmdlet – Get-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy. Note: if there are multiple Cloud PC policies, select the relevant ID.

PowerShell Output

Assign Provisioning Policy

We are assigning the provisioning policy that involves the following: (avdwin365mem/win365AssignProvPolicy at main · askaresh/avdwin365mem (github.com))

  • ID – The existing Cloud PC Provisioning Policy ID
  • GroupID – The Azure AD group which has the end-users/license to be assigned to the policy
  • Copy/Paste the Prov policy ID within -CloudPcProvisioningPolicyId
$params = @{
	"@odata.type" = "#microsoft.graph.cloudPcProvisioningPolicyAssignment"
	Assignments = @(
		@{
			Id = "6d54435b-74cd-XXXX-XXXX-7d9b5cc0a78d"
			Target = @{
				"@odata.type" = "microsoft.graph.cloudPcManagementGroupAssignmentTarget"
				GroupId = "01eecc64-c3bb-XXXX-XXXX-bafb18feef12"
			}
		}
	)
}

Set-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy -CloudPcProvisioningPolicyId "6d54435b-74cd-XXXX-XXXX-7d9b5cc0a78d" -BodyParameter $params

Assignment is created

I hope you will find this helpful information for the assignment of the AAD group to a CloudPC provisioning policy using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari