Windows 365 – Report – Connected Frontline Cloud PCs

8 Sep

In the Aug 28, 2023 release for Windows 365 Enterprise, the reports for Connected Frontline Cloud PCs were announced. In today’s post, I will showcase how to access and make use of the new report available within Microsoft Intune.

What does the report offer?

The primary aim of the Connected Frontline Cloud PCs report is to provide clarity on concurrent connections based on each frontline Cloud PC. This is crucial for businesses and IT Admins to understand their usage patterns and ensure they have the correct number of licenses. By analyzing the maximum concurrent connections, we can determine if there’s a need to acquire more licenses. This ensures that end users have uninterrupted access to their Frontline Cloud PCs. You can read more about Frontline Cloud PC provisioning in my previous blog post – PowerShell – Frontline Workers – Create Windows 365 Cloud PC Provisioning Policy | AskAresh

Accessing the report – Connected Frontline Cloud PCs

To view the report in Microsoft Intune portal, you can follow these steps:

  • Login to the Microsoft Intune admin center.
  • If you don’t have the necessary permissions to manage Windows 365, you will need these two roles to view the report: SharedUseLicenseUsageReport and SharedUseServicePlans.
  • Go to Devices > Overview > Cloud PC performance (preview) > You will find the report name Connected Frontline Cloud PCs. > Select View report

I have two Windows 365 Frontline licenses in my lab and four provisioned Cloud PCs. However, a maximum of two Cloud PCs can be connected and worked on simultaneously.

Scenario 1 – One Frontline Cloud PC connected out of my total two licenses

There is no warning here when one frontline Cloud PC is connected.

Scenario 2 – Two Frontline Cloud PCs connected out of my total two licenses

You can see a warning stating you have reached the concurrency limit. A third Frontline Cloud PC session will not be allowed.

Concurrent Frontline Cloud PC Connections

This is how the overall report looks when you click on the Cloud PC Size. The report aggregates data for the last 28 days and showcases:

  • The current number of connected Frontline Cloud PCs.
  • The maximum number of connected Frontline Cloud PCs for each day within the selected range (either 7 or 28 days).
  • The maximum concurrency limit.
  • Warnings when nearing or reaching the maximum concurrency limit.
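For intuition, the peak-concurrency figure the report surfaces can be reproduced with a simple sweep-line count over connection logs. The sketch below uses made-up session timestamps purely for illustration; it is not how the report is implemented, just the same arithmetic:

```python
from datetime import datetime

# Hypothetical Frontline session log: (connect_time, disconnect_time) pairs.
sessions = [
    (datetime(2023, 9, 8, 9, 0), datetime(2023, 9, 8, 12, 0)),
    (datetime(2023, 9, 8, 10, 0), datetime(2023, 9, 8, 11, 0)),
    (datetime(2023, 9, 8, 13, 0), datetime(2023, 9, 8, 14, 0)),
]

def max_concurrency(sessions):
    """Sweep-line count of the peak number of simultaneous sessions."""
    events = []
    for start, end in sessions:
        events.append((start, 1))   # a connect raises the count
        events.append((end, -1))    # a disconnect lowers it
    events.sort()
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

print(max_concurrency(sessions))  # 2 for the sample data above
```

If the peak repeatedly equals your license count, that is the signal to acquire more licenses before users start being turned away.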

It’s worth noting that this report is tailored for Windows 365 Frontline. If a business hasn’t purchased any Windows 365 Frontline licenses, the report will remain empty.

I hope you will find this helpful information for estimating and tracking the frontline worker concurrent usage via this report. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

Watermarking & Session Capture Protection in Azure Virtual Desktop using Microsoft Intune and Azure AD Joined devices

31 Aug

In the July 2023 release for Azure Virtual Desktop, the Watermarking and Session capture protection features became generally available. Numerous blog posts already showcase how to enable the feature using Group Policy. In today’s post, I will showcase how to enable Watermarking and Session Capture protection using Microsoft Intune for Session Host Virtual machines that are Azure AD joined.

Prerequisites

You’ll need the following things ready before you can roll out watermarking/session capture protection:

  • Azure Virtual Desktop: You must have Azure Virtual Desktop deployed (Pooled or Personal Desktops) and set up in your Azure environment.
  • Microsoft Intune: You should have an active subscription to Microsoft Intune, which is a cloud-based service that enables device management and security. The role within Intune Portal for creating and assigning the configuration profiles is – Policy and Profile manager built-in role-based access control (RBAC) role.
  • Azure Active Directory: Your Azure Virtual Desktop environment should be integrated with Azure Active Directory (AD) (the host pool’s RDP property – targetisaadjoined:i:1). An AAD security group containing the AVD session hosts as members must also be in place.
  • Azure AD Joined Devices: The session host virtual machines (VMs) you want to enable Watermarking and Session Capture protection for should be Azure AD joined. This means they must be connected to Azure AD and registered as members of your organization’s directory.
  • Admin Access: You need administrative access to the Azure portal (https://portal.azure.com) and Microsoft Intune (https://intune.microsoft.com).
  • Windows 11 operating system for the client, along with the Azure Virtual Desktop Client or Remote Desktop Client version 1.2.x or higher.

Configuration Profiles – Intune

To enable the Watermarking and Session Capture protection features in Azure Virtual Desktop using Microsoft Intune Configuration profiles and Azure AD joined devices, you can follow these steps:

  • Create a new configuration profile: Devices > Configuration profiles > Create profile > Platform: Windows 10 and later > Profile type: Settings catalog.
  • In the settings picker, browse to Administrative templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Azure Virtual Desktop. You should see settings in the Azure Virtual Desktop subcategory available for you to configure, such as “Enable watermarking” and “Enable screen capture protection”.
  • Select the “Enable screen capture protection” setting, too, and leave the values at their defaults (feel free to tweak them based on your requirements).
  • Assign the configuration to the AAD group that contains all the session host devices.
  • Reboot the session hosts after applying, or wait until the next maintenance cycle.

Client Validation

Connect to a remote session with a supported client (Azure Virtual Desktop Client or Remote Desktop Client version 1.2.x or later), where you should see QR codes appear.

The QR code only works for Windows 11 Multi-session/Windows 11 Enterprise (pooled or personal desktops). RemoteApps will not show the QR code, as it’s not supported.

Screenshot protection – If you try to take a screenshot inside the session, the capture will be completely blank. Below is an example: I tried to take a screenshot of a text file, and the resulting screenshot was completely blank.

Mobile Phone Photo

When you try to take a photo of the screen with a mobile phone, this is how it will look; the QR code carries the Connection ID, which you can then match in Azure Virtual Desktop Insights.

Azure Virtual Desktop Insights

To find out the session information from the QR code by using Azure Virtual Desktop Insights:

  1. Open a web browser and go to https://aka.ms/avdi to open Azure Virtual Desktop Insights. Sign-in using your Azure credentials when prompted.
  2. Select the relevant subscription, resource group, host pool and time range, then select the Connection Diagnostics tab.
  3. In the section Success rate of (re)establishing a connection (% of connections), there’s a list of all connections showing First attempt, Connection Id, User, and Attempts. You can look for the connection ID from the QR code in this list, or export the list to Excel.
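If you prefer querying the Log Analytics workspace directly, a KQL query along these lines should surface the same connection. This assumes your host pool sends its diagnostics to Log Analytics (the basis of AVD Insights), where the WVDConnections table holds one row per connection state change:

```
WVDConnections
| where CorrelationId == "<connection ID from the QR code>"
| project TimeGenerated, UserName, SessionHostName, State
```

Replace the placeholder with the Connection ID read from the QR code photo.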

I hope you will find this helpful information for getting started with Watermarking and Screenshot protection for the Azure Virtual Desktop – Session Host. Please let me know if I have missed any steps or details, and I will be happy to update the post.

Thanks,
Aresh Sarkari

Microsoft Intune – Add additional DNS Client Servers across the managed devices

24 Aug

I recently wrote a blog post about adding DNS Client via GPO, highlighting which methods work and which don’t. If you’re interested, you can read more about it on – GPO – PowerShell – Intune – Add additional DNS Client Servers across the enterprise | AskAresh. As promised, here are the steps for performing the same task in Microsoft Intune for all of your managed devices.

Note – The best method of assigning the DNS Servers is through the DHCP server. If you are setting the IP using DHCP, always make sure you add/remove additional DNS Client Servers from there. In my situation, there was no DHCP server, hence the detailed blog post.

Prerequisites

We are going to implement this configuration via Microsoft Intune using the Scripts:

  • The necessary Microsoft Intune permissions to create and assign PowerShell scripts.
  • A device group available within Microsoft Entra containing all the devices you want to target with this change.

    PowerShell Script for DNSClient (Additional DNS Servers)

    Save the below script to your desktop; we shall upload it to the Microsoft Intune portal as “AddDNSClient.ps1”.

    • Please enter the proper DNS server addresses within the script based on your environment and requirements. In the example below, the two existing DNS servers are 8.8.8.8 and 8.8.8.4, and we are adding two additional DNS servers, 9.9.9.9 and 9.9.9.4.
    # Find all network interfaces whose DNS list contains one of the existing servers
    $dnsclient = Get-DnsClient | Get-DnsClientServerAddress | Where-Object { $_.ServerAddresses -contains "8.8.8.8" -or $_.ServerAddresses -contains "8.8.8.4" }
    # Re-apply the full list: the two existing servers plus the two new ones
    foreach ($nic in $dnsclient) {
        Set-DnsClientServerAddress -InterfaceIndex $nic.InterfaceIndex -ServerAddresses ("8.8.8.8","8.8.8.4","9.9.9.9","9.9.9.4")
    }
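To make the filter-then-set flow above explicit, here is the same logic as a plain-Python sketch over made-up data (the interface names and addresses are illustrative only, not real NIC output):

```python
# Stand-in for the PowerShell above: keep interfaces whose current DNS list
# contains one of the old servers, then assign the full four-server list.
OLD_SERVERS = {"8.8.8.8", "8.8.8.4"}
NEW_LIST = ["8.8.8.8", "8.8.8.4", "9.9.9.9", "9.9.9.4"]

def update_dns(interfaces):
    """interfaces: dict of interface name -> list of DNS servers."""
    updated = {}
    for name, servers in interfaces.items():
        if OLD_SERVERS & set(servers):      # same test as the Where-Object clause
            updated[name] = NEW_LIST[:]     # same effect as Set-DnsClientServerAddress
        else:
            updated[name] = servers         # untouched, just like the script
    return updated

nics = {"Ethernet0": ["8.8.8.8", "8.8.8.4"], "Ethernet1": ["1.1.1.1"]}
print(update_dns(nics))
```

Note that interfaces not pointing at the old servers are deliberately left alone, so a VPN adapter with its own DNS servers is not disturbed.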

    Create a script policy and assign it – Intune

    1. Sign in to the Microsoft Intune admin center.
    2. Select Devices > Scripts > Add > Windows 10 and later.
    3. In Basics, enter the following properties, and select Next:
      • Name: AddDNSClientServers
      • Description: Additional DNS Server 3 & 4
    4. In Script settings, enter the following properties, and select Next:
      • Script location: Browse to the PowerShell script saved previously and upload it (AddDNSClient.ps1).
      • Run this script using the logged on credentials: Select No.
      • Enforce script signature check: Select No 
      • Run script in 64-bit PowerShell host: Select Yes to run the script in a 64-bit PowerShell host on a 64-bit client architecture.
    5. Select Assignments > Select groups to include. Add the AAD group “Win11-P-DG”

    Wait for approx. 15-20 minutes and the policy will apply to the managed devices. (Machine Win11-Intune-15)

    Managed Device

    You can validate that the settings have been applied to the client by going to the path – C:\ProgramData\Microsoft\IntuneManagementExtension\Logs and opening the file IntuneManagementExtension.txt. I copied the policy ID – cf09649b-78b7-4d98-8bcc-b122c29e5527 from the Intune portal hyperlink and searched within the log file. We can see the policy has been applied successfully.
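If you need to repeat this lookup often, it is easy to script. The sketch below searches log text for the policy GUID; the sample log line is illustrative only (the real file on a managed device lives under C:\ProgramData\Microsoft\IntuneManagementExtension\Logs):

```python
POLICY_ID = "cf09649b-78b7-4d98-8bcc-b122c29e5527"  # copied from the Intune portal hyperlink

def find_policy_lines(text, policy_id):
    """Return every log line mentioning the policy GUID (case-insensitive)."""
    needle = policy_id.lower()
    return [line for line in text.splitlines() if needle in line.lower()]

# On a managed device you would pass Path(...).read_text(errors="ignore") instead.
sample = "...\n<![LOG[Processing policy cf09649b-78b7-4d98-8bcc-b122c29e5527]LOG]!>\n..."
print(find_policy_lines(sample, POLICY_ID))
```

The same function works against any exported copy of the log, which is handy when collecting logs from several machines.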

    I hope you will find this helpful information for applying additional DNS servers via Intune – Scripts and PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    VMware App Volumes – Writable Volumes – Third-party Application Exclusions (snapvol.cfg)

    17 Aug

    Over the years, I’ve discovered a list of exclusions that can help with the smooth functioning of VMware App Volumes – Writable Volumes. Though these exclusions are just suggestions, each environment is unique, so take them at your own risk. Testing them in your environment before implementing them in production is essential.

    Path/Process/File Exclusion (Snapvol.cfg)

    In this blog, I am not outlining the steps for adding the snapvol.cfg exclusions, as my ex-colleague Daniel Bakshi covers the step-by-step process in a VMware blog post. I hope you will find this information useful if you encounter intermittent black screen issues.

    Cisco AnyConnect

    VPN – Cisco AnyConnect Secure Mobility Client v4.x – Cisco

    #Cisco AnyConnect Exclusions
    exclude_process_name=vpnui.exe
    exclude_process_name=vpnagent.exe
    exclude_registry=\REGISTRY\MACHINE\SYSTEM\CurrentControlSet\Services\vpnagent

    Crowdstrike Falcon Agent

    Falcon Agent – The CrowdStrike Falcon® platform

    #Crowdstrike Exclusions
    exclude_process_name=CSFalconService.exe
    exclude_process_name=CSFalconContainer.exe

    McAfee Antivirus (now Trellix)

    Antivirus Software – Trellix | Revolutionary Threat Detection and Response

    #McAfee Provided Exclusions as per McAfee KB89553
    exclude_process_path=\Program Files\McAfee
    exclude_process_path=\Program Files\Common Files\McAfee
    exclude_process_path=\Program Files (x86)\McAfee
    exclude_process_path=\Program Files (x86)\Common Files\McAfee
    exclude_process_path=\ProgramData\McAfee
    exclude_path=\Windows\Temp\McAfeeLogs
    exclude_process_name=MER.exe
    exclude_process_name=webMERclient.exe
    exclude_process_name=amtrace.exe
    exclude_process_name=etltrace.exe
    exclude_process_name=procmon.exe
    exclude_process_name=procmon64.exe

    Zscaler Client Connector

    Zero trust client – Zscaler Client Connector

    #Zscaler Exclusions
    exclude_path=\ProgramData\Zscaler
    exclude_path=\Program Files (x86)\Zscaler

    BlueYonder Application

    Popular supply chain applications – Blue Yonder | World’s Leading Supply Chain Management Solutions

    #Exclusions to resolve issues with BlueYonder
    exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Classes\ProFloor.Application
    exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Classes\ProSpace.Application
    exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Classes\WOW6432Node\CLSID\{22FBECF5-10A3-11D2-9194-204C4F4F5020}
    exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Classes\WOW6432Node\CLSID\{5E77716A-9680-4C0D-883E-74D49A2F4456}

    VMware DEM

    VMware Dynamic Environment Manager – Dynamic Environment Manager | Profile Management | VMware | AU

    #VMware DEM
    exclude_registry=\REGISTRY\MACHINE\SOFTWARE\VMware, Inc.\VMware UEM

    I hope you will find this helpful information for applying exclusions within the snapvol.cfg file. Please let me know if I have missed any steps or details, and I will be happy to update the post. I will gladly add more exclusions if you want to share them in the comments section.

    Thanks,
    Aresh Sarkari

    GPO – PowerShell – Intune – Add additional DNS Client Servers across the enterprise

    16 Aug

    Let’s say your entire Windows fleet of Windows Server 2016/2019/2022 member servers and Windows 11 Pro/Enterprise clients uses DNS Server 1 and Server 2 within their TCP/IP properties, and now you decide to add DNS Server 3 and Server 4 to increase resiliency.

    In this blog post, I will demonstrate how you can add the additional DNS servers using a Group Policy Object and PowerShell within your enterprise.

    What doesn’t work?

    Don’t waste your time here – the GPO Computer Configuration > Administrative Templates > Network > DNS Client > DNS Servers doesn’t work. The “Supported On” version doesn’t include Windows Server 2016/Windows 10 in the compatibility list. Even if you apply this GPO, the setting lands in the registry, but there will be no visible change under the TCP/IP properties.

    Prerequisites

    We are going to implement this configuration via group policy object within the enterprise:

    • The necessary active directory permissions to create, apply and link the GPOs
    • Access to the Sysvol folder to store the script
    • WMI Filters to target the script\GPO to specific subnets (More details below)

    PowerShell Script for DNSClient (Additional DNS Servers)

    Save the below script and place it within the location – \\DOMAINNAME\SYSVOL\DOMAINNAME\scripts\SetDNSAddress.ps1

    • Please enter the proper DNS Server Address within the script based on your environment and requirements.
    # Find all network interfaces whose DNS list contains one of the existing servers
    $dnsclient = Get-DnsClient | Get-DnsClientServerAddress | Where-Object { $_.ServerAddresses -contains "192.168.0.3" -or $_.ServerAddresses -contains "192.168.0.4" }
    # Re-apply the full list: the two existing servers plus the two new ones
    foreach ($nic in $dnsclient) {
        Set-DnsClientServerAddress -InterfaceIndex $nic.InterfaceIndex -ServerAddresses ("192.168.0.3","192.168.0.4","192.168.0.5","192.168.0.6")
    }

    Create the GPO (Additional DNS Servers)

    On a member server with administrative privileges, press Win + R to open the Run box. Type gpmc.msc and press Enter to open the Group Policy Management Console.

    • In the GPMC, expand the forest and domain trees on the left pane to locate the domain you want to create the GPO in.
    • Right-click on “Group Policy Objects” under the domain and select “New” to create a new GPO.
    • In the “New GPO” dialog box, provide a name for the GPO (e.g., “Additional DNS Servers”) and click “OK”.
    • Right-click on the newly created GPO and select “Edit” to open the Group Policy Management Editor.
    • Navigate to Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks
    • Right-click on Scheduled Tasks and configure the task as an Immediate Task.
    • Give it a name – SetDNSClient.
    • Set the user account as SYSTEM; it will automatically convert into NT AUTHORITY\System.
    • Check “Run with highest privileges”.
    • In the Actions tab, create a new “Start a program” action.
    • Set Program to: PowerShell.exe
    • Set Add arguments to the following line, modifying the network share and file name for your environment: -ExecutionPolicy Bypass -Command "& \\DOMAINNAME\SYSVOL\DOMAINNAME\scripts\SetDNSAddress.ps1"
    • On the Common tab, check “Apply once and do not reapply”.

    Bonus Tip – WMI Filters

    You may want to target the GPO at a specific set of member servers whose IP range starts with a particular address. You can create a WMI filter such as the one below to target only the computers in that range. In the example below, the GPO will apply to machines whose IP address starts with 192.168. or 192.169.

    Select * FROM Win32_IP4RouteTable
    WHERE (Mask='255.255.255.255'
    AND (Destination Like '192.168.%' OR Destination Like '192.169.%'))
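To see what the filter actually tests, here is the equivalent logic as a plain-Python sketch over a made-up route table. The host route with mask 255.255.255.255 corresponds to the machine’s own IP address, which is why the filter selects machines by their address prefix:

```python
# Illustrative copy of Win32_IP4RouteTable rows: (Destination, Mask) pairs.
routes = [
    ("192.168.1.15", "255.255.255.255"),  # host route for the machine's own IP
    ("10.0.0.7", "255.255.255.255"),
    ("192.168.0.0", "255.255.255.0"),     # network route, ignored by the filter
]

def gpo_applies(routes, prefixes=("192.168.", "192.169.")):
    """Same test as the WQL: a host route whose destination starts with a prefix."""
    return any(
        mask == "255.255.255.255" and dest.startswith(prefixes)
        for dest, mask in routes
    )

print(gpo_applies(routes))  # True: 192.168.1.15 matches
```

The `Like '192.168.%'` pattern in WQL is a prefix match, just like `startswith` here.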

    Intune (Configuration Profiles – Doesn’t Work)

    As of writing this blog post, the Intune built-in setting/CSP shows the same behaviour as the DNS Server GPO – it doesn’t work.

    CSP

    Under both approaches (CSP & ADMX templates), the report says the policy applied successfully. However, there is no visible impact on the operating system’s TCP/IP properties. I am optimistic that the same result can be achieved in Intune using the Scripts method and PowerShell. Please let me know in the comments section if you got it working, or if you would like to see a blog post on using Intune scripts to set the DNS client on member servers.

    Following are the references and important links worth going through for more details:

    Description – Link
    Static DNS Servers via GPO – Update DNS static servers in your local Network (itdungeon.blogspot.com)
    DNS Server GPO doesn’t work – DNS Server GPO Settings Invisible in IPConfig – CB5 Solutions LLC (cbfive.com)

    I hope you will find this helpful information for applying additional DNS servers via the GPO and PowerShell. I want to thank my friend Eqbal Hussian for his assistance and additional rounds of testing/validation. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    PowerShell – GPO Analysis – Search for a specific or list of GPO Setting across multiple GPOs within a domain

    20 Jul

    Suppose you’ve ever had to search for a particular or a list of GPO settings across a large number of Group Policy Objects (GPOs) within your domain. In that case, you know how tedious it can be to find specific settings across hundreds or thousands of GPOs. PowerShell comes to the rescue with a powerful script that can search for GPO settings across all your existing GPOs and generate an organized CSV output. In this blog post, we’ll walk you through the process and ensure you have all the prerequisites to get started.

    Use case

    You have approx. 50 to 60 GPO settings from the Center for Internet Security (CIS) benchmark documents (CIS Microsoft Windows Desktop Benchmarks/CIS Microsoft Windows Server Benchmarks) that you want to check against your domain: are they already configured in an existing GPO, or not present in the environment at all? Instead of searching manually one by one, you can use the PowerShell below to get results like a champion.

    Prerequisites

    Before using the PowerShell script, ensure you have the following prerequisites in place:

    1. Windows PowerShell version 5.0 and above
    2. Active Directory Module for Windows PowerShell
    3. Permissions: Ensure you have sufficient permissions to access and analyze GPO settings. Typically, you need to be a member of the Domain Administrators group or have equivalent privileges.
    4. Execute the script from a member server that is part of the domain and has the necessary permissions.
    5. Prepare the input file (inputgpo.txt): enter one GPO setting per line and save the file. In my situation, it’s present in C:\Temp:
    Relax minimum password length limits
    Allow Administrator account lockout
    Generate security audits
    Impersonate a client after authentication
    Lock pages in memory
    Replace a process level token
    Accounts: Block Microsoft accounts
    Interactive logon: Machine inactivity limit
    Microsoft network server: Server SPN target name validation level
    Network access: Remotely accessible registry paths
    Network security: Configure encryption types allowed for Kerberos
    Audit Security State Change
    Do not allow password expiration time longer than required by policy
    Password Settings: Password Complexity
    Password Settings: Password Length
    Password Settings: Password Age (Days)

    PowerShell Script

    Now that you have the prerequisites in place, let’s dive into the PowerShell script. GitHub – avdwin365mem/GPOSettingsSearch at main · askaresh/avdwin365mem (github.com)

    • Enter the name of your domain (e.g. askaresh.com)
    • Make sure the Input file is present in C:\Temp
    #Domain
    $DomainName = "askaresh.com"
    
    # Initialize matchlist
    $matchlist = @()
    
    # Collect all GPOs
    $GPOs = Get-GPO -All -Domain $DomainName
    
    # Read search strings from text file
    # A list of GPO settings you want to search for
    $SearchStrings = Get-Content -Path "C:\Temp\inputgpo.txt"
    
    # Generate each GPO's XML report once and cache it; Get-GPOReport is slow,
    # so avoid re-running it for every search string
    $GPOReports = @{}
    foreach ($gpo in $GPOs) {
        $GPOReports[$gpo.Id] = Get-GPOReport -Guid $gpo.Id -ReportType Xml
    }
    
    # Hunt through each cached GPO report for each search string
    foreach ($searchString in $SearchStrings) {
        $found = $false
        # Escape the setting name so characters such as ( ) are matched literally
        $pattern = [regex]::Escape($searchString)
        foreach ($gpo in $GPOs) {
            if ($GPOReports[$gpo.Id] -match $pattern) {
                $match = New-Object PSObject -Property @{
                    "SearchString" = $searchString
                    "GPOName" = $gpo.DisplayName
                }
                $matchlist += $match
                $found = $true
            }
        }
        if (-not $found) {
            $match = New-Object PSObject -Property @{
                "SearchString" = $searchString
                "GPOName" = "No results found"
            }
            $matchlist += $match
        }
    }
    
    # Output results to CSV, Search results
    
    # This step can take time depending on how many hundreds or thousands of policies are present in the environment
    $matchlist | Export-Csv -Path "C:\Temp\gposearch.csv" -NoTypeInformation
    

    Output (Results)

    The output will look like the following within the CSV:

    I hope you will find this helpful information for searching GPO settings across 100’s and 1000’s of GPOs within your domain. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    Extract Highlighted Data from PDF using Python – Example CIS Windows Server 2022 Benchmark pdf

    10 Jul

    In this blog post, we will explore how to extract highlighted data from a PDF using Python. Before we go ahead, let’s understand the use case: you have the CIS_Microsoft_Windows_Server_2022_Benchmark_v2.0.0.pdf, which is 1065 pages, and you are reviewing the policies against your environment and highlighting the PDF with specific color codes. For example, I use four colors for the following purposes:

    • Red Color – Missing Policies
    • Yellow Color – All existing policies
    • Pink – Policies not applicable
    • Green – Upgraded policies

    Example of the highlighted text:

    You don’t have to use the same color codes as I have done, but you get the idea. Once you have done the heavy lifting of reviewing the document and are happy with the analysis, the next step is to extract the highlighted data into a CSV format so that the teams can review and action it.

    Prerequisites

    We will use the PyMuPDF & Pandas library to parse the PDF file and extract the highlighted text. Additionally, we will apply this technique to the CIS Windows Server 2022 Benchmark PDF as an example.

    Before we begin, make sure you have installed the necessary dependencies. You can install PyMuPDF (which is imported as fitz) and Pandas using pip:

    pip install PyMuPDF
    pip install pandas

    First, I created a small script to go through the PDF and detect the highlight colors. I had to do this because, although to my eyes the colors are red, yellow, etc., the RGB color codes turn out to be slightly different.

    import fitz  # PyMuPDF
    
    # Open the PDF
    doc = fitz.open('CIS_Microsoft_Windows_Server_2022_Benchmark_v2.0.0.pdf')
    
    # Set to store unique colors
    unique_colors = set()
    
    # Loop through every page
    for i in range(len(doc)):
        page = doc[i]
        # Get the annotations (highlights are a type of annotation)
        annotations = page.annots()
        for annotation in annotations:
            if annotation.type[1] == 'Highlight':
                # Get the color of the highlight
                color = annotation.colors['stroke']  # Returns a RGB tuple
                unique_colors.add(color)
    
    # Print all unique colors
    for color in unique_colors:
        print(color)
    

    After executing the script, you will get the following output. Make sure you put the exact name of the PDF file in the script and, within the IDE of your choice, cd to the directory where the script (CheckColor.py) resides.

    Now that we have the color codes, it’s time to extract the highlighted text. We iterate through each page of the PDF and check for highlight annotations. When one is found, we extract the overlapping text, collect it per highlight color, and finally export everything to CSV.

    Main Code

    Replace "CIS_Microsoft_Windows_Server_2022_Benchmark_v2.0.0.pdf" with the actual path to your PDF file.

    import fitz  # PyMuPDF
    import pandas as pd
    
    # Open the PDF
    doc = fitz.open('CIS_Microsoft_Windows_Server_2022_Benchmark_v2.0.0.pdf')
    
    # Define the RGB values for your colors
    PINK = (0.9686269760131836, 0.6000000238418579, 0.8196079730987549)
    YELLOW = (1.0, 0.9411770105361938, 0.4000000059604645)
    GREEN = (0.49019598960876465, 0.9411770105361938, 0.4000000059604645)
    RED = (0.9215689897537231, 0.2862749993801117, 0.2862749993801117)
    
    color_definitions = {"Pink": PINK, "Yellow": YELLOW, "Green": GREEN, "Red": RED}
    
    # Create separate lists for each color
    data_by_color = {"Pink": [], "Yellow": [], "Green": [], "Red": []}
    
    # Loop through every page
    for i in range(len(doc)):
        page = doc[i]
        annotations = page.annots()
        for annotation in annotations:
            if annotation.type[1] == 'Highlight':
                color = annotation.colors['stroke']  # Returns a RGB tuple
                if color in color_definitions.values():
                    # Get the detailed structure of the page
                    structure = page.get_text("dict")
    
                    # Extract highlighted text line by line
                    content = []
                    for block in structure["blocks"]:
                        for line in block["lines"]:
                            for span in line["spans"]:
                                r = fitz.Rect(span["bbox"])
                                if r.intersects(annotation.rect):
                                    content.append(span["text"])
                    
                    content = " ".join(content)
    
                    # Append the content to the appropriate color list
                    for color_name, color_rgb in color_definitions.items():
                        if color == color_rgb:
                            data_by_color[color_name].append(content)
    
    # Convert each list to a DataFrame and write to a separate .csv file
    for color_name, data in data_by_color.items():
        if data:
            df = pd.DataFrame(data, columns=["Text"])
            df.to_csv(f'highlighted_text_{color_name.lower()}.csv', index=False)
    

    After running the script, the extracted highlighted text will be saved into multiple CSV files, as in the screenshot below:
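One caveat worth noting: the main script compares each highlight color to the reference tuples with exact equality, which is fragile if PyMuPDF ever returns marginally different floats (for example, after the PDF is re-saved in another tool). A tolerance-based match is a more forgiving alternative; the tolerance value here is an assumption you can tune:

```python
import math

# Reference tuples taken from the color-detection script above.
COLORS = {
    "Pink":   (0.9686269760131836, 0.6000000238418579, 0.8196079730987549),
    "Yellow": (1.0, 0.9411770105361938, 0.4000000059604645),
    "Green":  (0.49019598960876465, 0.9411770105361938, 0.4000000059604645),
    "Red":    (0.9215689897537231, 0.2862749993801117, 0.2862749993801117),
}

def closest_color(rgb, tolerance=0.02):
    """Return the color name whose reference tuple is within tolerance of rgb."""
    for name, ref in COLORS.items():
        if all(math.isclose(a, b, abs_tol=tolerance) for a, b in zip(rgb, ref)):
            return name
    return None  # highlight uses a color we don't track

print(closest_color((1.0, 0.94, 0.4)))  # Yellow
```

In the main script you would replace the `if color in color_definitions.values()` test with a call like this.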

    You can now extract the highlighted text from the PDF using the above technique. Feel free to modify and adapt this code to suit your specific requirements. Extracting highlighted data from PDFs can be a powerful data analysis and research technique.

    I hope you will find this helpful information for extracting data out from any PDF files. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    Custom Enterprise Data with ChatGPT + Azure OpenAI and Azure Cognitive Search

    24 May

    In the world of AI, OpenAI’s ChatGPT has made a remarkable impact, reaching over 100 million users in just two months. The technology’s potential is vast, and users worldwide are exploring its application across a broad range of scenarios. One question that often arises is, “How can I build something like ChatGPT that uses my own data as the basis for its responses?” Today, I will demonstrate and guide you through the process of creating a ChatGPT-like experience using your own data with Azure OpenAI and Cognitive Search.

    Note – In this blog post, I am not going to explain each of the Azure services. The best source for all those details is learn.microsoft.com.

    What You’ll Need

    To get started, you’ll need an Azure subscription with access enabled for the Azure OpenAI service. You can request access here. Additionally, your user account should have the Owner and Cognitive Services Contributor roles (note: if you don’t add these roles to your account, the azd up command will fail with an error while processing the preconfig.py files).

    • GitHub Codespaces – I am using this as it’s a preconfigured environment with all prerequisites to run the code from the VS Code IDE.
    • Install locally on your device – You’ll need Azure Developer CLI, Python 3+, Node.js, Git, and PowerShell 7+ (pwsh) installed on your local machine.

    Azure OpenAI and Cognitive Search offer an effective solution for creating a ChatGPT-like experience using your own data. Azure Cognitive Search allows you to index, understand, and retrieve the right pieces of your data across large knowledge bases, while Azure OpenAI’s ChatGPT offers impressive capabilities for interacting in natural language to answer questions or engage in conversation.

    Getting Started with the sample project

    To begin, click on GitHub Codespaces and log in with your GitHub account.

    The project gets cloned, and a new Codespace gets created with everything preconfigured for you. Note this step takes approx. 15 minutes to complete, as a 4-CPU, 8 GB RAM compute environment is being created.

    Next, create a folder (in my case, DellAz) and cd into the newly created folder within the terminal. Then log in to the Azure subscription.

    Initialize the azure-search-openai-demo project (in my case, I had already done that earlier).

    Uploading Your Data

    To upload your data, follow these steps:

    1. Upload Your Data: Within VS Code, go to your initialized project folder DellAz –> ./data, right-click, select Upload files, and upload a few files. In my scenario I am uploading a few Dell Azure Stack HCI PDF files. Azure Cognitive Search will break larger documents up into smaller chunks or summarize content to fit more candidates into a prompt.

You can do this by running the azd up command, which will provision Azure resources and deploy the sample application to those resources, including building the search index based on the files found in the folder. I initially received the error mentioned above, but after fixing the permissions it went smoothly as expected.
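For reference, the provision-and-deploy step above is a single command:

```shell
# Provision Azure resources, build the search index from ./data,
# and deploy the sample app in one step
azd up
```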

Resources within Azure Subscription

All the resources required will be deployed within the resource group (App Service, Form Recognizer, Azure OpenAI, Search Service, App Service Plan and Storage Account).

    Storage Account (PDFs getting chunked)

    App Service (front end portal)

Search Index using Azure Cognitive Search

    Interaction, Trustworthy Responses and User Experience

One of the key aspects of creating a successful ChatGPT-like experience is ensuring that the responses generated by the model are trustworthy. This can be achieved by providing citations and source content tracking, as well as offering transparency into the interaction process. As you can see, all the citations are from the documents I uploaded to the ./data folder, and the LLM makes them easily consumable for the user.

    Conclusion

    Creating a ChatGPT-like experience using your own data with Azure OpenAI and Cognitive Search is a powerful way to leverage AI in your enterprise. Whether you’re looking to answer employee questions, provide customer support, or engage users in conversation, this combination of technologies offers a flexible and effective solution. So why wait? Start revolutionizing your enterprise data today!

    References

    Following are the references and important links worth going through for more details:

    • Microsoft sample code which we are using – GitHub – Azure-Samples/azure-search-openai-demo
    • Microsoft blog on the same topic – Revolutionize your Enterprise Data with ChatGPT: Next-gen Apps w/ Azure OpenAI and Cognitive Search
    • [YouTube] ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search – https://www.youtube.com/watch?v=VmTiyR02FsE&t
    • [YouTube] Use ChatGPT On Your Own Large Data – Part 2 – https://www.youtube.com/watch?v=RcdqdWEYw2A&t

    I hope you will find this helpful information for creating a custom ChatGPT with your own enterprise data. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    PowerShell – Frontline Workers – Create Windows 365 Cloud PC Provisioning Policy

    23 May

I have previously blogged about creating a Windows 365 Cloud PC provisioning policy using PowerShell. In this blog post, I will demonstrate how to create the provisioning policy using PowerShell and the MS Graph API with beta modules for Windows 365 Cloud PC – Frontline Workers.

    Windows 365 Frontline Worker

    Introduction

    I will not attempt to explain Frontline, but the best explanation is here: What is Windows 365 Frontline? | Microsoft Learn.

Example – Each Windows 365 Frontline license can be shared by up to three employees. This means that if you have 30 employees, you only need to purchase 10 licenses to provision Cloud PCs for all 30 employees across the day. However, note that you are buying Frontline licenses based on concurrent active sessions: if you have more than 10 active workers in a shift, you must purchase licenses accordingly.
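As a quick illustration of the licensing math (three shared users per license, rounding up for partial licenses):

```powershell
# Illustration only – licenses needed for a given number of frontline workers
$employees       = 30
$usersPerLicense = 3   # each Frontline license can be shared by up to 3 users
$licensesNeeded  = [math]::Ceiling($employees / $usersPerLicense)
$licensesNeeded   # 10
```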

What happens when licenses are exhausted?

In my demo tenant, I have two licenses for Frontline workers. When I try to log in to a third session (note I already have two active sessions running), I get the following message.

    Connect to MS Graph API

    Step 1 – Install the MS Graph Powershell Module

    #Install Microsoft Graph Beta Module
    PS C:\WINDOWS\system32> Install-Module Microsoft.Graph.Beta

Step 2 – Connect to scopes and specify which API permissions you wish to authenticate with. If you are only doing read-only operations, I suggest you connect with the “CloudPC.Read.All” scope. In our case we are creating the policy, so we need the “CloudPC.ReadWrite.All” scope.

    #Read-only
    PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.Read.All" -NoWelcome
    
    OR
    
    #Read-Write
    PS C:\WINDOWS\system32> Connect-MgGraph -Scopes "CloudPC.ReadWrite.All" -NoWelcome

    Permissions for MS Graph API

Step 3 – Check the user account by running the following beta command.

    #Beta APIs
    PS C:\WINDOWS\system32> Get-MgBetaUser -UserId admin@wdomain.com

    Create Provisioning Policy (Frontline Worker)

We are creating a provisioning policy that involves the following (full code: avdwin365mem/win365frontlineCreateProvPolicy at main · askaresh/avdwin365mem · GitHub):

    • Azure AD Joined Cloud PC desktops
    • The region for deployment – Australia East
    • Image Name – Windows 11 Enterprise + Microsoft 365 Apps 22H2 (from the Gallery)
    • Language & Region – English (United States)
    • Network – Microsoft Managed
    • Cloud PC Naming format – FLW-%USERNAME:5%-%RAND:5% (FLW – Frontline Worker)
    $params = @{
    	displayName = "Demo-FrontLine"
    	description = "Front Line Workers Prov Policy"
    	provisioningType = "shared"
    	managedBy = "windows365"
    	imageId = "MicrosoftWindowsDesktop_windows-ent-cpc_win11-22h2-ent-cpc-m365"
    	imageDisplayName = "Windows 11 Enterprise + Microsoft 365 Apps 22H2"
    	imageType = "gallery"
    	microsoftManagedDesktop = @{
    		type = "starterManaged"
    		profile = $null
    	}
    	enableSingleSignOn = $true
    	domainJoinConfigurations = @(
    		@{
    			type = "azureADJoin"
    			regionGroup = "australia"
    			regionName = "automatic"
    		}
    	)
    	windowsSettings = @{
    		language = "en-US"
    	}
    	cloudPcNamingTemplate = "FLW-%USERNAME:5%-%RAND:5%"
    }
    
    New-MgBetaDeviceManagementVirtualEndpointProvisioningPolicy -BodyParameter $params

Note – Post provisioning, you need to add the assignment of an AAD group consisting of all the frontline users. In the future I can demonstrate the API call for assignments. You can also use Andrew Taylor’s post on using Graph to create the Windows 365 group – Creating Windows 365 Groups and assigning licenses using Graph and PowerShell
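As a rough sketch of what that assignment call could look like against the raw beta endpoint (the action name, target type, and both IDs below are assumptions based on the Graph beta Cloud PC API, not taken from this post):

```powershell
# Sketch only – assign an AAD group to the provisioning policy.
# Both GUIDs are hypothetical placeholders; replace with your own values.
$policyId = "00000000-0000-0000-0000-000000000000"   # provisioning policy ID
$groupId  = "11111111-1111-1111-1111-111111111111"   # AAD group of frontline users

$assignBody = @{
	assignments = @(
		@{
			target = @{
				"@odata.type" = "#microsoft.graph.cloudPcManagementGroupAssignmentTarget"
				groupId       = $groupId
			}
		}
	)
}

# POST to the beta "assign" action for the policy
Invoke-MgGraphRequest -Method POST `
	-Uri "https://graph.microsoft.com/beta/deviceManagement/virtualEndpoint/provisioningPolicies/$policyId/assign" `
	-Body ($assignBody | ConvertTo-Json -Depth 5)
```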

    Powershell Output

    Policy will show up in the MEM Portal

    Optional Properties

If you are doing on-premises network integration (Azure Network Connection), then the following additional property and value are required. In my lab, I am leveraging the Microsoft Managed Network, so this is not required.

    OnPremisesConnectionId = "4e47d0f6-6f77-44f0-8893-c0fe1701ffff"
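Based on the beta Graph schema, the connection ID sits inside the domain-join configuration entry; a hedged sketch of how it could slot into the $params hashtable above (the GUID is a placeholder and the hybrid-join type is an assumption, not taken from this post):

```powershell
# Sketch – domain-join entry using an Azure Network Connection (placeholder GUID)
domainJoinConfigurations = @(
	@{
		type                   = "hybridAzureADJoin"
		onPremisesConnectionId = "4e47d0f6-6f77-44f0-8893-c0fe1701ffff"
	}
)
```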

    I hope you will find this helpful information for creating a frontline worker provisioning policy using PowerShell. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari

    Azure Virtual Desktop – Terraform – Create a Host Pool, Desktop Application Group and Workspace for Pooled Remote App aka Published Applications (Part 3)

    15 May

In the previous blog posts, we looked at creating the Personal Desktop (1×1 mapping) and the Pooled Desktop (1 x Many) using Terraform: Azure Virtual Desktop – Terraform – Create a Host Pool, Desktop Application Group and Workspace for Personal Desktop (Part 1) | AskAresh and Azure Virtual Desktop – Terraform – Create a Host Pool, Desktop Application Group and Workspace for Pooled Desktop (Part 2). In this blog post, I am going to demonstrate how to create the AVD Host Pool, Application Group and Workspace using Terraform for Pooled Remote App aka Published Applications (1 x Many).

    We are going to create the following three types of configurations using Terraform:

    • Azure Virtual Desktop – Personal Desktop (1×1) – Part 1
    • Azure Virtual Desktop – Pooled Desktop (Multi-Session Full Desktop Experience) – Part 2
    • Azure Virtual Desktop – Remote App (Multi-Session Application aka Published Apps) – Part 3

Note – This post covers the Pooled RemoteApp configuration; the other two types were covered in the earlier posts. I will not showcase the creation of the service principal and secret here; please refer to Part 1 for that activity.

    Pre-requisites

    Following are the pre-requisites before you begin

    • An Azure subscription
    • The Terraform CLI
    • The Azure CLI
    • Permissions within the Azure Subscription for using Terraform

    Terraform – Authenticating via Service Principal & Client Secret

Before running any Terraform code, we will execute the following PowerShell (make sure to run as administrator) to store the credentials as environment variables. If we do this via environment variables, we don’t have to store the information below within the providers.tf file. There are better ways to store these details, which I hope to showcase in a future blog post:

    # PowerShell
    $env:ARM_CLIENT_ID = "9e453b62-0000-0000-0000-00000006e1ac"
    $env:ARM_CLIENT_SECRET = "Z318Q~00000000000000000000000000000000_"
    $env:ARM_TENANT_ID = "a02e602c-0000-000-0000-0e0000008bba61"
    $env:ARM_SUBSCRIPTION_ID = "7b051460-00000-00000-00000-000000ecb1"
    • Azure Subscription ID – from the Azure portal, under Subscriptions, copy the ID
    • Client ID – from the above step you will have the details
    • Client Secret – from the above step you will have the details
    • Tenant ID – from creating the Enterprise App in AAD you will have the details

    Terraform Folder Structure

The following is the folder structure for the Terraform code:

    Azure Virtual Desktop Pooled RemoteApp – Create a directory in which the below Terraform code will be published (providers.tf, main.tf, variables.tf and output.tf)

    +---Config-AVD-Pooled-RemoteApp
    |   |   main.tf
    |   |   output.tf
    |   |   providers.tf
    |   |   variables.tf

    Configure AVD – Pooled RemoteApp – Providers.tf

    Create a file named providers.tf and insert the following code:

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "3.49.0"
        }
        azuread = {
          source = "hashicorp/azuread"
        }
      }
    }
    
    provider "azurerm" {
      features {}
    }

    Configure AVD – Pooled RemoteApp – main.tf

Create a file named main.tf and insert the following code. Let me explain what we are attempting to accomplish here:

    • Create a Resource Group
    • Create a Workspace
    • Create a Host Pool
    • Create a Remote Application Group (RAG)
    • Associate Workspace and RAG
    • Assign Azure AD Group to the Desktop Application Group (RAG)
    • Assign Azure AD Group to the Resource Group for RBAC for the Session Host (Virtual Machine User Login)
    # Resource group name is output when execution plan is applied.
    resource "azurerm_resource_group" "rg" {
      name     = var.rg_name
      location = var.resource_group_location
      tags = var.tags
    }
    
    # Create AVD workspace
    resource "azurerm_virtual_desktop_workspace" "workspace" {
      name                = var.workspace
      resource_group_name = azurerm_resource_group.rg.name
      location            = azurerm_resource_group.rg.location
      friendly_name       = "${var.prefix} Workspace"
      description         = "${var.prefix} Workspace"
      tags = var.tags
    }
    
    # Create AVD host pool
    resource "azurerm_virtual_desktop_host_pool" "hostpool" {
      resource_group_name      = azurerm_resource_group.rg.name
      location                 = azurerm_resource_group.rg.location
      name                     = var.hostpool
      friendly_name            = var.hostpool
      validate_environment     = true #[true false]
      start_vm_on_connect      = true
      custom_rdp_properties    = "targetisaadjoined:i:1;drivestoredirect:s:*;audiomode:i:0;videoplaybackmode:i:1;redirectclipboard:i:1;redirectprinters:i:1;devicestoredirect:s:*;redirectcomports:i:1;redirectsmartcards:i:1;usbdevicestoredirect:s:*;enablecredsspsupport:i:1;redirectwebauthn:i:1;use multimon:i:1;enablerdsaadauth:i:1;"
      description              = "${var.prefix} HostPool"
      type                     = "Pooled" #[Pooled or Personal]
      preferred_app_group_type = "RailApplications" #[Desktop or RailApplications]
      maximum_sessions_allowed = 5  #[Tweak based on your vm tshirt size]
      load_balancer_type       = "DepthFirst" #[BreadthFirst or DepthFirst]
      tags = var.tags
      scheduled_agent_updates {
        enabled  = true
        timezone = "AUS Eastern Standard Time"  # Update this value with your desired timezone
        schedule {
          day_of_week = "Saturday"
          hour_of_day = 1   #[1 here means 1:00 am]
        }
      }
    }
    
    resource "azurerm_virtual_desktop_host_pool_registration_info" "registrationinfo" {
      hostpool_id     = azurerm_virtual_desktop_host_pool.hostpool.id
      expiration_date = var.rfc3339
    }
    
    # Create AVD RAG
    resource "azurerm_virtual_desktop_application_group" "rag" {
      resource_group_name = azurerm_resource_group.rg.name
      host_pool_id        = azurerm_virtual_desktop_host_pool.hostpool.id
      location            = azurerm_resource_group.rg.location
      type                = "RemoteApp"
      name                = var.app_group_name
      friendly_name       = "RemoteApp AppGroup"
      description         = "${var.prefix} AVD RemoteApp application group"
      depends_on          = [azurerm_virtual_desktop_host_pool.hostpool, azurerm_virtual_desktop_workspace.workspace]
      tags = var.tags
    }
    
    # Associate Workspace and DAG
    resource "azurerm_virtual_desktop_workspace_application_group_association" "ws-dag" {
      application_group_id = azurerm_virtual_desktop_application_group.rag.id
      workspace_id         = azurerm_virtual_desktop_workspace.workspace.id
    }
    
    # Assign AAD Group to the Remote Application Group (RAG)
    resource "azurerm_role_assignment" "AVDGroupRemoteAppAssignment" {
      scope                = azurerm_virtual_desktop_application_group.rag.id
      role_definition_name = "Desktop Virtualization User"
      principal_id         = data.azuread_group.AVDGroup.object_id
    }
    
    # Assign AAD Group to the Resource Group for RBAC for the Session Host
    resource "azurerm_role_assignment" "RBACAssignment" {
      scope                = azurerm_resource_group.rg.id
      role_definition_name = "Virtual Machine User Login"
      principal_id         = data.azuread_group.AVDGroup.object_id
    }

    Note – The individual applications are not published yet. They can be published once you have the session host created. After which, using Terraform, the individual applications can be published too. The exe path of apps needs to be mapped within the operating system. I plan to create a separate blog post on session host creation via Terraform.
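Once the session hosts exist, publishing an individual application could look like the sketch below, using the azurerm_virtual_desktop_application resource; the app choice and executable paths are illustrative, not part of this post's configuration:

```hcl
# Sketch – publish Notepad into the RemoteApp application group
# (illustrative paths; the exe must exist on the session hosts)
resource "azurerm_virtual_desktop_application" "notepad" {
  name                         = "notepad"
  application_group_id         = azurerm_virtual_desktop_application_group.rag.id
  friendly_name                = "Notepad"
  description                  = "Notepad published app"
  path                         = "C:\\Windows\\System32\\notepad.exe"
  command_line_argument_policy = "DoNotAllow"  # [DoNotAllow, Allow or Require]
  show_in_portal               = true
  icon_path                    = "C:\\Windows\\System32\\notepad.exe"
  icon_index                   = 0
}
```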

    Configure AVD – Pooled RemoteApp – variables.tf

    Create a file named variables.tf and insert the following code:

    variable "resource_group_location" {
      default     = "australiaeast"
      description = "Location of the resource group - Australia East"
    }
    
    variable "rg_name" {
      type        = string
      default     = "AE-DEV-AVD-01-PO-A-RG"
      description = "Name of the Resource group in which to deploy service objects"
    }
    
    variable "workspace" {
      type        = string
      description = "Name of the Azure Virtual Desktop workspace"
      default     = "AE-DEV-AVD-01-WS"
    }
    
    variable "hostpool" {
      type        = string
      description = "Name of the Azure Virtual Desktop host pool"
      default     = "AE-DEV-AVD-01-PO-A-HP"
    }
    
    variable "app_group_name" {
      description = "Name of the Azure Virtual Desktop application group"
      type        = string
      default     = "AE-DEV-AVD-01-RAG"
    }
    
    variable "rfc3339" {
      type        = string
      default     = "2023-05-20T12:43:13Z"  #Update this value with a future date
      description = "Registration token expiration"
    }
    
    variable "prefix" {
      type        = string
      default     = "AE-DEV-AVD-01-HP-"
      description = "Prefix of the name of the AVD HostPools"
    }
    
    variable "tags" {
      type    = map(string)
      default = {
        Environment = "Dev"
        Department  = "IT"
        Location = "AustraliaEast"
        ServiceClass = "DEV"
        Workload = "Host Pool 01"
      }
    }
    
    data "azuread_client_config" "AzureAD" {}
    
    data "azuread_group" "AVDGroup" {
      display_name     = "Win365-Users"  
    }

    Configure AVD – Pooled RemoteApp – output.tf

Create a file named output.tf and insert the following code. This will print, as console output, what is getting deployed.

    output "azure_virtual_desktop_compute_resource_group" {
      description = "Name of the Resource group in which to deploy session host"
      value       = azurerm_resource_group.rg.name
    }
    
    output "azure_virtual_desktop_host_pool" {
      description = "Name of the Azure Virtual Desktop host pool"
      value       = azurerm_virtual_desktop_host_pool.hostpool.name
    }
    
    output "azurerm_virtual_desktop_application_group" {
      description = "Name of the Azure Virtual Desktop DAG"
      value       = azurerm_virtual_desktop_application_group.rag.name
    }
    
    output "azurerm_virtual_desktop_workspace" {
      description = "Name of the Azure Virtual Desktop workspace"
      value       = azurerm_virtual_desktop_workspace.workspace.name
    }
    
    output "location" {
      description = "The Azure region"
      value       = azurerm_resource_group.rg.location
    }
    
    data "azuread_group" "aad_group" {
      display_name = "Win365-Users"
    }
    
    output "AVD_user_groupname" {
      description = "Azure Active Directory Group for AVD users"
      value       = data.azuread_group.aad_group.display_name
    }

Initialize Terraform – AVD – Pooled RemoteApp

Run terraform init to initialize the Terraform deployment. This command downloads the Azure providers required to manage your Azure resources (it pulls AzureRM and AzureAD).

    terraform init -upgrade

    Create Terraform Execution Plan – AVD – Pooled RemoteApp

    Run terraform plan to create an execution plan.

    terraform plan -out mainavdremoteapp.tfplan

    Apply Terraform Execution Plan – AVD – Pooled RemoteApp

    Run terraform apply to apply the execution plan to your cloud infrastructure.

    terraform apply mainavdremoteapp.tfplan

    Validate the Output in Azure Portal

Go to the Azure portal, select Azure Virtual Desktop, and check the Host pool, Application Group and Workspace created using Terraform.

    Clean-up the above resources (Optional)

If you want to delete all the above resources, you can use the following commands to destroy them. Run terraform plan and specify the destroy flag.

    terraform plan -destroy -out mainavdremoteapp.destroy.tfplan

    Run terraform apply to apply the execution plan.

    terraform apply mainavdremoteapp.destroy.tfplan

The intention here is to get you quickly started with Terraform on the Azure Virtual Desktop solution. The following references are worth going through for more details:

    • Setting up your computer to get started with Terraform using PowerShell – Install Terraform on Windows with Azure PowerShell
    • Configure Azure Virtual Desktop with Terraform – https://learn.microsoft.com/en-us/azure/developer/terraform/configure-azure-virtual-desktop
    • Terraform learning – https://youtube.com/playlist?list=PLLc2nQDXYMHowSZ4Lkq2jnZ0gsJL3ArAw

    I hope you will find this helpful information for getting started with Terraform to deploy the Azure Virtual Desktop – Pooled Remote App. Please let me know if I have missed any steps or details, and I will be happy to update the post.

    Thanks,
    Aresh Sarkari