Step-by-Step MSSQL Always On Install & Availability Group Config (Failover Cluster + Listener)

Hello everyone, in this video I will show you how to install MSSQL Server and configure an Always On Availability Group across two servers.

·         In this video, we will install MSSQL Server on two servers: MSSQL1 and MSSQL2. I have already joined them to the domain and placed them in a dedicated Organizational Unit.

·         This is MSSQL1.

·         This is MSSQL2.

·         Before installing SQL on these servers, I need to activate the Failover Clustering feature on them.

·         First, activate the Failover Clustering feature on the MSSQL1 server.

·         Click on Add Roles and Features.

·         Click Next.

·         Next.

·         Next.

·         Select the Failover Clustering feature.

·         Click Add Features.

·         Next.

·         Install.

·         Activate Failover Clustering on MSSQL2 in the same way.

·         Okay, wait a few minutes for the Failover Clustering feature to finish installing on both servers.

·         For Failover Clustering, a dedicated network interface on each server handles the cluster traffic. Using a dedicated interface for this purpose is recommended.

·         Rename this interface to ‘Cluster’.

·         Rename the interface on the MSSQL2 server to ‘Cluster’ as well.

·         Ethernet0 is the management interface we use to handle server traffic.

·         If you plan to use MSSQL Always On, the SQL service must run on both servers under a domain user account rather than the default local service accounts. Therefore, I created a user in Active Directory.

·         Uncheck ‘User must change password at next logon’ and check ‘Password never expires’.

·         Alright, Failover Clustering has been enabled on MSSQL1.

·         Close.

·         Open Run and type control userpasswords2.

·         Add the user created for the MSSQL service to the local administrators group on both servers.

·         Finish.

·         Add that user to the local administrators group on MSSQL2 in the same way.

·         Failover Clustering has also been enabled on MSSQL2.

·         Now, it’s time to configure the Failover Cluster on these servers.

·         Open the Failover Cluster Manager.

·         Click on ‘Validate Configuration’ to check and verify the servers’ prerequisites and requirements before creating a cluster between these servers.

·         Next.

·         Here, we select the servers that will join and operate as part of the cluster.

·         Check Names.

·         Select.

·         Select both of the servers.

·         Ok.

·         Next.

·         Run All Tests.

·         Next.

·         The checking process is currently ongoing.

·         There is no need to run the validation separately on the other server; a single run validates all selected nodes.

·         We received some errors and warnings, but because this is a test environment, they are not critical. These issues are typically related to network interfaces and server connections. In a production environment, you should ideally not encounter any warnings or errors.

·         After validating the configuration, it’s time to create the cluster.

·         Click on ‘Create Cluster’.

·         Next.

·         Select the servers to be part of the cluster.

·         Next.

·         Next, we need to assign a name and IP address to the cluster. This is for the Windows Cluster service and is not related to MSSQL.

·         Next.

·         Next.

·         When you look in Active Directory Users and Computers, you’ll find the cluster name object (CNO) created in the same Organizational Unit as your servers.

·         Okay, our cluster has been created.

·         Check the status of the cluster nodes.

·         Both servers are online.

·         As you can see, there is no witness configured.

·         Right-click the cluster, then go to More Actions and select ‘Configure Cluster Quorum Settings’.

·         Choose ‘Select the quorum witness’ and click Next.

·         Choose ‘Configure a File Share Witness’ and then click Next.

·         Here, choose a shared folder. I prefer to use a shared folder located on the Active Directory server.

·         Click ‘Browse’ to choose the Active Directory server.

·         Click ‘Browse’ again.

·         Since there is no shared folder, click on ‘New Shared Folder’.

·         The local path of the shared folder points to a folder on the Active Directory server.

·         As you can see, I am using a root folder on the C drive.

·         Specify the name of the shared folder to be created on the Active Directory server’s C drive, and assign read and write permissions to all users.

·         Ok.

·         Ok.

·         Next.

·         Finish.

·         Okay, the witness has been created.

·         The Windows Failover Cluster configuration is now complete and functioning correctly. Both servers monitor each other, so if one fails, the other takes over seamlessly. Services continue to run without interruption.

·         In the next step, we are going to install the MSSQL service on the servers.

·         Run the setup as an administrator to start installing SQL on MSSQL1.

·         Go to Installation and choose ‘New SQL Server stand-alone installation’.

·         Begin installing SQL on MSSQL2 at the same time as on MSSQL1. While this isn’t required, I run the setup on both servers simultaneously to save time.

·         The SQL Server installation steps are the same for both servers.

·         Enter your product key and click Next.

·         Accept.

·         Next.

·         Select the features you want to install based on your needs, but make sure to select Database Engine Services to install the SQL service on the server.

·         Next.

·         Repeat the same steps on MSSQL2.

·         Keep the default instance or modify it according to your requirements.

·         At this point, change the service account to the domain account we created earlier, which is already added as a local administrator on MSSQL1 and MSSQL2.

·         Enter the account password and set the startup type to Automatic.

·         Apply the same changes to the Database Engine service.

·         Leave the settings for SQL Server Browser as they are.

·         Select Mixed Mode in the authentication section and enter a strong password. This is the password for the SQL Server ‘sa’ account.

·         Also, click on ‘Add Current User’ to add the logged-in user as a SQL administrator. These settings can be adjusted based on your requirements.

·         Next.

·         Install.

·         Continue the installation on the MSSQL2 server.

·         SQL installation on both servers is complete. Next, we’ll configure SQL Server to work as Always On.

·         Open SQL Server Configuration Manager.

·         In SQL Server Services, right-click on SQL Server and choose Properties.

·         Select ‘Always On Availability Groups’ and check ‘Enable Always On Availability Groups’.

·         Ok.

·         You need to restart the SQL Server service for the change to take effect.

·         Repeat the same steps on the other server.

·         Ok.

·         Restart.
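·         To confirm the setting took effect after the restart, you can query the server property from a command prompt. This is a quick check, assuming the default instance and Windows authentication:

sqlcmd -S MSSQL1 -E -Q "SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled"

·         A result of 1 means Always On Availability Groups is enabled on the instance.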

·         Alright, now open SQL Server Management Studio.

·         Connect to MSSQL1.

·         Now, we need to assign permissions on the Organizational Unit containing the MSSQL servers. Grant Full Control to the user running the SQL services, and also to the server and cluster computer objects, so that the cluster can create and manage computer objects (such as the listener) in that Organizational Unit.

·         As you can see, the Security tab is not visible. To fix this, enable Advanced Features from the View menu.

·         Return to Management Studio.

·         In Always On High Availability, right-click on Availability Groups and choose ‘New Availability Group Wizard’.

·         Next.

·         Enter a name for the Availability Group.

·         Next.

·         Since we don’t have any database, I can’t proceed with the wizard. Let’s create a test database before we start.

·         Right-click on Databases and select ‘New Database’.

·         Enter a name for the database.

·         Go to the Options section and verify that the Recovery Model is set to Full.

·         To add any database to an Availability Group, the database recovery model must be set to Full.

·         Ok.

·         Additionally, before adding a database to the Availability Group, a full backup of the database must be taken.

·         Right-click the database, go to Tasks, and click Backup.

·         Set the backup type to Full.

·         Ok.

·         Ok.
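·         For reference, both prerequisites can also be scripted. This is a minimal sketch assuming the database is named Test and the folder C:\Backup already exists on the server:

sqlcmd -S MSSQL1 -E -Q "ALTER DATABASE [Test] SET RECOVERY FULL"
sqlcmd -S MSSQL1 -E -Q "BACKUP DATABASE [Test] TO DISK = 'C:\Backup\Test.bak' WITH INIT"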

·         Open the New Availability Group Wizard again.

·         Next.

·         Enter the name of the Availability Group.

·         Next.

·         Select the Test database.

·         Next.

·         Here is MSSQL1. Click ‘Add Replica’ to add MSSQL2 as a replica and configure the availability group to synchronize data between both servers.

·         These are the availability modes, and here you can see the differences between them.

·         The mode you choose depends on your requirements.

·         I prefer to use synchronous commit.

·         And here you can see the endpoints.

·         Backup preferences are also shown here.

·         At this step, the Listener section is the most important.

·         Choose ‘Create an Availability Group Listener’.

·         Type the DNS name for the listener.

·         Specify the port to be used for SQL connections from clients.

·         Select the interface subnet that will listen for SQL connections.

·         Specify the virtual IP address that will be used for SQL Always On.

·         Since we granted permissions to the SQL service account and server computers, the listener DNS name and IP address will be created automatically.

·         Next.

·         Choose Automatic Seeding, then click Next.

·         Next.

·         Finish.

·         Alright, all tasks have been completed successfully.

·         As you can see, both servers are listed under Available Replicas.

·         Under Availability Databases, you can find the databases added to Always On. They will function correctly during server failures, with data maintained on all replicas.

·         You can add more databases to this list later.

·         Databases added to Always On appear with a ‘Synchronized’ status.

·         Let’s verify the listener and try connecting to the database through the listener name.

·         Enter the listener name.

·         Alright, as you can see, we can connect to the database via the listener name. Even if one server fails, our connection remains intact.
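·         The same check can be done from a command prompt; the listener name and port below are placeholders for the ones created in the wizard. @@SERVERNAME shows which replica is currently serving the connection:

sqlcmd -S AGListener,1433 -E -Q "SELECT @@SERVERNAME AS CurrentPrimary"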

·         To check the replication status, right-click the Availability Group name and select ‘Show Dashboard’.

·         MSSQL1 is currently the primary server. I will perform a failover to switch the primary role to MSSQL2.

·         Next.

·         Choose the new primary server.

·         Next.

·         Connect to the new primary MSSQL server instance.

·         Next.

·         Finish.

·         Alright, as you can see, the failover happened, and after a few seconds, the sync status will show green. During this time, the database remains fully functional.

·         Perform the failover again to switch the primary server back to MSSQL1.
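·         A manual failover can also be scripted with T-SQL. Run it on the replica that should become the new primary; the availability group name is a placeholder for the one created in the wizard:

sqlcmd -S MSSQL1 -E -Q "ALTER AVAILABILITY GROUP [TestAG] FAILOVER"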

·         Next, create a new database on MSSQL1 and include it in Always On.

·         The recovery model is set to Full.

·         Take a full backup.

·         Right-click on Availability Databases.

·         Select ‘Add Database’.

·         Choose the database you want to add to Always On.

·         Next.

·         Connect to the MSSQL2 server instance.

·         Next.

·         Next.

·         Finish.

·         You can see that both databases are marked as synchronous.

How to Fix the yum Install Error on CentOS | Cannot find a valid baseurl for repo: base/7/x86_64

1. Update YUM

Updating YUM itself might solve some compatibility issues:

yum update

yum install wget 

2. Check Network Connectivity

Ensure the server has internet access, as YUM needs to download packages from repositories online. Test with:

ping google.com

If there’s no response, troubleshoot the network connection first.

3. Verify Repository Configuration

Check that your repository configuration files are correctly set up in /etc/yum.repos.d/. Sometimes, repositories may be disabled or misconfigured. Ensure all necessary repositories are enabled.
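You can list the configured repositories and their status with:

yum repolist all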

4. Install CentOS Base and Updates Repositories (Default Repos)

CentOS comes with default repositories configured in /etc/yum.repos.d/CentOS-Base.repo. This file contains sections for:

  • base: The main OS packages.
  • updates: Updates to the packages.
  • extras: Additional packages that complement the base OS.
  • centosplus: Extended packages not included in the base.

Make sure this file exists and has the necessary sections. You can edit it using a text editor like nano or vi:

vi /etc/yum.repos.d/CentOS-Base.repo

[base]
name=CentOS-$releasever - Base
baseurl=https://vault.centos.org/7.9.2009/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[updates]
name=CentOS-$releasever - Updates
baseurl=https://vault.centos.org/7.9.2009/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[extras]
name=CentOS-$releasever - Extras
baseurl=https://vault.centos.org/7.9.2009/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://vault.centos.org/7.9.2009/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=0

Press Esc to switch from insert mode back to command mode, then type:

:wq!

to save the file and close the editor.
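Before re-running the update, it also helps to clear the stale repository metadata so yum rebuilds its cache from the corrected baseurl:

yum clean all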

5. Run Yum Update Again

yum update

Okay, the error is fixed.

Migrate VMware to Proxmox in 3 EASY STEPS | Step By Step Migrate VMs from VMware to Proxmox


Migrating a VMware virtual machine (VM) to Proxmox involves a series of steps to convert and transfer the VM to the new environment. Here’s a detailed description of the process:

1. Prepare the VMware VM for Migration:
   – Shut down the VM: Ensure the VMware VM is properly shut down to avoid any corruption during the migration.
   – Check Disk Format: Verify the format of the VMware VM’s disk files. VMware typically uses VMDK (Virtual Machine Disk) format, which will need to be converted for use in Proxmox.

2. Export the VM from VMware:
   – Export OVF/OVA: In VMware vSphere or Workstation, you can export the VM as an OVF (Open Virtualization Format) or OVA (Open Virtualization Appliance) package. This exports both the VM’s disk and configuration files.
   – Download the VMDK: Alternatively, if exporting to OVF/OVA isn’t an option, you can directly copy the VMDK file.

3. Convert the Disk Format:
   – Convert the Disk with qemu-img: Proxmox uses the QCOW2 or raw disk format, so the VMDK disk from VMware needs to be converted (qemu-img ships with Proxmox).
     – Run the following command on Proxmox to convert the disk:
       qemu-img convert -f vmdk -O qcow2 /path/to/source.vmdk /path/to/destination.qcow2
     – Alternatively, you can convert the disk to raw format:
       qemu-img convert -f vmdk -O raw /path/to/source.vmdk /path/to/destination.raw

4. Create a New VM on Proxmox:
   – Create a New VM: In the Proxmox Web UI, create a new VM with the same configuration as the original VMware VM (e.g., CPU, RAM, and network settings).
   – Attach Converted Disk: In the new VM settings, attach the converted disk file (QCOW2 or raw) by navigating to the “Hardware” tab and selecting the correct storage type (see the command sketch after this list).

5. Configure Network and Drivers:
   – Adjust Network Settings: Ensure the network settings in Proxmox match those from VMware, particularly IP addressing and VLAN configuration.
   – Install the QEMU Guest Agent: If necessary, install the QEMU guest agent (Proxmox’s equivalent of VMware Tools) to optimize performance and compatibility with Proxmox drivers.

6. Start the VM on Proxmox:
   – Boot the VM: Start the VM and verify that it functions as expected. Check if the OS boots properly and if all services are running correctly.
   – Install/Update Drivers: If the VM was using VMware-specific drivers (like VMware Tools), you might need to install the appropriate drivers for Proxmox/KVM to ensure optimal performance.

7. Post-Migration Checks:
   – Check Disk and Network Performance: Ensure that disk I/O and network performance are stable. Proxmox uses KVM/QEMU for virtualization, so some configurations might need tuning.
   – Remove VMware Tools: If applicable, uninstall VMware Tools from the guest OS to avoid conflicts.
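As mentioned in steps 3 and 4, the conversion and attachment can also be done in one pass on the Proxmox host with qm importdisk, which converts the disk while importing it into the VM’s storage. A minimal sketch, assuming VM ID 100 and a storage named local-lvm:

# Import the VMware disk into VM 100; it is converted to the storage's native format
qm importdisk 100 /path/to/source.vmdk local-lvm

# Attach the imported disk to the VM (use the volume name printed by the import)
qm set 100 --scsi0 local-lvm:vm-100-disk-0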

Optional: Storage and Backup Integration:
   – Backup Configuration: If you’re using Proxmox’s built-in backup solution (or integrating with Veeam Backup), configure backups for the migrated VM.
   – Proxmox Cluster: If the Proxmox environment is clustered, ensure the VM is properly integrated into the Proxmox Cluster for High Availability (HA).

Backup and Restore Proxmox with Veeam Backup and Replication 12.2

1. Prerequisites:

  • A Proxmox VE cluster or standalone Proxmox server running.
  • Veeam Backup & Replication 12.2 installed on a Windows server.
  • Ensure Proxmox has the Veeam Agent for Linux installed if you’re doing an agent-based backup.

2. Backup Process:

A. Adding the Proxmox Host to Veeam:

  1. Open the Veeam Backup & Replication console.
  2. Go to “Inventory” > “Managed Servers.”
  3. Right-click and select “Add Server.”
  4. Choose “Linux,” as Proxmox is based on Debian.
  5. Enter the IP address or hostname of the Proxmox server and provide the SSH credentials (root or a user with appropriate permissions).
  6. Verify the connection, and Veeam will add Proxmox to its managed inventory.

B. Creating Backup Jobs:

  1. Go to the “Home” tab and select “Backup Job.”
  2. Choose “Virtual Machine” as the backup type.
  3. Select the Proxmox VMs you want to back up.
  4. Choose your backup repository (storage location for backups).
  5. Configure the backup schedule, retention policies, and any advanced options like encryption or compression.
  6. Save and start the backup job.

C. Monitoring Backups:

  • You can monitor backup jobs in the “History” or “Home” tab to ensure backups run successfully.

3. Restore Process:

A. Full VM Restore:

  1. In the Veeam console, go to the “Home” tab and select “Restore.”
  2. Choose “Entire VM” and select the Proxmox VM from your backup repository.
  3. Choose the restore point you want to use and follow the wizard to select the destination Proxmox server.
  4. Confirm and start the restore process.

B. File-Level Restore:

  1. Go to “Home” > “Restore” and select “Guest files (Linux).”
  2. Choose the backup point and select the VM you want to restore files from.
  3. Browse the file system and restore specific files or directories.

C. Instant Recovery:

  1. Select “Instant Recovery” to start the VM directly from the backup storage.
  2. This allows for minimal downtime while restoring the actual VM in the background.

4. Key Features of Veeam with Proxmox:

  • Incremental Backups: Efficient use of storage by backing up only changes after the initial backup.
  • Compression and Deduplication: Reduces backup size and storage requirements.
  • Instant VM Recovery: Allows quick recovery of critical VMs with minimal downtime.
  • Application-Aware Backups: Ensures consistency for applications like databases.

How to Build a Personal Cloud Server for Private File Storage and Video Calls

Setting up your own free cloud server with features like voice and video calls, file sharing, and screen sharing is possible using Nextcloud. Nextcloud is an open-source platform that offers cloud storage and collaboration tools, making it an ideal choice for both office and home environments. Here’s an overview of how you can set it up:

1. What is Nextcloud?

Nextcloud is a self-hosted cloud platform that allows you to store files, share documents, and collaborate with others. It includes apps for productivity, communication, and team collaboration. Some of the key features include:

– File storage and sharing
– Collaboration tools (calendars, tasks, document editing)
– Communication tools (video and voice calls, chat)
– Screen sharing for meetings and remote support
– End-to-end encryption and strong security controls

2. Core Features for Office or Home Use

– File Sharing: Store your files securely and share them with your team or family members. You can set permissions and use password-protected links for sensitive documents.
– Voice and Video Calling: With the Nextcloud Talk app, you can host voice and video calls directly from your Nextcloud instance, eliminating the need for third-party services.
– Screen Sharing: Perfect for online meetings or remote support, you can share your screen with others during video calls using Nextcloud Talk.
– Collaborative Editing: You can edit documents collaboratively using integrated apps like OnlyOffice or Collabora Online.

3. How to Set It Up

Step 1: Choose Your Hosting Environment

– Self-hosted: You can set up Nextcloud on your own hardware, such as a server at home or in the office. This gives you full control but requires some technical know-how.
– Cloud VPS: If you prefer a managed solution, you can rent a VPS from providers like DigitalOcean, Linode, or Hetzner. Install Nextcloud on the VPS to make it accessible from anywhere.

Step 2: Install Nextcloud

– Linux Installation: Install Nextcloud on a Linux server (Ubuntu, Debian, CentOS, etc.). Follow the official installation guide, which includes setting up a web server (Apache or Nginx), database (MySQL or MariaDB), and securing it with HTTPS.
– Docker Installation: If you prefer containerized environments, you can use Docker to install and manage your Nextcloud instance.
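For a quick trial, a single Docker command is enough to bring up the web installer. This minimal sketch uses the official nextcloud image with a named volume for data; a production setup would add a reverse proxy with HTTPS and an external database:

docker run -d --name nextcloud -p 8080:80 -v nextcloud_data:/var/www/html nextcloud:latest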

Step 3: Configure Nextcloud

– Install Apps: After the basic installation, you can enhance Nextcloud by installing additional apps. For voice and video calls, install the Nextcloud Talk app. For document editing, install OnlyOffice or Collabora Online.
– Security Settings: Configure your security settings, including enabling SSL/TLS for encrypted connections, setting up a firewall, and using strong passwords.

Step 4: Set Up Communication Tools

– Nextcloud Talk: This app allows you to set up voice and video calls as well as screen sharing. You can create chat rooms, invite participants, and start video conferences directly within the Nextcloud interface. For additional functionality like STUN/TURN servers to improve connection reliability, you may need to configure a dedicated server.

Step 5: Customize for Office or Home

– For Office Use: Set up group folders for department-specific file sharing, integrate calendars for scheduling, and use Nextcloud Talk for remote meetings and collaboration.
– For Home Use: Use Nextcloud to store family photos, share important documents, and stay connected with voice and video calls.

4. Why Choose Nextcloud?

– Free and Open Source: Nextcloud is free to use, with no licensing fees, and you can customize it according to your needs.
– Data Privacy: By hosting your own cloud, you retain full control over your data and privacy, unlike with third-party services.
– Extensibility: Nextcloud has a large app ecosystem that lets you add features like email integration, project management, password management, and more.

5. Conclusion

Nextcloud provides a powerful platform to create your own cloud service for both personal and business use. Whether you’re looking for a secure file sharing solution, a collaboration tool for your team, or a way to keep your family connected, Nextcloud can meet your needs.
By leveraging the built-in apps like Nextcloud Talk, OnlyOffice, and more, you can create a comprehensive communication and file-sharing platform that rivals commercial services, all while maintaining complete control over your data.

Step-by-Step Proxmox and Ceph High Availability Setup Guide | Free High Availability Storage

Step 1: Prepare Proxmox Nodes

  1. Update and Upgrade Proxmox VE on all nodes:

apt update && apt full-upgrade -y

  2. Ensure that all nodes have the same version of Proxmox VE:

pveversion

Step 2: Set Up the Proxmox Cluster

  1. Create a new cluster on the first node:
    • pvecm create my-cluster
  2. Add the other nodes to the cluster:
    • pvecm add <IP_of_first_node>
  3. Verify the cluster status:
    • pvecm status

Step 3: Install Ceph on Proxmox Nodes

  1. Install the Ceph packages on all nodes using the Proxmox wrapper:

pveceph install

Step 4: Create the Ceph Cluster

  1. Initialize the Ceph cluster on the first node:
    • pveceph init --network <cluster_network>
  2. Create the manager daemon on the first node:
    • pveceph createmgr

Step 5: Add OSDs (Object Storage Daemons)

  1. Prepare disks on each node for Ceph OSDs:
    • pveceph createosd /dev/sdX
  2. Repeat the process for each node and disk.

Step 6: Create Ceph Pools

  1. Create a Ceph pool for VM storage:
    • pveceph pool create mypool --pg_num 128

Step 7: Configure Proxmox to Use Ceph Storage

  1. Add the Ceph storage to Proxmox:
    • Navigate to Datacenter > Storage > Add > RBD.
    • Enter the required details like ID, Pool, and Monitor hosts.
    • Save the configuration.

Step 8: Enable HA (High Availability)

  1. Configure HA on Proxmox:
    • Navigate to Datacenter > HA.
    • Add resources (VMs or containers) to the HA manager.
    • Configure the HA policy and set desired node priorities.
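The same can be done from the shell with ha-manager; a short sketch, assuming a VM with ID 100:

ha-manager add vm:100 --state started
ha-manager status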

Step 9: Testing High Availability

  1. Simulate node failure: Power off one of the nodes and observe how the VMs or containers are automatically migrated to other nodes.

Step 10: Monitoring and Maintenance

  1. Use the Proxmox and Ceph dashboards to monitor the health of your cluster.
  2. Regularly update all nodes to ensure stability and security.

Optional: Additional Ceph Configuration

  1. Add Ceph Monitors for redundancy:
    • pveceph createmon
  2. Add more Ceph MDS (Metadata Servers) if using CephFS:
    • pveceph createmds
  3. Tune Ceph settings for performance and reliability based on your specific needs.

By following these steps, you will have a robust Proxmox VE and Ceph high availability setup, ensuring that your VMs and containers remain highly available even in the event of hardware failures.

How to Run Any Specific Command or Script on Linux Startup

1. Using cron:

The cron method is convenient for running commands or scripts at startup. The @reboot directive in the crontab allows you to specify tasks to be run when the system starts.

Open the crontab file

crontab -e

Add the following line:

@reboot /path/to/your/script.sh

Save and exit the editor. This ensures that your script will run each time the system reboots.
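Note that the script itself must be executable, or the @reboot job will fail silently:

chmod +x /path/to/your/script.sh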

2. Using rc.local:

The /etc/rc.local file is traditionally used to run commands at the end of the system boot process.

Open the rc.local file

sudo nano /etc/rc.local

Add your command or script just before the exit 0 line:

/path/to/your/script.sh

Save and exit. Make sure the file is executable:

sudo chmod +x /etc/rc.local

This method may not be available on all distributions, as some are moving away from using rc.local in favor of systemd.

3. Using systemd:

Systemd is a modern init system used by many Linux distributions. You can create a systemd service to execute your script at startup.

Create a new service file, for example, /etc/systemd/system/myscript.service:

[Unit]
Description=My Startup Script

[Service]
ExecStart=/path/to/your/script.sh

[Install]
WantedBy=default.target

Reload systemd and enable/start the service:

sudo systemctl daemon-reload
sudo systemctl enable myscript.service
sudo systemctl start myscript.service
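After the next boot (or the manual start above), you can verify that the script ran and inspect its output:

systemctl status myscript.service
journalctl -u myscript.service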

This method provides more control and flexibility and is widely used in modern Linux distributions.

4. Using ~/.bashrc or ~/.bash_profile (for user-specific commands):

If you want a command or script to run when a specific user logs in, you can add it to the ~/.bashrc or ~/.bash_profile file.

Open the .bashrc file

nano ~/.bashrc

Add your command or script at the end of the file:

/path/to/your/script.sh

Save and exit the editor. This method is user-specific and will run the script when the user logs in.

Remember to replace /path/to/your/script.sh with the actual path to your script or command in each case. The appropriate method may vary depending on your distribution and system configuration.

HPE DL380 Gen10 Unboxing | Prepare Server to Install in DATACENTER

Unboxing the HPE DL380 Gen10:

  1. Inspect the Package:
    • Carefully inspect the external packaging for any signs of damage.
    • Ensure that the package includes all the components listed in the packing list.
  2. Open the Box:
    • Use a box cutter or scissors to carefully open the packaging.
  3. Remove Accessories:
    • Take out all the accessories such as power cables, documentation, and any additional components that come with the server.
  4. Inspect the Server:
    • Carefully take the server out of the packaging and inspect it for any physical damage.
    • Ensure that all components, including hard drives, are properly seated.
  5. Documentation:
    • Review the provided documentation, including the quick start guide and any safety information.

1. iLO Configuration:

a. Physical Connection:

  1. Connect to the iLO port on the rear of the server using a network cable.
  2. Ensure the iLO port has an IP address on the same network as your management system.

b. Access iLO Web Interface:

  1. Open a web browser and enter the iLO IP address.
  2. Log in with the default or provided credentials.

c. iLO Configuration:

  1. Change the default password for security.
  2. Configure network settings as needed.
  3. Enable iLO Advanced features if necessary.

2. Accessing Smart Array Configuration Utility:

  1. Power on the Server:
    • Ensure all necessary components, including hard drives, are properly installed.
  2. Access RAID Configuration:
    • During the server boot process, press the designated key (e.g., F8) to access the Smart Array Configuration Utility.

3. Creating a RAID 6 Array:

  1. Select/Create Array:
    • In the Smart Array Configuration Utility, choose an option like “Create Array” or “Manage Arrays.”
  2. Select Drives:
    • Choose the physical drives you want to include in the RAID 6 array. There should be at least four drives for RAID 6.
  3. Configure RAID Level:
    • Select RAID 6 from the available RAID levels.
  4. Set Array Size:
    • Define the size of the RAID array. Keep in mind that RAID 6 requires at least four drives, and usable capacity will be less than the total drive capacity due to the dual parity.
  5. Confirm and Save:
    • Review the configuration and confirm to save the RAID 6 array settings.

4. Installing an Operating System:

  1. Boot from Installation Media:
    • Insert the installation media for your operating system (e.g., Windows Server, Linux) and boot from it.
  2. Select Installation Drive:
    • During the OS installation process, you will be prompted to select the logical drive created by the RAID 6 configuration.
  3. Complete OS Installation:
    • Follow the on-screen instructions to complete the operating system installation.

5. Additional RAID 6 Management:

  1. RAID Monitoring:
    • After the OS is installed, monitor the RAID status through the HPE Smart Storage Administrator or other management tools provided by HPE.
  2. Expand or Modify RAID:
    • If needed, you can later expand the RAID 6 array or modify its configuration through the Smart Storage Administrator.

6. ESXi Installation:

a. Obtain ESXi Installer:

  1. Download the ESXi ISO image from the VMware website.

b. Prepare Boot Media:

  1. Create a bootable USB drive with the ESXi installer using tools like Rufus or UNetbootin.

c. Install ESXi:

  1. Insert the bootable USB drive into the server.
  2. Power on the server and boot from the USB drive.

d. ESXi Installation Wizard:

  1. Follow the on-screen prompts to install ESXi.
  2. Select the installation disk (usually the local storage on your server).

e. Configure ESXi:

  1. Set a password for the ESXi host.
  2. Configure management network settings (IP address, subnet mask, gateway, DNS).

f. Complete Installation:

  1. Allow the ESXi installer to complete the installation process.
  2. Reboot the server.

7. Post-Installation ESXi Configuration:

a. Access ESXi Web Interface:

  1. Open a web browser and enter the ESXi host IP address.
  2. Log in with the credentials you set during installation.

b. Configure Networking:

  1. Verify and configure networking settings as needed.

c. License ESXi:

  1. Apply a license to your ESXi host if required.

d. Create Datastores:

  1. Configure storage settings by creating datastores on your server’s storage.

e. Virtual Machine Management:

  1. Create and manage virtual machines through the ESXi web interface or vSphere Client.

f. Monitor and Manage:

  1. Monitor the ESXi host health, performance, and other settings through the web interface.

8. Additional iLO Integration:

  1. Back in the iLO interface, you can integrate iLO with the ESXi host for enhanced management features.
  2. Configure iLO settings to enable remote console access and other management features.

Attach QNAP iSCSI Disk to Windows | Connect to Storage Without HBA Interface

Attaching a QNAP iSCSI disk to a Windows system involves several steps. Below is a general guide, but please note that specific steps may vary depending on the QNAP NAS model and the version of QTS firmware. Always refer to the documentation provided by QNAP for your specific model.

1. Configure iSCSI on QNAP NAS:

  • Log in to the QNAP NAS web interface.
  • Go to “Control Panel” > “Storage & Snapshots” > “iSCSI Storage.”
  • Create an iSCSI target and specify the settings, such as the target name and access permissions.
  • Create an iSCSI LUN (Logical Unit Number) within the target, specifying its size and other relevant parameters.
  • Note the iSCSI Target IQN (iSCSI Qualified Name) and the IP address of your QNAP NAS.

2. Connect Windows to the iSCSI Target:

  • On your Windows machine, open the iSCSI Initiator.
    • You can open it by searching for “iSCSI Initiator” in the Start menu.
  • In the iSCSI Initiator Properties window, go to the “Targets” tab.
  • Enter the IP address of your QNAP NAS in the “Target” field and click “Quick Connect.”
  • In the Quick Connect window, select the iSCSI target from the list and click “Connect.”
  • In the Connect to Target window, check the box next to “Enable multi-path” if your QNAP NAS supports it.
  • Click “Advanced Settings” to configure CHAP (Challenge-Handshake Authentication Protocol) settings if you have set up authentication on your QNAP NAS.
  • Click “OK” to connect to the iSCSI target.
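The same connection can be scripted with the built-in iscsicli tool; a minimal sketch, where the portal IP and target IQN are placeholders for the values noted on your QNAP NAS:

iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2004-04.com.qnap:ts-453:iscsi.target1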

3. Initialize and Format the iSCSI Disk:

  • Once connected, open the Disk Management tool on your Windows machine.
    • You can open it by searching for “Create and format hard disk partitions” in the Start menu.
  • You should see the new iSCSI disk as an uninitialized disk.
  • Right-click on the uninitialized disk and choose “Initialize Disk.”
  • Right-click on the newly initialized disk and select “New Simple Volume.”
  • Follow the wizard to create a new partition, assign a drive letter, and format the disk with your preferred file system.

4. Access the iSCSI Disk:

  • After formatting, the iSCSI disk should be accessible through the assigned drive letter.
  • You can now use the iSCSI disk for storage purposes, and it will behave like any other locally attached storage device.

Remember to follow best practices for iSCSI security, such as enabling CHAP authentication and restricting access to specific IP addresses, especially if your QNAP NAS is accessible over the internet. Always refer to the specific documentation for your QNAP NAS model for accurate and up-to-date instructions.

Login to ESXi with Domain User | VMware ESXi Active Directory Authentication

Configuring VMware ESXi for Active Directory (AD) authentication involves joining the ESXi host to the Active Directory domain and configuring user permissions accordingly. Here are the steps:

1. Access the ESXi Host:

  • Connect to the ESXi host using the vSphere Client or vSphere Web Client.

2. Configure DNS Settings:

  • Ensure that the DNS settings on the ESXi host are correctly configured, and it can resolve the Active Directory domain controller’s name. You can set the DNS configuration in the ESXi host under “Networking” > “TCP/IP Configuration.”

3. Join ESXi Host to Active Directory:

  • In the vSphere Client, navigate to the “Host” in the inventory and select the “Configure” tab.
  • Under the “System” section, select “Authentication Services.”
  • Click “Join Domain” or “Properties” depending on your ESXi version.
  • Enter the domain information, including the domain name, username, and password with the necessary permissions to join the domain.
  • Click “Join Domain” or “OK.”

Example:

  • Domain: example.com
  • Username: domain_admin
  • Password: ********
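If you prefer to script the join, PowerCLI offers an equivalent; a minimal sketch with placeholder host name and credentials:

Get-VMHost -Name "esxi.example.com" | Get-VMHostAuthentication | Set-VMHostAuthentication -Domain "example.com" -JoinDomain -Username "domain_admin" -Password "YourPassword"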

4. Verify Domain Join:

  • After joining the domain, you should see a success message. If not, check the credentials and network connectivity.

5. Configure Permission:

  • Go to the “Permissions” tab in the “Host” section.
  • Add the AD user account to the appropriate role (e.g., Administrator or a custom role).

Example (PowerCLI):

$esxiHost = Get-VMHost -Name "esxi.example.com"
New-VIPermission -Principal "EXAMPLE\domain_user" -Role "Admin" -Entity $esxiHost

6. Test AD Authentication:

  • Log out of the vSphere Client and log in using an Active Directory account. Use the format “DOMAIN\username” or “username@domain.com” depending on your environment.

Example:

  • Server: esxi.example.com
  • Username: example\domain_user
  • Password: ********

7. Troubleshooting:

  • If authentication fails, check the ESXi logs for any error messages related to authentication or domain joining.
  • Ensure that time synchronization is correct between the ESXi host and the domain controller.
  • Verify that the Active Directory user account has the necessary permissions.

Note: Always refer to the official VMware documentation for your specific ESXi version for the most accurate and up-to-date information. The steps might slightly differ based on the ESXi version you are using.