I’ve been a big fan of Schneider EcoStruxure Data Center Expert because it’s a practical tool for monitoring data center power, humidity, and temperature. I recently bought a Schneider Easy Rack PDU metered outlet version (EPDU1116SMBO), which allows me to monitor the power consumption of individual sockets. This means I can monitor each home lab…
Silent Cooling Solution for the Nvidia L4 24 GB GPU
I am keeping this post very short, with mostly photos. The GPU’s rated maximum power is 72W, though during my tests it briefly exceeded 75W; it can also be capped at 30W. I tested the cooling performance by running games like Black Myth: Wukong, Cyberpunk 2077, Uncharted 4: A…
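Capping the power draw mentioned above is done with nvidia-smi; a minimal sketch, assuming the L4 is GPU index 0 (the index and the 30W figure are illustrative, and the value must fall within the board's supported power-limit range):

```shell
# Show the current, default, and min/max power limits for GPU 0
nvidia-smi -i 0 -q -d POWER

# Cap GPU 0 at 30 W (requires root privileges)
sudo nvidia-smi -i 0 -pl 30
```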
Nvidia L4: Powerful Low-Power GPU for Nvidia AI Enterprise and Virtual GPU
I’ve been searching the internet for a long time to find a versatile GPU for AI and video graphics workloads that also supports vGPU and Nvidia AI Enterprise. Some of the GPUs I considered were the RTX 6000 Ada, A2, A10, L4, T4, A40, and A16. I was most drawn to the RTX 6000 Ada…
Expanding Core-ESXi RAM from 128GB to 593.72GB Using Samsung 970 EVO Plus NVMe SSD
With the release of vSphere 8.0 Update 3, VMware introduced Memory Tiering over NVMe as a Technical Preview. This feature enables users to expand memory capacity on a host using locally installed NVMe devices. Several experts, including William Lam, have already tested this feature, and it appears to be working seamlessly. Like many other home…
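As a hedged sketch of how the feature is enabled per host from the ESXi shell, based on the community write-ups referenced above (the device path is a placeholder, and since Memory Tiering is a Technical Preview, verify the exact commands against current documentation):

```shell
# Enable the Memory Tiering kernel setting, then reboot the host
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Mark a local NVMe device as a tier device
# (replace <device-id> with your NVMe device's identifier)
esxcli system tierdevice create -d /vmfs/devices/disks/<device-id>

# Optionally set the NVMe tier size as a percentage of DRAM (e.g. 400%)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
```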
Deploying and Configuring Nvidia DLS for AI Enterprise and vGPU: Step-by-Step Guide
NB! At the end of the blog post, there is a YouTube video and an eBook – a photo-based step-by-step guide.
- Download Nvidia vGPU Drivers for ESXi
- Download Nvidia vGPU License Server
- Installing Nvidia vGPU Drivers on ESXi
- Deploying the Nvidia DLS OVA to vSphere
- Configuring Nvidia DLS (License) Server
- Installing Nvidia Drivers on a Windows…
Overcoming PCIe Slot Compatibility Challenges for Nvidia Tesla P4 GPU Installation
I bought an Nvidia Tesla P4. It was an unused GPU and came with a 3D-printed cooler and fan. I played around with this GPU on my AI/ML server, and it worked fine. Then I decided to move it to my other server, which runs 24/7. The reason is simple: I have jump hosts and…
Quick and Easy Guide to Installing Meta Llama 3.1 405B, 70B, 8B Language Models with Ollama, Docker, and OpenWebUI
I will show how easy and quick it is to install Llama 3.1 405B, 70B, 8B, or another language model on your computer or VM using Ollama, Docker, and OpenWebUI. It is so simple to install that even a grandmother or grandfather could do it. This is private AI, not cloud-based. All data is on…
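A rough sketch of the stack described above, using the public Ollama and Open WebUI container images (container names, volume names, ports, and the 8b tag are illustrative; the larger models need far more disk and RAM):

```shell
# Run Ollama with GPU access, serving its API on port 11434
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull a Llama 3.1 model inside the container (8B shown here)
docker exec -it ollama ollama pull llama3.1:8b

# Run Open WebUI on port 3000, pointed at the local Ollama instance
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

After this, the chat interface is available in a browser at http://localhost:3000.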
Meta Llama 3.1 405B: GPU vs. CPU Performance Evaluation and RAM Considerations
It’s time to start testing various Private AI models, and fortunately, the timing is just right. Meta has just released six new AI language models. These models run on-premises and do not interact with the cloud or OpenAI’s ChatGPT. Llama 3.1 405B competes with leading models like GPT-4, GPT-4o, and Claude 3.5 Sonnet, while smaller…
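For sizing RAM, a back-of-the-envelope estimate (mine, not from the post) is weights ≈ parameter count × bytes per parameter. This ignores the KV cache and runtime overhead, so treat it as a lower bound:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    bytes_per_param = bits_per_param / 8
    # billions of parameters × bytes per parameter = gigabytes
    return params_billion * bytes_per_param

# Llama 3.1 405B in FP16 needs roughly 810 GB just for weights,
# while a 4-bit quantization drops that to about 203 GB.
print(weight_memory_gb(405, 16))  # → 810.0
print(weight_memory_gb(405, 4))   # → 202.5
print(weight_memory_gb(8, 16))    # → 16.0
```

This is why the 405B model is realistic only on very large CPU-RAM hosts or multi-GPU setups, while the 8B model fits on a single consumer GPU.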
Solving Real Business Problems with Private AI: Unlocking Efficiency and Productivity
I’m on a mission to find a company facing a real problem that can be solved using Private AI—AI that operates entirely within your own data center, without relying on cloud services. I need your help to identify the most challenging, time-consuming, or tedious issues within your company that could greatly benefit from an AI…
Home Lab 07.2024: A Snapshot of its Current State
Servers, TOTAL:
- CPU: 144 Core / 288 Threads + 16 Core
- RAM: 1.8 TB + 128 GB
- NVMe: 95.44 TB
- SSD: 4.4 TB
- HDD: 4.5 TB
- Network: 100 GbE + 10 GbE

AI/ML/vGPU Server:
- CPU: Intel® Xeon® Platinum 8490H 60 Core 120 Threads
- CPU Cooler: Noctua NH-U14S DX-4677
- Motherboard: GIGABYTE MS33-AR0
- Network: NIC 2x RJ45 10GBase-T ports
- Network: NIC 1x RJ45 1GBase-T port
- Network: PCI-E 5.0 x16 100GbE…