I have often found it difficult to find Dell switch commands that actually work: they differ not only by OS version but also by model, and I do not rule out further differences between firmware versions. This is a…
Category: Home
Silencing My Server Rack: A Simple Solution
Right now, I’m on a quest to create a silent data center in my apartment. I’ve been reaching out to various experts and companies to gather insights and recommendations. While I’m still piecing together the best overall solution, one immediate challenge has been quieting my server rack. With every new device I buy, I am always…
Monitoring Home Lab Power Consumption with Schneider EcoStruxure Data Center Expert and Easy Rack PDU
I’ve been a big fan of Schneider EcoStruxure Data Center Expert because it’s a practical tool for monitoring data center power, humidity, and temperature. I recently bought a Schneider Easy Rack PDU metered outlet version (EPDU1116SMBO), which allows me to monitor the power consumption of individual sockets. This means I can monitor each home lab…
Silent Cooling Solution for the Nvidia L4 24 GB GPU
I am keeping this post very short, with mostly photos. I tested the cooling performance with different games. The GPU’s rated maximum power is 72W, though during my tests it exceeded 75W. It’s also possible to limit it to 30W. I tested the GPU by running games like Black Myth: Wukong, Cyberpunk 2077, Uncharted 4: A…
Nvidia L4: Powerful Low-Power GPU for Nvidia AI Enterprise and Virtual GPU
I’ve been searching the internet for a long time to find a versatile GPU for AI and video graphics workloads that also supports vGPU and Nvidia AI Enterprise. Some of the GPUs I considered were the RTX 6000 Ada, A2, A10, L4, T4, A40, and A16. I was most drawn to the RTX 6000 Ada…
Expanding Core-ESXi RAM from 128GB to 593.72GB Using Samsung 970 EVO Plus NVMe SSD
With the release of vSphere 8.0 Update 3, VMware introduced Memory Tiering over NVMe as a Technical Preview. This feature enables users to expand memory capacity on a host using locally installed NVMe devices. Several experts, including William Lam, have already tested this feature, and it appears to be working seamlessly. Like many other home…
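For reference, the Technical Preview is enabled from the ESXi shell with a few esxcli commands. The commands below follow the syntax described in public write-ups of the preview (including William Lam’s); the device path is a placeholder, and the exact syntax may change between builds, so treat this as a sketch rather than a definitive procedure:

```shell
# Enable the Memory Tiering kernel setting (a host reboot is required afterwards)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Claim a local NVMe device as a tiering device
# (<nvme-device-path> is a placeholder, e.g. a /vmfs/devices/disks/... path)
esxcli system tierdevice create -d <nvme-device-path>

# Size the NVMe tier as a percentage of installed DRAM (here 400%)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
```

After the reboot, the host should report the expanded memory capacity in the vSphere Client.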
Deploying and Configuring Nvidia DLS for AI Enterprise and vGPU: Step-by-Step Guide
NB! At the end of the blog post, there is a YouTube video and an eBook – a photo-based step-by-step guide. Contents: Download Nvidia vGPU Drivers for ESXi; Download Nvidia vGPU License Server; Installing Nvidia vGPU Drivers on ESXi; Deploying the Nvidia DLS OVA to vSphere; Configuring Nvidia DLS (License) Server; Installing Nvidia Drivers on a Windows…
Overcoming PCIe Slot Compatibility Challenges for Nvidia Tesla P4 GPU Installation
I bought an Nvidia Tesla P4. It was an unused GPU and came with a 3D-printed cooler and fan. I played around with this GPU on my AI/ML server, and it worked fine. Then I decided to move it to my other server, which runs 24/7. The reason is simple: I have jump hosts and…
Quick and Easy Guide to Installing Meta Llama 3.1 405B, 70B, 8B Language Models with Ollama, Docker, and OpenWebUI
I will show how easy and quick it is to install Llama 3.1 405B, 70B, 8B, or another language model on your computer or VM using Ollama, Docker, and OpenWebUI. It is so simple to install that even a grandmother or grandfather could do it. This is private AI, not cloud-based. All data is on…
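As a rough sketch of that kind of setup (the image names, ports, and model tag below are the defaults published in the Ollama and Open WebUI documentation; adjust for your environment and GPU):

```shell
# Start Ollama; its API listens on port 11434 (add --gpus=all for an NVIDIA GPU)
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Pull a model inside the running container, e.g. the 8B variant of Llama 3.1
docker exec -it ollama ollama pull llama3.1:8b

# Start Open WebUI and point it at the Ollama API on the host
docker run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
```

Then browse to http://localhost:3000, create a local account, and the pulled model appears in the model selector. The larger 70B and 405B models use the same commands with a different tag, but need far more RAM or VRAM.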
Meta Llama 3.1 405B: GPU vs. CPU Performance Evaluation and RAM Considerations
It’s time to start testing various Private AI models, and fortunately, the timing is just right. Meta has just released six new AI language models. These models run on-premises and do not interact with the cloud or OpenAI’s ChatGPT. Llama 3.1 405B competes with leading models like GPT-4, GPT-4o, and Claude 3.5 Sonnet, while smaller…