
Display Solutions, GPUs, Video Cables, Converters/Adapters


PCoIP Thin Clients

Remote workers in the enterprise end of the media sector often use thin client systems driven by "PCoIP" (PC-over-IP) hardware, such as HP-branded Teradici remote access terminals.

https://www.teradici.com

This thin client gear passes USB (keyboard, mouse, graphics tablet), sound, and monitor signals in an encrypted fashion over a conventional high-speed internet connection. The remote employee never has access to raw files over this thin client connection; they only see the visual image on the monitor, which keeps studios happy for security reasons.

On the data center side, the host server system for the thin client session provides hardware-accelerated graphics using NVIDIA GRID GPU drivers. Often a single rack-mounted server chassis provides more than four concurrent user sessions.
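
As a minimal sketch, assuming the data center host is running the NVIDIA vGPU (GRID) host driver package, the GPUs and any active guest vGPU sessions can be inspected from the host's shell:

# List the physical GPUs installed in the host
nvidia-smi

# List the active guest vGPU sessions (requires the NVIDIA vGPU host driver)
nvidia-smi vgpu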

Workstation Reference Hardware

The typical visual workstation in the enterprise end of the Film & TV / immersive sector has an NVIDIA CUDA-capable graphics card: RTX 2000/3000 series GPUs for freelancers, and NVIDIA RTX A6000 or newer GPUs for 3D animators and game artists working inside large corporate studios.

Most workstations have a small dedicated SSD/NVMe drive for the OS boot volume.

The artists' programs are then run from a shared file server. It is worth mentioning that temp files generated by these programs are not commonly written to the boot volume. All user data is read and written over a 10 Gigabit Ethernet connection to a network file path that is specific to the show currently being worked on.
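
As a minimal sketch, assuming a Linux workstation and an NFS-based file server (the server name and show path below are hypothetical placeholders), a show-specific network path could be mounted like this:

# Mount a show-specific network path from the file server (hypothetical names)
sudo mkdir -p /mnt/shows/show_name
sudo mount -t nfs fileserver.example.com:/shows/show_name /mnt/shows/show_name

# Confirm the volume is mounted and check its free space
df -h /mnt/shows/show_name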

A separate dedicated computer (that is not the file server) is used as a license server for all of the workstations on the local area network. This license server has all of the required hardware license dongles attached to it, and floating software licenses are bound to that system's unique hardware IDs as well.

These days some studios require or expect the ability to run a license server on a VMware vSphere based virtual machine, or on an Amazon AWS EC2 cloud-hosted instance. For companies that are moving to fully cloud-based workstations, "login-based" licensing is common.
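
As a hedged example, many floating license systems follow the FlexLM/RLM style "port@host" convention, where an environment variable points the application at the license server. The port number and hostname below are hypothetical placeholders:

# Point FlexLM-based applications at the floating license server (hypothetical values)
export LM_LICENSE_FILE=27000@licenseserver.example.com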

Dummy HDMI Plugs for Headless GPU Render Nodes

When setting up headless render nodes that need to run 24x7 with hardware-accelerated GPU rendering tasks, it is important to plug a "dummy HDMI plug" dongle-like device into the GPU. These are readily available from marketplaces like OWC, Amazon, and eBay.

This allows the graphics card to correctly auto-sense the EDID resolution parameters so macOS, Windows, and Linux window managers operate correctly. It also allows screen sharing programs to work more reliably.

Some remote access programs, like Parsec, also benefit from having a spare mouse plugged into the USB port if you want to have a hardware cursor that works on Windows.
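
As a quick sanity check on a Linux render node, you can confirm that the dummy plug's EDID was detected and that the driver sees an active display. This is only a sketch assuming an X11 session with the NVIDIA driver:

# List the connected outputs and the EDID-derived display modes
xrandr --query

# Confirm the GPU reports an attached, active display
nvidia-smi --query-gpu=name,display_mode,display_active --format=csv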

PCIe Riser Ribbon Cables

If you need to get creative with how you build your workstation to fit in multiple GPUs, a less common option is to use LinkUP brand PCIe flex cables. Amazon is a good source for them when they are in stock.

https://linkup.one/ultra-4-0-pcie-riser-cables/

The flex cables come in lengths up to 30 cm while still functioning on a PCIe Gen 4 bus at the full 16 lanes of bandwidth. With LinkUP cables it is possible to "fan out" several large GPUs, such as NVIDIA RTX 3090s, wider than the mechanical limits of motherboard PCIe slot spacing would otherwise allow.

Kartaverse/Immersive Pipeline Integration Guide/img/image2.jpg

image109.jpg

Note: The LinkUP flex cables shown above are not bandwidth-throttled like the discount "cryptocurrency mining" style single-lane PCIe riser cables commonly sold on Amazon and eBay. Those discount risers must be avoided at all costs, as a single PCIe lane is not effective for GPU rendering use.
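
To verify that a riser cable is actually negotiating the full link, you can query the current PCIe generation and lane width reported by the driver. This is a sketch using nvidia-smi; note that GPUs drop to a lower link generation at idle, so check the values while a rendering load is running:

# Report the negotiated PCIe generation and lane width for each GPU
nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv

A healthy Gen 4 x16 riser should report a link generation of 4 and a width of 16 under load, while a throttled "mining" style riser will typically show a width of 1.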

MSI Afterburner on Windows GPU Performance Tuning

MSI Afterburner allows you to optimize the thermal cooling and performance of your GPU.

You can improve the stability of GPU rendering workflows by making small changes to the core clock, memory clock, power limit, and fan speed settings.

https://www.msi.com/Landing/afterburner/graphics-cards

Kartaverse/Immersive Pipeline Integration Guide/img/image31.png
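
While Afterburner itself is GUI-driven, the power limit portion of this tuning can also be scripted on Windows or Linux with nvidia-smi. This is only a sketch assuming an NVIDIA GPU and administrator rights; the 300 watt value is just an example and should be chosen from your card's supported range:

# Query the current, default, and maximum power limits
nvidia-smi -q -d POWER

# Lower the power limit (in watts) to trade a little clock speed for cooler 24x7 rendering
nvidia-smi -pl 300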

Green With Envy on Linux Single GPU Performance Tuning

https://gitlab.com/leinardi/gwe

It is possible to persistently enable fan speed control in nvidia-settings by adding the following option to each GPU's "Device" section in the xorg.conf file:

Option "Coolbits" "28"

# Toggle the prefs for all GPUs connected:
sudo nvidia-xconfig --enable-all-gpus

# Back up the generated xorg.conf file, then edit it
cp /etc/X11/xorg.conf $HOME/xorg.conf.bak
sudo gedit /etc/X11/xorg.conf

# For a Dual GPU setup paste the following into the xorg.conf file:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 460.73.01

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    Screen      1  "Screen1" RightOf "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Unknown"
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce RTX 3090"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "28"
    Option         "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce RTX 3090"
    BusID          "PCI:33:0:0"
    Option         "Coolbits" "28"
    Option         "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Finally, you can open the NVIDIA settings utility to check that the preference was applied correctly:

sudo nvidia-settings
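
Once Coolbits is active, the fan speed can also be set directly from the command line with nvidia-settings on recent driver releases. This is a sketch for the first GPU and fan; the 70% value is only an example:

# Enable manual fan control on GPU 0 and set its first fan to 70%
nvidia-settings -a "[gpu:0]/GPUFanControlState=1"
nvidia-settings -a "[fan:0]/GPUTargetFanSpeed=70"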

IPMI Remote Management Interface

If you are building a render farm with second-hand "surplus" business and industrial server gear purchased from eBay, you might notice that server systems from vendors like Supermicro include a low-level device management interface known as IPMI (Intelligent Platform Management Interface).

This interface allows you to modify BIOS settings remotely via a dedicated Ethernet connection that is separate from the server's network interface used by the running operating system.

If the server is from an older generation, such as a quad-socket AMD G34 CPU powered system, you may have to run the IPMI management utility in a virtual machine running an older release of Windows such as Windows XP, Vista, or 7. This is because those older IPMI web interfaces commonly require Internet Explorer 6 and ActiveX controls.

https://www.supermicro.com/en/solutions/management-software/ipmi-utilities
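
For boards with a reachable BMC, you can also drive IPMI from the command line using the open source ipmitool utility instead of the legacy web interface. This is only a sketch; the IP address and credentials below are hypothetical placeholders:

# Query the power state, sensor readings, and event log over the dedicated IPMI LAN port
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme sensor list
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme sel list

# Remotely power cycle a hung render node
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme chassis power cycle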

NVMe Storage RAID Controller Cards

KartaVR v5's volumetric workflows were developed using a file server that had a HighPoint SSD7540 8x NVMe RAID controller card. The card is compatible with Windows, Linux, and macOS systems, which is excellent.

The disk throughput is quite phenomenal, and it reduces the pain of working with large media assets like long image sequences and per-frame photogrammetry-reconstructed mesh sequences.
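
To confirm an array like this is delivering the expected throughput, a quick sequential benchmark can be run against the mounted volume. This is a sketch using the fio utility; the mount point and test size are hypothetical:

# Sequential write and read benchmarks on the NVMe RAID volume (hypothetical mount point)
fio --name=seqwrite --directory=/mnt/nvme_raid --rw=write --bs=1M --size=8G --direct=1 --numjobs=1
fio --name=seqread --directory=/mnt/nvme_raid --rw=read --bs=1M --size=8G --direct=1 --numjobs=1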

image29.jpg

image7.jpg

Networking Gear

image351.jpg