
FreeRADIUS Server Configuration Tool

A tool developed for the Linux operating system and written in the Python programming language. Its purpose is to configure a FreeRADIUS server quickly and easily.

To understand what FreeRADIUS is, it helps to first understand what RADIUS stands for:

RADIUS (Remote Authentication Dial In User Service) is a protocol devised to perform AAA (authentication, authorization, accounting): managing identity verification, granting permissions, and accounting for the usage data of users who access a network remotely. The protocol was developed in 1991 by Livingston Enterprises to verify identity and track accounting, and was later standardized by the IETF (Internet Engineering Task Force). Thanks to its solid support and wide adoption, it is used by ISPs (Internet Service Providers) and enterprises to manage access to Internet, intranet, wireless network, and integrated e-mail services.

At the application level, RADIUS is a client/server protocol that uses UDP (User Datagram Protocol) for transport. RADIUS servers are widely used for network access, for example by RAS (Remote Access Server) equipment, network gateways, and VPN (Virtual Private Network) servers. They have three basic functions:

  • Verifying the identity of users before granting access to the network
  • Authorizing those users or devices for certain services
  • Keeping an account of the usage data of those services

FreeRADIUS: FreeRADIUS is a modular, feature-rich, high-performance implementation of the RADIUS protocol described above. FreeRADIUS is open source software that runs on various operating systems (AIX, Cygwin, FreeBSD, HP-UX, Linux, Mac OS X, NetBSD, OpenBSD, Solaris, and others). With its many AAA server deployments, it has a wide range of applications serving millions of users. The server supports LDAP (Lightweight Directory Access Protocol), SQL (Structured Query Language), and other database types, and has supported EAP (Extensible Authentication Protocol) since 2001, and PEAP (Protected Extensible Authentication Protocol) and EAP-TTLS (EAP-Tunneled Transport Layer Security) since 2003. Today, FreeRADIUS supports all common authentication protocols and databases.
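As an illustration of the kind of configuration such a tool automates, the sketch below renders and appends a NAS client entry to a FreeRADIUS clients.conf file. The function names, file path, and field values are hypothetical examples, not taken from this tool's actual code:

```python
# Sketch: append a RADIUS client (NAS) entry to clients.conf.
# Function names, path, and values are illustrative assumptions,
# not taken from the actual FreeRADIUS-Server-Configuration-Tool code.

def format_client(name, ipaddr, secret):
    """Render a FreeRADIUS client block in clients.conf syntax."""
    return (
        f"client {name} {{\n"
        f"\tipaddr = {ipaddr}\n"
        f"\tsecret = {secret}\n"
        f"}}\n"
    )

def add_client(conf_path, name, ipaddr, secret):
    """Append the rendered client block to an existing clients.conf."""
    with open(conf_path, "a") as conf:
        conf.write(format_client(name, ipaddr, secret))

# Example rendering (no file I/O); a real run would call
# add_client("/etc/freeradius/clients.conf", ...) as root.
print(format_client("office-ap", "192.0.2.10", "s3cr3t"))
```

The `secret` is the shared key the NAS uses to authenticate itself to the server, which is why the tool must run with sudo privileges: clients.conf is normally readable only by root.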

FreeRADIUS 2.0.0 was released at the beginning of 2008; the latest version, 2.1.6, was released in September 2009.

Note: The program must be run as an authorized user with sudo privileges. In addition, the FreeRADIUS server software must be installed.

Cloning an Existing Repository ( Clone with HTTPS )

git clone https://github.com/radiushub/FreeRADIUS-Server-Configuration-Tool.git

Cloning an Existing Repository ( Clone with SSH )

git clone git@github.com:radiushub/FreeRADIUS-Server-Configuration-Tool.git

Running the Program

root@ismailtasdelen:~# python run.py

HPE ProLiant MicroServer Gen10 Plus Ultimate Customization Guide

For the past few weeks, we have been working on the HPE ProLiant MicroServer Gen10 Plus. We started with why it is worth getting excited about. We then started our hands-on series with the HPE ProLiant MicroServer Gen10 Plus vs. Gen10 Hardware Overview, followed by our formal HPE ProLiant MicroServer Gen10 Plus Review. As we were preparing for our review, it became obvious that we wanted to test a lot more in terms of NICs, CPUs, memory, and other options that are not on HPE’s official options list. So we did what STH does and ordered two more units. With our trio of testbeds, we were able to iterate on how one can customize the HPE MSG10 to a fairly solid degree. In this article, we are going to go into what you can do to take this compact platform to the next level.


Before we get too far into this guide, let us first talk power. The external LiteOn DC power supply is rated for only 180W. In our testing, we found that with 2x 16GB DIMMs, a single SSD, and the Intel Xeon E-2224 CPU we could push over 110W in a worst-case scenario. That leaves, at most, 70W of headroom for customization, and realistically only 52W if one wants to leave a 10% margin. There are going to be people customizing this system with all kinds of outlandish configurations, but one must do a max power load test to ensure everything works together. Booting a system is fine to show the “wow” factor, but it also needs to run stably.
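The arithmetic behind that power budget is simple enough to sanity-check. Here is a quick sketch using the figures measured above (the 180W brick and our ~110W worst-case baseline):

```python
# Power-budget sanity check for the MicroServer Gen10 Plus, using
# the figures from our testing: a 180W external brick and roughly
# 110W worst-case draw with the Xeon E-2224, 2x 16GB DIMMs, one SSD.
PSU_RATING_W = 180
BASE_LOAD_W = 110        # measured worst-case baseline draw
SAFETY_MARGIN = 0.10     # keep 10% of the PSU rating in reserve

headroom = PSU_RATING_W - BASE_LOAD_W
usable = PSU_RATING_W * (1 - SAFETY_MARGIN) - BASE_LOAD_W

print(headroom)  # 70W of raw headroom
print(usable)    # 52.0 -> 52W once a 10% margin is reserved
```

Any upgrade plan (extra DIMMs, NVMe cards, add-in NICs) has to fit inside that ~52W envelope under full load, not just at idle.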

HPE ProLiant MicroServer Gen10 Plus Ultimate Customization Video

We have a video called the Ultimate Customization Guide to the HPE ProLiant MicroServer Gen10 Plus. Feel free to check it out here:

We are going to keep the piece on this page updated with more up-to-date information as we get it but that is a good starting point for those who want to listen instead.

Also, as a quick note, we tested ten popular OSes that were mostly not on the HPE ProLiant compatibility list. You can reference that here.

HPE ProLiant MicroServer Gen10 Plus 64GB RAM and Non-ECC

HPE offers the machines with either 8GB or 16GB of RAM in a single-DIMM configuration. For 2-core and 4-core systems, 8-16GB of RAM is very reasonable. If you just need this machine to be a simple NAS, that is all you need.

Technically, the Intel Xeon E-2200 series supports 32GB Unbuffered ECC memory in each DIMM slot and up to 2 DIMMs per channel. There are only two slots on the MSG10 so we wanted to do a quick matrix of 4, 8, 16, and 32GB modules in ECC and non-ECC configurations.

As you can see, all of our configurations worked as expected using the Xeon E-2224 base CPU. If you simply want more RAM, you can add DIMMs and get to 64GB without issue.

For your reference, we were using 32GB DDR4 PC4-21300 / DDR4-2666, CL19, Dual Ranked, x8, Unbuffered ECC and non-ECC DIMMs from Crucial in our testing on the 32GB sizes.

The power consumption impact of adding a second DIMM is under 5W at the wall and much less when idle. We are using 5W simply to calculate our power budget.

HPE ProLiant MicroServer Gen10 Plus iLO 5 Advanced w/o Enablement Kit

When one utilizes the iLO 5 Enablement Kit on the HPE ProLiant MicroServer Gen10 Plus, it opens up the ability to not just use the dedicated port for out-of-band management but also allows one to use a shared network port. Installing this kit also adds the iLO 5 Essentials license level. Since the new MSG10 has a quad-port NIC, we wondered if one could remove the iLO Enablement Kit after applying the Advanced license and simply use the first port as a shared network iLO port.

  • Installed iLO Enablement Kit
  • Changed to shared network on Port 1
  • Restarted iLO controller
  • Activated iLO 5 Advanced Key and restarted iLO controller
  • Verified iLO responsive on Port 1 and powered off the machine
  • Removed iLO Enablement Kit
  • Powered on machine
  • Tested if iLO responsive on Port 1 or Port 2 (the other option)

The result of the procedure above is that iLO was not responsive on either port. Our summary, after we purchased an iLO 5 Advanced license, installed it on the MSG10, and tried removing the enablement card, is that this currently does not work. One needs the iLO Enablement Kit even if a higher license level is purchased and a shared network-only port is used on one of the quad Intel i350-AM4 NIC ports.

This is too bad since it would have made better use of the hardware in many situations. There is also a small power impact to using the enablement kit that this could have saved.

Next, we are going to look at CPU options. The following page will cover hard drive options, then SSDs, and on the last page we will discuss NIC options.

Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.


171 Comments

My god. You might’ve just single-handedly documented the Gen10 Plus better than the community did with the Gen8 we all love. And screw the man. That iLO Advanced should work w/o the enablement kit on a shared port. You’ve created a system where you’ve got a paid iLO Advanced license but you can’t access its features.

When using the HPE quad-port 25GbE QLogic adapter, can it still work when the adapter is set to 4x 25GbE and inserted into the 10GbE port?

Are there any options for higher-wattage power supply bricks (maybe not from HP but third party) that will work with the Gen10 Plus to safely run a possible full config (all 4 drives, max RAM (64GB), PCIe 10GbE, and the E-2288G CPU)? Going all out here! Thank you for all the hard work getting all this info recorded.

Hi Jadehawk – we will be updating this piece with that information. We do not recommend using a higher wattage power brick. See https://forums.servethehome.com/index.php?threads/hpe-proliant-microserver-gen10-plus-ultimate-customization-guide.28013/#post-258541 Chitose Ikeda – It will negotiate the 25GbE links to 10GbE as well if that is what you are asking.

I currently have a Gen8 with 4x8TB drives in the bays and a SATA SSD in the ODD bay to boot from. Are you saying I could move my 4x8TB’s over, and buy a StarTech PEX4M2E1 and an NVMe drive to boot from, and it would work fine? Love the sound of this, but want to confirm before I spend nearly £700

For the ECC 32GB stick, do you have the part number? That’s the only thing preventing me from buying one of those for a remote location.

Super article, thank you. You mentioned in the comparison with the MS10 that the MS10 does not make use of the GPU in the stock Pentium CPU due to firmware/BIOS restrictions. Is that the same for all CPUs? Wondering if I could make use of the HW acceleration in the Xeon E-2246G and let the server run some rendering work overnight.

Any chance to let us know the size of the fan? Not even HP support was able to tell me. I guess 80mm.

Very nice guide. I went to the online VMware Hardware Compatibility Guide, filtered for ESXi 7.0, and in the Keyword text field entered MicroServer. The search came up with two entries for the MicroServer Gen10: 1. Intel Xeon E-2200 (4 or 6-core) Series 2. Intel Xeon E-2200 (8-core) Series. I find it very interesting that currently HPE only offers the 4-core version of the Xeon (in addition to the Pentium CPU, which isn’t compatible with VMware). So I wonder why the VMware compatibility guide mentions 6-core and 8-core. Are these CPUs planned for the future? Also, you had a “top pick” for CPUs. I assume that would work fine with VMware as well?

We covered a bit of this in the Pentium G5420 review, but it is a reduced instruction set chip. That is why you see some differences. On the E-2246G, that is a 6 core part. I hope HPE looks at putting that into a MSG10 as a virtualization platform.

Hi There, wonderful article! I’m really thinkin about getting one Gen10plus. But i would like to Add maximum RAM to it – but when i’m searching for “32GB DDR4 PC4-21300 / DDR4-2666, CL19, Dual Ranked, x8, Unbuffered ECC”, i can only find registered modules. Can you please post a link to the RAM you used? Thank you!

Great review – thanks! Is it possible to go from the stock Pentium CPU to a Xeon E-2234, or are there any restrictions because of differences in cooling or chipset?

mantis – We have not tested the Pentium model. I did, however, ask the HPE product team and the units are identical except for the CPU and memory load-out. So if you buy the Pentium box, you should be able to remove the Pentium (sell it if needed) and replace it with a Xeon E-2234 without issue. I would suggest if you go Xeon E-2234 you probably want to add RAM as well since 8GB for 8 CPU threads is a bit low by modern standards.

Which RAM did you use for the 64GB configuration? I bought a stick and the system boots but throws an error for an unsupported RAM configuration. Could you provide the product number of the RAM sticks that you tested?

Thanks for nothing Patrick. You are clearly missing your audience here… For all who are interested: the fan measures 80x80x38mm. Apparently too loud for the living room.

Is it possible to get the product, brand, or model number of the 32GB RAM you used? I know past MicroServers are very picky about RAM. The wrong one and you’ve made a big investment in nothing. Thank you!

@James Crawford @Karakal @Max Reading Patrick’s description of the RAM used I could find these part numbers (@Patrick: please correct me if I’m wrong) 16GB unbuffered non-ECC – Crucial CT16G4DFD8266 16GB unbuffered ECC – Crucial CT16G4WFD8266 32GB unbuffered non-ECC – Crucial CT32G4DFD8266 32GB unbuffered ECC – Micron MTA18ADF4G72AZ-2G6B2 Hope this helps, Bart

Thank you, Bart Welvaert. So, if I understand correctly, this RAM should also work? Samsung 32GB DDR4-2666MHz CL19 unbuffered ECC: M391A4G43MB1-CTD https://www.Samsung.com/semiconductor/dram/module/M391A4G43MB1-CTD/ I couldn’t find the Micron MTA18ADF4G72AZ-2G6B2 to buy. Is Samsung OK for server RAM?

Thanks for the great review. Is there any performance impact if the two memory slots don’t use the same size memory? For example, can this configuration work: 8GB + 32GB, or 16GB + 32GB? Also, how hard is it to get at and replace the CPU? Just the standard remove heat sink, replace CPU, apply some thermal paste, and put the heat sink back on?

Randman, we did not try, but it should be similar to what one sees on other desktop and server (Xeon E-2100/E-2200) platforms. On the heatsink, that is the right idea. Open the chassis, undo the four screws, and there is a standard socket underneath.

Hi. Out of interest, does anyone know if you can boot from an NVMe PCIe add-in card? I need to have my boot system and certain VMs on an SSD, and I don’t want to take up a HDD slot.

I’m looking to get this server as well as the Intel Xeon E-2246G. Thanks for checking out these CPUs, and I can sleep knowing my server won’t catch on fire :-). Great article! A couple of questions if I were to upgrade the CPU to the Intel E-2246G: 1. I like that the E-2246G supports Quick Sync. Is it safe to assume that the E-2246G’s Quick Sync functionality will be available from the OS (and any integrated graphics in the server won’t interfere with the CPU’s Quick Sync)? 2. The E-2246G has integrated graphics – would the DisplayPort output of the server take advantage of the E-2246G’s graphics? Regards.

Great article! Shame about the iLO enablement kit… But Thanks for taking the time to review, and provide the CPU table matrix!

Hi, I received my new Gen10 and I would like to upgrade the RAM. Will this RAM model CT32G4LFD4266 (from Crucial) work with my Intel Xeon E-2246G? I would like to get 2x 32GB. Internal memory: 32 GB; memory type: DDR4; memory clock: 2666 MHz; intended for: PC/server; form factor: 288-pin DIMM; module layout: 1 x 32 GB; ECC: yes; CAS latency: 19; memory rank: 2; voltage: 1.2 V; module configuration: 4096M x 72; product color: green; certification: CE

Hi Patrick, can you tell me why the Xeon E-2244G is not recommended in the CPU overview? It only draws 116 watts, so why is it not recommended? Thanks!

Thank you for this amazing review. Very good job ! Is there a chance that it works with the Core i3-8300T or i3-9300T without any issue ? Thanks !

Hi. Great guide, but why no Core i5-9500 or i7-9700 in your test? Are they compatible? They are 65W, no?

Incredible guide. Thank you for this. How difficult is a CPU swap? The stock gen10 plus is very close to what I had in mind for a system, except that 4 cores and no hyperthreading is too lean. The ability to swap CPUs makes all the difference in the world to me.

Given all that, I’m inclined to purchase the Pentium G5420 model with 8GB, add the iLO board, and immediately replace the CPU with either an E-2236 or E-2246G and substitute a pair of 32GB memory modules for the 8GB. Does that seem like a reasonable thing to do, or would I be better off just buying one of the other tower form factor boxes in the ProLiant line?

Further to my last comment/question. Do I need any specific thermal paste or equivalent to properly install the replacement CPU or special tools to swap the CPU? Can you point me to instructions for the procedure that you trust?

To answer myself: I just got my new RAM, a Crucial CT2K32G4DFD8266 64GB kit (2x 32GB, DDR4, 2666 MT/s, DIMM, 1.2V, CL19), and it works. 64GB for this little guy!

Benjamin, Also, I think this is non-ECC. Did you also look into ECC (wondering what cost differential would be)?

Fantastic review! Really! All the info in a single place. Patrick – what could be the damage if a more powerful power supply were used, keeping the same voltage but rated for 300W instead of 180W, for example? And a second question – what output voltage does the power brick provide? Thanks

@Rand: The full specs: https://www.crucial.com/memory/ddr4/ct2k32g4dfd8266 I’m waiting for my iLO card, but in the BIOS it shows 64GB without warnings. I tried to find Crucial ECC memory, but it was nowhere to be found at the moment in my European shops.

Thanks, Benjamin. I also looked everywhere online and couldn’t find any ECC 32GB UDIMMS. I only found 32GB RDIMMs. I’m also waiting for the out of stock iLO Enablement kit. Setting up a ProLiant without iLO will be a first for me.

@Rand Same feeling. It was awkward, but it took just 10 minutes with a monitor to set up ESXi 7, and then SSH and the web UI to manage this new toy remotely.

I haven’t tried it but here is an advertised 32GB ECC UDIMM: https://memory.net/product/m391a4g43mb1-ctd-Samsung-1x-32gb-ddr4-2666-ecc-udimm-pc4-21300v-e-dual-rank-x8-module/ Samsung lists it as “Sample” for product status: https://www.Samsung.com/semiconductor/dram/module/M391A4G43MB1-CTD/ Would be curious if it works. I really want 64GB for an ESXi server… There must just not be a market for 32GB ECC UDIMMs. Everything is registered DIMMs.

Well done! Looks like a promising product, but I would prefer another PCI slot or two as well as more power. I’d appreciate a link to the next size up, even if it is a more standard server. Gracias!

So I picked up a ProLiant Gen10 Plus with the Intel Pentium G5420 with a view to swapping that CPU for an Intel XEON E-2246G. I removed the Pentium and replaced it with the XEON, cleaned the thermal paste residue from the heat sink and applied Arctic Silver 5 to the 2246G. When I attempt to power on the system the system health and network light show green but the system will not boot. After a little bit the fan kicks on high and that’s about it. I’m at a loss for how to troubleshoot this. If I pull the XEON and replace it with the Pentium I get the very same result — the system health and network lights are solid green, the system will not boot and after a bit the fan kicks on high.

@Benjamin, this is the first HPE ProLiant I’ve had to connect a KVM to. A little inconvenient, since I have to put it in my office instead of the basement. Also, when I connected the MicroServer Gen10 to my monitor using DisplayPort, I got no signal at all. It seems that maybe an active DisplayPort cable might be required? Anyway, a week before I got this server, I was going to throw out an old Gateway (remember them?) monitor that’s been sitting unused in my basement for years. Fortunately, I kept it, since it has a VGA input that works with the HPE MicroServer Gen10 while waiting for the iLO Enablement Kit. Having to create bootable USB sticks from HPE ISOs (such as to apply an SPP) is also inconvenient compared to doing virtual mounts via iLO. @fellow – the next size up I believe is the ML30 Gen10. @Richard – I replaced my E-2224 with the E-2246G using Arctic MX-4. Sorry, I can’t help here, since I didn’t have any issues. Maybe you found this already, but just in case, take a look at the Troubleshooting Guide in case there are any useful debugging tips: https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_USdocId=emr_na-a00017522en_us

Is anyone who has the server with a E-2XXXG CPU able to confirm that Plex is able to use QuickSync? You can test this by transcoding something in the web player and looking for “Transcoding (hw)” on the Activity page. There are reports that the server doesn’t expose QuickSync but I would like to know for sure from someone who has one.

@Richard Robbins: Did you unplug the cable to set up the new CPU? When I did this, I somehow managed to re-plug the cable the wrong way… The system did not boot and there was only a red LED flashing. @AJ: My Plex shows for my different devices: iOS: SD (H264) – Transcode; Chrome: Live stream.

I just got my Gen10 Plus yesterday and realized the S100i SR doesn’t support ESXi… Any suggestions for a RAID controller for the Gen10? I would like to get a cheap RAID controller instead of the E208i-p on the official support list, which is very pricey…

Hi there, I use an Intel X550-T2 10GbE PCIe card and found that there is no SR-IOV setting in the BIOS. ESXi always shows ‘Enabled / Needs reboot’. I am confused as to whether the Gen10 supports SR-IOV or not.

On the Gen8, the i5 worked, so why would the i5 not work in this generation? I know the chipsets are not “consumer”, but is there any reason it should not work (except what Intel says)?

To answer my own question, as I received my unit: the i5 works perfectly well on it, despite what someone said!

@Benjamin, any issues running your 64 GB non-ECC RAM you posted above (Crucial CT2K32G4DFD8266 64Go Kit (32Go x2) (DDR4, 2666 MT/s, DIMM, 1.2V, CL19)? I have them in my cart just waiting for your results

This and your previous article on the Microserver Gen 10 Plus drove me to buy one, but the DIMM I got is reportedly not supported (the memory POST throws an error and halts). The memory I got was Crucial CT32G4RFD4266. I’m running the Pentium Gold version of the Microserver, but both stock processors seem to have similar memory support, so I don’t think that’s the problem. Any chance you can be more specific on which memory you were using? Anyone else have any suggestions for a 32G ECC DIMM that’ll work with this server? I realize it’s officially not supported, but I was hoping for 64G, and this article gave me hope.

Replying to myself… As others in the comments mention, my problem is probably that I got registered memory and need unbuffered, which is apparently quite difficult to find.

Also just tested that 32GB (2x 16GB) of Crucial UDIMMs work with this system without issue. Am going to replace with 64GB (2x 32GB) instead. In the meantime, does anyone know of a PCIe card that supports 2 NVMe M.2 drives in the Gen10 Plus? I couldn’t get the Supermicro aoc-slg3-2m2 to be recognized by the G10. However, the Silverstone ECM22 works no problem, only it supports just one NVMe drive.

For anyone else wondering I can confirm that Crucial CT32G4DFD8266 (Crucial 32GB Single-Rank Unregistered non-ECC DDR4 2666 CL19) works just fine.

Hi, I bought the cheapest HP ProLiant MicroServer Gen10 Plus (P16005-421). I immediately changed the processor to an Intel Xeon E-2234 (OEM) and the RAM to 2x 16GB DDR4 2666MHz Samsung ECC (M391A2K43BB1-CTD). Everything started up and runs perfectly! Thanks for your article, it helped me a lot. During operation, a question arose. I tried to run a system stability test in AIDA64. The processor overheats in 3-5 minutes and throttling begins. At the beginning of the test, the cooling fan reaches its maximum speed, but after 20-30 seconds it reduces the speed and ceases to effectively cool the processor. Please tell me where to set the operating mode of the cooling fan so that it works at full speed until the processor load decreases.

I solved the problem. In the BIOS: System Configuration → BIOS/Platform Configuration (RBSU) → Advanced Options → Fan and Thermal Options → Enhanced CPU Cooling (when there is no load it works quietly, as in Optimal Cooling mode; at 100% processor load it increases the fan speed to what feels like 60-80% and holds it there).

Mine just arrived today and I really like it so far. The main issue I have at the moment is that I am unable to get it to boot from the NVMe drive using a PCIe adapter card. In the BIOS I can see the card/drive listed in the boot order and controllers, but when I switch to legacy mode to boot from USB it fails. I am assuming I need to keep it in UEFI mode and need to get a signed OS install. When I boot in legacy mode I can boot off the USB, and the Windows install sees the 1TB SSD but tells me it can’t install as the system won’t boot from that drive and to check the BIOS. Anyone have any suggestions? Thanks

@Jason Sorry comment was moderated. No problem with my RAM CT2K32G4DFD8266 since March. PCI Card for double NVMe : PCI Express SSD M.2 / startech / PEX8M2E2

Anyone have any other non-switch dual PCI-E card recommendations that work with the boards bifurcation support? I also was about to purchase the Supermicro aoc-slg3-2m2, until I saw Jason’s comment here. I’m trying to optimise on power and reusing some existing m.2 drives and not having much luck. Might have to revert to the Startech PEX8M2E2 that Benjamin mentioned.

Thanks for the long and great review. I have had my Gen10 Plus for 3 days now and I can’t get any SATA III SSD to be recognized by the BIOS or boot menu. I saw a similar complaint in a forum. Did you ever have a problem when trying different SSDs in the unit? Is the HPE SSD you used so much different from other third-party SSDs?

Hi, so I have a problem. I recently bought a MicroServer Gen10 Plus. My setup is 3x 8TB WD Red in a RAID 5 configuration with a 1TB Crucial X8 USB 3.2 Gen2 SSD for the OS. I’ve been trying to get it running, but for some reason it doesn’t want to boot from the SSD you guys recommended. I tried other approaches. It did boot from a normal thumbstick, but obviously that’s slow and I bought the SSD for a reason. Can you guys help me solve this problem? I have tried almost everything.

Hi, I can confirm that the Crucial CT2K32G4DFD8266 memory is running very fine on my HP MicroServer, with 64GB available under ESXi 6.7 running about 19 VMs on one host. Thanks for testing and recommending, everyone!

FYI the bifurcation in the BIOS only supports 8×8 and NOT 4x4x4x4. That is why the Supermicro aoc-slg3-2m2 is NOT working. If you want 2xNVMe support you have to go with the Startech PEX8M2E2. I’ve been in contact with HP about this. They only support bifurcation in this server for their own NICs (and then 4x4x4x4 is not needed from what I understood). I was assured it is a firmware/BIOS issue rather than a hardware one. So support might be added in the future. However they could not say if this was planned or if it ever will be added.

I took delivery of an MS G10 today and it boots Debian Buster fine from a thumbdrive, but I also can’t get it to boot from a USB SSD using the same setup (it wouldn’t even boot the installation media from the SSD). Did anybody have any luck? In principle, I’m fine with a thumbdrive, but I’m afraid that /var/log will fry it eventually…

Hi, I just got my second G10 server; the first one is running with 32GB. I ordered 2x CT32G4DFD8266 32GB DIMMs for this one and it only runs with one. If I install the two DIMMs, I get an error 00000000 saying that I need to re-seat the DIMMs or upgrade the ROM. Has anybody run into this, and do you know if a BIOS upgrade will fix it?

Quick update: BIOS U48 2.16 does not fix it. I can run with one CT32G4DFD8266 DIMM, but get the 00000000 major/minor error code if I boot with two.

Hi Patrick, I tried to get the NC523SFP NIC running with the value version (Pentium Gold) of this server, to no avail. As soon as the card is plugged in, the server doesn’t even get to boot the OS. The server boots up fine without the NIC. The NIC itself runs fine in a DL380 G6 and G7. I’d like to use this NIC since I have some of them lying around waiting for some work to do. Any hints or tips on this? br Steffen

@Brendan Richman: I can confirm the StarTech PEX4M2E1 https://www.startech.com/HDD/Adapters/pci-express-m2-pcie-SSD-adapter~PEX4M2E1 works with the MSG10 Plus. The PEX4M2E1 only works with NVMe disks, not with SATA M.2 disks. I have it now with a Samsung 970 1TB SSD. I can even boot from the PEX4M2E1.

Loving these articles, waiting for mine to come in. I want to add an NVMe when I get it, but I also want to add an additional 4 drives and make an unRAID server (8 drives in total plus NVMe and USB boot). What are the recommendations to add these drives? I have powered USB enclosures that support eSATA as well. Are there cards that would allow eSATA or SAS with single or dual NVMe?

Updated to the latest System ROM, but the system will not boot with any of my drives installed using the default onboard SATA controller. Contacted HP and of course they said they only support their own drives, which cost as much as the entire server for one 4TB drive. These drives do not work: – 6TB Seagate IronWolf NAS ST6000VN0033 – Micron RealSSD C400 128GB

@GrandMasterV yeah… I’m in the same boat. My Gen10 Plus will literally not POST if a drive is installed and I’ve tried a myriad of different drives with no success. To cover my bases I ordered a cheap HPE HDD (801882-B21) to help determine if there’s an issue w/ the storage controller. If I can’t make this work I’ll have to return the Gen10 Plus and look at alternative products without strange HDD restrictions.

We did not try either of those drives GrandMasterV. The larger capacity WD drives and Samsung/ Intel SSDs we usually use on systems all worked when we did this piece.

Patrick, thanks for your amazing reviews of this server! It made me buy one. Any chance you could give a hint on which Crucial ECC 32GB sticks you used in your tests?

FYI on memory: my setup works well with a Xeon E-2234 and two Micron 32GB ECC UDIMMs (MTA18ADF4G72AZ-2G6B2). The Micron memory was purchased from mitxpc.com.

Patrick, I added a Toshiba MK1001TRKB 2TB SAS 7200RPM 16MB enterprise hard drive to the Gen10 Plus, but the drive was not detected. I thought I could use a SAS drive without a controller. Can you confirm whether SAS drives are natively supported and which models were tested?

For a SAS drive you need this controller: HPE Smart Array E208i-p SR Gen10 Controller (MPN: 804394-B21).

Hello! Thank you very much for the very good article! I just bought a QNAP QM2-2P10G1TA with a Samsung 970 EVO Plus 2TB NVMe SSD. If I try to boot the HPE MicroServer Gen10 with the QNAP adapter WITH the 970 EVO Plus installed (it doesn’t matter whether in slot 1 or slot 2), I only see an error message in red letters: “RIP address out of range”. I googled this and came across some articles saying that I have to disable “UEFI Optimized Boot”. I tried this but it won’t work. If I boot with only the QNAP QM2-2P10G1TA and no NVMe SSD, everything is okay and the MicroServer Gen10 boots normally. Does anyone know what’s wrong with my setup? I hope only a BIOS/UEFI setting is wrong, but I can’t find the wrong setting. Best regards!

In the Integrated Management Log I see: “Uncorrectable PCI Express Error Detected. Slot 1 (Segment 0x0, Bus 0x8, Device 0x6, Function 0x0). Uncorrectable Error Status: 0x100000”

I GOT IT. It is very simple: the 2TB Samsung NVMe SSD (970 EVO Plus) is INCOMPATIBLE with the QNAP QM2-2P10G1TA in the HPE MicroServer Gen10. I had a spare NVMe drive, an Intel SSD 660p 2TB. I installed it in the QNAP QM2-2P10G1TA and voila: the MicroServer Gen10 boots without any problems, and under VMware ESXi 7.0b I see the 10Gbit Aquantia chip as well as the Intel 2TB NVMe SSD! I wanted to use the Samsung 970 EVO Plus 2TB as it is a very good combination of speed and reliability, but if it isn’t working from the ground up I will go with the Intel 660p and do some more backups. So now I have 10Gbit in a tiny MicroServer with the Intel 660p 2TB NVMe SSD (a little slower than the Samsung, but that doesn’t hurt) and learned something new. In sum: the 1TB Samsung 970 EVO Plus NVMe works very well; the 2TB Samsung 970 EVO Plus NVMe does NOT work with the QNAP QM2-2P10G1TA adapter in the HPE MicroServer Gen10.

Per AJ’s comment above, has anyone found applications to work with hardware acceleration or Intel QuickSync on their HPE Gen10 Plus server after upgrading to a Xeon E-2246G CPU? Or is the system designed to not utilize any hardware acceleration based on a limitation of the chipset/motherboard design?

Great write-up. However it is not clear to me whether or not non-HP-branded drives are supported by the MSGen10. I haven’t seen any restrictions on the old NL40 and the MSGen8 servers. Would be nice to get a definitive answer. I can’t even find an official statement on the HP datasheet.

I have a Gen10 Plus, it won’t recognise an Intel X540 or X550 10GBASE-T PCIe NIC that I install. The BIOS says the PCIe slot is unpopulated with both cards… LAN ports light up when I plug a cable in… heatsink fan fires right up. Any ideas on how to get it to recognise the NIC? Both cards work in lots of other machines/NAS that I have.

@easternnl How do you boot from the PEX4M2E1? Do you need to use UEFI? There is no option in the boot menu.

Update: switched to UEFI (used legacy BIOS for USB booting), installed Ubuntu server 18.04 but it won’t boot from it. I guess there is a bit more to it…

another update: silly me, NOW there is an entry for the boot order and just moving the NVME controller up will make it boot just fine.

final update: it seems that any pci card should allow you to boot from it, not just the PEX4M2E1, as long as the system can see the disk.

Could you tell me which non-HPE Internal SSDs you have tried so far and it has worked? Those who had trouble detecting non-HPE SSDs, tried to switch from RAID to AHCI? My microserver arrives in a few days, I can’t get HPE SSD and I need at least 4 x 1TB (and HPE doesn’t have that capacity)

My HPE ProLiant Gen10 server failed today. I had upgraded it to the Xeon E-2236 CPU and 64GB of memory. It experienced a critical power issue; the power supply tests good, so it is something on the motherboard. Anyone else having failures after upgrades?

Will the entry-level G5420 support 32 or 64 GB of RAM? The spec sheet says 32, but is there any chance of running 64 in this one? Thanx, BooX

Have tried three different Samsung EVO SSDs. The server stops because the reported HDD temperature rises over limits as soon as they are being copied to. vSphere did install without trouble. So it seems they do not support any disks other than HPE-branded ones.

Would a near-top CPU work if you only want RAM, processing and a boot SSD, with all storage on iSCSI (as a virtual compute node)?

I’m wondering, considering no one has provided a definitive “yes it’s working / no it’s not working” answer: is QSV actually available upon a CPU swap to one with integrated graphics? AFAIK HP has stated that the BIOS they’re using in the MicroServer Gen10 Plus uses Intel’s SPS, and that means QSV is by design/default not available regardless of the CPU in use, due to Intel’s BLOB in the BIOS/firmware. So, according to HP, HW-accelerated transcoding using Intel QSV is out of the question… Then, the follow-up question: which GPU to use for HW-accelerated decoding/encoding workloads with this device/appliance/server? Seems like the Quadro P400 could be the next best thing with its lower power draw, TDP and LP form factor, but has anybody used it here? It’d take the PCIe slot, so no 10G NICs or internal M.2s if planning to use the 4×3.5″ HDD slots…

As a follow-up, it seems like Intel’s DG1 could be the best solution for this thing: being more power-efficient/low-powered, it’d fit nicely within the available power envelope, plus Intel has planned to release it to major SIs, so HP surely is to get some… All they’d need to do is take the DG1 and put it into a low-power LP form, et voila… almost… One could then get ubiquitous QSV through the card without Intel’s SPS interfering: a great solution/purpose for the chips, chipmaker and manufacturer in this possible combo… Though I did hear mentions in the press coverage that DG1 will be restricted to specific CPU families/gens on specific boards with specific dedicated BIOS features. But still, HP could easily nudge Intel into loosening the restrictions for their MicroServer and nettop/thin-client uses. It’s money for all, after all… Furthermore, one could dream of a company going one step further and creating a GPU combo card, e.g. by combining a 2x10GbE NIC onto the board with the GPU chip through PCIe bifurcation to 8x/8x for both devices. Or maybe even better, a GPU dual-M.2 combo: if future direct-storage solutions come to fruition in GPU workloads, having the SSDs right next to the GPU could be beneficial, while currently one could use the slot for both GPU-accelerated purposes and faster storage. Heck, I wonder why companies are not trying to differentiate their products with such unique approaches to PCIe lane optimisations (yeah, I’m aware of those Vega/Navi compute and pricey cards with SSDs added for extended VRAM purposes, but these were just that: a colder VRAM extension, not directly accessible/usable general storage). I for one think that such combo solutions could be even more sought after for MicroServer homelab/mediaserver builds than the iLO add-in NIC cards…

Any thoughts on the best way to get RAID1 when running Windows 10. AFAIK the HPE Smart Array S100i SR Gen10 Controller isn’t supported for W10? No idea whether the Windows Server drivers will install on W10? I don’t want to spend money on a PCI-E RAID card so am thinking of perhaps trying softRAID. FYI, My plan is for the OS to be on single m.2 storage (thus not RAID) and have 2 x 3.5″ for storage with the RAID1. Thoughts appreciated (BTW, this is for a home/office setup).

Another RAID1 issue: I’m having all sorts of issues installing Windows Server (2016 and 2019) Essentials into a RAID1 configuration (WD Gold 4TB drives) using the baseline S100i SR controller. A manual OS install fails (it does not see the logical RAID1 drive), and trying to inject downloaded drivers (https://support.hpe.com/hpesc/public/swd/detail?swItemId=MTX-652001b1e4d142bb968146b571) during installation does not work. OS installation also fails when trying to use the Intelligent Provisioning or Rapid Setup processes. RAID0 also did not work, but installing to a non-RAID single drive was not a problem. If anyone out there has RAID1 working through the S100i SR for WinServer 2019, your install process and which drivers you used would be appreciated.

Regarding installing 2.5” drives in the 3.5” bay, in the UK at least it is just as cost effective to get the HPE specific part, 870213-B21 which is designed for these bays and works very well.

Has anyone been able to get a 4 TB SATA SSD to work in one of the internal SATA slots? I have it in a 2.5 to 3.5 drive adapter but even after rebooting, Smart Storage Administrator doesn’t see it at all. I have an internal NVME drive installed as the boot (in a PCIe adapter) and two 8TB drives installed in bays 3 and 4, respectively. The 4TB SATA SSD is a brand new Samsung 860 Evo.

FYI, in reference to my previous post, I don’t think the drive adapter matters. The 860 Evo isn’t detected even when I connect it directly to the SATA port with no adapter at all. Very disappointing.

In case someone was waiting for the verification concerning the PEX8M2E2: I’ve just added the PEX8M2E2 with 2x Samsung 970 EVO 2TB NVMe in my MicroServer Gen10 Plus and it works.


I got three 4TB 860 EVOs and one 4TB 870 EVO. If I install only one of these 4 SSDs, the Gen10 can POST normally and the disk is recognized. However, if more than 2 of these EVOs are installed at the same time, the disk light on the front panel stays on and at least 1 disk won’t be recognized. I’ve even tried switching these disks into a different SATA order, but it simply didn’t work. So I decided to order four 3.84TB PM883s, and guess what? It simply works! Thus I highly suspect there may be a disk whitelist or something.

I bought one of the “G” CPUs for the graphics / hardware encoding. However, I can’t get it to work. Digging a little, it appears that it’s not supported (official answer from an HPE product manager). So what is the point of upgrading to a “G” CPU? You might want to edit the article to indicate there is no point. https://community.hpe.com/t5/ProLiant-Servers-Netservers/Microserver-Gen10-Plus-and-Intel-Quick-Sync/td-p/7084407

Hi, Great article. What do you think of Synology card that use pcie3 8x for 2m.2 and 1 10gbe ? https://www.synology.com/en-us/products/E10M20-T1#specs

Came across this HPE advisory on drive compatibility. Note that the power consumption of the storage devices should not exceed the per-drive power budgets below (max drive count is four): the power budget for a 2.5″ SATA SSD cannot exceed 2.23W per drive; the power budget for a 3.5″ SATA HDD cannot exceed 12W per drive. https://support.hpe.com/hpesc/public/docDisplay?docId=a00106659en_us&docLocale=en_US

Great article, just wondering if I could use NVMe on the PCIe riser to run my operating system, e.g. Windows?

I have a configuration with a Smart Array E208i-p 12Gb PCIe SAS controller and 2 original HP 240GB SATA 6G SFF SSDs. Now I tried to add 2 Samsung QVO 8TB drives, but they were not visible in the RAID Configuration Utility. So I tried a 2TB EVO, but still the same. Only 1TB and smaller EVOs (maybe SSDs in general) were detected. I then added 2 Seagate 18TB Exos X18 drives, which can be configured as RAID1.

Hey there! Speaking of the Intel X710-DA4 SFP card, does anyone know the exact manufacturer’s part number/EAN of the card that fits the case/slot? Kind regards, Martin

Hi, you have the E-2278G marked as untested, as over 130W, and as Recommend: no. There are other CPUs with 80W TDP that you don’t mark as over 130W. Is there something not on your chart that pushes it over, or otherwise makes it unsuitable (other than not being tested)? Thanks, cheers, Liam

Hi Bart Welvaert, my system is able to run one of these Samsung 32GB DDR4-2666MHz CL19 unbuffered ECC sticks: M391A4G43MB1-CTD. If I put two sticks into the system it shows a 221 error. What could be the cause?

Sorry for bumping in. There seem to be still a question open: If replacing the Pentium or Xeon with a Xeon with an iGPU, does iGPU work or is that still not usable?

I am working on my Gen10 Plus now. It has VMware enterprise on an SSD I shoved into one of the 4 slots. I have 2 4gb NAS SAS drives… is it better to use the single PCIe slot for a SAS card than for networking? This is in my house and there is no need for 10Gb/25Gb NICs. I have a Windows VM, a FreeNAS VM and that’s it for now. I am getting 2 of the 32GB ECC Crucial sticks used in this article to start off and want to work on storage. Or should I just go all USB storage? It’s going to run Blue Iris for cams, my BBS (lol), and software to stream my scanner feed, but I do want a shitload of storage on it.

Hi Steven, I’ve seen others claim 2 of those modules work without any issue, is it possible you have a bad module? Considering getting a couple of these but now not convinced they’ll work!

I have been running a Crucial CT2K32G4DFD8266 64GB kit (32GB x2) (DDR4, 2666 MT/s, DIMM, 1.2V, CL19) in my HPE MicroServer Gen10 Plus for a while now, and I’m seeing some unusual behavior that I was hoping someone might have suggestions about. First off, iLO reports that it has an “unknown” memory config. dmidecode reports that there is memory there but can’t show anything else. I am using Unraid as my operating system and, while all the memory in the system is usable, it displays as having no RAM installed. Since this isn’t a supported configuration, I’m not sure there is anything I can really do to resolve the issue, but I was hoping someone might have a suggestion.

This is incredible, many thanks. As current Xeon availability and pricing are somewhat mad, I am wondering- if the i3-9100 works as I read from the table, I assume the i5-9400 would work, as well? It seems a reasonable upgrade for me. They are available for 175 EUR and provide 6C/6T within a 65W package. Has anyone tested this, or is there an indicator it will not work?

@STH, thank you for the comprehensive review. I’m curious to know if an i3-9100T / 9300T will work in this server.

I hope they will make a new version with NVMe support and a 12th-gen Intel CPU. This machine was born old; it still has the 1151 socket! The processors available are old, slow and inefficient. Any low-power i5 would blow them away.

Any chance to fit an NVMe drive and a 10Gb NIC into it? I see only one PCIe slot, but maybe there is some splitter, or a 10Gb USB 3.0 NIC (if it negotiates 10Gb and practically offers 2Gb/s then I’m satisfied).

Thanks for this super interesting and detailed review of the HPE ProLiant MicroServer Gen10 Plus and its capabilities. It would have been ideal to add an Amazon link to the tested products in a legend. Otherwise, however, a first-rate contribution. I am thinking about getting one of these servers as a Hyper-V host (LAB) and have done some research on what I need. Thereby I came across the following info on the Intel site: https://ark.intel.com/content/www/us/en/ark/products/191036/intel-xeon-e2224-processor-8m-cache-3-40-ghz.html According to Intel’s website, the max supported memory for the E-2224 processor is 128 GB RAM. Does HP limit the use to 64 GB via BIOS? Has the operation of the HPE ProLiant MicroServer Gen10 Plus been tested with 128 GB, or is too much power required for the operation of 128 GB RAM?

Hi, I have a question: does anyone have a recommendation for a good, inexpensive RAID controller? I’m building a VMware ESXi box and I need a HW RAID option.

Can anyone recommend a GPU for this unit if you can’t take advantage of the Intel QuickSync on the HPE Gen10 Plus?

You can’t honestly recommend a server for which you can’t even find SSD drives that work consistently! These servers are a pain in the butt to configure, since I basically have to go to sites like this to see, through apocryphal statements, which SSDs might work. The only thing that makes sense is the 2.23W max requirement for SSDs. And we all know how easy every SSD manufacturer makes it to find wattage figures for their drives!

How do you get this Microserver to boot from USB SSD? Under boot options I only see my sata drives and network devices.

I’ve tried at least a dozen times installing Proxmox to my Crucial X8 (in both legacy mode and UEFI mode); the install goes fine, but on boot after the install, the MicroServer tells me that it can’t find a bootable drive. I just tried installing Proxmox to one of my HDDs and that worked fine. I can’t figure out why, when I install to my Crucial X8, it won’t boot from it.

I tried installing Debian to my USB SSD and also ran into a problem. After the install, Debian boots to GRUB. When I do ‘ls’ I’m not seeing my USB SSD; I only see my (4) SATA HDDs.

Guys, has anyone tried the i5-9400F? I would give it a try despite the recommendations (above, someone tried the i5-9400 and reported it works), but before investing my money it would be great to get 100% confirmation.

Hi guys, can someone help me? I just bought 1x 32GB Crucial CT32G4DFD8266, but when I install it I receive “error 232 - DIMM initialization error”. Can someone help?

I can confirm the i7-9700F works. Considering the comment above saying the i5-9500 works, the i5-9400F should be fine.

I plan to use this server at home. As I saw, there is a loud case fan but the CPU is fanless. Can somebody tell me if it is possible to substitute the case fan with some heatsink to make the server completely fanless? And is it possible to power the fan off/on via iLO?

Hello, I have a problem with my RAM (Crucial MTA36ASF4G72PZ-2G6D1QI, 32GB, ECC). I have a ProLiant Gen10 with a G5420 processor and get “error code 00000000, DIMM is not supported in the system.” Is this RAM part number not supported? Thank you.

Another data point to add on the SSD issue: 4TB Samsung 870 QVOs were installed as non-boot storage. They initially worked fine, but then intermittently dropped out at reboot time, with the HDD activity LED staying lit when this occurred. Approximately 1 boot in every 3 or 4 this would not occur and the drives would be detected. If a drive passed the initial POST check and was detected, it would remain detected and fully operational until the next boot, regardless of how hard it was used. I believe this is down to the ludicrous 2.23W power limitation on SSDs; HDDs in the same SATA port using the same power brick have a 12W limitation. It would appear that this limitation is an arbitrary one designed to limit SSDs to the single HPE 250GB SATA SSD they make available. The 4TB 870 QVOs have a claimed average read power draw of 2.2W; this probably accounts for the intermittent nature of the issue. Does anyone know of any SATA SSDs that consistently draw less than 2.23W at boot-up and could therefore pass the ridiculous 2.23W limit?

Hi, If anyone interested I can confirm that Gen10 plus works fine with: 2 x Kingston Server Premier – DDR4 – module – 32 GB – DIMM 288-pin – 2666 (KSM26ED8/32ME) and Xeon E-2234 processor.

After several years with 2 of these Gen10 Plus units I can comment on several points: the i9-9900 processor (65W) performs really well compared to the Xeon E-2224. I have tried non-ECC UDIMM memory, and with the latest BIOS it works very well. The integrated controller has terrible performance; however, if an HPE Smart Array E208i-p SR Gen10 is added, operation is as expected for this type of equipment. I have also tested various GPUs and recommend the Quadro P400 and Quadro P1000.

I bought a Samsung PM9A3 NVMe drive but didn’t realize it was the 22110 form factor. Would you recommend a cheap adapter compatible with this HP MicroServer for this NVMe drive?

Has anyone tried using the i9-9900k on this? I can’t seem to find any i9-9900 anymore and wanted to upgrade the gen10 with a better cpu. I see lots of 9900kf, 9900k, and 9900t cpus still available these days.

Would the QNAP QM2-4P-384 SSD-to-PCIe card fit into the MicroServer Gen10 Plus lengthwise? It is about 295 mm long.

So I want to use this with 4x Seagate Exos 16TB drives, which are said to consume up to 10W each at maximum. Also with a QNAP QM2-2P10G1TB (10GBase-T, 2x NVMe slots) which, unlike the card reviewed here, is PCIe Gen 3. The specification says it draws 5W, but surely that’s just idle? How do I calculate what the network card will draw at its peak? I may just use a single SSD in the end, currently an Intel 660p 2TB (wasn’t sure if the 670p will work, has anyone tried?), which I want to partition and use a small bit for the TrueNAS boot and the rest as a cache drive. Or would it be better to have 2 SSDs, one of say 256GB for the system and the other for cache?

Forgot to add it has 32GB of ECC RAM, stock CPU and iLO control board. Just worried about my wattages! Any advice much appreciated.

Hi, will there be a review/update to the new v2 version of the Microserver Gen10 Plus? The new one seems to have among other things PCIe Gen4, built in TPM, different USB port layout. Thank you!

Hi everyone, I would like to use the HPE MicroServer Gen10 with the Smart Array E208i-p controller for RAID purposes, with internal SSDs mounted using the HPE 3.5-to-2.5 cage converter. I would like at least 2TB of available RAID space, also visible/bootable by ESXi. Does anyone know of or has anyone tested other 2.5″ SSD models that fit the 2.23W HPE limit? Thanks.

For those looking to use an NVMe card, I’ve got the Sabrent EC-PCIE (NVMe M.2 SSD to PCIe X16/X8/X4 Card) and a Samsung 980 PRO NVMe drive working in my Gen10 Plus V2 (boot drive for Proxmox). Currently considering a RAM upgrade, trying to find a pair of 32GB Unbuffered ECC 3200MHz UDIMMs which will work in this thing.


How to Configure NIC Teaming on HP Proliant Server using HP NCU

The word “teaming” refers to the concept of combining two or more network ports into a single logical network port. We configure network teaming to get network fault tolerance and load balancing. NIC teaming (network teaming) means you are grouping two or more physical NICs so that they act as a single virtual NIC. The minimum number of NICs that can be grouped (teamed) is two, and the maximum is eight. HP servers are equipped with redundant power supplies, fans, hard drives (RAID), etc. Because redundant hardware components are installed on the same server, the server remains available to its users even if one of those components fails. In a similar manner, by doing NIC teaming (network teaming), we can achieve network fault tolerance and load balancing on your HP ProLiant server.

HP ProLiant Network Adapter Teaming (NIC teaming) allows the server administrator to configure network adapter, port, network cable and switch level redundancy and fault tolerance. Server NIC teaming also allows receive load balancing and transmit load balancing. Once you configure NIC teaming on a server, its connectivity will not be affected when one of the network adapters fails, a network cable disconnects or a switch failure happens.

To create NIC teaming under the Windows 2008/2003 operating systems, we need to use the HP Network Configuration Utility. The HP Network Configuration Utility (HP NCU) is a very easy-to-use tool available for the Windows operating system. This utility is available for download on the driver download page of your HP server (HP.com).

To configure NIC teaming on your Windows-based HP ProLiant server, you need to download the HP Network Configuration Utility (HP NCU). If you are using the Windows Server 2012 operating system on your HP server, you cannot use the HP Network Configuration Utility; use the built-in network teaming feature of Windows instead. Please install the latest version of the network card drivers before you install the HP Network Configuration Utility. On Linux, teaming (NIC bonding) functionality is already available in the operating system, so there is no HP tool you need to configure it.

Different ways to open HP Network Configuration Utility:

HP NCU is used for network teaming and has an easy-to-use graphical interface. Using NCU, you can create and dissolve NIC/network teams. If you have not installed NCU on your server, you need to download and install it from your HP server’s driver download page; NCU is also included in the HP Service Pack bundle. Once you download and install NCU, you need to launch it to create a network team. So let us see how to open the Network Configuration Utility on your HP server. You can open NCU in different ways.


Open HP NCU from System Tray:

Please check whether NCU is listed in the system tray; refer to the screenshot below to understand better.

Open HP NCU from Run window:

Try searching for or running the command cpqteam.exe from Start > Search or the Windows Run menu as shown below. cpqteam is the name of the HP NCU executable program.

Open HP NCU from Network Properties:

Open Local Area Connection Properties on your server; HP Network Configuration Utility will be listed there. Select HP Network Configuration Utility and click Properties. This should open NCU on your server.

Open HP NCU from Program Files folder:

You can find the NCU program, i.e. the cpqteam.exe file, in the C:\Program Files\HP\NCU folder. If none of the above methods work for you, try this step.

NOTE: If you are using Windows 2008 Core Edition, you may have to open a Command Prompt and CD to C:\Program Files\HP\NCU, then run the command hpteam.cpl.

By performing any of the above methods, you can open HP NCU on your server. As you can see from the screenshot below, your network adapters will be listed in HP NCU. You can select multiple network adapters and click the Team button to form a network team on your server.

HP NCU allows you to configure different types of network teams; here are the team types which can be configured using NCU.

  • Network Fault Tolerance Only (NFT)
  • Network Fault Tolerance Only with Preference Order
  • Transmit Load Balancing with Fault Tolerance (TLB)
  • Transmit Load Balancing with Fault Tolerance and Preference Order
  • Switch-assisted Load Balancing with Fault Tolerance (SLB)
  • 802.3ad Dynamic with Fault Tolerance


Network Fault Tolerance Only (NFT):

In an NFT team, you can group two to eight NIC ports, and they act as one virtual network adapter. In NFT, only one NIC port transmits and receives data; it is called the primary NIC. The remaining adapters are non-primary and do not participate in receiving or transmitting data.

So if you group 8 NICs and create an NFT team, only 1 NIC will transmit and receive data; the remaining 7 NICs will be in standby mode. If the primary NIC fails, the next available NIC is treated as primary and continues transmitting and receiving data. NFT supports switch-level redundancy by allowing the teamed ports to be connected to more than one switch in the same LAN.
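The failover behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration of the NFT concept only, not the actual HP teaming driver; the class and port names are invented for the example:

```python
# Hypothetical sketch of NFT behavior: one primary port carries ALL
# traffic; the remaining members are hot standbys until the primary fails.
class NftTeam:
    def __init__(self, ports):
        self.ports = list(ports)   # team members, in configured order
        self.failed = set()        # ports that have gone down

    def primary(self):
        # The first healthy port in order is (or becomes) the primary.
        for port in self.ports:
            if port not in self.failed:
                return port
        return None                # the entire team is down

    def fail(self, port):
        self.failed.add(port)

team = NftTeam(["nic1", "nic2", "nic3"])
print(team.primary())   # nic1 handles all transmit and receive
team.fail("nic1")
print(team.primary())   # nic2 takes over as the new primary
```

Because only `primary()` ever carries traffic, the standby ports add no bandwidth; they only add resilience, which is exactly the NFT trade-off described above.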

Network Fault Tolerance Only with Preference Order:

This mode is identical to NFT, except that you can select which NIC is the primary NIC. You can configure NIC priority in the HP Network Configuration Utility. This team type allows the system administrator to prioritize the order in which teamed ports fail over when a network failure happens. This team type supports switch-level redundancy.

Transmit Load Balancing with Fault Tolerance (TLB):

TLB supports load balancing for transmit only. The primary NIC is responsible for receiving all traffic destined for the server, while the remaining adapters participate in the transmission of data. Note that the primary NIC both transmits and receives, while the rest of the NICs only transmit. In simpler words, when TLB is configured, all NICs transmit data, but only the primary NIC performs both transmit and receive operations.

If you group 8 NICs and create a TLB team, only 1 NIC will both transmit and receive data; the remaining 7 NICs will transmit only. TLB supports switch-level redundancy.
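The asymmetry of TLB can be sketched as follows. This is a simplified illustration, not HP's actual algorithm: real drivers hash on MAC/IP flow information, while here we just hash an invented destination string to pick a transmit port:

```python
import zlib

# Hypothetical sketch of TLB: every healthy port may transmit (chosen
# here by hashing the destination), but all traffic addressed to the
# server arrives on the primary port only.
def tlb_tx_port(ports, dest):
    # A deterministic hash spreads outbound flows over all members,
    # and keeps each flow on one port.
    return ports[zlib.crc32(dest.encode()) % len(ports)]

def tlb_rx_port(ports):
    # Inbound traffic always lands on the primary (first) port.
    return ports[0]

ports = ["nic1", "nic2", "nic3", "nic4"]
print(tlb_rx_port(ports))                        # always nic1
print(tlb_tx_port(ports, "aa:bb:cc:00:00:01"))   # any team member
```

The key point is visible in the two functions: the receive path is fixed to one port, so TLB scales outbound bandwidth only.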

Transmit Load Balancing with Fault Tolerance and Preference Order:


This mode is identical to TLB, except that you can select which NIC is the primary. This option helps the system administrator design the network such that one teamed NIC port is preferred over another port in the same team. This mode also supports switch-level redundancy.

Switch-assisted Load Balancing with Fault Tolerance (SLB):

SLB allows full transmit and receive load balancing. In this team type, all the NICs transmit and receive data, so you get load balancing in both directions: if you group 8 NICs and create an SLB team, all 8 NICs will transmit and receive data. However, SLB does not support switch-level redundancy, as all the teamed NIC ports must be connected to the same switch. Note that SLB is not supported on all switches, since it requires EtherChannel, Multi-Link Trunking or similar support.
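The difference from TLB can be made concrete with a similar sketch. Again this is an invented illustration of the concept, not the switch's real hashing algorithm: because the switch aggregates the team into one logical link, the same per-flow selection applies in both directions:

```python
import zlib

# Hypothetical sketch of SLB: with the switch treating the team as one
# logical link (EtherChannel-style), the same per-flow hash applies to
# transmit AND receive, so every member port carries traffic both ways.
def slb_port(ports, flow_key):
    return ports[zlib.crc32(flow_key.encode()) % len(ports)]

ports = ["nic1", "nic2"]
# Different flows can land on different members, in either direction.
print(slb_port(ports, "client-a"))
print(slb_port(ports, "client-b"))
```

Contrast this with the TLB sketch: there is no dedicated receive port here, which is why SLB needs the switch's cooperation and a single-switch topology.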

802.3ad Dynamic with Fault Tolerance

This team type is identical to SLB except that the switch must support the IEEE 802.3ad Link Aggregation Control Protocol (LACP). The main advantage of 802.3ad is that you do not have to manually configure your switch. Like SLB, 802.3ad does not support switch-level redundancy but allows full transmit and receive load balancing.

How to create Network Team using HP NCU:

Once you open NCU, you will find all the installed network cards listed in it. As you can see from the screenshot below, we have 4 NICs installed. Here we will go ahead and team two network cards in NFT mode; later we will discuss how to assign an IP address to the teamed NICs.

NOTE: HP NCU can also act like a virtual switch (vSwitch). You can use the VLAN button in the NCU window to create multiple virtual network card interfaces in the operating system.

The HP Network Configuration Utility Properties window will look like the one provided below.

Select 2 NICs by clicking on it and then click Team button.

HP Network Team #1 will be created as shown below. Select HP Network Team #1 and click the Properties button to change the team properties.

The Team Properties Window will open now.

Here you can select the type of NIC team you want to implement (See below screenshot).

Here, I will select NFT from the Team Type Selection drop-down list. Click OK once you have selected the desired team type.

You will now be at the screen provided below. Click OK to close HP NCU.

You will receive a confirmation window prompting you to save changes; click Yes.

HP NCU will now configure the NIC teaming; the screen may look like the one provided below.

This may take some time. Once teaming is done, the window provided below will be shown.

Open HP NCU again; you will find that the HP Network Team shows in green.

NOTE: Whenever you update NIC drivers on your HP server, ensure that you dissolve the existing NIC team beforehand. You will find the Dissolve button in the HP NCU window.

How to set Static IP address for Teamed Network Adapters:

Now that you have created a NIC team on your server, you may go ahead and assign a static IP to it, or you can leave the settings at default and your DHCP server will assign an IP automatically. You need to perform the steps below only if you are setting a static IP for your NIC team; with DHCP, the DHCP server automatically assigns a new IP address to the virtual NIC created by the teaming software.

To assign an IP address, open Control Panel and then the Network Connections window. You will find a new network card named “HP Network Team #1”; this virtual NIC is created whenever a network team is formed. To assign an IP address to the teamed NICs, we assign it to this virtual NIC, “HP Network Team #1”.

To set an IP address for HP Network Team #1, please open Network Connection window from Control Panel.

Right-click on Local Area Connection 5 and click on Properties.

Select Internet Protocol Version 4 (TCP/IPv4), and then click Properties.

Select the option “Use the following IP address” and enter a static IP address for the teamed NICs.
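If you prefer the command line, the same static IP assignment can be done with `netsh` from an elevated Command Prompt. The connection name and addresses below are only examples; substitute the connection name Windows gave your HP Network Team #1 adapter and your own addressing.

```shell
:: Example only -- replace the connection name and addresses with your own.
:: Set a static IP, subnet mask, and default gateway on the teamed virtual NIC.
netsh interface ip set address name="Local Area Connection 5" static 192.168.1.50 255.255.255.0 192.168.1.1

:: Set a static DNS server on the same connection.
netsh interface ip set dns name="Local Area Connection 5" static 192.168.1.1
```

You can verify the result afterwards with `netsh interface ip show config`, or by reopening the adapter's TCP/IPv4 properties in Control Panel.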

HP NCU can also be used to fine-tune your network team; you can control various NIC settings with this handy utility. By the way, what do you think of this article? Were you able to team your NICs? Let us know.

Nutanix and HPE hyperconverged solutions bring the Cloud to you

Nutanix and HPE partner to deliver the industry’s broadest choice of how to power private, hybrid, and multicloud environments. Watch the following video to learn why Nutanix and HPE are better together.

Highlighted Solutions Validated for Nutanix on HPE

Enjoy predictable performance and simplified management on a consumption-based platform for End User Computing workloads that accommodates the future of work.

Deploy and manage apps from anywhere — with the agility and flexibility of the Cloud and the security, performance, and cost predictability of an on-premises infrastructure.

Modernize and consolidate enterprise database estates with Nutanix Database Service, a feature-rich Database-as-a-Service (DBaaS) solution, and HPE GreenLake for a consumption-based service model.

Run operationally simple, high-performance computing to the edges of network communication, where large data volumes abound but compute capability for quick insights has been limited.

Featured Assets

How IT Modernization Accelerates Digital Transformation

Exploring how Nutanix, HPE, and AMD drive agility and efficiency

Nutanix and HPE Simplify Journey to Hybrid Multicloud

Learn how you can modernize your data center infrastructure with best-of-breed technology.

Nutanix Enterprise Cloud with HPE ProLiant DX Appliances

Nutanix on HPE® ProLiant® DX Appliances provide powerful turnkey building blocks for your enterprise Cloud.


These best practices walk you through key system design issues.

HPE GreenLake with Nutanix

Gain the flexibility of a monthly payment based on usage with no up-front cost.

How Nutanix Works on HPE ProLiant® DX

An Inside look at Nutanix Enterprise Cloud software and how it works on HPE ProLiant® DX appliances.

HPE GreenLake with Nutanix

Experience the flexibility of a public Cloud with the security, compliance and control of on-prem.

Modern Cloud Infrastructure For Dummies®

Learn about the evolution and value proposition of a modern Cloud infrastructure.

Increasing patient-centric care, reducing cost and complexity.

Removing barriers to imagination with a flexible infrastructure.

Increasing time to value, reducing risk profile with HPE and Nutanix.

Delivering Customer Satisfaction

Technology Service Provider

GreenLake allowed our DRaaS operations to achieve better cost transparency—and competitiveness—than was possible when we bought our own servers.

– DRaaS Product Manager

International Retailer

By reducing database deployment times from several weeks to a few hours, HPE GreenLake with Nutanix Database Service for databases has driven a 2-3x reduction in total data storage needs through eliminated redundancies.


One of our flagship systems ran on an exceptionally high-performing machine, but we took the leap and moved it over to Nutanix, and now we are getting better performance than we were on the other system – again, hugely impressed.

San Francisco District Attorney’s Office

Since putting it into production, my team has been able to focus more time on bringing new applications and services online and much less time on managing the underlying infrastructure.


The synergy between Nutanix HCI software and HPE servers has been good for our business. Since implementation, we’ve been able to rapidly scale our IT infrastructure with minimal management and reduce our operating costs, which is vital in today’s economic environment.


Nutanix just ticked all the boxes and has made a huge difference to our ability to do business. VMs that used to take weeks to provision can now be brought online in minutes and workloads sized on-demand as customer requirements change.


The Nutanix solution has more than lived up to our expectations. Providing the on-demand scalability needed to cope with growing business demand while also enabling us to bring forward plans for DevOps and the adoption of containers, Kubernetes and other Cloud native technologies in pursuit of more agile and flexible IT moving forward.

– Ilan Stark, Project Leader, Viollier AG