Welcome to our big Gaming Graphics Card Test 2019. Here we present all of the gaming graphics cards we have tested. We have compiled detailed background information and added a summary of customer reviews from the web. We aim to make your buying decision easier and help you find the best gaming graphics card for you. You can also find answers to frequently asked questions in our guide. Where available, we also offer exciting test videos. On this page, you will additionally find some vital information you should be aware of if you want to buy a gaming graphics card.
The most important facts in brief
A gaming graphics card is essential for achieving high frame rates in video games. 4K and 3D images in particular require high computing power.
Depending on the type, a distinction is made between gaming graphics cards and gaming graphics chips. Let a professional advise you to find the most suitable gaming graphics card for you.
You should definitely consider the memory type, chip clock, memory size (VRAM), slot, and integrated ports of a gaming graphics card when buying.
Gaming Graphics Card Test: The Ranking
- Gigabyte GeForce RTX 2080 Gaming Graphics Card
- MSI Gaming GeForce GTX 1660 Graphics Card
- MSI GAMING GeForce RTX 2060 Graphics Card
- ASUS ROG Strix Radeon Rx 590 Graphics Card
- ASUS ROG Strix Radeon RX 570 Graphics Card
Guide: Questions to consider before buying a gaming graphics card
What makes a good graphics card?
Graphics cards, especially gaming graphics cards, can be relatively expensive. Therefore, you should check in advance whether the desired device also meets the requirements of the games. To make your search a little easier, we have summarized a few of the most important points.
Did you know that the first graphics system was developed in the ’50s?
In 1951, MIT developed a flight simulator called Whirlwind for the U.S. military. It was one of the world’s first computer graphics systems. A graphics solution as we know it today was first used in series production in the ’70s on the Apple II microcomputer.
New graphics cards are often advertised at horrendous prices. But why is that, and is it justified? For gaming graphics cards, there are essentially only two big providers: AMD and Nvidia. The debate about which of the two serves gamers better has been raging for ages.
Actually, the two hardly differ, except that AMD is often a little cheaper and tends to follow, while Nvidia is usually first to bring new technology and performance to the market.
But why are graphics cards so expensive when they are new? Because the two form a de facto duopoly, prices can be set almost arbitrarily.
Although the improvements are mostly minor, new gaming graphics cards are promoted with every marketing trick available. As they get older, prices tend to drop dramatically, while performance remains excellent.
That is why you should always wait a little before buying a card that is very expensive now but may cost only half as much a few months later.
Does memory on a graphics card matter?
“The higher the VRAM, the better the graphics card.” You hear and read this more and more often, but it is not always true. Besides the memory size, the bandwidth is also very important: it determines how quickly data can be moved to and from the memory.
GDDR5 memory is integrated into many gaming graphics cards. Since 2016, however, Nvidia has preferred the improved GDDR5X, which raises the bandwidth from 176 GB/s to 320 – 448 GB/s.
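To see where such bandwidth figures come from, here is a minimal sketch in Python. The formula is the standard one (per-pin data rate times bus width); the example data rates and bus widths are typical illustrative values, not exact specs for any particular card.

```python
# Rough sketch: peak memory bandwidth from per-pin data rate and bus width.
# The figures used below are illustrative, not exact specs of a given card.

def memory_bandwidth_gb_s(data_rate_gbit_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s.

    data_rate_gbit_per_pin: effective data rate per pin (Gbit/s),
        e.g. ~8 for typical GDDR5, ~10 for early GDDR5X.
    bus_width_bits: width of the memory interface, e.g. 256.
    """
    return data_rate_gbit_per_pin * bus_width_bits / 8  # bits -> bytes

# Typical GDDR5 card: 8 Gbit/s per pin on a 256-bit bus
print(memory_bandwidth_gb_s(8, 256))    # 256.0 GB/s

# GDDR5X as on the GTX 1080: 10 Gbit/s per pin on a 256-bit bus
print(memory_bandwidth_gb_s(10, 256))   # 320.0 GB/s
```

This is why a card with less VRAM but a wider bus or faster memory type can outperform one that merely has more gigabytes.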
Graphics card processor (GPU)
AMD and Nvidia use different GPU architectures. Despite some differences, both are a reasonable choice.
Both manufacturers try to set themselves apart through their GPU architectures: AMD uses the Polaris architecture for its Radeon 400 series, while Nvidia uses the Pascal architecture for the GeForce 10 series. You can read more about both on the respective manufacturers’ websites.
Which graphics card do I need for 4K gaming?
In order to achieve a resolution of 3840 x 2160 pixels, a graphics card must meet a few requirements. First of all, you need a monitor that supports this resolution. But that is only the beginning: to drive it at a refresh rate of 60 Hz, you also need the right connection on the graphics card.
This means that at least one HDMI 2.0 or DisplayPort 1.2 port must be integrated. So far, however, mainly Nvidia offers HDMI 2.0; many AMD cards still use the older HDMI variant and can therefore deliver 4K at 60 Hz only via DisplayPort.
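A quick back-of-the-envelope calculation shows why the port version matters. This is only a sketch: it counts raw pixel data, ignoring blanking intervals and encoding overhead (which push the real requirement higher), and the link rates are nominal comparison figures.

```python
# Why 4K at 60 Hz needs HDMI 2.0 or DisplayPort 1.2: raw pixel data only,
# ignoring blanking and encoding overhead (the real requirement is higher).

def raw_video_gbit_per_s(width: int, height: int, hz: int, bits_per_pixel: int = 24) -> float:
    return width * height * hz * bits_per_pixel / 1e9

need = raw_video_gbit_per_s(3840, 2160, 60)
print(f"4K @ 60 Hz needs roughly {need:.1f} Gbit/s of raw pixel data")

# Nominal link rates (Gbit/s) for comparison
links = {"HDMI 1.4": 10.2, "HDMI 2.0": 18.0, "DisplayPort 1.2": 17.28}
for name, rate in links.items():
    verdict = "enough" if rate > need else "too slow"
    print(f"{name}: {rate} Gbit/s -> {verdict}")
```

Roughly 12 Gbit/s of raw pixel data already exceeds what HDMI 1.4 can carry, which is why the older port tops out at 4K/30 Hz.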
How much do gaming graphics cards cost?
According to various technology websites that have conducted tests, good current gaming graphics cards cost between 150 and 450 dollars. As described above, the newest generation is much more expensive than older devices. Often, however, older models and generations are sufficient to perform well in current games.
When should I replace my gaming graphics card?
New games also bring new requirements, and older generations sometimes can’t keep up anymore.
Many players know the situation: a new game is released, and its requirements are too high for your hardware. Now there are two options:
- Play the game at low graphics settings and low frames per second.
- Install a new graphics card.
As soon as current games can only be played with poor performance, or the graphics card overheats from being overloaded, you can think about an upgrade. Of course it is up to you, but once performance gets too bad, you should buy a new device.
Decision: What types of gaming graphics cards are out there and which one is right for you?
As with graphics cards for “commercial” use, there are two major manufacturers for gaming that dominate the market:
Gaming graphics card from Nvidia
Gaming graphics card from AMD
The different manufacturers deliver their graphics cards with additional features and free apps. Which provider you choose is a matter of taste and for some, even a question of faith. In the following section, we will compare the two market leaders and go into the respective advantages and disadvantages in more detail.
What is behind an Nvidia graphics card and what are its advantages and disadvantages?
Nvidia is one of the best-known companies in the gaming and graphics sector. Nvidia was founded in 1993 and has been producing graphics cards and chips ever since. With the GeForce 256 in 1999, they were the first to market a chip as a GPU (Graphics Processing Unit).
The main focus is on desktop graphics cards. However, there are other variations of graphics chips that are suitable for notebooks (GeForce Go) and even mobile phones (GoForce).
You’ll also find these graphics solutions in the world of game consoles: the Sony PlayStation 3, for example, is equipped with an Nvidia graphics chip.
Among gamers, Nvidia graphics cards and chips are generally very popular. They convince with excellent performance and additional features such as ShadowPlay, which lets you record your gameplay while you play.
This is also possible with an AMD graphics card. However, Nvidia has its own software for this and thus simplifies the configuration effort considerably.
If you decide to play your games in 3D, Nvidia is also the better choice: the necessary software is built directly into the driver, whereas with AMD you have to buy it separately first.
What is behind an AMD graphics card and what are its advantages and disadvantages?
AMD was founded in 1969 and originally produced integrated circuits. In 2006, AMD acquired ATI Technologies and has since manufactured its own graphics cards and components for these graphics chips.
AMD graphics solutions are on average cheaper than those from Nvidia. The reason for this is that AMD has acquired several technology companies. This enables them to save costs by supplying and manufacturing important parts themselves.
Similar to Nvidia, AMD also offers different graphics cards and chips. The cards that are best suited for gaming are called AMD Radeon. A class above this is the AMD FirePro series, which is used in the professional sector.
AMD graphics cards can also be found in some game consoles. For example, the Sony Playstation 4 and the Xbox One S rely on AMD graphics solutions.
With AMD, video recording during gameplay works similarly; however, it takes a few more clicks and additional software.
Another option with AMD graphics cards is to pair multiple monitors together to form a large monitor. So you can connect three monitors side by side and create an extended horizontal field of view.
On average, AMD gives you more graphics power, i.e., more FPS (frames per second), for less money. Both manufacturers’ cards have advantages and disadvantages; it depends on what you require from your card.
Whether you choose an Nvidia GeForce or an AMD Radeon graphics card is a question of taste.
Purchase criteria: Based on these factors you can compare and rate gaming graphics cards
When comparing gaming graphics cards, you are always confronted with a lot of different and confusing data. The following paragraphs will help you clarify which figures matter to you and how to compare them.
In summary, these are:
- memory type (GDDR)
- chip clock
- memory size (VRAM)
- slot (PCIe)
- VGA/ DVI/ HDMI/ DisplayPort
In the following sections, you can see what to look for in each of these criteria and which values are optimal for you.
The so-called Graphics Double Data Rate (GDDR) memory is a variant of DDR memory, the standard for main memory in PCs, that has been adapted for use on graphics cards.
The GDDR can be divided into the following types:
The first generation, plain GDDR, is the least powerful graphics memory, with a clock rate of 166 – 400 MHz.
GDDR2 is a little-used further development of GDDR. Due to high heat generation at high frequencies, it has rarely been used.
Nevertheless, it has been implemented in a few graphics cards, such as the GeForce FX 5800 Ultra. Although a maximum clock frequency of 500 MHz is possible, most manufacturers used a clock frequency below 400 MHz.
GDDR3 can be found not only in Sony’s PlayStation 3 but also in Microsoft’s Xbox 360. It was optimized to handle high memory clocks and, at 700 – 1300 MHz, can offer a data throughput of 83.2 GB/s.
By doubling the prefetch, GDDR4 could offer significantly higher frequencies; with clocks above 1 GHz, it initially stood out from its predecessors.
However, although GDDR4 was used in some graphics cards, such as the Radeon HD 3870, it struggled to establish itself on the market: its predecessor soon reached similar clock frequencies at lower prices.
GDDR5, currently the most popular type on the market, is available in chip densities between 512 Mbit and 8 Gbit.
With clock frequencies up to 4 GHz, a transfer rate of up to 8 GT/s per pin can be achieved. In addition to graphics cards such as the ATI Radeon HD 4870, Sony’s PlayStation 4 also uses this memory, and large companies now mass-produce it.
The newest variant, GDDR5X, was introduced in 2016 with the GeForce GTX 1080. With transfer rates of up to 448 GB/s, it offers significantly higher performance than GDDR5 and is recommended for an optimal gaming experience.
The chip clock indicates how many clock cycles the GPU runs through per second. This is important because a higher clock directly improves performance: at a clock rate of 1000 MHz, the chip runs through one billion cycles per second.
Often the chip clock can also be increased; this is called overclocking (OC). In games, this can yield up to 10% better performance, sometimes even more. If excellent in-game performance matters to you, pay particular attention to this.
ATTENTION! Overclocking also increases heat. Make sure in advance that your PC and its components are sufficiently cooled and cannot overheat; otherwise, system errors and crashes can occur, which can damage not only the hardware but also the software.
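The two numbers above can be checked with a short sketch in Python. The 10% figure is the article’s rough estimate, not a guaranteed gain.

```python
# Illustrating the chip-clock arithmetic from the text: 1000 MHz means one
# billion cycles per second, and a ~10% overclock raises the clock (and,
# very roughly, the theoretical throughput) by the same factor.

def cycles_per_second(clock_mhz: float) -> float:
    return clock_mhz * 1_000_000

base_clock = 1000  # MHz
print(f"{base_clock} MHz = {cycles_per_second(base_clock):.0f} cycles per second")

oc_percent = 10
oc_clock = base_clock * (1 + oc_percent / 100)
print(f"With a {oc_percent}% overclock: {oc_clock:.0f} MHz")
# Real in-game gains are usually smaller than the clock increase,
# since games are rarely limited by the GPU clock alone.
```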
Memory Size (VRAM)
The memory size of a graphics card is also called Video Random Access Memory (VRAM). Although improved memory technologies have long replaced the original VRAM chips, the memory of a graphics card is often still referred to as VRAM.
The RAM, or VRAM, gives you a good way to classify a graphics card. With sizes like the 12 GB of the GeForce Titan X, one can definitely speak of a “high-end” card.
But the GeForce GTX 1060 from Nvidia, with its 6 GB of RAM, also still belongs to the “high-end” products.
Mid-range graphics cards usually use 2 to 4 GB RAM. Although there are still isolated 4 GB “high-end” cards, the majority can be rated somewhat lower. Often 2 – 4 GB is sufficient for normal use.
The Asus GeForce GTX660 and Gigabyte GeForce GTX 1050 Ti are two good examples for the middle class.
All cards with less than 2 GB of RAM count as entry-level. Although they often still perform reasonably well, they are no longer convincing.
For a standard PC, an entry-level card is still okay, although many would advise against it. The Gigabyte GTX 660 is one of these devices; its big advantage is, of course, the price. At only about 100 dollars, it is one of the cheapest cards.
For gaming, the more recent “high-end” models are recommended. Mid-range models are usually sufficient but may have problems with current games; entry-level graphics cards are rather unsuitable for gaming.
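The rough classification used in this guide can be summed up in a few lines of Python. The thresholds are the article’s own rules of thumb, not an industry standard, and the labels are chosen for illustration.

```python
# Sketch of this guide's rough VRAM-based classification.
# Thresholds follow the article's rules of thumb, not an industry standard.

def vram_class(vram_gb: float) -> str:
    if vram_gb >= 6:
        return "high-end"
    if vram_gb >= 2:
        return "mid-range"
    return "entry-level"

for card, vram in [("GeForce GTX 1060", 6), ("GTX 1050 Ti", 4), ("older budget card", 1)]:
    print(f"{card}: {vram} GB -> {vram_class(vram)}")
```

Keep in mind that VRAM alone is a crude yardstick; as noted earlier, memory bandwidth and the GPU itself matter at least as much.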
Slot
With models such as ISA and AGP now a thing of the past, PCI has also been pushed out of the market. The newer PCI Express (PCIe) standard is now found in almost all devices.
PCIe managed to displace the previous PCI standard because it offers serial data transfer, whereas its predecessor only supported parallel data transfer.
The PCI-Express 3.0 has been available since 2012. This model is still used as a standard. In general, a comparison should take into account which version of the PCIe is used and the number of lanes.
The higher the version and the number of lanes, the better the bandwidth and transmission speed.
To make this clear: a PCIe with only one lane reaches a speed of 985 MB/s.
With 8 lanes, the data transfer rate is 7877 MB/s. An extreme example is a PCIe x32 with 32 lanes. Here the speed of 31508 MB/s is possible.
This year, another generation is due to launch on the market: PCIe 4.0 should enable a data transfer rate of 1969 MB/s per lane. Until it appears, PCIe 3.0 remains the safe choice.
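The PCIe figures above follow a simple rule: total throughput scales linearly with the number of lanes. A minimal sketch, using the published PCIe 3.0 parameters (8 GT/s per lane with 128b/130b encoding), reproduces them:

```python
# PCIe 3.0 throughput: 8 GT/s per lane with 128b/130b line coding works out
# to roughly 985 MB/s per lane, and total throughput scales with lane count.

def pcie3_throughput_mb_s(lanes: int) -> float:
    gt_per_s = 8e9                   # 8 GT/s per lane
    encoding = 128 / 130             # 128b/130b line coding
    return gt_per_s * encoding / 8 / 1e6 * lanes  # bits -> bytes -> MB

for lanes in (1, 8, 16, 32):
    print(f"PCIe 3.0 x{lanes}: {pcie3_throughput_mb_s(lanes):.0f} MB/s")
```

Graphics cards normally sit in an x16 slot; the x32 case is the theoretical extreme mentioned above.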
VGA/ DVI/ HDMI and Display Port
As already described above, these are different connections. These have certain advantages and disadvantages.
As already listed above, the DVI connection is divided into three different standards. Thanks to digital bitstream transmission, it offers better image quality and handles resolutions up to 1920 x 1200 without problems.
The older VGA standard, by contrast, is now very outdated and no longer found on modern graphics cards. If you still use a VGA connection, a cable length of less than two meters is advisable, as longer cables cause too much quality loss. A DVI cable can be up to 10 meters long before a more significant loss of quality occurs.
The three standards are DVI-D, DVI-A, and DVI-I. DVI-D can only transmit digital signals and can also be converted to HDMI with an adapter.
The DVI-A connector, on the other hand, only transmits analog signals and is therefore only used as a VGA adapter. DVI-I combines these two and can transmit both analog and digital signals.
DVI connectors are usually available in modern graphics cards. In addition to DVI connections, other connections are often integrated:
Its backward-compatible successor HDMI is, like DVI, found on every good graphics card. Note, however, that there are different HDMI versions with different bandwidths and connector types.
DisplayPort, now also to be regarded as a standard, should always be integrated. Its ability to transfer a resolution of 2560 x 1600 at 10-bit color depth sets it apart from its predecessors. The additional auxiliary channel for external devices, such as microphones and cameras, is another great advantage of this connection.
As described before, some factors are needed to achieve a 4K resolution.
In addition to the right monitor, the graphics card’s connection must also support this resolution; this is what HDMI 2.0 and DisplayPort 1.2 are for. Both can display 4K (3840 x 2160). But watch out: not every AMD card offers HDMI 2.0 yet, while Nvidia supports both connections and can therefore reliably deliver 4K.