teraflops


ter·a·flops

 (tĕr′ə-flŏps′) or ter·a·flop (tĕr′ə-flŏp′)
n. pl. teraflops
A measure of computing speed equal to one trillion floating-point operations per second.
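The definition above translates directly into arithmetic: a sustained rate in teraflops is the number of floating-point operations performed, divided by elapsed time in seconds, divided by 10^12. A minimal sketch (the operation counts in the example are hypothetical):

```python
def teraflops(num_flops, seconds):
    """Sustained rate in TFLOPS: floating-point operations per second, divided by 1e12."""
    return num_flops / seconds / 1e12

# Hypothetical workload: 5.0e13 floating-point operations completed in 10 seconds.
print(teraflops(5.0e13, 10.0))  # -> 5.0 TFLOPS
```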

American Heritage® Dictionary of the English Language, Fifth Edition. Copyright © 2016 by Houghton Mifflin Harcourt Publishing Company. Published by Houghton Mifflin Harcourt Publishing Company. All rights reserved.

ter•a•flops

(ˈtɛr əˌflɒps)
n.
a measure of computer speed, equal to one trillion floating-point operations per second.
[1985–90; see flops]
Random House Kernerman Webster's College Dictionary, © 2010 K Dictionaries Ltd. Copyright 2005, 1997, 1991 by Random House, Inc. All rights reserved.
References in periodicals archive
The Intel Stratix 10 FPGA has more than 5,000 hardened floating-point units (DSPs), over 28 MB of on-chip RAM (M20Ks), integration with high-bandwidth memories (up to 4 x 250 GB/s per stack, or 1 TB/s), and improved frequency from the new HyperFlex technology, yielding a peak of 9.2 TFLOPS of FP32 throughput.
Building on over 60 years' worth of Fujitsu-developed microarchitecture, this chip offers peak performance of over 2.7 TFLOPS, demonstrating superior HPC and AI performance.
Located in Virginia, United States, Jefferson Lab's Seneca Data Cluster boasts 117.2 TFLOPS of total processing power and also claimed the number 194 spot on the Green500 list.
The next version, Cray's XE6 with 758 TFLOPS, comprised two separate systems: "Haeon" for operational use and "Haedam" for analysis and backup.
Vendor  Model           HPC  Memory  Bandwidth  Performance
NVIDIA  K40             yes  12 GB   288 GB/s   4.29 TFLOPS
NVIDIA  Titan Black     no   6 GB    336 GB/s   5.12 TFLOPS
AMD     FirePro W9100   yes  16 GB   320 GB/s   5.24 TFLOPS
AMD     Radeon R9 290X  no   6 GB    320 GB/s   5.63 TFLOPS
Intel   Xeon Phi 7120   yes  16 GB   352 GB/s   2.40 TFLOPS

Table 2: Reconstruction times in fps for various image sizes with side length N and fixed versus variable oversampling ratio γ.
We used the authoring software NVIDIA DIGITS for deep learning on a deep-learning-optimized machine with two GTX 1080 Ti GPUs, each offering 11.34 TFLOPS of single-precision performance, 484 GB/s of memory bandwidth, and 11 GB of memory per board.
Junichiro's GRAPE-4 cluster uses 1,692 pipelines and achieves 1.08 TFLOPS [7].
PKDGRAV3's performance on Titan:

Nodes   N_p           Mpc    TFlops    Time/Particle
2       1.0 x 10^9    250    1.2       125 ns
17      8.0 x 10^9    500    10.3      14.7 ns
136     6.4 x 10^10   1,000  82.2      1.84 ns
266     1.3 x 10^11   1,250  152.5     1.00 ns
2,125   1.0 x 10^12   2,500  1,230.3   0.124 ns
7,172   3.4 x 10^12   3,750  4,130.9   0.0365 ns
11,390  5.4 x 10^12   4,375  6,339.2   0.0236 ns
18,000  8.0 x 10^12   5,000  10,096.2  0.0150 ns
It hosts the NVIDIA GeForce GTX 950M GPU, which offers 640 CUDA cores, 1.271 TFLOPS of floating-point performance, and 4 GB of GDDR5 graphics memory.
We can also express the performance of these two methods in TFLOPS; Table 3 shows the effective TFLOPS of the cuDNN SGEMM and 3D WMFA methods.
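An "effective TFLOPS" figure like the one above is conventionally derived from the nominal operation count of the computation divided by measured runtime; for a dense M x K by K x N matrix multiply (the core of SGEMM), the count is approximately 2·M·N·K (one multiply plus one add per term). A sketch of that calculation, using a NumPy matmul on the CPU as a stand-in for cuDNN SGEMM (the matrix sizes are illustrative, not from the paper):

```python
import time
import numpy as np

def sgemm_effective_tflops(m, n, k, seconds):
    # A dense M x K by K x N multiply performs ~2*M*N*K floating-point operations.
    return (2.0 * m * n * k) / seconds / 1e12

# Illustrative measurement: single-precision matmul timed with a wall clock.
m = n = k = 1024
a = np.random.rand(m, k).astype(np.float32)
b = np.random.rand(k, n).astype(np.float32)
t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0
print(f"{sgemm_effective_tflops(m, n, k, elapsed):.3f} effective TFLOPS")
```

Note that this is a throughput metric for the nominal algorithm: a method that skips work (as Winograd-style approaches like WMFA do) can report effective TFLOPS above the hardware peak.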
So far, the S4 O2R environment, with close to 4,700 computing cores (60 TFLOPS) and 1,700 TB of disk storage capacity, has been a great success and consequently was recently expanded to significantly increase its computing capacity.
He also claims in his report that the Nvidia chip is going to feature floating-point performance of around 1 TFLOPS, as per MyNintendoNews (https://mynintendonews.com/2016/10/30/rumour-japanese-journalist-details-nintendo-switch-nvidia-architecture/).