Director, RIKEN Center for Computational Science (R-CCS); Specially Appointed Professor, Tokyo Tech. ACM/ISC/JSSST/IPSJ Fellows; IEEE Fernbach (2014) & Cray (2022) Awards; Medal with Purple Ribbon, 2022 (令和4年 紫綬褒章)
Internal @intel Memo Leak Reveals They’re Impressed with @AMD
Competition from AMD is especially tough in #HPC: performance driven by # of cores and # of memory channels (or memory bandwidth). Intel is challenged on both fronts.
eteknix.com/intel-internal…
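The point above — that HPC performance is driven by core count and memory bandwidth — can be illustrated with a simple roofline-style estimate. A minimal sketch; all numbers below are illustrative assumptions, not actual Intel or AMD specifications:

```python
# Roofline model: attainable performance is capped either by peak compute
# or by memory bandwidth times the kernel's arithmetic intensity.
# All figures are invented for illustration.

def attainable_gflops(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Return the roofline bound in GFLOP/s for a given kernel."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical CPU: 64 cores at 32 GFLOP/s each, 8 memory channels at 25.6 GB/s.
peak = 64 * 32.0   # 2048 GFLOP/s
bw = 8 * 25.6      # 204.8 GB/s

# A memory-bound kernel (e.g. a stream triad, ~0.08 FLOP/byte) hits the
# bandwidth ceiling; a compute-bound one (e.g. DGEMM, ~30 FLOP/byte) hits peak.
print(attainable_gflops(peak, bw, 0.08))   # memory-bound: bandwidth limited
print(attainable_gflops(peak, bw, 30.0))   # compute-bound: limited by peak
```

Adding cores raises `peak`; adding memory channels raises `bw` — which is why a vendor lagging on both fronts loses on both kinds of workloads.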
Researchers have proposed a new proactive fault-tolerant mechanism using reinforcement learning to optimize energy efficiency and performance in NoCs, which allows routers to switch among four different fault-tolerant operations.
ieeexplore.ieee.org/document/87148…
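The core idea — a router learning to switch among four fault-tolerant modes via reinforcement learning — can be sketched with tabular Q-learning. The state space, mode names, and reward model below are invented for illustration; the paper's actual formulation will differ:

```python
# Minimal tabular Q-learning sketch: a NoC router picks one of four
# fault-tolerant operating modes per (discretized) fault-rate state.
# Mode names and the 4-level state space are hypothetical.
import random

MODES = ["normal", "retransmit", "reroute", "spare_wire"]  # hypothetical
STATES = range(4)  # e.g. discretized fault-rate levels (assumption)

Q = {(s, a): 0.0 for s in STATES for a in range(len(MODES))}
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def choose_mode(state):
    """Epsilon-greedy selection of a fault-tolerant mode."""
    if random.random() < eps:
        return random.randrange(len(MODES))
    return max(range(len(MODES)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update after observing reward (e.g. combining
    energy cost and delivered-flit performance)."""
    best_next = max(Q[(next_state, a)] for a in range(len(MODES)))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

In a real NoC the reward would trade off energy per flit against latency/throughput, which is how the learned policy optimizes both energy efficiency and performance.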
Norrod: "So in thinking about it, in the original strategy, Milan was where we expected to be back to IPC (or better) parity across all workloads. Even with Intel's road map delays, we have continued to execute within a quarter of when we said we were going to execute." twitter.com/anandtech/stat…
Average tech salaries in 10 U.S. cities:
San Francisco: $145,000
Seattle: $138,000
New York: $133,000
Los Angeles: $128,000
Boston: $127,000
Austin: $125,000
Washington: $123,000
Denver: $117,000
Chicago: $114,000
San Diego: $113,000
Source: Hired
Patterson Keynote:
Current Neural Network Architecture Debate:
TPU, GPU, FPGA, CPU, .....
Everyone and his uncle wants to build you an #AI chip 😊
This slide does not even mention the Intel Xe #GPU!
Patterson Keynote:
Pitfall #8: Not leveraging large batch size
Fallacy #9: ML ASICs are inflexible
Fallacy #10: Can keep increasing training compute at same rate as the past
- 10X per year to push the state of the training accuracy
#HPC #AI
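Fallacy #10's arithmetic is worth spelling out: 10X per year compounds brutally fast. A back-of-the-envelope sketch with illustrative numbers:

```python
# Why "keep increasing training compute at the same rate as the past" is a
# fallacy: 10X-per-year growth compounds to enormous multiples quickly.
# Base units are arbitrary (illustrative only).

def compute_after(years, base=1.0, growth=10.0):
    """Training compute after `years` of 10X-per-year growth."""
    return base * growth ** years

print(compute_after(5))   # five years of 10X/year is a factor of 10^5
```

A 100,000-fold increase in five years would have to come from hardware, budget, and efficiency combined — far beyond what any one of them can sustain.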
On this day in 2010: iPhone 4 released
iPhone apps that didn’t exist then:
Instagram
Snapchat
Netflix app
Lyft
Siri
FB Messenger
WeChat
Wish
Bitmoji
Slack
Apple Maps
Alexa
Tinder
TikTok
Candy Crush
Apple Music
UberEats
Alan Turing FRS, the mathematician and founding figure of computer science and artificial intelligence, was born #Onthisday in 1912.
This is a scalable DNN acceleration architecture. Huawei already uses Da Vinci in their Ascend series of chips that span from 1 mW to 300 W and from 20 MOPS to 300 TOPS. Ascend chips are found inside Huawei's Atlas platform which includes modules, PCIe cards, and edge stations.
Right before the weekend Huawei announced the Kirin 810. Though the SoC is largely incremental over the 710, the interesting tidbit is the new NPU, which no longer uses Cambricon's IP but instead Huawei's own NPU based on the Da Vinci architecture (Huawei will present the architecture at Hot Chips).