MOUNTAIN VIEW, Calif., April 15, 2022 /PRNewswire/ — Flex Logix® Technologies, Inc., provider of fast and efficient edge AI inference accelerators and the leading supplier of eFPGA IP, announced today that it will be speaking at two important industry shows in April: the Linley Spring Processor Conference on April 20-21 and the Computer Vision Summit on April 27. The talks will focus on the company's InferX™ AI inference accelerator, production boards and software solutions, which deliver the most efficient AI inference acceleration for advanced edge AI workloads such as Yolov5.
Linley Spring Processor Conference Presentation 1:
- Presentation title: Meeting the Real Challenges of AI
- Track: Session 1: Edge-AI Design
- Speaker: Randy Allen, Vice President of Software for Flex Logix
- Abstract: Machine Learning was first described in its current form in 1952. Its modern re-emergence is the result not of technical breakthroughs, but of available computation power. The ubiquity of ML, however, will be determined by the number of computational cycles we can productively apply subject to the constraints of latency, power, area, and cost. That has proven to be a difficult challenge. This talk will discuss approaches to building parallel heterogeneous processing systems that can meet the challenge.
- When: Wednesday, April 20th
- Location: Hyatt Regency Hotel, Santa Clara
- Time: 10:20am-12:20pm
Linley Spring Processor Conference Presentation 2:
- Presentation title: High-Performance Edge Vision Processing Using Dynamically Reconfigurable TPU Technology
- Track: Session 5: Edge AI Silicon
- Speaker: Cheng Wang, CTO and Co-Founder of Flex Logix
- Abstract: To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. In addition, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it to existing GPU, TPU and other approaches to delivering the teraops performance required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation and overall solution cost. We will also demonstrate how existing trained models can be easily ported to run on the InferX X1 accelerator.
- When: Thursday, April 21st
- Location: Hyatt Regency Hotel, Santa Clara
- Time: 1:05pm-2:45pm
Computer Vision Summit Presentation 1:
- Presentation title: The Evolving Silicon Foundation for Edge AI Processing
- Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
- Abstract: To achieve high accuracy, edge AI requires teraops of processing to be executed in fractions of a second. Moreover, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it to existing GPU, TPU and other approaches to delivering the teraops computing required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation and overall solution cost. We will also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
- When: Wednesday, April 27th
- Location: San Jose Marriott
- Time: 10:00am
Computer Vision Summit Presentation 2:
- Panel Discussion: Building Scalable AI Systems
- Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
- Abstract: In this session, panelists will discuss the challenge of rolling out CV applications to have real impact.
- When: Wednesday, April 27th
- Location: San Jose Marriott
- Time: 12:00pm
About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's most efficient AI edge inference accelerator, which will bring AI to the masses in high-volume applications by delivering much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to general-purpose processors. Flex Logix is headquartered in Mountain View, California, and has offices in Austin, Texas. For more information, visit https://flex-logix.com.
MEDIA CONTACTS
Kelly Karr
Tanis Communications
[email protected]
+408-718-9350
Copyright 2022. All rights reserved. Flex Logix is a registered trademark and InferX is a trademark of Flex Logix, Inc.
SOURCE Flex Logix Technologies, Inc.