Official response is: "when it's ready" ... informal response is that if we don't have new protos in July sometime then we're in trouble ;)
Oak FPGA Toolchain
(66 posts) (14 voices)
Posted 5 years ago #
As someone with 15 years of FPGA development I would encourage you NOT to waste your time attempting to develop an open source FPGA development tool chain (for your own good).
You would need to:
1) Develop your own synthesis tool
2) Develop your own place and route tool (which will be impossible without proprietary device technology data)
3) Develop your own timing analysis tool to verify that the outputs of (2) will meet timing constraints (again, you'll need proprietary device technology data here).

Without wanting to sound arrogant or rude, I'd say that if you're considering this then you don't understand the scale of the undertaking.
All the FPGA vendors provide more than adequate tools and all (AFAIK) bundle in capable simulation tools. (Testbenching/simulation is essential if you want any non-trivial design to stand a chance of working.) With the Altera tools, if you need the subscription software you're targeting FPGAs that cost $$$ each (ie the one I'm targeting now is $1500). The entire Cyclone family is supported by the free tools.
_Please_ continue what you're doing with the boards and forget developing FPGA tools.
Nial.
Posted 5 years ago # -
You're not wrong. We in no way want to link the success of our hardware platform with the results of an effort to build an alternative hdl->bitfile toolchain. In fact, our initial response to that idea was exactly what you said.
Even after several years of defending the "don't bother until a proper vendor leads the effort to build a FOSS toolchain" position, I was eventually persuaded to at least "consider doing something, no matter how simple, to take steps in the right direction." I don't think any of us doubt that liberation from proprietary vendor tools is the "right direction." It's just a matter of timing, resources, and the small issue of closed hardware specifications. Existing literature makes a compelling argument that it's certainly possible to gain some understanding of the bitstream file format for certain vendors - to the point that simple configurations are possible. See
@inproceedings{1344729,
author = {Note, Jean-Baptiste and Rannaud, \'{E}ric},
title = {From the bitstream to the netlist},
booktitle = {FPGA '08: Proceedings of the 16th international ACM/SIGDA symposium on Field programmable gate arrays},
year = {2008},
isbn = {978-1-59593-934-0},
pages = {264--264},
location = {Monterey, California, USA},
doi = {http://doi.acm.org/10.1145/1344671.1344729},
publisher = {ACM},
address = {New York, NY, USA},
}

and others. Of course, it would be improbable, if not impossible, to fully reverse engineer the bitstream format - and your understanding will be obsolete again with the next revision of the chip. We might not ever know how to properly use some of the chip's finer features like built-in DCMs and multipliers. Therefore, any FOSS toolchain will always be 'inferior' (at least in terms of resource utilization) to the vendor tools. But this is, IMHO, one of those situations where something is better than nothing - and it might take a while to get it.
So for the record, the Oak project is NOT at all founded on building a compiler. In fact, it has always been the plan to rely on Xilinx tools on at least some level, although there are other ways to skirt that issue besides homebrewing a toolchain. However, looking longer term, the number of competent engineers interested in taking a real crack at this problem is growing, and it's a bit too late for "not bothering".
Posted 5 years ago # -
There's a big difference between reverse engineering a bitstream to a simple netlist and starting from scratch and running the whole process from HDL to bitstream.
Altera is a fabless IC company, their only 'worth' is the IP behind the design of their devices. I don't think there's any chance they will release enough information to allow a third party to implement a place and route tool.
You'd also need this information to attempt synthesis; it's an impossible task without it.
How would you attempt timing analysis?
[quote] We might not ever know how to properly use some of the chip's finer features like built in DCM's and multipliers. And therefore, any FOSS toolchain will always be 'inferior' (at least in terms of resource utilization) than the vendor tools. But this is, IMHO, one of those situations where something is better than nothing - and it might take a while to get it. [/quote]
But DCMs/DLLs and multipliers are essential for almost any non-trivial FPGA design. If the resulting tools weren't capable of implementing these then I'd argue that something isn't better than nothing (where 'nothing' means using the vendor's tools).
?
Nial
Posted 5 years ago # -
I think there was initially a desire to have an open source tool-chain but it has been pointed out by myself and others that this is not practical. The idea is to create building blocks that can be plugged together to generate a system. This effort is to shorten the learning curve for a FPGA newbie. At least initially I see these building blocks as SPI, UART, PWM, etc... interfaces, much like the Arduino has libraries to run similar modules on the AVR or in LeafLab's case STM32. I'm pretty sure that there is no way to totally remove the need for at least basic Verilog or VHDL coding to tie things together, and similarly I don't think it is possible to isolate the user from the arcane terminology and quirks of Xilinx (or Altera) tools.

Every effort should be made to create tutorials that walk a user through the FPGA process without letting them get too lost in all the configuration items possible. The idea is to bring FPGA development down to the average user's level to get them started, and then they can learn more and do more until they can make full use of the FPGA's capabilities. I would think that it is greatly in Xilinx and Altera's interest to support something like this, as the more people that understand FPGA development, the more customers they have.
Posted 5 years ago # -
Ah, what goes around comes around!!! Having worked on analog computers, the predecessors to the FPGA, in the late 1960s, I can see a misunderstanding of the limitations of the FPGA currently developing. Analog computers were hardwired to solve a specific problem, similar to today's programming of FPGA's.
1. FPGA's can be reprogrammed by a locally attached processor based on changing conditions, BUT at a cost of increased processor overhead to detect the change in conditions, select the proper reprogramming, then execute the reprogramming.
2. Once a FPGA solution is derived, the attached processor is required to have increased program logic to move into a monitoring-only mode, otherwise there will be an endless oscillation condition.
3. Programming tool chains used with processor/FPGA solutions should be able to generate a solution to the problem in the main processor, debug the solution in the main processor, then program the FPGA with the solution by a simple binary conversion process.

A highly optimized FPGA solution is not necessary because the processor is interconnected and able to reprogram the FPGA. With this in mind, a compiler/linker/lib solution would not be necessary.
I personally see one processor connected to multiple FPGA's which are used for input and output signal conditioning/filtering and reprogrammed on a decreasing frequency.

A toolchain of Arduino-like extensions to convert the processor binary code to FPGA binary code should require little overhead and fast development time.
Just my $0.02
Doc

Posted 5 years ago # -
> A toolchain of Arduino-like extensions to convert the processor binary code to FPGA
> binary code should require little overhead and fast development time.

You don't configure FPGA's with binary 'code' but with a binary 'configuration file', which needs a PC-level processor (and the vendor tools) to generate.
> 2. Once a FPGA solution is derived then the attached processor is required to have
> increased program logic to move into a monitoring-only mode otherwise there will
> be an endless oscillation condition.

> I can see a misunderstanding of the limitations of the FPGA currently developing.
Hmmm.
Nial.
Posted 5 years ago # -
[quote] The idea is to create building blocks that can be plugged together to generate a system. This effort is to shorten the learning curve for a FPGA newbie. At least initially I see these building blocks as SPI, UART, PWM, etc... interfaces much like the Arduino has libraries to run similar modules on the AVR or in LeafLab's case STM32. I'm pretty sure that there is no way to totally remove the need for at least basic Verilog or VHDL coding to tie things together and similarly I don't think it is possible to isolate the user from the arcane terminology and quirks of Xilinx (or Altera) tools. [/quote]

Altium have tried to remove users a step from the vendor tools like this but I'm not sure this is an approach worth taking.

As soon as you need functionality outside what's provided you have to get your hands dirty and start writing VHDL/Verilog to do what you want.

Also, if there are any problems with a build you will end up digging in the vendor tools to find out what's wrong.

[quote] Every effort should be made to create tutorials that walk a user through the FPGA process without letting them get too lost in all the configuration items possible. The idea is to bring FPGA development down to the average user's level to get them started and then they can learn more and do more until they can make full use of the FPGA's capabilities. I would think that it is greatly in Xilinx and Altera's interest to support something like this as the more people that understand FPGA development the more customers they have. [/quote]

I'd agree this is a good approach. There is perhaps room for a step-by-step tutorial series taking people through the various stages involved with FPGA design, steadily increasing the complexity of what's being done while laying good foundations for solid FPGA design.

Nial
Posted 5 years ago # -
Agreed, I don't think there is much hope in avoiding VHDL/Verilog in the end, or the complexity of the toolchains, but there is a lot that can be done to bridge the gap between reading a VHDL/Verilog book and getting much more than a blinking LED. The tutorials would walk the user through the steps required, and the code modules would be examples of how to perform various functions, etc...
Posted 5 years ago # -
I think we should be actively separating in our heads the standard (professional) FPGA user and his needs/toolchain from the newbie/experimenter/hobbyist. I will not debate that to develop professional IP cores of any complexity there will, for the foreseeable future, be a dependence on vendor tools. FPGA programming is hard, the chips are complicated, and the companies take an active role in keeping details of those complexities secret. Working engineers need to meet design constraints that can force them to walk down the entire tree of potential configuration options. There's a reason the EDK is such a bloated monolith. It's because professionals (sometimes) need to fall back on those features that allow one to tweak the fine grain details. At 250MHz, stupid naive implementations get glitchy. Hell, that can happen at 72MHz on an FPGA. Start thinking about power consumption, code reuse, system integration, IP parametrization and coregen, device constraints (and constraints files...) and... well, you get the idea. Vendor tools aren't going anywhere in the near term if you hack on FPGA's for a living.
That doesn't mean you can't have fun with FPGA's completely without the vendor tools, however. For example, fixed core bitfiles and libraries are an option we're seriously considering as part of the Oak package. If all you need is 75 PWM channels, why not just download pwm.bit from the internet, and call "fpga_pwm_write(channelID,value)" from C? I'm sure there's more than a handful of people that would call that an FPGA win, without ever having to touch a line of HDL or a vendor tool. What if there was a bitfile for image processing? Or what if we could generate some tools to mix and match routed finished cores into the same FPGA without actually starting back with a fresh top level hdl file? Certainly this latter choice is possible on the Virtex series, which supports this exact feature via partial reconfiguration. (For the record, I don't expect to be able to do this on Spartan 3, although the guys at http://www.recobus.de/ seem to have done some very impressive reconfiguration work with Spartan 3s.)
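To make the fixed-bitfile idea concrete, here is roughly what such a C wrapper could look like. This is a hypothetical sketch only: fpga_pwm_write, the register-per-channel layout, and the stand-in register array are invented for illustration, not an actual Oak API.

```c
#include <stdint.h>

#define PWM_CHANNELS 75

/* Stand-in for the memory-mapped register window into the FPGA.
 * On real hardware this would be a volatile pointer at the FPGA's
 * base address in the uC memory map (invented detail). */
static uint32_t pwm_regs[PWM_CHANNELS];

/* Set the duty cycle (0-255) of one PWM channel implemented by a
 * preloaded pwm.bit image. Returns 0 on success, -1 on bad channel. */
int fpga_pwm_write(unsigned channel, uint8_t duty)
{
    if (channel >= PWM_CHANNELS)
        return -1;            /* no such channel in this bitfile */
    pwm_regs[channel] = duty; /* one register per channel        */
    return 0;
}
```

From the user's point of view this is no harder than analogWrite() on an Arduino: no HDL, no vendor tools, just a bitfile and a header.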
How about non gate-level reconfiguration? We all know (did you?) that Conway's Game of Life is Turing complete. I learned a while back you can do a GOL cell in 38 gates (http://www.reddit.com/r/programming/comments/8bag7/in_the_game_of_life_what_is_the_minimum_number_of/). How big of a 2D GOL grid can you fit on a Spartan 3 250E? I don't know, but I'm sure lots of folks would have lots of fun with the finished bitfile if you could read/write the individual cells from the processor next door.
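For a sense of why a GOL cell is so cheap in fabric: the per-cell update is just a small boolean function of the current state and the neighbour count, exactly the kind of thing that maps to a handful of LUTs. A sketch of the same rule in C (just the logic, not the 38-gate netlist):

```c
/* One Game of Life cell update: 'alive' is the current state (0/1),
 * 'n' is the number of live neighbours (0-8). In fabric this is a
 * small blob of combinational logic clocked once per generation. */
static int gol_next(int alive, int n)
{
    /* Birth on exactly 3 neighbours; survival on 2 or 3. */
    return (n == 3) || (alive && n == 2);
}
```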
Ok, so GOL is a little unrealistic as a usable computer (well...). But there are other reconfigurable machines that don't require actually changing the bitfile. How about a dataflow machine as a matrix of ALU's, a systolic array, or even a very slow, very small FPGA-on-FPGA to allow people to dig into the concept a bit more without getting lost.
I imagine the hacker and hobby communities will embrace FPGA's in some combination of writing custom IP in HDL, sharing HDL and relying on vendor tools just for compilation, or sharing .bit files straight up - given that a certain platform could gain enough traction to share bitfiles for it (that trick doesn't work across chips, after all...). Of course, for simple/slow designs, it is possible to actually generate working bitfiles without vendor tools. That might go farther than you think.
In summation, if I try to imagine a successful FPGA platform in the spirit of Arduino, I don't see vendor tools as necessarily being the centerpiece of everyone's workflow.
As an aside, I completely agree that a good first step is to simply wrap up the vendor tools in such a way that you can handhold users and steer them away from the minutiae they probably don't care about (yet). I recently found myself having to make a minor modification to some IP core on a new board and I had to use the Xilinx EDK, which I didn't yet have an install for on this machine. Between the 5GB downloads, the version compatibility issues, the 3 independent IDE's (ISE, EDK, SDK), and the compile times - it took me a VERY full day to change one line of code. 5+ hours to blinky is unacceptable on any Arduino-level board. For an example of 5 hours to blinky, see this review of the nanoboard - http://fpgacomputing.blogspot.com/2009/11/altium-nanoboard-3000-day-1.html.
Of course the nanoboard isn't an Arduino-class product. It's targeted at professionals. The great thing about dev-boards, FPGA's or just uC's, is that if you target the hobby and student communities, you don't have to exclude the professionals. I hope Oak ends up being that way. A lot of abstraction, wrapping, and a bit of handholding by default. But there's nothing stopping you from diving head first into VHDL and EDK 12.1; we'll even give you .ucf and .bsb files!
As far as all of this talk applies to LeafLabs, Oak will be released initially as just the hardware, with some base IP cores (like the cortex-FPGA interface), .ucf files, some documentation, and some examples. Everything else is still being designed. Perhaps we should call the hardware-only version Sapling.
Also, there's a lot of jargon in this thread. I just assume everyone participating knows what we're talking about. But if you're seeing .ucf files and EDK's and ISE's and thinking "wtf?", just say something. I love this thread; I don't want to drown it with terminology (vendor-specific terminology too, yuk!)
Posted 5 years ago # -
poslathian - I agree with you.
I am also strongly of the 'we need to make progress towards Open Source FPGA toolchains, even if progress is slow' opinion. I have no problem using free vendor toolchains. As you've explained, it needs to be a pleasant, 5-10 minute, out-of-box experience.
I've spoken to lots of folks about Arduino. There are two main camps of people who've never tried it:
- many professional embedded folks don't "get" the Arduino because they can design something "better" in a couple of hours, and because they need something more sophisticated, or
- many people don't believe it can be so easy to use until they try it, then become a bit addicted.

IMHO the Arduino is successful because it significantly lowers the barrier to initial success. Blinking two or more LEDs *is* fun!
As you've said, to reach a wide audience, the FPGA experience will need to be much easier than it is currently. It may be that it is not a traditional technology problem; it is a packaging and support problem. IMHO Linux reached a wider audience when a few distributions, e.g. Ubuntu, focused on ease of installation and simplified support. The same may be true for FPGA's.
I don't know what the 'killer apps' for Oak will be. My interests are robotics and image processing. I think real-time video processing might be fun, but it might be something from left field, like an extended Karaoke game; I'm not sure what will turn folks on. Some undergraduates at a UK university wrote their own soft-GPS using an AD Blackfin as a hobby project. I know folks (including me) who'd like to play with Software Defined Radios.
Some folks built a software radio for WiFi using 'only' one or two Intel cores:
http://www.usenix.org/events/nsdi09/tech/full_papers/tan/tan_html/index.html
sounds like a possible job for an FPGA!

I would like to point out some of the work by Adam Megacz of UC Berkeley back in 2007:
http://www.cs.berkeley.edu/~megacz/research/papers/megacz-fccm07-slides.pdf

He used Atmel's FPSLIC (FPGA+8-bit AVR). He designed and built a board (SLIPWAY).
He also built some of his own software tools.

He was able to get a lot of performance from a low-cost device by using it as an asynchronous computing device, with the sub-100MHz part reliably counting events at 600MHz.
A friend is measuring events to sub-nano-second using a modest FPGA, and some sneaky, affordable, electronics.
The VPRI folks have experimented with a form of domain specific language to generate hardware executing on FPGA's. They've had interesting results; see "FPGAs and “Hardware by Programming”" in http://www.vpri.org/pdf/tr2008004_steps08.pdf
Some other VPRI stuff at:
http://www.vpri.org/vp_wiki/index.php/Main_Page

Maybe the 'killer app' is emulating an old games console, so you can play the original 'Tank Commander'.
It is important to me that it isn't too far away from 'professional' tools, or it has got to have some clear advantages, so that experienced engineers can be 'seduced' to use Oak.
I avoided the 'easy' microcontrollers because of their 'flow-chart' programming tools (don't get me started :-). I had no qualms about using the Arduino because it uses C/C++, which I was already very comfortable using. When we found that a six year old understood what Arduino programs were doing, other reservations disappeared. (I should add, I do like Scratch as a technology for programming, and maybe something strongly visual is a viable way to introduce FPGA's.)
There are lots of areas to explore. It needs to be low-cost, and straightforward to get started. I'd characterise it as: worth taking a risk because it's no more expensive than a good dinner, or a video game; I can afford the loss, and I might learn something.
There needs to be some 'packaged' IP to get folks off the ground, and a few inspiring projects. IMHO, it is important that folks can contribute and share IP. I like the Scratch home page with its "Check out the 1,203,546 projects from around the world!"

My $0.02 - GB
Posted 5 years ago # -
poslathian wrote:

[quote] That doesn't mean you can't have fun with FPGA's completely without the vendor tools, however. For example, fixed core bitfiles and libraries are an option we're seriously considering as part of the Oak package. If all you need is 75 PWM channels, why not just download pwm.bit from the internet, and call "fpga_pwm_write(channelID,value)" from C? [/quote]

That's great as long as all you want is 75 PWM channels at the output frequency the predesigned image provides! (Which they probably won't.)

[quote] I'm sure there's more than a handful of people that would call that an FPGA win, without ever having to touch a line of HDL or a vendor tool. What if there was a bitfile for image processing? [/quote]

What sort of image processing? There are so many possibilities the chances of keeping everyone happy are slim; people would quite quickly want to add their own bits.

It would be like providing a software environment where users can only use the functions that are provided; they can't implement their own logic or routines.
[quote] As an aside, I completely agree that a good first step is to simply wrap up the vendor tools in such a way that you can handhold users and steer them away from the minutiae they probably don't care about (yet) [/quote]
One approach might be to follow roughly what they have done here...
http://www.demandperipherals.com/
You select the interfaces you want, then can generate the top level image automatically; you still need the vendor tools installed but you can hide much of the detail.

For use with Oak you'd have to implement some sort of internal uC interface/bus that the interfaces could be hung off, and a memory map, but I'm sure that's an order of magnitude less work than starting from scratch.
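To sketch what such an internal bus could look like from the uC side: each generated interface core could get a fixed-size slot in a shared address space, so the top-level generator only has to wire up the address decode. The slot size, slot assignments, and helper below are invented for illustration, not an actual Demand Peripherals or Oak memory map.

```c
#include <stdint.h>

/* Hypothetical memory map for auto-generated interface cores.
 * Each core gets a fixed-size window; the generator assigns slots. */
#define SLOT_SIZE 0x100u
enum { SLOT_UART = 0, SLOT_SPI = 1, SLOT_PWM = 2 };

/* Byte offset of 32-bit register 'reg' within core slot 'slot'.
 * On hardware you'd add the FPGA window's base address and access
 * it through a volatile pointer. */
static uint32_t core_reg_offset(unsigned slot, unsigned reg)
{
    return slot * SLOT_SIZE + reg * 4u;
}
```

The appeal of this scheme is that adding a new core to a build only changes the decode, not the uC-side driver model.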
If your interface cores were well written/commented they could be a good example for beginners to hack to their own ends. A very simple interface following good design practice and heavily commented could be a good 'start here' introduction to FPGA design.

[Off topic]
> Of course the nanoboard isn't an Arduino-class product. It's targeted at professionals

As an aside I'm not sure how much use the nanoboard is; as above, as soon as you want something that isn't on it you have to roll your sleeves up and get stuck into an HDL. I have the Altium tools but still use a separate FPGA development process.

Also, if a new design doesn't build correctly (which they never do first time) you have to dig down to Quartus/ISE to trace the problem, so you might as well be using it 'raw' in the first place.
[/Off topic]
gbulmer wrote:

[quote] As you've said, to reach a wide audience, the FPGA experience will need to be much easier than it is currently. It may be that it is not a traditional technology problem; it is a packaging and support problem. [/quote]

Perhaps, but you have to get people to understand that they're not 'writing FPGA code'; they're describing how the FPGA fabric will be configured when they download the bitfile. Everything is happening all the time (or every clock edge)!
Is it a design approach problem?
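One way to make that mindset concrete for software folks: a synchronous design behaves like the toy simulation below, where every 'register' computes its next value from the old state and they all update at once on the clock edge. This is only an illustrative sketch of the concept, not any particular HDL's semantics.

```c
/* Two 'registers': a free-running counter and a copy delayed by one
 * clock. Both next-values are computed from the OLD state, then both
 * update simultaneously - the essence of synchronous logic, and
 * nothing like sequential C statements. */
struct state { int count; int delayed; };

static struct state clock_edge(struct state s)
{
    struct state next;
    next.count   = s.count + 1; /* counter increments             */
    next.delayed = s.count;     /* sees the OLD counter value     */
    return next;
}
```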
[quote] It is important to me that it isn't too far away from 'professional' tools, or it has got to have some clear advantages, so that experienced engineers can be 'seduced' to use Oak. [/quote]
In that case you probably can't move things too far from the base HDL/tools (IMHO).
Nial.
Posted 5 years ago # -
Great comments! Just to provide a consistent framing for this conversation: we would never proactively (by writing walled gardens, closed source protocols/software, signature checking, DRM, or any other junk) restrict our users' access to the hardware. At the lowest level we provide you a well designed and tested board, with as much documentation as we can afford to write (which grows with time...). If the natural use environment for Oak is ground-up HDL with a vendor-tool centric workflow, then go for it. For Maple we even released a makefile based build environment, for hackers who don't need the IDE. Every piece of our workflow has been released - our JTAG based programming scripts, debug linker scripts, low level libraries. We think the stripped down IDE has value, so we've worked on it a lot. However, we believe that providing a fancy frontend application in lieu of low level access is always worth less, while providing that frontend as a supplement to the foundation tools can often be worth more. I like to think of it as a walled garden without the wall. So... err... just a garden.
The question is what should we build on top of that. Certainly the start is to provide well documented HDL that does some of the core peripheral driving (like the MCU/FPGA interface) and some example applications. Beyond that, there are many options, as we've all been discussing.
Posted 5 years ago # -
I like the fpga-on-fpga idea. Kinda like any of the languages that use byte-code.
Posted 5 years ago # -
[quote] The question is what should we build on top of that. Certainly the start is to provide well documented HDL that does some of the core peripheral driving (like the MCU/FPGA interface) and some example applications. [/quote]
If you need any assistance here (ie writing a VHDL mcu/fpga interface) let me know, I'd be happy to contribute.
Nial.
Posted 5 years ago #