I realise Oak is still in the works, so I don't expect you to give much detail, and I understand what you say may change, but ...
Can you say what toolchain you are planning to use for the FPGA? Will it be Open Source?
Currently, pretty much any FPGA work on any platform requires some dependence on proprietary vendor tools. We intend to use Xilinx chips on Oak, and so to start there will certainly be some dependence on Xilinx compilers (free as in gratis, not FOSS) to generate the .bit bitstream files from Verilog.
There are three ways to reduce this dependency, and we have discussed them all at length. We'd love your input on this issue.
1) Build a library of precompiled FPGA cores, with an appropriate C library. Thus, you simply drop the core onboard using our tools, and now you have access to C functions like fft() or make_audio_pipeline()...or whatever. This is a lot of work in premaking cores, and it's the most limiting for users. Obviously, we would never force people to stay in this sandbox. People who want to muck about with writing Verilog and using the proprietary tools to make new bitfiles are more than welcome to do so.
2) Build a set of FOSS tools for compiling Verilog to bitstream. This project is officially on our stack for a number of reasons. However, the final product will likely be a very limited/low-quality compiler. The vendor tools are very aware of undocumented/proprietary information about how the chips actually work, and are also the result of a ton of research and development. But it is important to have an open-source alternative, at least to prove the concept, make a point, and get the ball rolling on more open FPGA development.
3) Design a non-gate-level reconfigurable fabric of some kind. In this scheme, users don't code in Verilog, but rather in some higher-level language/environment which then adjusts the runtime configuration of the FPGA. For example, we pre-configure the FPGA as a 2-D matrix of ALUs, and users play with this architecture by writing software - never having to compile any Verilog to bitstream. The problem is that "playing with alternative architectures" is exactly what we want our users to be doing! Why should we force some suboptimal architecture onto them?
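To make option 3 concrete, here is a minimal Python sketch of the idea, where the FPGA has been pre-configured as a grid of simple ALUs and "programming" means setting each cell's operation at runtime. Everything here (the class, the operation names) is hypothetical; no real Oak interface is implied.

```python
# Hypothetical sketch only: a pre-configured grid of ALUs driven from software.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class ALUGrid:
    def __init__(self, rows, cols):
        # Every cell starts out configured as an adder.
        self.prog = [["add"] * cols for _ in range(rows)]

    def configure(self, r, c, op):
        # Runtime reconfiguration: no Verilog, no bitstream compile.
        self.prog[r][c] = op

    def step(self, r, c, a, b):
        # One cycle of the ALU at (r, c) on operands a and b.
        return OPS[self.prog[r][c]](a, b)

grid = ALUGrid(2, 2)
grid.configure(0, 1, "mul")
print(grid.step(0, 0, 3, 4))  # cell (0,0) is still an adder: 7
print(grid.step(0, 1, 3, 4))  # cell (0,1) now multiplies: 12
```

The point of the sketch is the trade-off described above: the user never touches a bitstream, but is stuck inside whatever architecture we froze into the fabric.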
So you see, the short answer is that for the foreseeable future, there will be some dependence on proprietary, gratis, vendor tools. However, we hope to reduce this dependence as much as possible as work progresses.
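For comparison, option 1's user experience might feel something like the Python sketch below: a precompiled core is dropped onto the board, then driven through ordinary function calls. The core names and the Board API are invented for illustration (the post above describes a C library), and the FFT is computed on the host as a stand-in for the real FPGA core.

```python
import cmath

# Invented names: neither PRECOMPILED_CORES nor Board is a real Oak API.
PRECOMPILED_CORES = {"fft1024": "fft1024.bit", "fir32": "fir32.bit"}

class Board:
    def __init__(self):
        self.loaded = None

    def load_core(self, name):
        # A real implementation would push the prebuilt bitstream to the FPGA.
        self.loaded = PRECOMPILED_CORES[name]

    def fft(self, samples):
        assert self.loaded == "fft1024.bit", "load the FFT core first"
        # Stand-in for streaming samples through the FPGA core:
        # a plain host-side DFT, for illustration only.
        n = len(samples)
        return [sum(samples[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n)) for j in range(n)]

board = Board()
board.load_core("fft1024")
spectrum = board.fft([1, 0, 0, 0])  # impulse in -> flat spectrum out
```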
I was hoping (with less than 0.00032 probability of success) that you were going to write "oh, we picked up fpgaC, and did a lot of work. It isn't as good as the vendors' tools, but it works for Xilinx, Altera, Lattice, Cypress, ..., and uses a C-like syntax like Impulse C, so it isn't hard to go from Arduino C++ to it" :-)
I accept the use of the vendors' tools in the near term, but I think the hacker community might bring a lot of interesting ideas and value to "reconfigurable computing". Hackers like FOSS, as we've seen with Linux and a few other projects like gcc. If hackers put the work in to make something good, eventually industry will contribute too. Even if the toolchain isn't pushing research boundaries, increasing the breadth of knowledge and use by 10x would, IMHO, have a very positive effect on the engineering community. IMHO, GCC is a fundamental piece of software for the entire FOSS community, and toolchains for hardware may have a similar impact.
I use a Mac, Linux, or Solaris, so some vendors' exclusive focus on Windows is a real problem for me. I do not want to pay a couple of hundred dollars to MS just to run 'free' vendor software.
So, I would prefer a FOSS toolchain. At least something that will run on my MacBook, or Linux box.
I agree with your analysis that hiding the architecture is the antithesis of 'the whole point'. The Arduino libraries hide a lot of the underlying hardware, and they are not, IMHO, a 'bad thing'. A couple of years ago, we had to go digging through the datasheets just to make a square wave of a specific frequency with a timer. The important point for me was that this was still possible. The Arduino team had taken nothing away, but had instead reduced the learning curve and provided useful support for the other 70% of a project. They did it so well that a 14-year-old can start using an Arduino, and having fun, in 5-10 minutes.
So I would like a toolchain that lets me do anything the FPGA can do. I don't care too much what the language looks like, as long as it isn't mad (IMHO, programming using flowcharts is 'not a good thing'). I would like something industrially recognisable if possible, so that I can read books and tutorials. Also, I'd like something recognisable so that folks who learn it can talk about it to prospective universities and employers, and not get treated with suspicion ;-)
Part of the delight of the Arduino libraries, for me, was the helpful abstractions, and choices of defaults. For example PWM and ADC are very easy to start using; they save a lot of digging into the daunting, sometimes confusing, datasheets when you're starting out (I imagine this helped 90%+ of the Arduino community to get started). The setup and loop functions don't get in the way, so they don't harm my ability to enter loop once (almost like main) if I choose.
I am very interested in the Cypress PSoC 3 and 5 'Universal Digital Blocks', which come in 8-bit-wide units but can be 'stacked' to make 16-, 24-, and 32-bit units. Each block has a simple ALU, state, latches, and routing. They also offer some higher-level peripheral IP, like SPI, I2C, and USART. Atmel's FPSLIC looked more FPGA-like, and so seemed a bit more daunting to me to get started with.
I have only read PSoC documents and watched videos, so this is a facile opinion, but it looks like a good way to get started. In the case of PSoC, it looks like that structure is fixed, so tools which offer those 8-bit blocks may be offering everything. With an FPGA, I think I'd like to (eventually) have the flexibility to do anything. But that isn't essential on 'Day 1', as long as it is on the path, and I can understand (and hence believe) that it will happen. Giving me easier-to-use 8-bit blocks would probably be fine to start with.
I should add that I am interested in Actel's SmartFusion (Cortex-M3 + FPGA + analogue) too, but I know even less about that.
I would like a toolchain to:
- offer convenient, higher-level abstractions, while having the ability to get 'close to the metal' for a *part* of a project,
- run on Mac OS X or Linux,
- be free, and ideally FOSS,
- be recognisable to current experts (to make it easy for them to get involved), and
- be acceptable to potential employers so that it could help get jobs, and contribute to industrial product development.
I don't want a toolchain which:
- is only free to education,
- is restricted in the size or complexity of designs it will allow
(I accept, though, that there is a practical and economic upper limit to what I might realistically do; I just don't know what that might be, so limits make me wary.)
- uses a weird way to define FPGAs (unless it is clearly an improvement on Verilog/VHDL/..., and ideally has some industrial credibility)
- only runs on MS Windows
- is reliant on one person for its prolonged existence
Maybe wrapping Verilog in a DSL, while still having an 'asm' back door?
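One way to picture that idea: a thin Python DSL that emits plain Verilog, with a raw() method playing the role of the 'asm' back door for hand-written HDL. This is a toy illustration (existing tools such as MyHDL work in this general spirit), not any real tool:

```python
# Toy DSL that generates Verilog text; raw() is the 'asm'-style escape hatch.
class Module:
    def __init__(self, name):
        self.name, self.ports, self.body = name, [], []

    def input(self, name, width=1):
        self.ports.append(f"input [{width - 1}:0] {name}")

    def output(self, name, width=1):
        self.ports.append(f"output [{width - 1}:0] {name}")

    def assign(self, lhs, rhs):
        self.body.append(f"  assign {lhs} = {rhs};")

    def raw(self, verilog):
        # Back door: drop hand-written Verilog straight into the module.
        self.body.append(verilog)

    def emit(self):
        ports = ",\n  ".join(self.ports)
        return (f"module {self.name} (\n  {ports}\n);\n"
                + "\n".join(self.body) + "\nendmodule\n")

m = Module("adder")
m.input("a", 8)
m.input("b", 8)
m.output("sum", 9)
m.assign("sum", "a + b")
verilog = m.emit()
print(verilog)
```

The generated text would still go through the normal Verilog flow, so nothing the FPGA can do is walled off.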
I agree with you completely. The problem 'now' is: given that we want something that looks like your "pro" list in the long run, and that could take a LONG time, what do we release first - as in, towards the end of this summer? The answer is I don't know, but it's going to start with just a well-packaged toolchain based around the vendor-supplied stuff, perhaps bundled as VM images for portability. But in the future, something epic like:
a fully FOSS development environment supporting the ARM+FPGA fabric being pushed by Actel and Xilinx (we prefer the Xilinx version - dual-core Cortex-A9s with tight integration with the FPGA for dynamic reconfig...woot!), with perhaps a new high-level HDL and library packages that make the "5 minute FFT" the new blinky project...
You get the idea. Thanks for the thoughtful comments! Keep 'em coming. This is pretty much our #1 long-term issue right now.
I would be misleading you if I didn't add that I like Erlang (& DTrace :-), am interested in the VPRI fonc stuff, and am starting in on the Go language (golang), so a "pro" approach is the weakest requirement for me personally. But I think attracting existing folks with FPGA expertise is more important than my tastes.
I'd much prefer the single Cortex+FPGA+analogue chip approach of Xilinx, Actel, and even Cypress PSoC 5, to separate MCU + FPGA. In some ways, even FPSLIC has attractions over two separate parts.
BTW - The Actel stuff would be more than enough for me for now, I can afford their development kit ;-)
Yea, the single MCU+FPGA was nonexistent or VERY new when we first started planning all this. I think we're still going to do a two-chip device to start, but eventually move to a single die once we get experience working with the right chip. At this point it's a matter of experience and cost: we have invested a lot of time working with the STM32 line, and it's a good set of chips. Bringing up a whole new chip for Oak is worth it in the long run, but not in the short term.
Thank you for the thorough and measured replies.
Yes, much better to do something useful sooner (and recoup investment), rather than something 'almost perfect' later :-)
I think we'll be happy with STM32F high-density devices (for DAC and memory) for some time to come. STM32F is such a step up from the ATmega that it feels distinctly liberating!
I would love to learn about FPGAs to do some real-time signal processing and to play with reconfigurable architectures, but that isn't my high priority. Oak or PSoC 5 will probably be enough for me as a learning platform to make progress on those interests.
While a tool to create FPGA designs would be very cool, it must be understood that hardware implementations are very different from software. Software is serial, i.e. task 1, task 2, task 3, while hardware can be serial or parallel, in which case you could run the 3 tasks in parallel; depending on when each completes, and whether they share data, the results can differ depending on how things are implemented. Many have tried to replace VHDL/Verilog, but nothing has made any real progress in displacing those standard languages. The best tack would be to develop modules, not unlike the Arduino concept which has code for each peripheral, except now a peripheral could be a GPIO, serial port, FFT, FIR filter, buffer, etc. No matter what you do, you can't make everyone happy; there are just too many ways to implement FPGAs. But a good set of basic building blocks would be helpful.
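The serial-versus-parallel point can be sketched even in plain Python: two "tasks" sharing data give different results depending on whether the second sees the first's update (serial software) or both sample their inputs at the same instant (parallel hardware, roughly what Verilog's non-blocking `<=` assignment models):

```python
# Serial (software) semantics: statements run one after another,
# so "task 2" sees the value "task 1" just wrote.
a, b = 1, 2
a = b            # task 1: a becomes 2
b = a            # task 2: reads the *updated* a, so b stays 2 (no swap)
serial = (a, b)  # (2, 2)

# Parallel (hardware-like) semantics: both tasks sample their inputs
# at the same clock edge, like Verilog's non-blocking assignment.
a, b = 1, 2
a, b = b, a        # both right-hand sides evaluated before either update
parallel = (a, b)  # (2, 1): a clean swap

print(serial, parallel)
```

Same two tasks, same data, different answers: that is the gap any software-style FPGA tool has to bridge.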
As far as FOSS goes, Icarus Verilog works fine as a simulator for code development and has some level of synthesis (mostly for Xilinx). I don't think there is any way to get away from vendor tools for the back end (fitting etc.), but both Xilinx and Altera offer free tools for all but the high-end devices. Xilinx even has a good simulator included in their tools, which is much faster than Icarus Verilog.
Verilog is a lot easier to learn than VHDL so I'd suggest that as a start.
As far as getting reference designs together, there are a lot of open-source designs at opencores.org. I think you would want to offer a much simpler interface and access, but the design work is done; they would just need to be reworked into a more modular form. I'm not sure it would ever be easy to implement FPGAs, but having a good set of building blocks is what the community needs to open up development to the average user. This leaves the user still having to learn all the tools and write a fair amount of Verilog to tie everything together, but it lowers the hurdle to getting started.
AFAIK, Xilinx 'free' Web Edition tools support Windows and Linux (not Mac OS X, but I can do Linux in a VM), but Altera seem to be Windows XP/Vista only. So, I can live with Xilinx, but Altera is a non-starter for me.
I don't disagree that we are currently heavily dependent on vendor tools, but as long as there is an assumption that FPGA tools are proprietary things, built around secrets, the status quo can't change. Getting FPGA technology into the hands of a wider community with diverse views on Openness seems like a good direction to head.
Maybe I am misunderstanding what is meant by "there are a lot of open source designs at opencores.org".
I read the FAQ, where it says:
"OpenCores Certified Projects
The number of projects being started here at OpenCores is constantly increasing. We would like to encourage all project maintainers to ensure that their project is developed to stage where it is considered completed. By this we mean the project contains:
- RTL design files (VHDL / Verilog / C / assembler / etc).
- Testbenches (self-checking).
- Documentation (design and testbench specifications, FPGA/ASIC size information).
- Make-scripts (compile-, testbench-, synthesis scripts).
- Information about Design-usage meaning, has it been verified on actual hardware, is it being used in commercial products. This information builds allot of creditability.
To help users identify those projects which have reached this stage, we will now apply a "OpenCores Certified Project"-logo to the project."
This seems like a reasonable approach, and a reasonable set of criteria (but I am no expert).
I can only find 8 OpenCores certified projects (aside: 5 of those use Verilog).
Of the 8, I believe 6 are basically processors. I have no problem with processors on FPGAs, but there are only 2 Verilog designs for anything else; one is an SD card reader, and the other an interface between a bus and a UART.
This doesn't seem like "lots". Is this an inappropriate way to count the number of designs?
I don't know what "Certified Project" means exactly, but OpenCores has some of their own projects/products, like OpenRISC, an open-source CPU that they have built into a few chips. So they are probably promoting their own designs that have made it into an ASIC.
As far as OpenCores goes, if I open Projects and select Verilog as the language, there are 60 projects marked as "Done", and many more that are functional if not completely finished. Is any software ever really done, anyway? Is Linux done? It works, and millions use it, but there are new kernels, applications, utilities, etc. in various stages of development. Linux works, but is it ever done? I'm not involved with opencores.org, but I know that they are an open-source resource for designs and a basis to start a community, which would help develop a base for LeafLabs to get into FPGA projects. It doesn't sound like you have much of an FPGA background, but even a serial port can take weeks to a month to develop; leveraging OpenCores' resources could provide a lot of the peripherals that would be required to get the ball rolling. FPGA work is highly time-consuming: you don't get any peripherals, you need to build them yourself, define memories, etc. A soft processor, local memory, UART, and SPI would get you to the equivalent of an Arduino, to which you could add your signal processing blocks, ADCs, DACs, etc.
The code doesn't have to be Verilog either; you can mix Verilog and VHDL, or convert VHDL to Verilog. They are just languages, just as you can write an application in C, C++, Java, or BASIC. If you can read the code, you can understand how it works and convert it to your language of choice. Much like SourceForge, there are far more projects started than completed, which isn't to say that incomplete code doesn't still hold some value which can be built upon.
As far as FPGAs go, Xilinx and Altera are not open source, or really open at all, and I don't ever expect that to change; their value is in their IP, and they won't give that up. We will always be dependent on Xilinx knowing the details of their architecture and how best to use it. It used to be that you needed to pay $3K for their development system and another $3-5K to get a simulator, so while not open source, free is pretty good, and the toolchain is complete (development is hard when the simulator and synthesis don't agree on syntax and you never know what's going to work).
I'm sorry that "lots" wasn't as specific as you would like; I was just offering a resource for material/designs. It's nice that you like Solaris, Mac, and Linux, but that doesn't make it a requirement for everyone else. Solaris is all but dead since Oracle bought them, Mac has never been a good HW development platform, and Linux and Windows are the only real choices for HW dev. Xilinx, in my opinion, is a stronger option than Altera. The bigger problem for FPGAs is that very few devices are hand-solderable; only the smallest devices are available in QFP packages, 90-95% are BGA only, and that just isn't going to work for a hobbyist.
I apologise for the long 'rambling'. I find writing things out helps me think, and getting feedback moves me along much better than my own poor brain can manage. I've written a summary at the bottom; I hope it helps.
I think we are looking from different perspectives.
IMHO, there can be a significant difference between "usable by beginners", and "adequate for experts". For me, it isn't even beginner compared to expert. It is beginner compared to someone familiar with the models, processes and vocabulary of a subject at a practical level.
Consider Unix. I was very happy with Unix back in '83, on dumb terminals. It was so much better than any OS I'd used before that it was genuinely liberating. It was flexible, malleable, and didn't get in the way. At the same time, I could understand why it didn't appeal much to artists and designers, which was why I pushed very hard for investment in PCs, and later Macs, rather than central computing. There was no way those PCs or Macs could do some of the computing that our mainframes could do, and that was often used as a criterion for sticking with 'big iron' rather than the 'new-fangled' PCs. But that wasn't the point. Most people's needs were quite simple. The problem to be solved was enabling them to do useful computing with very little pain, not how to push the state of the art. The real trick, in retrospect, was trying to help them to a position where they could go beyond, e.g., a spreadsheet, when they needed to.
I am an FPGA beginner. I have no practical hands-on experience. All I've done is read papers, articles, web sites, and parts of books on FPGAs, VHDL, Verilog, and reconfigurable computing. Every person who has contributed to an FPGA project at OpenCores.org is way ahead of me on the "beginner to Ross Freeman" scale.
When I dig through Open Source software, I have a lot of experience that I can apply to identify the gems, and skip over the 'pumice'. I don't have that under-pinning when I look at (Open Source or otherwise) FPGA IP.
Using the criteria of tested on FPGA, documented, build scripts, and test cases, which is what the "OpenCores Certified Project" seems to say, appeared reasonable. As I said, I may be using inappropriate criteria, but those seemed to give me a fighting chance of getting something working. I am more than willing to follow some competent practitioners who identify the gems. I am willing to believe many of the projects at OpenCores are, or are only a few days' work away from, being fine for an enthusiastic, bright beginner. I do not yet understand why 'Done' is an adequate test, and maybe you can help me understand what is.
After I've got my current projects delivered, I would be willing to try to assist, but I am too stressed to do that for several months.
While people believe something 'has to be this way', it's unlikely to change. The VPRI fonc work includes compiling from abstract, high-level languages into hardware (FPGA), http://www.vpri.org/pdf/tr2007008_steps.pdf
(for anyone unfamiliar with VPRI, please have a look. Their goal is to create the software for a modern, graphical computing OS, with compiler, core applications, and networking, in 20,000 lines of code. Frankly, if they slip and go over by 5x, I'll still be ecstatic.)
VPRI aren't there yet, but some of their work is inspiring. For example "A Tiny FPGA Computer" by Chuck Thacker, uses 100 lines of Verilog, and seems to be on an interesting track. Ian Piumarta's "A Tiny TCP/IP Using Non-deterministic Parsing" builds a TCP/IP stack directly from the TCP/IP specifications using 200 lines of code in their high-level language and extensible parsing technology. Their aim is to bring these technologies together. This gives me hope that we are close to a significant change in how we use digital technology.
So, I am willing to accept the current status quo for FPGA tooling, but I am hoping better things will come along soon.
Hardware vendors charging large fees for their development software seems like a 'Regressive Tax' to me. The individual and small company are disproportionately burdened compared to wealthier or larger companies. I realise they have to recoup costs for software development. But they are selling hardware, which should be profitable too. So I am delighted that they give tools away for no money, but please not Windows only (or, as bad, Web only).
I wonder if maybe it's time for a few vendors to collaborate and support something like the gcc, Linux, or Eclipse model. AFAIK, there, some folks are employed and paid to contribute. The benefits to the vendors include sharing some costs for parts which are relatively generic, and gaining a sophisticated, flexible platform for more focused value-add relevant to their specific company needs. Very few companies can afford to build a toolchain and kernel, but many more can contribute or tune one for their niche. Even the somewhat encumbered Unix of the '80s enabled a lot of new companies to innovate and rapidly become competitive, giving us some technology which we still enjoy more than 25 years later.
I strongly agree with your point about FPGA packaging. A friend has built something a little bigger than an mbed, with USB, a PIC32, and a Lattice device that uses a 208-pin PQFP. He had problems finding devices that he could hand-solder. I think the new breed of single-die microcontroller-plus-FPGA will help us, though. I'll explain below.
I have been hoping Atmel's FPSLIC would get more interest, but the newer Cortex+FPGA+analogue parts look even better. For my level of competence, they seem to offer the possibility of using a traditional processor to manage the FPGA fabric, plus ready-made peripherals like USB, ADC, SPI, and USARTs, so that I don't have to struggle as much as with a traditional FPGA to get to 'the fun part'.
Just as important as lowering the learning curve: the processor and FPGA are integrated on-chip, so I don't feel the need for as many pins as with a traditional FPGA. I feel this is one of the areas which will be very important to the hobbyist and small company. Just like the migrations from discrete transistors to ICs, and from ICs to microprocessors, it becomes practical, at certain architectural boundaries, to reduce the amount and quality of external connectivity, and get to smaller, cheaper, simpler PCBs. After all, an STM32F operates at 72MHz, and has a 64-bit data, 20-bit address, 24MHz (I think) bus to flash, but comes packaged in a 36-pin package which mostly looks like an 8MHz device. No worse a clock than an IBM AT clone of the mid-'80s.
I'd be willing to use a 4-layer PCB to support a couple of square inches connected to GigE, and an internal 250MHz clock (like a Cray-1 :-), if the rest of the board runs at a few MHz and the GigE PCB layout was provided by the vendor.
We may be on the cusp of something amazing, and just as amazing for us, the small guys, as the multinational corporations.
Summary: Sorry if I sounded too negative about OpenCores.org. As a beginner, documentation, build scripts, and tests are very valuable. I am willing to be guided to good examples by competent FPGA folks. When I am less stressed, I'd hope to help make stuff more approachable.
We need better tools to help us 'beginners' explore FPGAs. Maybe VPRI can help, maybe it'll come from vendor collaboration, or the Maple Team. I'm not (very) partisan.
I agree, many current FPGA packages are too hard for hand assembly. I believe the merging of microcontroller, FPGA, and analogue will actually help, by moving high-performance interconnect from the PCB into the chip. I also feel that close integration of CPU, peripherals, and FPGA will significantly ease learning and the use of FPGAs. Those are partly why I feel we are on the brink of potentially radical change.
Agreed, the FPGA market is far from hobbyist-friendly, both in terms of toolchains and devices with lower pin counts. The toolchains are easier than they used to be, and the free web versions have helped a lot, but they are still very cryptic to a new user, and even to the experienced. There was a strong interest in CPU/FPGA integration a few years back:
Xilinx had the PowerPC in the Virtex-II Pro (I think they announced an ARM core, but that never appeared).
Altera announced the Excalibur series with ARM and MIPS variants (I don't think these ever shipped). I went to a presentation on these in 2000, so this is 10 years old.
Triscend had an 8051 version that was to be followed by an ARM7TDMI core (they offered a toolset with ready-made peripherals: you could select from a list, and software would hook things up for you). This was close to what we are talking about, but they got bought by Xilinx and the products disappeared.
Atmel's FPSLIC made it to market and is still available, but the AVR isn't the fastest or most capable, and I believe the FPGA portion is pretty weak compared to Xilinx/Altera, so it never really took off.
There were others as well, but none of the rest made it to market.
Often it seemed that a discrete CPU and FPGA were cheaper and more flexible than having them integrated, with the vendor limiting memory size, clock speed, etc. With devices like the Virtex-II Pro, the memory is all external, so you end up using a lot of I/O on the memory interface; they expected you to build your own peripherals, but that used up more pins and logic, such that it was generally easier and cheaper to get an off-the-shelf CPU and have more flexibility. I still hope for CPU/FPGA integration, but I don't have a lot of faith that it will happen soon.
Maybe LeafLabs, with a low-cost board including a CPU and FPGA, can start a new trend and bring the integrated market to life. Just providing a CPU with a defined interface to the FPGA (and software to work with that interface) would be a big step forward in getting more people involved, and would create the community to provide reference designs and tutorials for FPGA newbies.
I hope the Maple folks don't mind me saying ...
The Cortex-M3 + FPGA + Analogue, on a single die, is available TODAY from two vendors.
Actel's SmartFusion looks like a traditional FPGA fabric with Cortex-M3 + Analogue.
I haven't phoned, but the web site shows the development kits are on sale now.
Cypress PSoC 5 offers the programmable logic in higher-level chunks, and has some quite flexible analogue blocks.
I phoned Cypress UK last week, and they say their development kit is available now from the web site.
Their tool for configuring the 'Universal Digital Blocks' (which are 8 bits wide) looked easy to use, and they have some handy IP. One of their blogs shows how to make 48 PWM outputs with the 24 on-board reconfigurable blocks.
HTH - GB
PS - and I apologise to the Maple folks if this seems like a dig at you. I am here, spending money with you, so "Don't Panic!"
Wow, quite a thread this has become. Thanks for the great ideas.
We're well aware of the work at Actel, Cypress, and Xilinx. We're REALLY excited about a chip Xilinx announced as a "definite" last year: dual-core Cortex-A9 + FPGA fabric + a snazzy new MCU/FPGA interface that supports dynamic reconfiguration + lots of peripheral hardware, like SPI, UART, USB, etc. We have no idea about cost or release date. The original LeafLabs Marilyn board (STM32 + Spartan-3 FPGA) was built before any of that stuff had dev kits available, and we just kind of stuck with the rough design. Today, we still like the separate-chip version (MCU + FPGA) for several reasons:
1) expertise with both chips and their toolchains (it's expensive to get up to speed with a whole new chip, for us and our users)
2) cost and availability of components
3) the interface between FPGA and MCU. Designing this interface is half the fun and value of the design, and we've gotten particular about how it's going to be. You can put this into the general category of "flexibility".
In terms of tools/languages/libraries, you guys are well familiar now with the vision: remove the monolithic barriers to FPGA+MCU development so that pros, hackers, and students can gain momentum on actually solving some of the problems of parallelism, from the perspectives of architecture, languages, and toolchains. Industry was more than happy to punch cards, but C was the language of the PC. Today, industry is happy to use monolithic tools that take weeks of training, and to write tons of specialized cores, module by module, wire by wire (hush, Bluespec fans, not that I suspect there are many...). But the "killer app" for the FPGA, some new strategy that makes it finally obvious that the solution to parallelism is far more than simply 'more cores', is going to need a PC platform and a C-like language, complete with a libc and a Linux.
But the point is more than just "let's play with FPGAs." Hard problems with massive data sets, the problems that actually NEED 10,000 cores and get map-reduced across the entire Google campus, are just a tiny part of the picture. And I'm sure industry will happily continue to work on them with $3000 Virtex-6 chips with x86 cores inside, organized in a cluster of 1000 units.
All the really neat stuff is tiny and physical, mobile and low-power. I don't want to have to send my voice to Google to have it transcribed for me; I want to do speech-to-text on the device. I want to put state-of-the-art vision on a mobile platform that costs $100, and I want grids of sensor networks that are so low-power they can leech off their environment. The problem of processing is becoming one of data movement and physicality, and you want your processing to happen as close as possible to where it's consumed, and to take up very little power. Sure, my Core i7 could do X (actually I don't have one...), but with the right architecture, so could a tiny 1W custom processor running onboard a 6" quadrotor.
So now that you know where I'm coming from, the question is where to start. With the dev board? The toolchain? The library? The language? I don't know, but certainly work needs to be done on each of those fronts. I just hope we don't get spread so thin that we fail at all of them.
The $99 Blackfin dev board is also interesting but no FPGA.