2
votes

My apologies if my question is badly formulated; I have only recently been reading about this. It seems that calculators come with a processor (e.g. 32-bit) and are programmed in C or assembler to perform the most exotic operations, like raising fractional numbers to fractional exponents and so on. I have some deep experience with assembler, and I know it is quite difficult and toilsome to implement complex operations even on integers, let alone real numbers (but it can be done anyway).

Then I looked into whether Verilog can be used to divide two fractional numbers, and everybody seems to agree: "it's not possible" (in a synthesizable way), or it would be too slow, and synthesis tools don't even have this problem defined, because it is unlikely that Verilog will be used for such a mundane operation as division.

But then, wasn't the chip inside the CASIO calculator originally designed in VHDL or Verilog?

I googled calculators made in Verilog, and I can only find "unrealistic" calculators for binary integer numbers and the like, never anything like the CASIO I have in my hands at this moment. So it seems I have a mistaken conception of Verilog and VHDL.

2
Although this is a good question, I am not sure whether you'll find the appropriate experts on SO. Maybe you should ask the question at electronics.stackexchange.com ? - musically_ut
"the CASIO calculator " Which Casio calculator? There are many. For example, I used to have the CFX-9850G, which apparently contained an obscure Hitachi-made CPU. And there was no direct way of programming the calculator in assembly. Perhaps it was possible through some kind of hack using the PC link cable, but other than that you were limited to programming in Casio's BASIC dialect. - Michael
@"Synthesis tools don't even have this problem defined": do you expect to just write "z = x / y" to infer the required hardware to perform a division? - andrsmllr
Division in hardware? Read up on the Intel FDIV bug. Yes, it's possible, but it is hard even for the biggest CPU company in the world. - MSalters
@MSalters: The FDIV bug was an error in one value in a lookup table of starting-points for iterative refinement, IIRC. The instruction (on P5) had 39-cycle latency, so clearly there's more complexity than you could reasonably expect a HW design language to synthesize for you. Anyway, this question is about fractional division, which I assume means exact ratios of integers, not necessarily with the same denominator. If the OP meant normal floating point, I thought he would have said so, although maybe he's talking about base10 decimal numbers. - Peter Cordes

2 Answers

1
votes

Bits is bits... be it a processor or other logic.

Absolutely, division has been implemented in VHDL and Verilog; that's how you do it, that or schematic capture, or, if you're crazy, hand-drawing the masks. Addition is easy. Subtraction is just addition (invert and add one; basically, invert, set the carry-in to one, then add). Multiplication is shifting and adding: try doing binary multiplication on paper, it's much easier than in grade school, because at each position you are either multiplying by zero and adding that zero in, or multiplying by one and adding that shifted value into the accumulator. So multiplication is nothing more than N shifts and adds, which can be implemented in one clock cycle with a massive number of gates.

Division, though: fractional or whole numbers, it doesn't matter, since fractional math is done with whole-number math logic blocks anyway (just as we did in grade school: line up the decimal point, then add or subtract; likewise, multiply and divide use basic whole-number multiply and divide with a little decimal adjustment). Division is an iterative process. In logic, the implementations you see on educational sites (Verilog/VHDL) simply do the same thing we did with long division in grade school, but, like multiplication, it is much simpler than grade school: you pull down bits from the numerator until the number being checked against the denominator is equal to or larger than it. The denominator can go into the number under test either zero times or one time, unlike decimal, where it can go in anywhere from 0 to 9 times. But you still drop bits and test until that happens, an iterative process which could be, and has been, done in logic, even back in the days when they did hand-draw the masks.

Because of the cost of that logic, and because software would basically do it the same iterative way, it is not surprising that a number of processors do not have a divide instruction. Just let the software do it.

The processors used in some of these calculators do not have a divide instruction; they implement it with a software solution (I know for a fact that at least one family does, and I assume that company has used the same chip family or brand in others). See the book Hacker's Delight: its whole purpose is to show you how to do math and other algorithms in an optimized or efficient manner. It is not about hacking into things (cracking) but about software/logic tricks.

So the folks who say you can't do it in Verilog or VHDL are perhaps saying you can't do it with the single line a = b / c; or perhaps they have never written that code and don't want to. It wouldn't be surprising if in practice you just bought a divider module from a cell library and wired it up, never knowing or caring how it works, just as with SRAM and other cell blocks, where you let the foundry make an optimized cell rather than designing it yourself. On an FPGA there are optimized blocks as well, which the vendor's software wires up for you during synthesis.

0
votes

Take this answer with a grain of salt: I'm not really a hardware guy, so I think my understanding is correct, but it might not be.

I think when you're googling for "calculators made in Verilog", you're only finding results where all of the logic is in hardware.

As I understand it, real calculators save vast amounts of hardware real estate by using a programmable microprocessor (itself ultimately designed by someone in something like VHDL or Verilog), with much of the logic done in software running on that microprocessor. The "hardware only" calculators you find online are interesting precisely because they do everything in hardware, with no software.

So a real calculator is not only the hardware design, but also the software that is read and executed by the hardware.

Even a simple microprocessor without hardware floating-point support can perform a complex calculation on two numbers very quickly in human terms.