Talk:Microcode


Changed "lowest" to "lower"


...because on at least one machine I know of, the Northern Telecom SL/1, microcode was supported by a deeper level of code called nanocode.

Don't let the naming mislead you! Nanocode is just a special case of microcode (microcode in two levels); a similar approach was used in the 68000, for example. 83.255.43.76 (talk) 07:18, 24 September 2009 (UTC)[reply]

Critique of the article


I am a computer engineer, and was kind of confused also. This is really not a place for technical info, I guess. —The preceding unsigned comment was added by 192.55.52.1 (talkcontribs) 13:37, 16 June 2004 (UTC)

If you two have sufficient technical know-how to criticize the article, why don't you amend it so that it is accurate? After all, is that not the point of Wikipedia? —The preceding unsigned comment was added by 4.10.174.115 (talkcontribs) 03:33, 17 November 2004 (UTC)

Microcode redirect


I came here to find out what microcode was. Microcode redirects to this article, which presents WAY more info than I can follow... perhaps someone who knows what it is can add an additional article on microcode specifically, so I can learn what it is? —The preceding unsigned comment was added by 24.68.241.212 (talkcontribs) 07:04, 5 September 2005 (UTC)

PDP-8 "microinstructions"


I disagree with the use of the PDP-8 as an example of vertical microcode. The use of the term "microinstruction" on the PDP-8 has an almost entirely different meaning than in conventional microprogramming; if anything, it is more akin to horizontal microprogramming than vertical, because it uses bitfields to control independent (though related) operations.

A better example of vertical vs. horizontal would be the HP 2100 vs. the IBM 360/65. The former has a very narrow (24-bit) microword, and is very vertical, while the latter has a very wide word (105 bits, IIRC) that has almost no encoding - most of the fields directly control individual hardware operations.

The use of the term "microinstruction" on the PDP-8 should perhaps be explained in its own section, with an explanation that the PDP-8 is not microcoded in the normal sense, but is hardwired and has an ISA that includes the "operate" instruction with fields for various operations. --Brouhaha 06:06, 10 September 2005 (UTC)[reply]

I understood that section to mean something different. I think it was trying to use the PDP-8 assembly language as a rough example of what "vertical microcode" is like. (I don't necessarily agree that PDP-8 assembly language is the best example).
On the other hand, you're correct that what the PDP-8 called "microinstructions" has very little to do with what we're talking about here and we might want to explicitly disclaim much relationship.
Atlant 23:34, 10 September 2005 (UTC)[reply]
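
(For readers coming from the article: here is a minimal, invented sketch of the distinction being discussed, written as C bit-fields purely for illustration. The field names and widths are made up rather than taken from the PDP-8, HP 2100, or IBM 360/65; the point is only that a horizontal microword spends its bits on fairly direct control of individual hardware resources, while a vertical microword is short and heavily encoded and so has to pass through a decoder first.)

#include <stdio.h>

/* Horizontal: wide and mostly unencoded; one bit (or a one-hot group
 * of bits) per control point, so little or no decoding is needed
 * before the bits drive the hardware. */
struct horizontal_uword {
    unsigned alu_function  : 8;   /* one-hot ALU operation select      */
    unsigned a_bus_gate    : 16;  /* one gate-enable bit per register  */
    unsigned b_bus_gate    : 16;  /* one gate-enable bit per register  */
    unsigned result_strobe : 16;  /* one latch-enable bit per register */
    unsigned memory_start  : 1;   /* start a storage cycle             */
    unsigned next_addr_ctl : 7;   /* microprogram sequencing control   */
};                                /* 64 bits of control information    */

/* Vertical: narrow and heavily encoded; it looks like a tiny
 * instruction set and must pass through a decoder before any control
 * line can be asserted. */
struct vertical_uword {
    unsigned uop   : 6;           /* encoded micro-operation           */
    unsigned src   : 4;           /* register number, decoded later    */
    unsigned dst   : 4;           /* register number, decoded later    */
    unsigned spare : 2;
};                                /* 16 bits of control information    */

int main(void)
{
    printf("horizontal control bits: 64, vertical control bits: 16\n");
    return 0;
}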

Praise


Thanks this is a really good summery about microprogramming....if someone does not understand this and claims to be a computer engineer i recommend to switch your career.—The preceding unsigned comment was added by 142.150.27.42 (talkcontribs) 08:46, 31 October 2005 (UTC)

If someone types in the above and can't even spell 'summary' or 'I' then I recommend you switch off your computer and immediately stop wasting the planet's ecological system.
Note the above user was ostensibly accessing Wiki from the University of Toronto. Canada must therefore go back to the drawing boards if their students still can't spell or type. Sorry, Canada. —The preceding unsigned comment was added by 62.1.163.85 (talkcontribs) .
62.1.163.85, be sure to read WP:NPA.
Atlant 22:30, 25 June 2006 (UTC)[reply]

Wow, I can't understand it, and I have written in machine language, IBM 360/370 assembler language, Fortran, Basic, SAS, and 8051 machine (microprocessor) language. How about a good example of microcode/firmware? I couldn't think of one, except maybe "move character long" (moving megs of data instead of much less).

Peter from NYC 17 Sep 2007 —Preceding unsigned comment added by 69.203.23.215 (talk) 03:27, 18 September 2007 (UTC)[reply]

Grave omission!?


I was quite simply flabbergasted when I noticed that this article does not at all mention the reprogrammability (flexibility, upgradability) reason for using microprogramming vs hardwiring. Correct me if I'm wrong, but isn't this reason as important as the emulation issue? --Wernher 23:15, 21 January 2006 (UTC)[reply]

But the article does mention this! See:
Microprogramming also reduced the cost of field changes to correct defects (bugs) in the processor; a bug could often be fixed by replacing a portion of the microprogram rather than by changes being made to hardware logic and wiring.
If you feel it needs more stress, this is Wikipedia so you know what to do: be bold!
Atlant 01:33, 22 January 2006 (UTC)[reply]

Work in progress


I stumbled across this article. As the above comments indicate, it definitely needs some work. I have some experience in this area, so will contribute what I can. Post any issues or questions here. Joema 02:07, 4 March 2006 (UTC)[reply]

The change to the VAX section is wrong. The first VAX, the VAX-11/780, did not use AMD 2901 bit slice components. I don't think any VAX did use those for the main CPU other than possibly the low-end 11/730 (and the electrically identical 11/725). --Brouhaha 06:18, 4 March 2006 (UTC)[reply]
Revised VLIW/RISC section. Added some items and removed "RISC as vertical microcode" sentence. This seems like a stretched comparison, not sure it's sufficiently close to warrant using. Discuss if any questions. Joema 18:45, 4 March 2006 (UTC)[reply]
Thought about it and restored RISC/vertical microcode comparison. Joema 19:01, 4 March 2006 (UTC)[reply]
Brouhaha, you're correct -- only Nebula/Low-Cost-Nebula (11/730, 11/725) used the AMD bit-slices. (We also used them in the PDP-11/34 and /44 Floating-Point Processors, but that's not at issue here.)
Atlant 17:51, 29 June 2006 (UTC)[reply]

Modern x86 processors


How does microprogramming/microcode, specifically the VLIW/RISC section, compare to modern x86 CPUs? Like modern Pentium processors exposing a CISC instruction set but internally resembling more of a RISC processor? Fuzzbox, 29 June 2006

The following discussion is an archived debate of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the debate was Move. —Wknight94 (talk) 02:24, 5 August 2006 (UTC)[reply]

Move to Microcode


This article is listed at Wikipedia:Requested moves to be moved to Microcode, with the rationale "Article states that "Microcode" is the more common term, and hence it should be the head." Please express your opinion below. —Centrxtalk • 04:25, 29 July 2006 (UTC)[reply]

  • don't move - microprogram is a more formal term for it, just as, in the case of typical software, 'program' is a more formal term than 'code'. The redirect from microcode is sufficient. --Brouhaha 05:39, 29 July 2006 (UTC)[reply]
    • I completely agree with Brouhaha. Microprogram is the formal term; Microcode is slang for that. This from 30 years experience in the industry, esp. with the big iron in IBM's stable. -R. 18:51, 31 July 2006 (UTC)[reply]
  • Strong Support. Microcode is what the technology is. It has been around since the 1950s or 1960s if not earlier. I could be biased here since I did program with/write microcode. I never heard the term microprogram when I was involved, and before this discussion I had never heard it as anything other than a tiny program segment. Vegaswikian 19:29, 30 July 2006 (UTC)[reply]
    • You've led a sheltered life, then. Search on Amazon for books with "microcoding" in the title; then search for "microprogramming". There are eight times as many of the latter. The ACM and IEEE held conferences and workshops on "microprogramming"; they held none on "microcoding". When Maurice Wilkes invented the technique in 1951, he called it "microprogramming", not "microcoding". "Microcoding" is effectively a slang term for "microprogramming", just as "coding" is for "programming". The existence of a commonly-used term that is less formal justifies a redirect, but does not justify renaming the article. The article name is correct as it stands. --Brouhaha 00:50, 31 July 2006 (UTC)[reply]
  • Comment I always thought that microprograms were written in microcode, much like assembly programming is done in assembly language or assembly code. Seems simple to me. Atlant 19:12, 31 July 2006 (UTC)[reply]
  • Oppose moving it. When the concept first appeared, it was called control store, but its technological flowering came out of Cambridge University and was called microprogramming by its inventors, notably Maurice Wilkes. Microcode is the newer, less formal term. Microcode already redirects to it. If "Microcode" already existed and contained significant content, then merging might be worth arguing, but that's not the case. Moving it is a waste of time that doesn't improve the quality of Wikipedia - I've done pointless moves/renames in the past, and have come to realize it's a waste of time and effort. Rick Smith 19:55, 31 July 2006 (UTC)[reply]
    • Change vote to Strong Support - Okay, I just looked at the actual title of the article we're arguing about. The Google test shows the title microprogram is lost in the noise. Moreover, I was recently reminded that Maurice Wilkes, who coined the term microprogramming, has tried to enforce a very specific definition which excludes simpler forms of control store that he didn't invent. Those simpler forms are customarily referred to as microcode in the business, as are more sophisticated microprograms that incorporate Wilkes' innovations. To some extent, I wonder if we waste valuable intellectual resources arguing about things that redirect easily, but it seems to matter to someone who's willing to fiddle with the mechanics and make the change. In any case, I did a name change incorrectly early in my Wikipedia days, so this change in vote also indicates that I've developed a better understanding of the community naming policy. Rick Smith 15:09, 1 August 2006 (UTC)[reply]
  • Strong Support. The term microcode is much more commonly used. Just do a Google search on microcode vs microprogram. For many in the general public, their first exposure to the term was the Pulitzer Prize-winning 1981 book The Soul of a New Machine, by Tracy Kidder: [1]. That book generally used the term microcode. During the RISC vs CISC debates of the 1980s, the term most often used was microcode (RISC processors aren't microcoded). Do a Google search on microcode + RISC vs microprogram + RISC. Microcode is much more commonly used. In general we should use the most common term, unless there is overwhelming reason otherwise. See Wikipedia:Naming conventions (common names). Joema 22:53, 31 July 2006 (UTC)[reply]
    • That's good rationale for having the redirect from microcode, but not for renaming the article. Having a more common name would be better if it were of comparable "correctness", but it is not. "Microcode" is a less formal name for the concept. As mentioned previously, computer architects (including the inventor of microprogramming), professional engineering societies, and computer science and engineering journals all predominantly refer to it as "microprogramming". In many fields there are more formal terms used professionally for which a less formal term is often substituted by laymen, and that justifies a redirect but not a rename. For instance, the Wikipedia article on "gerund" shouldn't be renamed to "verbal noun" even though the latter is more commonly used by laymen. The former is the precise formal term as used by linguists. For a technical term, it is appropriate (and perhaps an "overwhelming reason") to name articles for the precise technical term described, and simply mention and offer redirects from less precise terms.--Brouhaha 23:41, 31 July 2006 (UTC)[reply]
      • The WP guideline for naming articles is very clear: Name of articles "should be the most commonly used name". Microcode is much more commonly used than microprogram by the popular audience -- WP name guidelines emphasize that, not the most technically correct name as used by academic specialists. WP guidelines make exception if the "common name of a subject is misleading". That is not the case here. Joema 01:32, 1 August 2006 (UTC)[reply]
  • Don't Care: To me, the terms are interchangeable, perhaps a reflection of being a contractor who migrates from client to client, each of whom uses different terminology. As far as I know, I've never been misunderstood using the "other" term. As long as the typed word redirects to a relevant article, it doesn't matter which way it goes. —EncMstr 04:07, 2 August 2006 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Intel Core Microarchitecture


Needs more info on Intel's recent advances in microarchitecture.

Do those advances in microarchitecture have anything to do with microcode? (The fact that both words include the prefix "micro" does not indicate that microcode is involved in all aspects of microarchitecture.) If not, perhaps this question should be asked on the Intel Core Microarchitecture talk page, not the talk page for the microcode page. Guy Harris 06:35, 28 August 2006 (UTC)[reply]

some CPUs don't use microcode


The first few paragraphs seem to imply that *all* CPUs use microcode.

Towards the end, the "Microcode versus VLIW and RISC" section says that VLIW and RISC CPUs do not use microcode.

Confusing.

Then it claims that more recent "Modern implementations of CISC instruction sets" use microcode for only *some* instructions. (Is this a roundabout way of talking about Intel Pentium processors, or is there some other "modern ... CISC"?)

I suspect this is incorrect.

My understanding is that the instruction decoder on modern Pentium processors breaks down x86 instructions into one or more internal micro-ops, so it does have some similarities to the microcode store. But the information inside that decoder is technically not microcode, because the decoder is technically not a control store, because its output lines do not directly control the various parts of the Pentium CPU. Instead, the decoder output lines feed into the micro-op buffer, and as many micro-ops as possible (which varies cycle-by-cycle) are munched on each cycle (as selected by the superscalar dispatch unit).

Is it really true that some processors use microcode for some but not all instructions? --70.189.77.59 05:47, 15 October 2006 (UTC)[reply]

I've rewritten the first few paragraphs a bit in an attempt not to imply that all CPUs use microcode.
"Modern implementations of CISC instruction sets" is a way of talking about modern implementations of CISC instruction sets, including but not limited to recent Pentium processors; it also includes modern Intel x86 processors that don't have Pentium in their name, modern AMD x86 processors, and, I think, modern descendants of the System/360 processors (System/390 and z/Architecture).
In the case of S/390 and z/Architecture processors, most instructions are executed directly by hardware; others trap to what IBM calls "millicode", which consists of a combination of S/390 or z/Architecture instructions and special instructions executable only in a special "millicode" mode. See, for example, C. F. Webb and J. S. Liptay (1997). "A high-frequency custom CMOS S/390 microprocessor". IBM Journal of Research and Development. 41 (4/5): 463–474. doi:10.1147/rd.414.0463. ISSN 0018-8646. Retrieved 2006-10-16. That could be thought of as similar to PALcode on DEC Alpha processors.
In the case of P6 and later x86 processors from Intel, I have the impression that some instructions are directly translated to uops by the hardware, and other instructions cause a sequence of uops to be fetched from on-chip store; those sequences of uops could be thought of as microcode. They might not be horizontal microcode that directly controls gates in the processor, but not all microcode is horizontal.
In the case of the Nx586 and the AMD K6 and later x86 processors from AMD, I have the impression that they work similarly to the P6 and later Intel processors; at least in the Nx586 and K6, the internal operations are called "RISC86", suggesting that they're a RISC-like instruction set. They might also have some RISC86 code in on-chip store, functioning as a microcode equivalent for some of the more complex instructions. Guy Harris 08:26, 16 October 2006 (UTC)[reply]
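
(A minimal sketch of the split described above, assuming invented names and numbers throughout; it is not Intel's or AMD's actual decoder logic. Simple instructions are translated into micro-ops on the fly by the decode hardware, while complex ones replay a canned micro-op sequence from an on-chip store.)

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t op, dst, src1, src2; } uop;   /* one micro-op */

/* Canned micro-op sequences for "complex" opcodes (contents invented). */
static const uop complex_seq[] = { {1,0,0,0}, {2,0,0,0}, {3,0,0,0} };
static const struct { const uop *seq; size_t len; } uop_store[256] = {
    [0xC0] = { complex_seq, 3 },        /* imaginary complex opcode 0xC0 */
};

static int is_simple(uint8_t opcode) { return opcode < 0x80; }  /* invented rule */

/* Returns the number of micro-ops emitted for one macro-instruction. */
static size_t decode(uint8_t opcode, uop out[])
{
    if (is_simple(opcode)) {            /* translated directly, on the fly */
        out[0] = (uop){ .op = opcode };
        return 1;
    }
    /* Complex: replay the sequence held in the on-chip micro-op store. */
    for (size_t i = 0; i < uop_store[opcode].len; i++)
        out[i] = uop_store[opcode].seq[i];
    return uop_store[opcode].len;
}

int main(void)
{
    uop buf[8];
    return (int)(decode(0x01, buf) + decode(0xC0, buf));  /* 1 + 3 micro-ops */
}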

writable microcode


I was under the impression that all Intel and AMD processors, if they had microcode at all, had only read-only microcode. So once the chip was made, it is impossible to "update" the microcode.

So what's this I hear about a "microcode update" [2] [3] ?

Do Intel processors have a writable microcode after all?

--75.37.227.177 04:36, 13 July 2007 (UTC)[reply]

There are Intel x86 processors that have writable microcode - the Intel P6 processors, for example, have it. Guy Harris 05:51, 13 July 2007 (UTC)[reply]

Feature article


This article is great. It should be entered into the featured articles list. —Preceding unsigned comment added by 85.197.24.248 (talk) 00:04, 20 October 2009 (UTC)[reply]

Which parts do you like the best? It would be interesting to hear.
Regards 83.255.42.150 (talk) 00:52, 26 October 2009 (UTC)[reply]

Microinstruction width


The wording "the microinstruction is often as wide as 50 or more bits" is a bit awkward, both structurally and because the article later mentions a width of 108. I'd suggest "the microinstruction is often as wide as 108 bits" or "the microinstruction is often wider than 50 bits".

The microinstruction width on the 360/85, 370/165 and 370/168 was only 108 bits if there was no emulator feature installed; with a 7070, 7080 or 7090 emulator feature the width was larger. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:45, 13 July 2010 (UTC)[reply]

Yes, you are right. Feel free to fix it. --Kubanczyk (talk) 09:35, 14 July 2010 (UTC)[reply]
Done. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:59, 15 July 2010 (UTC)[reply]

Would references be helpful?


Would it be helpful to provide technical references for the microinstruction formats on some processors, or would that be TMI? I can probably track down the manual names and numbers for some of:

  • IBM 360/40
  • IBM 360/50
  • IBM 360/65
  • IBM 370/145
  • IBM 370/155
  • IBM 370/165
  • RCA 601
Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:51, 13 July 2010 (UTC)[reply]

"Microcode vs. VLIW/RISC"?


"Microcode" and "RISC" aren't, strictly speaking, complete opposites. CISC and RISC are instruction set design philosophies; microcode and random logic/state machines are implementation techniques. CISC processors have been implemented without microcode (IBM System/360 Model 75, GE-600 series and the Honeywell 6000 series) and largely in hardware with only some instructions implemented in microcode (later x86 processors) or in "millicode" (later IBM System/390 processors, z/Architecture processors). RISC processors are largely implemented without microcode (please provide explicit references for claims to the contrary). Guy Harris (talk) 08:14, 25 August 2010 (UTC)[reply]

Microcode that doesn't implement instruction sets?


The article starts with

Microcode is a layer of hardware-level instructions or data structures involved in the implementation of higher level machine code instructions in many computers and other processors; it resides in special high-speed memory and translates machine instructions into sequences of detailed circuit-level operations.

which speaks of microcode as a technique for implementing an instruction set, rather than, for example, directly implementing the algorithm(s) the machine is ultimately intended to perform.

One could argue that the term "microcode" has also been used for code that directly implemented those algorithms; e.g., the microcode in an IBM 3880 disk controller probably didn't implement an instruction set in which the controller's algorithms were encoded; the microcode probably directly implemented those algorithms.

In addition, the article says

Microcoding remains popular in application-specific processors such as network processors.

but I suspect the microcode in the network processors doesn't implement an instruction set in which the network algorithms are encoded; it probably directly implements the algorithms.

I'm not sure what would distinguish between a microinstruction set and a specialized (non-micro-)instruction set in those cases: Harvard architecture, so that the code and data are separate? More VLIW-style instructions, perhaps with fields of the instruction having a more direct connection to hardware functions, along the lines of horizontal microcode? Guy Harris (talk) 00:24, 4 August 2012 (UTC)[reply]

Indeed. Properly speaking, microcode is used to implement an instruction set, but from the 1960s IBM referred to VLIW code used for other purposes as microcode. Further, on the low-end IBM System/360 and S/370 machines, IBM used the same hardware instruction set to simulate the architected instructions and to implement the I/O channels.
To add to the confusion, other terms have been used for code and instruction sets used to simulate another instruction set, e.g., logand and logram on the TRW-130; see bitsavers for manuals. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:38, 7 August 2012 (UTC)[reply]

History


@Dsimic: Aren't the History and Justification sections basically two history sections, which need to be merged? The Justification section is basically history. I mean, the history of the development of most things is that of their justification. It's not like it was conceived out of sabotage or stupidity. ;) It seems like they should be merged in timeline order. — Smuckola (Email) (Talk) 17:47, 22 January 2015 (UTC)[reply]

Hello! That's a good point; they share a lot while being a bit too wordy. IMHO, it might be best to rework (and compact) the "Overview", "Justification" and "History" sections into resulting "Overview" and "History" sections, where the content from "Justification" would be split between both sections. I might take a shot at it later today, and we could move on from there – if you agree. — Dsimic (talk | contribs) 18:04, 22 January 2015 (UTC)[reply]
@Dsimic: I'm way out of my depth on the direct subject matter. I'm more in the area of just copy editing and formatting. I can't imagine who on earth wrote all this intense detail, with no citations. This isn't exactly personal memoirs. I was gritting my teeth through all the typical nostalgic faux-past tense, while reading all these statements of fact and nodding my head, "Yup yup yup, sounds goooood. Sounds like I sure am glad we have microcode, then. p.s. Mainframes sound HARD. Thanks for saving the world, IBM!" — Smuckola (Email) (Talk) 18:59, 22 January 2015 (UTC)[reply]
Well, the article is very well written but quite wordy in some sections; perhaps the people who wrote it used some references but neglected to note them in the article. I'll do the above-described compaction later today; it should be rather good. At the same time, the cleanups you've performed are just fine and in line with MOS:COMPNOW. — Dsimic (talk | contribs) 19:10, 22 January 2015 (UTC)[reply]
@Dsimic: Oh cool, MOS:COMPNOW will help with the essay I'm writing about tense. Maybe it would be worth scanning the article's history or using wikiblame, and contacting the major contributors who are already familiar with the sources and can cite them. It seems like somebody probably wrote well-formatted and well-thought-out prose like this in educated swaths. Or, that's wishful thinking. — Smuckola (Email) (Talk) 00:57, 24 January 2015 (UTC)[reply]
Here are a few nice papers that we can use as references for a large part of the article:
I'll see about incorporating them while compacting the above-mentioned sections. — Dsimic (talk | contribs) 01:28, 24 January 2015 (UTC)[reply]
@Dsimic: You're wanting to disregard the existing RS references already given for the text already written, in favor of homework and lecture notes? Wat. No. — Smuckola (Email) (Talk) 02:51, 24 January 2015 (UTC)[reply]
Of course not, those three references above would be only added to back what we already have as the article content. There are no reasons to discard anything that's already part of the article. — Dsimic (talk | contribs) 16:47, 24 January 2015 (UTC)[reply]
@Dsimic: No, I said "disregard", not "discard". The existing content is presumably based upon the references which are already present to be harvested for inline citations. There may not be any need for new references. If there were, they'd be RSes, not the sources you presented. — Smuckola (Email) (Talk) 19:33, 24 January 2015 (UTC)[reply]
That makes sense; however, those three papers, at least to me, are good enough to serve as references. — Dsimic (talk | contribs) 21:50, 24 January 2015 (UTC)[reply]

68K


I'm surprised the 68K is not included in the discussion of horizontal and vertical microcode, as it uses a two-level microcode that replaces the full set of control lines in each microword with a smaller index into the much smaller number of unique control-line patterns. 188.29.165.59 (talk) 20:29, 8 March 2015 (UTC)[reply]

Decoding of horizontal micro-orders


I know of no IBM processor with horizontal microcode in which the micro-orders are not decoded. Typically, Read Only Storage (ROS) was expensive, and it was cost-effective to have at least a minimal amount of encoding of, e.g., register selections and the ALU function. Of course, vertical microcode has more extensive encoding. Note that the definition in practice does not match that coined by Wilkes. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:50, 28 May 2015 (UTC)[reply]
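
(To illustrate the point, a sketch with invented field widths, not taken from any IBM processor: even in a "horizontal" microword, a register selection is typically held as a small binary field and expanded by a decoder into one-hot select lines, since a fully unencoded field would cost too many ROS bits.)

#include <stdint.h>
#include <stdio.h>

/* A 4-bit encoded micro-order selects one of 16 registers; the decoder
 * expands it into 16 one-hot gate-enable lines. That costs 4 ROS bits
 * per microword instead of 16. */
static uint16_t decode_reg_select(uint8_t field)
{
    return (uint16_t)(1u << (field & 0x0Fu));
}

int main(void)
{
    printf("%#06x\n", decode_reg_select(5));   /* prints 0x0020 */
    return 0;
}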

RISC vs. CISC inconsistency


It looks like there's a conflict between one of the stated advantages and disadvantages of RISC:

  • Advantage: Analysis shows complex instructions are rarely used, hence the machine resources devoted to them are largely wasted.
  • Disadvantage: Non-RISC instructions inherently perform more work per instruction (on average), and are also normally highly encoded, so they enable smaller overall size of the same program, and thus better use of limited cache memories.

If complex instructions are rarely used in real-world code, then there shouldn't be much of a difference in the size of CISC code vs. RISC code.

One of these points needs to be altered, I think... They can't both, logically, be true... Epitron (talk) 06:13, 19 April 2017 (UTC)[reply]
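
(A toy back-of-the-envelope calculation, with invented numbers, just to show that the two bullet points are not necessarily logically exclusive: CISC code can still be smaller even if the exotic complex instructions are rarely used, because the common instructions are encoded more compactly, not because of the rare ones.)

#include <stdio.h>

int main(void)
{
    /* Assumed instruction mix and sizes -- illustrative only. */
    double common = 0.95, exotic = 0.05;         /* fraction of the code      */
    double cisc_common = 3.0, cisc_exotic = 6.0; /* average bytes per op      */
    double risc_common = 4.0;                    /* fixed 32-bit encoding     */
    double risc_exotic = 12.0;                   /* exotic op ~3 RISC ops     */

    double cisc = common * cisc_common + exotic * cisc_exotic;
    double risc = common * risc_common + exotic * risc_exotic;
    printf("avg bytes/op: CISC %.2f vs RISC %.2f\n", cisc, risc);
    /* Prints roughly 3.15 vs 4.40: the density gap survives even when
     * complex instructions are only 5% of the mix. */
    return 0;
}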

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Microcode. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

checkY An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 08:00, 29 January 2018 (UTC)[reply]

Babbage engine microcoded?


The article states that the Babbage analytical engine "deserves to be recognised as the first microprogrammed computer to be designed". I've tagged that as original research, but the opinion tag might also be relevant. The citation supports the 2002 date, but not the other claims. (In my view, considering cams to be a microprogram doesn't make sense.) KenShirriff (talk) 17:47, 3 June 2019 (UTC)[reply]

That text was added by MarkMLl in July 2007 diff, without a reference. I agree that cams are not microcode and I recommend removal of the text. It can be restored in the unlikely event of someone finding a reference. Johnuniq (talk) 00:47, 4 June 2019 (UTC)[reply]
I based that on Babbage's drawings reproduced in a book published by Dover, of which I no longer have a copy. My rationale (and I hasten to add that I am entirely happy to be guided by The Community here) is two-fold. Firstly, as described, Babbage went to some trouble to design his mechanism with a positive drive to all motion, i.e. no spring return, which I think makes it reasonably analogous to digital electronics. Secondly, as I remember it, an operation card selected a specific axle which it caused to rotate, and I think it's reasonable to argue that the multiple cams on an axle are a reasonable match with the multiple bits in a microcode word.
I'd suggest that the test which would make or break the comparison is whether cams on different axles had the same significance in the same axial positions. MarkMLl (talk) 14:10, 4 June 2019 (UTC)[reply]
Microcode is a simulation of one computer on another, possibly with a hardware assist. The cams on an axle might be analogous to bits in a logic array, but they are certainly not equivalent to instructions on a stored program computer. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:24, 4 June 2019 (UTC)[reply]
It is not a simulation, it is an implementation method. In fact it was the dominant implementation method before RISC architectures became common. MarkMLl (talk) 08:28, 5 June 2019 (UTC)[reply]
If it walks like a duck and quacks like a duck then it is a duck. Simulation is one way of implementing an architecture. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:49, 5 June 2019 (UTC)[reply]

Possible malicious link in article


When I click on the link in reference 2 (Fog, Agner), Malwarebytes reports "Website blocked due to a Trojan." Can anyone verify this? — Preceding unsigned comment added by Paul C. Anagnostopoulos (talkcontribs) 01:10, 14 January 2020 (UTC)[reply]

Is it still doing that? It might just have been a false positive. Guy Harris (talk) 06:23, 24 May 2022 (UTC)[reply]

Why is this in Category:Microcontrollers?


There may well be microcontrollers where the CPU is implemented using microcode, but if that's a sufficient reason to put this article into Category:Microcontrollers, that would also be sufficient reason to put it in Category:Mainframe computers, Category:Minicomputers, and Category:Microprocessors.

Or is the idea that, in some cases, a peripheral might have an embedded microcontroller, and the code in that microcontroller is referred to as microcode for the peripheral? Guy Harris (talk) 06:21, 24 May 2022 (UTC)[reply]

Intro and definition


Jul 19, 2022, 20:21 - «‎Overview: add more sane definition of instructions microops»

I've added a more sane definition to the overview along with a source. The intro one is too technical. AXONOV (talk) 20:22, 19 July 2022 (UTC)[reply]

Micro-operations, in modern CISC processors such as P6-microarchitecture and later x86 processors and at least some IBM System/390 and z/Architecture microprocessors, are different from traditional microcode. Traditional microcode isn't generated on the fly from decoded instructions, but most micro-operations in modern processors are; some x86 microcode may consist of micro-ops stored on-chip, but I'm not sure whether modern IBM mainframe processors have any microcode such as that, instead relying on millicode, which is composed of instructions from a subset of the S/390 or z/Architecture instruction set plus processor-specific extensions, to implement complex operations. Guy Harris (talk) 22:16, 19 July 2022 (UTC)[reply]
Jul 19, 2022, 21:15 - «‎Instruction decoding microcode: Micro-ops are different from traditional microcode. Traditional fully-microcoded processors fetch, decode, and execute operations were done by microcode; in micro-op processors, hardware fetches and decodes instructions, emitting a sequence of one or more micro-ops that are scheduled for execution. Intel's document is not authoritative here; it's focused on modern CISC processors.)»
@Guy Harris: The source says:

«collection of microps (or μ-instructions) make up a microcode»[4]

The terms μ-op and microcode are not the same but are mutually related. That's enough per the source. See also:[5]: 148
[…] Intel's document is not authoritative here […] See additional source above. The source is talking specifically about instructions-decoding microcode which simply map macro-instructions into a circuitry (e.g. ALU) that process data specified by macro-instructions. The microcode sits between decode & execute stages in the instruction cycle. It works like a microcontroller that treats MCU instructions as data and interprets it. I propose we keep my edit. Best.
AXONOV (talk) 08:21, 20 July 2022 (UTC)[reply]
The source says:

In a Complex Instruction Set (CISC) machine like x86, the stream of instructions read from memory are decoded into small operations known as micro-ops (μops).

In fully microcoded pre-P6 processors, the stream of instructions is read from memory by microcode and decoded by microcode, which then, based on the opcode, jumps to the microcode that executes the instruction in question. The instructions are not directly decoded into microinstructions/micro-ops. (The microcode that fetches and decodes the instructions, and the microcode that executes the instructions, may be executed by different processors, as per the VAX 8800 example and possibly other processors.)
In other processors, the instructions might be fetched and decoded by hardware, and that hardware may direct other hardware to execute some instructions and direct microcode to execute others, as per, for example, the Intel 80486.
In the P6, instructions are fetched and decoded, probably by hardware, and translated on the fly into micro-ops that are scheduled for execution. This is the way all non-Atom x86 processors since the Pentium Pro work (I don't know whether the Atom processors work the same way or not).
If that's what the author of the document meant by "the stream of instructions read from memory are decoded into small operations known as micro-ops (μops)", then it's certainly true of all those x86 processors, and true (given IBM's use of the term "microop") of at least some z/Architecture processors, but it was not true of, for example, any of the IBM System/360 processors, probably not true of most if not all System/370 processors, not true of any microcoded PDP-11 processors, not true of some if not all VAX processors, and not, as far as I know, true of any Intel processors up to the 80386 (and possibly including the 80486 and original Pentium).
I.e., if that's what they meant, their statement does NOT apply to a large number of microcoded processors.
"The source is talking specifically about instructions-decoding microcode which simply map macro-instructions into a circuitry (e.g. ALU) that process data specified by macro-instructions. The microcode sits between decode & execute stages in the instruction cycle. It works like a microcontroller that treats MCU instructions as data and interprets it." is not at all clear:
I have no idea what it means to "map macro-instructions into a circuitry (e.g. ALU) that process data specified by macro-instructions."; which of the various types of processor I mention above does it refer to, and how does it do so?
"It works like a microcontroller that treats MCU instructions as data and interprets it." is referring to the "In fully microcoded pre-P6 processors", not to the processors in which hardware reads macroinstructions and translates them to generated-on-the-fly micro-operations.
So I propose we ignore any edit that cannot distinguish between "microcode as simulator for an instruction set" (traditional microcode) and "micro-operations generated on the fly by the instruction decoder" (modern x86 and z/Architecture processors). Guy Harris (talk) 08:52, 20 July 2022 (UTC)[reply]
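
(To make the contrast concrete, here is what the fully microcoded fetch/decode/execute cycle described above amounts to, rendered as a runnable C toy purely for readability; the three-instruction target machine and all names are invented, and the real thing is of course microinstructions in a control store, not software.)

#include <stdint.h>
#include <stdio.h>

enum { OP_LOADI = 0, OP_ADDI = 1, OP_HALT = 2 };   /* toy target-machine ISA */

static uint8_t acc;            /* accumulator of the toy target machine */
static uint8_t pc;             /* program counter of the target machine */
static const uint8_t memory[] = { OP_LOADI, 5, OP_ADDI, 7, OP_HALT };

/* "Microroutines": one per target-machine opcode. */
static int u_loadi(void) { acc = memory[pc++]; return 1; }
static int u_addi(void)  { acc += memory[pc++]; return 1; }
static int u_halt(void)  { return 0; }

int main(void)
{
    /* The microcode's own top-level loop: fetch, decode, dispatch. */
    for (;;) {
        uint8_t opcode = memory[pc++];        /* microcode-driven fetch  */
        int running;
        switch (opcode) {                     /* microcode-driven decode */
        case OP_LOADI: running = u_loadi(); break;
        case OP_ADDI:  running = u_addi();  break;
        default:       running = u_halt();  break;
        }
        if (!running)
            break;
    }
    printf("acc = %d\n", acc);                /* prints acc = 12 */
    return 0;
}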
BTW, your other source's link informs me that I "have either reached a page that is unavailable for viewing or reached [my] viewing limit for this book", so I just bought it on Apple Books.
Section 8.9 "Microcoded Instructions" is discussing a traditional microcoded processor. Section 8.10 "Microcode Variations" says that "“On some CPUs, microcode implements the entire fetch-execute cycle — the microcode interprets the opcode, fetches operands, and performs the specified operation.", which is what I referred to as a "fully-microcoded processor", and which also implies that there are alternatives, such as "hardware directly fetches and decodes instructions and fetches some or all operands, and then jumps to microcode to implement the instruction".
Unfortunately, the book doesn't cover the way most general-purpose processors work these days; the word "superscalar" appears nowhere in the book, according to the Books app, and "out-of-order" only appears in one section with one paragraph. The stuff that processors from smartphones to supercomputers do is in sections 8.19 through 8.21, with very little detail. Guy Harris (talk) 09:19, 20 July 2022 (UTC)[reply]
As for micro-operations in the P6-and-later sense, the P6 (microarchitecture) page says:

P6 processors dynamically translate IA-32 instructions into sequences of buffered RISC-like micro-operations, then analyze and reorder the micro-operations to detect parallelizable operations that may be issued to more than one execution unit at once.[1] The Pentium Pro was the first x86 microprocessor designed by Intel to use this technique, though the NexGen Nx586, introduced in 1994, did so earlier.

The reference says that

The decoders translate x86 instructions into uops. P6 uops have a fixed length of 118 bits, using a regular structure to encode an operation, two sources, and a destination. The source and destination fields are each wide enough to contain a 32-bit operand. Like RISC instructions, uops use a load/store model; x86 instructions that operate on memory must be broken into a load uop, an ALU uop, and possibly a store uop.

which is a description of a "generate micro-operations on the fly" processor, not a traditional "microcode as instruction set simulator" processor. Guy Harris (talk) 09:29, 20 July 2022 (UTC)[reply]
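
(A sketch of the decomposition quoted above, with invented uop names and encodings: a single memory-destination x86 add, roughly "add dword [mem], eax", is emitted by the decoder as a load uop, an ALU uop, and a store uop, rather than being interpreted by a microprogram.)

#include <stdint.h>

enum uop_kind { UOP_LOAD, UOP_ADD, UOP_STORE };
enum operand  { R_TMP, R_EAX, M_ADDR };        /* invented operand names */

typedef struct { uint8_t kind, dst, src1, src2; } uop;

static const uop add_mem_eax[] = {
    { UOP_LOAD,  R_TMP,  M_ADDR, 0     },  /* tmp   <- load [mem]  */
    { UOP_ADD,   R_TMP,  R_TMP,  R_EAX },  /* tmp   <- tmp + eax   */
    { UOP_STORE, M_ADDR, R_TMP,  0     },  /* [mem] <- store tmp   */
};

int main(void) { return (int)(sizeof add_mem_eax / sizeof add_mem_eax[0]); }  /* 3 uops */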

Overview/Microcode


The Microcode subsection within the Overview section contains the following:

Using microcode, all that changes is the code stored in the associated read only memory (ROM). This makes it much easier to fix problems in a microcode system. It also means that there is no effective limit to the complexity of the instructions, it is only limited by the amount of ROM one is willing to use.

Since microcode can be updated, is it actually stored in ROM? I presume the answer is, "No," but I don't know what type of memory should be mentioned, so I didn't change the text. --141.162.101.52 (talk) 16:41, 26 September 2024 (UTC)[reply]

"Since microcode can be updated": Some microcode can be easily updated. Other microcode cannot be easily updated, such as the microcode in the IBM System/360 Model 30, IBM System/360 Model 40, and IBM System/360 Model 50, which use capacitor read-only storage, transformer read-only storage, and (balanced) capacitor read-only storage, respectively; it may be possible to replace that ROM, or (in the case of the capacitor ROM) remove a component and replace it, but it can't be electronically updated, only manually updated.
"Is it actually stored in ROM?": Yes on some systems, no on others. Guy Harris (talk) 18:01, 26 September 2024 (UTC)[reply]
Fixed in this edit; I just spoke of the memory holding the microcode (which could be ROM, EEPROM, RAM, or secondary storage from which the microcode is loaded into RAM). Guy Harris (talk) 20:17, 26 September 2024 (UTC)[reply]

References

  1. ^ Gwennap, Linley (16 February 1995). "Intel's P6 Uses Decoupled Scalar Design" (PDF). Microprocessor Report. 9 (2).