* Get_Immediate @ 2003-12-10 16:00 Xavier Serrand 2003-12-10 18:33 ` Get_Immediate tmoran 2003-12-10 18:44 ` Get_Immediate Jeffrey Carter 0 siblings, 2 replies; 10+ messages in thread From: Xavier Serrand @ 2003-12-10 16:00 UTC (permalink / raw) Hello, I have to work with Ada 83 ... and I have to use the functionality of Get_Immediate (return immediately after the keystroke, with no echo). I tried to interface with C (getch) ... it works fine ... with Ada 95 ... I can't do the same with Ada 83 ... because I have none of the specific packages to do this work ... Could somebody give me a better idea, or is there a possibility to find old packages to interface with C ... Xavier SERRAND ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-10 16:00 Get_Immediate Xavier Serrand @ 2003-12-10 18:33 ` tmoran 2003-12-11 2:32 ` Get_Immediate Xavier Serrand 2003-12-10 18:44 ` Get_Immediate Jeffrey Carter 1 sibling, 1 reply; 10+ messages in thread From: tmoran @ 2003-12-10 18:33 UTC (permalink / raw) > I have to work with Ada 83 .... and, > I have to use the functionnality of Get_Immediate (return immediately > after the keystroke and no echo) Your Ada 83 compiler almost surely came with a library with the routine you want, and a way (not exactly the Ada 95 way, obviously) to interface with other code in other languages. What compiler are you using? ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-10 18:33 ` Get_Immediate tmoran @ 2003-12-11 2:32 ` Xavier Serrand 2003-12-11 7:37 ` Get_Immediate tmoran 0 siblings, 1 reply; 10+ messages in thread From: Xavier Serrand @ 2003-12-11 2:32 UTC (permalink / raw) tmoran@acm.org wrote in message news:<kGJBb.497056$Tr4.1353908@attbi_s03>... > > I have to work with Ada 83 ... and > > I have to use the functionality of Get_Immediate (return immediately > > after the keystroke and no echo) > Your Ada 83 compiler almost surely came with a library with the > routine you want, and a way (not exactly the Ada 95 way, obviously) > to interface with other code in other languages. What compiler > are you using? Hello Mister Moran, I'm using GNAT 3.14 ... for Ada 95 ... and I will have to port my work to another station where I'll have to use Ada 83 ... I think I'll see how it's done with GNAT ... ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-11 2:32 ` Get_Immediate Xavier Serrand @ 2003-12-11 7:37 ` tmoran 2003-12-11 15:38 ` Get_Immediate Robert A Duff 0 siblings, 1 reply; 10+ messages in thread From: tmoran @ 2003-12-11 7:37 UTC (permalink / raw) > I'm using Gnat 3.14 ... for Ada 95 ... and I will have to port my work > on another station where i'll have to use Ada 83 ... > I think i'll see how it's done with gnat ... How it's done today in Gnat is not going to tell you much about how it was done years ago with your old Ada 83 compiler. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-11 7:37 ` Get_Immediate tmoran @ 2003-12-11 15:38 ` Robert A Duff 0 siblings, 0 replies; 10+ messages in thread From: Robert A Duff @ 2003-12-11 15:38 UTC (permalink / raw) tmoran@acm.org writes: > How it's done today in Gnat is not going to tell you much about how > it was done years ago with your old Ada 83 compiler. For Ada-calling-C, it's pretty similar, but the name of the pragma was Interface in Ada 83 (now called Import in Ada 95). Ada 83 did not support C-calling-Ada, nor inter-language data references (although some compilers supported those things). The OP should look up pragma Interface in the Ada 83 manual, and also in the manual for the compiler. - Bob ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-10 16:00 Get_Immediate Xavier Serrand 2003-12-10 18:33 ` Get_Immediate tmoran @ 2003-12-10 18:44 ` Jeffrey Carter 2004-01-23 16:03 ` Get_Immediate mla154 1 sibling, 1 reply; 10+ messages in thread From: Jeffrey Carter @ 2003-12-10 18:44 UTC (permalink / raw) Xavier Serrand wrote: > I have to work with Ada 83 ... and > I have to use the functionality of Get_Immediate (return immediately > after the keystroke and no echo) > > I tried to interface with C (getch) ... it works fine ... with Ada > 95 ... > I can't do the same with Ada 83 ... because I have none of the specific > packages to do this work ... > > Could somebody give me a better idea, or is there a possibility to find > old packages to interface with C ... Your compiler vendor probably supplies a package with this capability, or one that allows you to create this functionality. I wrote a keyboard package for DOS in Ada 83 that provided this functionality using a vendor-supplied package to make DOS calls. You should also be able to import getch using pragma Interface in Ada 83: function Getch ...; pragma Interface (C, Getch); Many compilers also supported pragma Interface_Name to supply the 3rd parameter of pragma Import: pragma Interface_Name (Getch, "getch"); -- IIRC -- Jeff Carter "Perfidious English mouse-dropping hoarders." Monty Python & the Holy Grail 10 ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate 2003-12-10 18:44 ` Get_Immediate Jeffrey Carter @ 2004-01-23 16:03 ` mla154 0 siblings, 0 replies; 10+ messages in thread From: mla154 @ 2004-01-23 16:03 UTC (permalink / raw) Following your suggestions, Jeff, the program compiled with no errors. However, when I ran the program, the getch appears not to accept the pressing of the 'Enter' key. Do you have any suggestions? ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison @ 1997-08-21 0:00 Robert Dewar 1997-08-22 0:00 ` Jim Carr 0 siblings, 1 reply; 10+ messages in thread From: Robert Dewar @ 1997-08-21 0:00 UTC (permalink / raw) << So is the process of measurement with a device with less precision than necessary to make the measurement. For example, in your case there is no uncertainty in the integer part of a 14 digit real in a double precision IEEE representation that has been properly rounded, but there is an uncertainty if it was stored as a float.>> I strongly agree with Christian here, uncertainty is an even WORSE term than round off error. A common viewpoint of floating-point arithmetic held by many who don't know too much about it is that somehow floating-point arithmetic is inherently (slightly) unreliable and can't be trusted. The term uncertainty encourages this totally unuseful attitude. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-21 0:00 floating point comparison Robert Dewar @ 1997-08-22 0:00 ` Jim Carr 1997-08-22 0:00 ` Robert Dewar 0 siblings, 1 reply; 10+ messages in thread From: Jim Carr @ 1997-08-22 0:00 UTC (permalink / raw) dewar@merv.cs.nyu.edu (Robert Dewar) writes: > ><< So is the process of measurement with a device with less precision > than necessary to make the measurement. For example, in your case > there is no uncertainty in the integer part of a 14 digit real in > a double precision IEEE representation that has been properly > rounded, but there is an uncertainty if it was stored as a float.>> > >I strongly agree with Christian here, uncertainty is an even WORSE >term than round off error. I wrote the above, not Christian. I prefer uncertainty because, in a university context, it is familiar to students from their chemistry and physics labs and the rules for propagating it are the same. The tradition in numerical analysis has always (?) been to identify two kinds of "error" -- formula and round-off -- that compete in any given kind of calculation. Depends on the audience. >A common viewpoint of floating-point arithmetic held by many who don't >know too much about it is that somehow floating-point arithmetic is >inherently (slightly) unreliable and can't be trusted. The term >uncertainty encourages this totally unuseful attitude. Are you saying that the floating point representation is not an approximation to the real number being stored? I don't think so. I think you are saying that the results of floating point operations on numbers in that floating point representation are deterministic. I agree that they are. 
The point is that the difference between the real number being represented in the machine and a particular floating point representation will be propagated by the "exact" procedure and sometimes dramatically increase the difference between the result of real arithmetic on the original real number and the result of floating point arithmetic on the original fl-pt representation. The propagation of this uncertainty also follows deterministic (albeit statistical) rules quite familiar from the sciences in which measurement is commonly used. I do not see how you can claim that the difference between the result of real arithmetic and floating point arithmetic for a range of input values will not show a distribution of the type normally associated with measurement uncertainty. It does. In the example cited, the uncertainty in one possible representation (float) is greater than in another (double), with the result that the desired integer conversion is not exact in some cases where the original author claimed it would be. (The example also appeared to ignore the minimum size of int mandated by C and present in many common implementations, but that is irrelevant here.) This is obvious if one knows the relative uncertainty in those two IEEE representations of real numbers. -- James A. Carr <jac@scri.fsu.edu> | Commercial e-mail is _NOT_ http://www.scri.fsu.edu/~jac/ | desired to this or any address Supercomputer Computations Res. Inst. | that resolves to my account Florida State, Tallahassee FL 32306 | for any reason at any time. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-22 0:00 ` Jim Carr @ 1997-08-22 0:00 ` Robert Dewar 1997-08-23 0:00 ` Jim Carr 0 siblings, 1 reply; 10+ messages in thread From: Robert Dewar @ 1997-08-22 0:00 UTC (permalink / raw) Jim Carr said <<>I strongly agree with Christian here, uncertainty is an even WORSE >term than round off error. I wrote the above, not Christian.>> You misread my message. I did not say that Christian recommended the term uncertainty; on the contrary, he objected to it strongly, and so do I, which is why I was agreeing with him in disagreeing with you. Sorry for the confusion, but I still strongly disagree with the term. For reasons I gave earlier, it is even worse than error. There is no uncertainty in the results of IEEE operations. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-22 0:00 ` Robert Dewar @ 1997-08-23 0:00 ` Jim Carr 1997-08-24 0:00 ` Robert Dewar 0 siblings, 1 reply; 10+ messages in thread From: Jim Carr @ 1997-08-23 0:00 UTC (permalink / raw) dewar@merv.cs.nyu.edu (Robert Dewar) writes: > > There is no uncertaintly in the results of IEEE operations. The term "rounding error" and the alternative "rounding uncertainty" both refer to the fact that the precise and deterministic result of the conversion of a real number to a floating point value gives a bit pattern that *also* results from the precise and deterministic conversion of an uncountably infinite number of other real numbers. It implies no prejudice concerning whether this is good or bad, because it is inevitable. That is what it has in common with random (not systematic) measurement uncertainties, besides the fact that students are familiar with the latter and propagation of those uncertainties, that arise from the intrinsic limitations in the precision of measurement apparatus -- which is why I have found it to be a helpful alternative, a synonym as it were, when introducing the concept to non-math-major (usually CS) students. The difference is inevitable. It does exist. It has important consequences when interpreting the result of a calculation that is being used as a substitute for working with real numbers, a major reason computers exist. If you do not like the name, propose another -- but do not pretend that it does not happen. -- James A. Carr <jac@scri.fsu.edu> | Commercial e-mail is _NOT_ http://www.scri.fsu.edu/~jac/ | desired to this or any address Supercomputer Computations Res. Inst. | that resolves to my account Florida State, Tallahassee FL 32306 | for any reason at any time. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-23 0:00 ` Jim Carr @ 1997-08-24 0:00 ` Robert Dewar 1997-08-29 0:00 ` Andrew V. Nesterov 0 siblings, 1 reply; 10+ messages in thread From: Robert Dewar @ 1997-08-24 0:00 UTC (permalink / raw) James Carr says << The difference is inevitable. It does exist. It has important consequences when interpreting the result of a calculation that is being used as a substitute for working with real numbers, a major reason computers exist. If you do not like the name, propose another -- but do not pretend that it does not happen. >> I think you should assume that I do understand how floating-point works, and I have written hundreds of thousands of lines of numerical Fortran code, carefully analyzed, much of which is still in use today, even though that was some thirty years ago! The point, which perhaps you just don't see, because I assume you also understand floating-point well, is that I think the term "error" is wrong: the discrepancies that occur when an arbitrary decimal number is converted to binary floating-point are not errors. An error is something wrong. There is nothing wrong here: you have made the decision to represent a decimal value in binary form, and used a method that gives a very well defined result. If this is indeed an "error", then please don't use methods that are in error, represent the number some other way. If on the other hand, your analysis shows that the requirements of the calculation are met by the use of this conversion, then there is no error here! Similarly when we write a := b + c, the result in a is of course not the mathematical result of adding the two real numbers represented by b and c, but there is no "error". An error again is something wrong. If an addition like that is an error, i.e. causes your program to malfunction, then don't do the addition. 
If on the other hand, your analysis shows that it is appropriate to perform the computation a = b + c, where a is defined as the result required by IEEE854, then there is no error. Yes, it is useful to have a term to describe the difference between the IEEE result and the true real arithmetic result, but it is just a difference, and perhaps it would be better if this value had been called something like "rounding difference", i.e. something more neutral than error. The trouble with the term error is that it feeds the impression, seen more than once in this thread, that floating-point is somehow unreliable and full of errors etc. Go back to the original subject line of this thread for a moment. Is it wrong to say if a = b then where we know exactly how a and b are represented in terms of IEEE854, and we know the IEEE854 equality test that will be performed? The answer is that it might be wrong or it might be right. It has a well defined meaning, and if that meaning is such that your program works, it is right; if that meaning is such that your program fails, then it is wrong. But --- we could make the same statement about any possible programming language statement we might write down. There is nothing inherently wrong, bad, erroneous, or anything else like that about floating-point. Now one caveat is that we often do NOT know how statements we write will map into IEEE854 operations unless we are operating in a well defined environment like SANE. As I have noted before, this is a definite gap in programming language design, and is specifically the topic that my thesis student Sam Figueroa is working on. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-24 0:00 ` Robert Dewar @ 1997-08-29 0:00 ` Andrew V. Nesterov 1997-08-29 0:00 ` Robert Dewar 0 siblings, 1 reply; 10+ messages in thread From: Andrew V. Nesterov @ 1997-08-29 0:00 UTC (permalink / raw) In article <dewar.872395110@merv>, dewar@merv.cs.nyu.edu (Robert Dewar) wrote: >James Carr says [...] >The point, which perhaps you just don't see, because I assume you also >understand floating-point well, is that I think the term "error" for >the discrepancies that occur when an arbitrary decimal number is converted >to binary floating-point are not errors. An error is something wrong. There >is nothing wrong here, you have made the decision to represent a decimal >value in binary form, and used a method that gives a very well defined >result. If this is indeed an "error", then please don't use methods that >are in error, represent the number some other way. If on the other hand, >your analysis shows that the requirements of the calculation are met by >the use of this conversion, then there is no error here! > >Similarly when we write a := b + c, the result in a is of course not the >mathematical result of adding the two real numbers represented by b and c, >but there is no "error". An error again is something wrong. If an addition >like that is an error, i.e. causes your program to malfunction, then don't >do the addition. The result in "a" of the above expression could be either exact or rounded (inexact, approximate) in the sense of infinite-precision algebra. The question is whether the terms and the result have a finite representation in the particular floating point model, i.e. whether they fit completely into a given finite number of binary or other n-ary digit places. From a more general point of view (Hans Olsson has already mentioned this), there is a much simpler and more robust way of analyzing floating-point computations, one not bound to any particular radix. 
It is said that the floating point representation fl(x) of a number x is such that fl(x) = x(1 + delta) where delta is a small (in some definite sense) number, called the roundoff error. Indeed, it does not mean that something wrong has been done; it means that the floating point representation of an exact value is known imprecisely. And indeed, it is quite similar to a measurement process, where a result is also usually known imprecisely or, equivalently, with an error. The analogy could be extended even to systematic (or directed) and random errors: round to nearest would produce random errors, while truncation toward +-infinity would introduce a systematic error. I am a little skeptical about the "uncertainty" term that was proposed by James A. Carr, because in quantum mechanics it refers to a value that is not only "undetermined", i.e. unknown at the moment, but moreover has no definite value at all -- as with a coordinate and momentum pair: if the former is known, the latter could have any value. The next step in error analysis is to suggest that a binary (i.e. involving two terms or factors) operation, the operands of which are in the floating-point domain, is performed exactly and the final result is rounded (* stands for any binary operation): fl(x*y) = (x*y)(1 + delta) This view eliminates any difference between so-called "real" numbers and "fp" numbers, which are simply a subset of the former, wider set, and are by no means "artificial" or "unreal". A complete discussion of the above method is in chapter 20 of G.E. Forsythe and C.B. Moler, "Computer Solution of Linear Algebraic Systems", Prentice-Hall, 1967 -- long before the IEEE standard! This way of analyzing floating point numbers and operations is invariant to the radix or precision of the fp arithmetic, and is correct not just for the IEEE standards. Those were adopted relatively recently, while many computational programs based on the above analysis have long been working on many different architectures quite well. 
> >If on the other hand, your analysis shows that it is appropriate to perform >the computation a = b + c, where a is defined as the result required by >IEEE854, then there is no error. Once again, as I am sure somebody has already noticed, the standard was drawn up merely to standardize the parameters of floating point arithmetic -- ranges, mantissa length, gradual underflow, exception signals, etc. -- because back then in the 70s (as now) there were plenty of different fp implementations, although they all behave more or less similarly. By no means will IEEE save computations from roundoff errors! > >Yes, it is useful to have a term to describe the difference between the >IEEE result and the true real arithmetic result, but it is just a >difference, and perhaps it would be better if this value had been called >something like "rounding difference", i.e. something more neutral than >error. > The term "difference" is already used elsewhere in numerical computation, e.g. "finite-difference methods", "finite-difference equations"; why make things even more tangled? >The trouble with the term error, is that it feeds the impression, seen >more than once in this thread, that floating-point is somehow unreliable >and full of errors etc. The strength of the term is that it can be smoothly combined with the other sources of error in any calculation, pertaining not only to computer calculations. A model for a calculation can be imprecise, i.e. with errors; the input data can be imprecise as well; thus the result is computed imprecisely, that is, with the errors of the model and the input data. All these errors (of model, input data and fp arithmetic) can be compared and analyzed to estimate how close the computed result is to a perfect one. > >Go back to the original subject line of this thread for a moment. 
> >Is it wrong to say > > if a = b then > >where we know exactly how a and b are represented in terms of IEEE854, and >we know the IEEE854 equality test that will be performed? > >The answer is that it might be wrong or it might be right. it has a well >defined meaning, and if that meaning is such that your program works, it >is right, if that meaning is such that your program fails, then it is wrong. > Yes, indeed the equality test has its well-defined meaning, although it can be dangerous in naive usage, because A and B usually (or possibly) are the results of a great deal of calculation involving roundoffs. The probability that A and B would coincide in ALL binary (or whatever) places is just very small. On the other hand it can be very well justified; as an example, the tests for iteration termination in the EISPACK codes (e.g. TSTURM) can be mentioned. Funnily, right in those tests there is also an excellent example of how roundoff error works to ensure the iterations converge. Further, one can easily figure out how some kind of optimization could corrupt those tests, and how to prevent the optimization from doing so. >But --- we could make the same statement about any possible programming >language statement we might write down. > >There is nothing inherently wrong, bad, erroneous, or anything else like >that about floating-point. > >Now one caveat is that we often do NOT know how statements we write will >map into IEEE854 operations unless we are operating in a well defined >environment like SANE. As I have noted before, this is a definite gap >in programming language design, and is specifically the topic that my >thesis student Sam Figueroa is working on. > ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: floating point comparison 1997-08-29 0:00 ` Andrew V. Nesterov @ 1997-08-29 0:00 ` Robert Dewar [not found] ` <340DF1DD.2736@iop.com> 0 siblings, 1 reply; 10+ messages in thread From: Robert Dewar @ 1997-08-29 0:00 UTC (permalink / raw) <<The next step in error analysis is to suggest that a binary (i.e. involving two terms or factors) operation, the operands of which are in the floating-point domain, is performed exact and the final result is rounded ( * stands for any binary operation) fl(x*y) = (x*y)(1 + delta) This way eliminates any difference between so-called "real" numbers and "fp" numbers whose are simply subset of the former more wide set, and by no means are somewhat "artificial" or "unreal". Complete>> Yes, I think we all know this :-) I do not think we need an elementary lesson in error analysis here, especially one so simplistic as the one you gave. That is not the issue that we are discussing! As you say, this material can be read in many text books (I have used many of them to teach courses in numerical analysis and floating point computation, and found good ones, although my awareness of such books is a bit out of date now, it is a while since I taught NA :-) ^ permalink raw reply [flat|nested] 10+ messages in thread
[parent not found: <340DF1DD.2736@iop.com>]
* Re: Get_Immediate [not found] ` <340DF1DD.2736@iop.com> @ 1997-09-07 0:00 ` Robert Dewar 1997-09-07 0:00 ` Get_Immediate Robert Dewar 1997-09-08 0:00 ` Get_Immediate J Giffen 2 siblings, 0 replies; 10+ messages in thread From: Robert Dewar @ 1997-09-07 0:00 UTC (permalink / raw) <<I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get the results I wanted. What I got: My program reads each key, and I don't have to type Enter, but it also echoes each key. When I type a key that sends an escape sequence, like an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A, before my program gets the first character of the sequence. Also, some keys, like Ctrl-C did not get read by my program. What I want: I want to read without any echoing and be able to get all of the codes the keyboard can generate. Can you tell me how to get these results?>> You probably need to give more detail, what machine are you on? (remember Solaris does not imply Sparc!) Also, you need to say what interface you are using, since keyboard input is often filtered in various ways by the environment. For example, the situation using X directly on a Sun work station may be radically different from the situation of using a terminal emulator remotely (in the latter case, the "codes" from the keyboard may be dependent on the terminal emulator). If the problem is echoing, then you may well want to use appropriate OS dependent routines that are around in your environment to do exactly what you want, using pragma Interface. But in any case, update to a more recent version, 3.05 is pretty ancient at this stage! ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate [not found] ` <340DF1DD.2736@iop.com> 1997-09-07 0:00 ` Get_Immediate Robert Dewar @ 1997-09-07 0:00 ` Robert Dewar 1997-09-08 0:00 ` Get_Immediate J Giffen 2 siblings, 0 replies; 10+ messages in thread From: Robert Dewar @ 1997-09-07 0:00 UTC (permalink / raw) Jeff said <<I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get the results I wanted. What I got: My program reads each key, and I don't have to type Enter, but it also echoes each key. When I type a key that sends an escape sequence, like an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A, before my program gets the first character of the sequence. Also, some keys, like Ctrl-C did not get read by my program. What I want: I want to read without any echoing and be able to get all of the codes the keyboard can generate. Can you tell me how to get these results?>> GNAT 3.05 is very far out of date, get a more recent version. ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Get_Immediate [not found] ` <340DF1DD.2736@iop.com> 1997-09-07 0:00 ` Get_Immediate Robert Dewar 1997-09-07 0:00 ` Get_Immediate Robert Dewar @ 1997-09-08 0:00 ` J Giffen 2 siblings, 0 replies; 10+ messages in thread From: J Giffen @ 1997-09-08 0:00 UTC (permalink / raw) To: Jeff Glenn Jeff Glenn wrote: > > Mr. Dewar, > > I tried using Get_Immediate with GNAT (3.05) on Solaris and I didn't get > the results I wanted. > > What I got: > > My program reads each key, and I don't have to type Enter, but it also > echoes each key. When I type a key that sends an escape sequence, like > an arrow key, all of the codes in the sequence are echoed, e.g. ^[[A, > before my program gets the first character of the sequence. Also, some > keys, like Ctrl-C did not get read by my program. > > What I want: > > I want to read without any echoing and be able to get all of the codes > the keyboard can generate. > > Can you tell me how to get these results? > > Thanks! > > Jeff Glenn > > jeff@iop.com It sounds like the computer's sampling rate to the keyboard is faster than what the program is willing to accommodate. This might be solved by going into CMOS Setup and altering the keyboard sampling rate. I'd try slowing it down some. ASCII for cursor right, left, up and down is 1C, 1D, 1E & 1F. Other keys have others. A switch closure for a certain row and column on the keyboard is picked up when a scanning frequency detects it and sends the signal to the motherboard. ^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2004-01-23 16:03 UTC | newest] Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2003-12-10 16:00 Get_Immediate Xavier Serrand 2003-12-10 18:33 ` Get_Immediate tmoran 2003-12-11 2:32 ` Get_Immediate Xavier Serrand 2003-12-11 7:37 ` Get_Immediate tmoran 2003-12-11 15:38 ` Get_Immediate Robert A Duff 2003-12-10 18:44 ` Get_Immediate Jeffrey Carter 2004-01-23 16:03 ` Get_Immediate mla154 -- strict thread matches above, loose matches on Subject: below -- 1997-08-21 0:00 floating point comparison Robert Dewar 1997-08-22 0:00 ` Jim Carr 1997-08-22 0:00 ` Robert Dewar 1997-08-23 0:00 ` Jim Carr 1997-08-24 0:00 ` Robert Dewar 1997-08-29 0:00 ` Andrew V. Nesterov 1997-08-29 0:00 ` Robert Dewar [not found] ` <340DF1DD.2736@iop.com> 1997-09-07 0:00 ` Get_Immediate Robert Dewar 1997-09-07 0:00 ` Get_Immediate Robert Dewar 1997-09-08 0:00 ` Get_Immediate J Giffen