

The Long, Winding Road to Writing an Atmel AT89C2051 Microcontroller

2023-07-04

People take a lot of things for granted when they use modern desktops. Can you imagine writing assembly code to target multiple x86 systems? How would you go about addressing memory-- what if your program worked perfectly fine on a build system, but then exceeded the size of a different x86 desktop's memory? After many years of sifting the information toilet that is the free and open internet, I decided I should (probably) share a little bit of critical information about modern computer systems and how they work. Because (clearly)-- the majority of people (even those developing software) are very confused about how these systems work and what their goals actually are.

Meet the C programming standard. Why target a C programming environment instead of an assembler? If you answered something along the lines of "Uh-- because it's easier?", then you are in desperate need of my help. If you (simply) do not understand how to answer that question, then (honestly) you're not much less knowledgeable than a lot of software developers nowadays (lol). The fact is, it's a lot easier to target an assembler than a C programming environment. And (historically), this is the way software development was accomplished-- manually targeting an assembler. Modern desktops implement a C programming environment. And, a compiler targets a system assembler *for* a developer. But, it's a lot harder to write software for a C compiler than it is to develop a solution for a simple assembler.

So, I'll ask again: "Why target a C compiler?" That first paragraph (above) about targeting many different flavors of the same system yields some insight into the correct answer to this question. This type of philosophical quandary forms the infrastructure of modern desktop implementations: complicated software suites that run (in "user space") on top of a system kernel (executing in "kernel space"). A kernel is a critical piece of system software that makes implementing a C programming environment possible. Typically (in the case of old, old, old x86 desktops), a piece of "normalizing" software (a system "BIOS" that occupies a long term storage chip on a PCB) starts by stabilizing some race conditions a motherboard uncontrollably produces when it receives power.

First, the system software is loaded to a "kernel space" (a security feature that makes hijacking a running system difficult). After performing some nifty "normalizing" procedures, the software tries loading a kernel (there's some storage "boot record" settings that make this possible, if you're not privy). Once BIOS software replaces itself with a kernel, that single process is (essentially) like-- when you load Super Mario Brothers on an old Nintendo. Really, it's the only program executing on the system. Interestingly, a kernel is a "timesharing" suite that launches its own software. So-- you can implement a programming standard (like, uh-- the C programming standard) that makes loading some precompiled byte code possible. Ahh?? :D

So (pretty much), we can use a C programming standard to implement a piece of software that runs software developed for our C programming standard. It kinda feels like we're chasing our tail a bit, here. But (basically), we're trying to answer that first question back there: "Can you imagine writing assembly code to target multiple x86 systems?" Well, uh-- yes. Yes, we can! :o In fact, we can write assembly code to target *any* system-- no matter how it works. Because, we can adopt a programming standard. However (as I mentioned), it's always easier to write software for an assembler than it is to target a C programming environment. There's a lot of reasons for that.

Still don't believe me? "No, no-- I'm a Java programmer," you say. "I was taught in school (by people with doctorates) that of course-- of course it is always easier to target a programming language like Java. It certainly is easier to implement software using a C programming environment instead of an assembler, too. I am *sure* of this!" lol. No. For starters, Java is *not* a programming language (please do not call it that because it irritates me to no end). The only "programming language" in wide use is the C programming standard. A Java development kit can use a C programming environment to implement byte code. And then, a C programming environment can be used to execute that byte code (lol). But, to say that Java is anything but a primitive scripting language-- ho, hum. No. The same is true for Lisp, Python, Perl, Ruby, etc.

Getting back to assembly versus the C programming standard, let's just-- keep getting more complicated. Shall we? 'cause what we're gonna do-- we're gonna stick six video devices on our x86 and implement a bunch of crazy graphical features and employ some multivariable calculus (the so called "threeeee deeeee gyaaamer games"). And (meanwhile), here I am over here with my simple microcontroller with 128 bytes of volatile memory and 2K bytes of long term storage. And, I *know* I will always have that (that's my target). And, that will never change. And-- what the fuck you got goin' on over therrre?? Wellll-- added this crazy blu-ray device and 7.1 surround sound oscillator circuits and some other kinda bullshit (don't even know what it does or how it works, but I put it on there 'cause-- everybody else was doin' it).

And now, let's talk about all the ridiculous libraries you're gonna need to make that crazy shit work. Gotta have the latest DirectX seven hundred forty-one (or whatever the fuck). And, oh no-- there's this new gyaaamer game you gotta have. And, shit-- the whole system's gotta be re-worked just so that one piece of shit software suite will even run on there. And, yeah-- here I am over here trying to decide how I will use 128 bytes of volatile memory and 2K bytes of long term storage to implement a storage device. And, I always know my target's specs. And, they will never change. And, I just need a few lines of assembly code to solve my problem. And like-- what the fuck are you packin' in your crazy complicated desktop, nowww??

You see (when we're targeting that there C programming standard), we're importing Windows curses (or ncurses for actual operating systems) just so we can take some user input. And (with my simple microcontroller), I'm just waiting for a signal so I can respond. And-- we're targeting OpenGL and DirectX to wrap some crazy wireframe animations with FULLY RENDERED GOD DAMN TEXTURES, AND SHITTT!!!! And, my little microcontroller just gots a simple serial interface and waits patiently for data. And, entire graphical APIs are implemented on top of C programming environments (barely functional Microsoft Windows versus a sophisticated suite like the X Window System). I'm even working on my own API called "nipple implements pixel printing like ecma" that runs on top of a window manager called "tit imparts tiles" (lol).

That's all great-- if we're gonna try and standardize the, well-- un-standardized. But, I mean-- I just wanna develop a simple interface (like a USB keyboard or a remote control mouse). And, we use a (highly) standardized chip for that: a microcontroller (like-- the Atmel at89c2051, for example). And, that's (definitely) not an x86. That's an old Intel 8051 clone. And, there *is* no standardized C compiler for that. There's sdcc. But, let's see-- derrr-- the first thing you'll be doing is importing some non-standardized library just to make a C compiler work. So, like-- why aren't we just concentrating on the standardized specs and assembler and forgetting the C standard if we're targeting that?

Well-- it's a question that was very easy for *me* to answer. Uh-- not so much for many others, apparently. That's why I decided to spend some time educating people about the C programming standard and its actual purpose. And yeah (when we're targeting an incredibly common desktop that has a standardized C programming environment), we might consider adopting the C programming standard. Then, we can develop software that is more "portable" for common systems. But when we just wanna implement a ram-disk on a breadboard, a few lines of assembly will take care of that very easily. So-- there's a very common device that is used to write a microcontroller, right? And, that device just-- plugs into a USB port and waits for some bytes penned by an assembler-- right??

Uh, well-- a lot of people use what's called an "Arduino" device to do that. And, they use a C compiler to develop its software (and strangely call it "firmware" instead of software). And (get this), they don't even use a *standardized* C compiler. The Arduino (oddly) has its own?? And, the C compiler itself sends the bytes. And, you gotta have *that* crazy piece of software just to send them! Or, you gotta have a replacement suite (like AVRdude). Huh. The choices developers make-- the choices *you* will make as a developer-- they affect everybody else down the line. They affect how people will solve future problems. And the wrong ones are being made, here. So please, think things through a bit. It will be (greatly) appreciated in the future.

I'm not going to tell you that the entirety of *my* solution is desirable. It has its flaws (mainly a butt-load of ttl devices that are used to set S-R latches). But, those flaws are (basically) nothing compared to the ridiculous software required to solve the Arduino's issues. lol. The problem with the free and open internet (possibly one of the biggest problems facing the modern world) is the fact that-- so many unknowledgeable people are hosting inaccurate information. Wikipedia (as a basic example) is a repository of fascist propaganda-- designed to belittle and humiliate human beings under the guise of sharing "public knowledge". It's not all bad. But when it's wrong, it's wrong. But if people don't know they should take information from a public domain with a grain of salt, we have serious problems.

The entire principle behind free speech is that-- it empowers a reader. But, a reader can only benefit if he/she takes the time to understand where the information comes from. You've gotta "follow the money" (as a very wise fellow employee used to remind me). And, you'd be surprised how easy it is to believe something that is (in reality) completely untrue. And when it comes to developing software and homemade computer circuits, the average "hacker" (even 8-bit system designer) doesn't know shit from shinola nowadays. But, people will talk about these things like *their* solution is common knowledge (and there is no other way to solve a problem). That's just it though-- if *you* are solving a problem, then you should be developing your *own* solution. And if you don't take the time to understand how something works, then you are invoking propaganda as gospel. Are you okay with that?

When I initiated my endeavor to write a microcontroller, I was already aware of the fact that most people like to use an Arduino system to do that sort of thing. I originally encountered that strange idea when I was learning about computer hardware from a popular 8-bit designer who goes by the name "Ben Eater" (you can find his 8-bit computer system videos here, and they're a great resource for breadboard newcomers). I learned most of the basics about computer circuits from the guy. But, I disagreed with his approach and many of his goals. Mainly, Eater's priority is to teach and move on. I kinda feel like-- if you're going to teach people something, why not encourage them to design something practical as well? And while Eater's projects are educational, they're not exactly useful (in *my* opinion). I'm just a dick like that.

I learned how to write software for the very first time (using a MinGW variant called "Bloodshed") in a few hours one Saturday I was fortunate enough to have off from work and school (while attending college at UTPB). And, I soaked up all the information a person would ever need to implement any type of software on a modern computer system during that period spanning only part of one day. But (right away), I found the (what I now know is very common) approach to "teaching" others about writing software to be completely bass ackwards. I wish I could understand why it is that when people wanna teach, they always want to put the cart before the horse.

I think people just wanna get right to the part they consider most interesting and then-- just skip the boring educational information that can actually teach people how something works. Unfortunately (if you don't teach people the fundamentals), people will be very confused about the systems they develop solutions for. And, their solutions will never be as sophisticated as they really *could* be. The first exercise I normally encounter when I brush up on developing software (or computer hardware) is some sort of "Hello World!" nonsense. It doesn't matter what you're looking to do-- everyone always wants to feed you that kinda shit. And then, they just wanna move on to something more amusing (I guess). And, they leave a lot of questions unanswered. And, they don't take the time to explain how anything is implemented.

I think this approach is a *huge* mistake (and it costs a person being educated many initial opportunities that will be greatly appreciated later down the line). And, it's usually rationalized that this benefits a student. But really it only benefits an educator-- because then, that person doesn't have to spend any time explaining how something works. He/She only has to repeat how to set a variable and write switches and loops, and-- voila-- you're a programmer, now!!! xD But, I mean-- you're *not*. So--

When I see people writing in online forums that developing with the C programming language makes writing software easier (as if that is the C programming standard's purpose), you see-- I *know* that this approach has failed that person. That person doesn't even know which end of the compiler the byte code falls out of. And when this wannabe programmer is developing a suite to power *my* system, I can bet money that the piece of shit will not make good use of my system's hardware. And, many important things won't be considered (perhaps a user would like feature X?). And, I probably will not be able to take some source code and compile it for my own desktop (if that source code is even available). And, many people will not find this hypothetical software useful. And-- all to save an instructor the trouble of actually *educating* people.

And, the same is true for developing computer circuits. People wanna poke a couple of transistors into a breadboard. And, kinda ho-hum about doping silicon a bit. And then, they just wanna lead wires from logic chips and tuck 'em together all nice and neat with some zip ties and call it a day. And (again)-- I look at what's out there, and I just shake my head. You know-- no. That's not it. You're only covering the basics of what can be done. I have so many questions you have left unanswered-- it's not even funny. And so-- when I decided to develop a hypothetical computer system that other people can build and use (I'm speaking "hypothetically" because-- I haven't actually built *anything*, yet), I decided to do it *my* way.

I got my original inspiration for the h8r 8-bit registry from these LED dot matrix devices I discovered on Amazon. And I realized right away, like: "I can make a display with that and develop a computer system that plays old fashioned 8-bit games and give it a simple audio 'driver' (a solenoid with a speaker cone) and teach others how to do the same (just like ol' Ben Eater)". And, I (rather obsessively) began imagining how to arrange these hypothetical LED devices (thirty-two by eight dots) so that I could implement a display that was actually decent enough to do that-- but still be cost effective. And, what I finally settled on was stacking them three high and two wide (yielding a 64x24 dot display).

This is a very limiting design (suffice it to say-- I have devised a 4x6 dot and even (wtf?) 4x4 dot font for this display size). And, I have some pretty interesting ideas about how to make good use of it (including going above and beyond a "gaming" system and implementing a text editor, assembler, and even a whole-ass web server). And, I designed all of this (entirely) without purchasing a single piece of hardware. lol. I even began writing opcodes for a hypothetical assembler that could be used to develop software-- for a system that does not even exist, yet. But, I figured-- it would be a disservice for me to develop a system like this and not share my experience with others. Because, I know that people will benefit from the project.

And so, I finally settled on developing the system and sharing the information on Insanely Witty Stupidity. And, I had these ideas about different types of storage that the h8r 8-bit registry would have (including a simple "ram-disk" and even a device that enables long term storage using a nineties era microcassette-- wtf??). And, I knew that I would like to be able to interface with those storage modules (modules, like-- actual processor registers that are part of the system hardware-- accessed through assembly opcodes) using USB interfaces powering existing desktops. And so, I decided a good place to start would be developing some microcontrollers that can do that. But then-- "How should I write a microcontroller?" I inevitably pondered.

Well-- after spending about a month and a half developing a breadboard project that does that the way *I* like (using the Atmel at89c2051), I'm ready to share my solution on Insanely Witty Stupidity. And (hopefully), I'll be implementing an honest "ram-disk" module (that simply plugs in to a desktop USB port and works like a nand flash) very soon. For the purpose of this article, though-- I'm afraid you'll have to settle for a "Hello World!" type of solution that (simply) makes some LED lights blink. But (intuitively), *my* solution makes use of some simple assembly code and employs basic Unix-like file descriptors to write bytes to a fuckin' chip-- the way it *should* be done.

There is no "Arduino" (or crazy complicated hardware) required. And, I'm not using some whack-- undocumented "C compiler" (or whatever you wanna call that) that only runs on a Macbook Pro, version 37a-4126 (or whatever the fuck ever) to develop a basic piece of software that does-- almost nothing at all. The biggest sin I committed was in transferring data from a USB port to a 74ls279 (four S-R latches on one sixteen pin logic chip). And, I found that by writing any byte on two different USB ttl devices, I could (easily) set and reset a homemade S-R latch (which I improvised by crossing the inputs of two breadboard nor gates I crafted with a cheap transistor kit).

There are better ways to transfer bytes-- especially with improvised serial devices. But, they require precision timing (setting baud transfer rates using crystal oscillators) and signal cleansing (comparators and whatnot). And, I just wanted a cheap and convenient device that allows me to do that with a USB port-- without using any complicated software. And, I didn't wanna build a bunch of extra breadboard bullshit just to make that work (I wanted to use two breadboards, max-- and I succeeded :D ). I could've filled boxes with electronic bullshit just to make all that happen. But instead, I employed a simple (existing) microcontroller that has a USB interface. Just, you know-- I ended up using nineteen of them (necessitating the use of multiple USB hubs).

The Atmel at89c2051's datasheet bothered me right away-- particularly, this strange feature that requires a twelve volt "logic level" (whatever the fuck *that's* supposed to mean) in order to prepare the device's PEROM bank (yes, that's spelled correctly) for programming and "chip erasing" (which is required in order to re-write a non-erased byte). I mean, this requirement-- that's just plain fuckin' stupid, right there. Reeeallyyyyy?? There are many ways to ensure that a user does not accidentally erase a ROM bank. And, *this* stupidity was the best the at89c2051's developers could come up with?

Well needless to say, that feature led to a lot of unnecessary headaches and confusion during my programmer's development process. And, the datasheet was (basically) no help regarding the chip's very obscure feature. The issue (I consider the at89c2051's twelve volt logic mode requirement to be an issue, not a useful feature) is only briefly mentioned. And, some (mysterious) requirements of the twelve volt supply are revealed. And, the datasheet authors never bother to explain how this feature is (typically) implemented or what an electrical engineer's options are. And, I am a (somewhat) experienced circuit developer. But, I have never encountered a problem that required a dual voltage system (or "logic") solution in all the years I've been building these types of things.

Well, I started by hopelessly searching the internet a bit. And, I found a couple of informational pages (written by curious hackers such as myself) that mentioned employing "charge pumps" or "boost converters" to impart a twelve volt charge on the at89c2051. But, those developers explained they only used those solutions because they were (apparently) unwilling to-- just use a twelve volt transformer (which I keep a shit-load of) to implement a "twelve volt logic mode". And (at the time)-- I didn't really understand what the fuckin' problem was, exactly.

So I'm thinking, like: "Okay, I'll give the chip vcc and ground. And then, I'll just poke this twelve volt positive wire into the rst line (the chip's reset pin)." And, I'm like: "But-- does the twelve volt supply go on top of the five volt supply? Or-- do I (then) unplug the five volt supply?" Questions, questions. Anyway, so I try it both ways. And, I set my write lines (p1.0 - p1.7 on the at89c2051). And-- I'm not writing any bits. And, I'm like: "Does it matter if I'm reading from the bits while I'm writing them? And, have I (accidentally) written a byte (necessitating chip erase)? And, am I sure my R-C circuit is working properly? Are the datasheet's timing specs wrong?" And, I'm scouring the at89c2051's shitty datasheet. And-- not a fuckin' word about this, anywhere. Nothing. Nooo-where. lol.

So after about three weeks of scratching my head (and scorching two at89c2051 chips by wiring a twelve volt ground to their vcc grounds) and testing this piece of shit circuit I built (pretty much-- I would come home from work and debug my circuit until it was time to go to bed), my girlfriend's about ready to choke me. And like-- no matter what I try, the mother fucker will *not* write a fuckin' byte. I've never encountered anything like this in my life. Not *ever*. I spent all of two hours designing a circuit that writes bytes to an EEPROM all day long. Once I had the R-C circuit right, I never had any problems. So-- wtf??

So, I'm scribbling out some circuits at work (surfing the web like a mad-man). And, I'm thinking-- this-- this is the magical transistor logic switch or open collector (or whatever the fuck I was working on at the time) that's gonna make my shitty programmer work. And, I'm finally going through the "loop" of this circuit in the back of my mind-- thinking: "Where's the ground? God damn it-- where is the fucking *ground* on this mother fuckerrr??" And, that's just it-- a circuit is a loop. It's always a single loop-- from start to finish.

I finally tried a DIFFERENT type of search, confirming a nagging suspicion I was lucky enough to have. And as it turns out, you can use an external twelve volt transformer to implement the at89c2051's logic shifter. But, it needs a common ground. The chip's vcc ground would need to use that twelve volt ground in order for a separate transformer to impart a twelve volt charge to its rst line. Well-- I suppose you could use zener diodes or voltage regulators on every single line in order to do that (that would work as far as I know). Another option would be powering an entire breadboard with a twelve volt transformer and adding a twelve volt to five volt step-down converter to power the at89c2051-- just so you can have a twelve volt "logic mode" with a common ground.

So, I suppose (finally) I was able to understand why people were saying they didn't "want" to use a separate transformer for their programmer's twelve volt mode. What I still *don't* understand though (yeah, *still* not getting this part)-- why the FUCK did nobody bother to mention any of this (again, even the at89c2051's piece of shit datasheet mentions nothing about implementing a twelve volt "logic mode"-- only that you gotta have one to switch the chip to its programming mode)? I scoured many at89c2051 programmer pages and the chip's datasheet. And the words "common ground"-- not mentioned, anywhere.

Well being a person who doesn't like to limit myself, I decided I would like to try my hand at building my own boost converter circuit (using an inductor and three different transistors and some zener diodes). And, I attempted this three different times. And, I scorched the inductor-- all three times. I'm not gonna tell you those circuit schematics were wrong. But I mean, looped together means looped together. And, attaching *here* is no different than attaching *there*-- right??

After my third attempt, I returned to this guy's at89c2051 programmer page and tried building a charge pump, instead. :D *His* programmer stupidly uses an Arduino to do some of the work. But since his charge pump seemed to yield some success, I thought: "Well-- why not try what that dude did?" Well (right off the bat), he mentions having his Arduino "produce a PWM wave" for his charge pump (lol). And, I'm like: "Oh, Gahhhd-- here we go-- with the ignorant stupidity." But, at least the guy bothered to share the frequency (62kHz) of his "PWM signal". ;)

So, the first thing I did was think about producing a 62 kilohertz wave using a 555 timer (I didn't even need to research whether or not a 555 timer can produce a PWM, 'cause-- it is *literally* an oscillator device and I already *knew* it could be used to do that somehow or another). And rather than reading mindless gibberish for fifteen minutes (that may or may not help), I opted to try probing DuckDuckGo with a search like "555 diodes" (or something similar). And, I enabled my browser's javascript engine (even though I don't like using it) because-- I was hoping to just find a circuit schematic image and copy it. And after checking 8-10 images, I found one that used a set of diodes to produce a perfect 50% duty cycle (lol). It's amazing what a little knowledge can do for you (that's why it's important to take the time to understand how things work).
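
(If you wanna check the napkin math before you melt anything: with the diode steering trick, each half of the 555's cycle charges through its own resistor. Assuming equal resistors R and a timing capacitor C-- and these component values are mine, not the Leap page's:

  t_high = t_low = 0.693 * R * C
  f = 1 / (2 * 0.693 * R * C) = 0.72 / (R * C)

So for roughly 62kHz, you want R * C near 11.6 microseconds-- something like a 1.2K resistor with a 10 nanoFarad capacitor lands you around 60kHz, which is plenty close for a charge pump.)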

So, the charge pump from that there "Leap" project page primes two capacitors with 62kHz PWM outputs-- and a third with their inverse. So, I attached two jumpers to two capacitors and a third to an open collector. And, I attached the open collector output to the third capacitor (duh). And, I added some diodes and a twelve volt zener diode (the project charge pump actually produces a sixteen volt output, so you gotta limit the voltage to (at most) 12.5 volts according to the datasheet). And (for the first time EVERRR), I was able to pull something like 9.89 volts from my "logic" level to my vcc's five volts ground (obviously, I was unable to pull twelve volts from my external transformer to my vcc ground-- only to the transformer's *own* ground). And so, I thought: "Well maybe I can write a fuckin' byte, now."

Well-- nothing changed, lol. So at this point, I can only think: "Well, I'm still not *quite* where I wanna be with my twelve volt supply." And (after all), the datasheet clearly states (one of the few things its authors actually *bothered* to share about the chip) that the minimum programming voltage is 11.5 volts. So, I decided to read carefully about the Leap project's charge pump. And, it uses ceramic capacitors. And, I didn't have any that were the required size (220 nanoFarad, rated for 25 volts-- which clearly doesn't need to be larger than five volts, but whatever). I only had those shitty can shaped capacitors.

So I ordered a nice, big ceramic capacitor kit (I prefer those type anyway). And, I tried my luck. And then, I was able to pull 10.89 volts (lol). So of course, I'm like: "Motha fucker! Gimme some fuckin' volts! God damn it!! What is the fuckin' deal, here??" And so, I re-checked the Leap project's circuit components one piece at a time. And, I noticed (at this point) that the only difference between that guy's circuit and mine was the diodes. I used some nice, fat ones. I figured: "Well, I'm building a charge pump. I might as well have some big, fat fuckers that won't catch fire and burn the house down." I never even thought twice about it or bothered to check the diode type (who the hell checks a diode tag?).

So, I researched the recommended diode's specs (that would be the "1N4148"). And as it turns out, it's a little wimpy lookin' fuckin' thing (lol). But, I did actually have some. And, it is a diode that is designed for speed rather than power. Well, I suppose (in hindsight) speed is more important for a charge pump (which is *literally* trading current for voltage). So, I quickly changed my diodes to the recommended ones. And, I was able to pull 11.69 volts! :D So, I tried writing some bytes (a complicated procedure when you're just poking wires and pressing momentary switches). And (for the very first time), I was able to write bytes, read them (by switching p3.4 from high to low), switch addresses, and reset. Everything *finally* worked the way it was supposed to (according to the at89c2051's piece of shit datasheet). I was astounded.

That was week four (lol). I spent the following week (finally) adding some fuckin' 74ls279 S-R chips and ttl devices. And I began having voltage problems, again. So, the problem with using 74ls279 devices to power an at89c2051 programmer is that-- their outputs only swing up to about 3.3 volts (even on a five volt rail). You can get away with 3.3 volts when you're setting bits (according to the datasheet). Setting your xtal1 and p3.2 (for switching addresses and writing or chip erase) is fine, too. But, powering a charge pump requires some care. You can't just supply the 555's power from a 3.3 volt output. That shit don't work (lol). I had to attach it to the rail and leave it. And then, I used transistors to turn my PWM supplies' output on and off with the 74ls279's output.

I also found I needed to use open collectors to switch on my programmer's five volt and twelve volt logic modes (otherwise, 3.3 volts are being used to power them-- and it's just not enough). So, I ended up using those bits' "reset" modes to turn the logic modes on (so when their outputs' LEDs are dark, the logic modes are actually active, which is kinda confusing). And then, I found my charge pump could supply 11.89 volts when I used ttl devices and a BASH terminal to control the programmer (which was pretty damn exciting!). And then, I discovered a brand new problem.

Up until that moment, I used an R-C circuit for xtal1 (switching byte addresses) and p3.2 (writing bytes or chip erase). And, I found that p3.2 worked fine from a terminal. But, switching addresses-- kinda hit and miss. And that was a *real* pain in the butt to confirm, too-- 'cause I was having to power off the device and flip dip switches to switch to a "manual" mode so I could read bytes (I found that the 74ls279 chips had trouble storing bits-- and preferred to switch to "set" mode on their own when my read LEDs were active). And then, I had to power the device *back* on (lol) and switch p3.4 low to verify an address switched. And then, I could use terminal commands to try switching addresses. And, I couldn't just do that while trying to *write* bytes from a terminal.

What I found is that switching addresses had (probably) less than a fifty percent chance of success. *Writing* bytes and chip erasing (strangely) seemed to work perfectly. I had some suspicions right away. And, I (correctly) checked the at89c2051's stupid datasheet. And, the timing requirement for switching addresses is at least 200 nanoseconds. And, the timing requirement for writing a byte is a range from 1 to 110 microseconds. And then, the shitty datasheet just says (randomly): "You gotta pulse p3.2 for 10 milliseconds to chip erase" or something similar. And, it gives no maximum value (it also doesn't mention the fact that the write lines have to be held high in order to do a "proper" chip erase with all high byte values that can *actually* be written afterwards-- sigh).

Anyway, so my assumption was that using an R-C circuit to implement a microsecond timer was doable-- but a nanosecond timer was (possibly) not going to work with a breadboard and some shitty capacitors made in China. So, I decided to replace xtal1's flip switch with a second 555 timer (this time, configured as a monostable wave generator). And, let's see-- perhaps that would be a-- 100 picoFarad capacitor with a 2K resistor (should be about 220 nanoseconds)? Once I replaced xtal1's R-C circuit, I found I could switch addresses by setting and resetting its ttl device *every single time*.
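
(The napkin math for a 555 in monostable mode, using the datasheet's usual formula with the values above:

  t = 1.1 * R * C = 1.1 * 2000 ohms * 100 picoFarads = 220 nanoseconds

Comfortably past the at89c2051's 200 nanosecond minimum-- and it's a pulse a breadboard can actually deliver.)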

So, I finally tried developing a simple 8051 assembly program using sdcc (which I spent time compiling for the latest Slackware systems I build packages for). Well first, I had to compile sdcc for bonslack-15.0 running on top of armv5, armv7, and aarch64-- using an existing SlackBuild, even! I swear-- if it wasn't for trivial problems, I would have none. I found (using various Raspberry Pis) that cc1plus tends to either summon Linux's memory overcommit hitman (the OOM killer)-- or (when adding swap from a network system I typically use for that-- 'cause I'm not putting swap on an mmc device serving a system with a gigabyte of memory-- that's fuckin' stupid) totally swap the system (rendering it completely useless).

I finally discovered that-- if I use a plain, old mechanical disk that I've had for nineteen years (at the time of this writing) to host a one gigabyte swap file, then the dipshit builds perfectly fine using bonslack-15.0 on top of a Raspberry Pi (version 2 or 3). It may be necessary to start the SlackBuild without an X server running (although, I'm still not convinced about that-- I just didn't bother re-trying it after my aarch64 system froze). And-- all I wanted was a fuckin' 8051 assembler (for God's sake). And then, I attempted writing a simple assembly program (using the 8051's usual opcodes-- which should work fine according to the datasheet) that sets and unsets p1.0 - p1.7 (which would be viewable by attaching some LEDs).
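
(If you're fighting the same build, the swap-file workaround is just the usual Linux recipe-- a minimal sketch, assuming the old disk is mounted at /mnt/olddisk (that path is mine, not gospel):

  dd if=/dev/zero of=/mnt/olddisk/swapfile bs=1M count=1024   # one gigabyte of zeros
  chmod 600 /mnt/olddisk/swapfile                             # swapon complains, otherwise
  mkswap /mnt/olddisk/swapfile
  swapon /mnt/olddisk/swapfile

And when the build's done, "swapoff /mnt/olddisk/swapfile" puts things back the way they were.)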

And so, I tried assembling my project (blinky_light.s). And-- I'll be damned if I didn't run into *another* shit-show (lol). So apparently, sdcc doesn't like it when you just-- command its assembler to write some byte code for you (because, well-- I guess that would be too easy). Instead, it has this crazy shit where it's gotta-- write a bunch of different types of standardized "pseudo-code" files. And then you can convert that crazy bullshit to, you know-- some actual bytes (which is, like-- the whole fuckin' point of using an assembler from what I understand) using GNU objcopy.

I found that the correct procedure for producing byte code (when targeting the 8051) goes like this: 1) "sdas8051 -los project.s", 2) "sdld -i project", 3) "packihx project.ihx > project.hex", 4) "objcopy -Iihex -Obinary project.hex project.bin". And then (obviously), that there "project.bin" (or whatever name you would like to give the write file for objcopy) would be the, uh-- yes, the actual *bytes*-- the assembled program. And, I believe I ended up with a program that was all of forty-one bytes (lol). And so (next), I wrote "kkk" (which stands for "kkk keys knowledge"). xD
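
But before the naming sidebar-- here's that four step dance as one throwaway shell script (a minimal sketch using my blinky_light project name-- substitute your own):

  #!/bin/bash
  # assemble, link, and extract the actual bytes for an 8051 target
  sdas8051 -los blinky_light.s                   # assemble: emits .rel (plus .lst and .sym)
  sdld -i blinky_light                           # link: emits blinky_light.ihx
  packihx blinky_light.ihx > blinky_light.hex    # normalize the Intel hex
  objcopy -Iihex -Obinary blinky_light.hex blinky_light.bin   # the actual bytes, at last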

Ohhh-- I just can't help myself from triggering fascist dipshits who fear the world we live in. It is my lot in life (it's who I am inside). But, yeah-- why did I write a script that programs an at89c2051 with my breadboard programmer and name it after the (possibly) most notorious faction of murderin', racist imbeciles to ever walk the face of the earth? Well if Wikipedia hosts an article discussing this, they will (likely) kick things off by making unsubstantiated claims that I am a white power racist and should be dismembered and hung from spikes lining a protective barrier (built by power hungry socialists in a dystopian future they seek).

To the disdain of Wikipedia authors, I (inconveniently) am not descended from a purist "white" bloodline (which has no clear definition in white power racism). I am actually one-eighth Choctaw Indian (yes, I say "Indian" because that is a common nomenclature with a clear definition). And so (according to white power racists), I am what is called a "mongrel" (I believe that is the correct name for it). And therefore, I am an abomination. And, I should not be allowed to exist (probably slaughtered on sight). And so (unfortunately for Wikipedia's bottom line), I could never become a white power racist and lead a faction like the KKK. lol.

And so, why name a script "kkk"? Well, I *always* give my software unsavory names-- because I find it humorous, and it helps me remember. It's a tradition I started when I began developing the mic independent collection (a repository of my best hacks and libraries and scripts that is still incomplete to this day). I have many funny (apparently "triggering") program names in my repository ("clit liberated internet teller", "hiccup isn't cluelessly compliant unlike posix", "aids implements desktop simulations", "bones obliterates non-extensible software", "death exposes all the hardware", and "afro fetches remote ornaments" come to mind). And if that bothers you, well then-- perhaps you should rename the script to suit your own needs (and shut your fuckin' mouth, dumb-shit). ;)

And sooo (with an assembled binary)-- that left me with the task of attempting to write the file to an at89c2051 using my programmer. But (at this point), I mean-- I was pretty confident that writing a device would be successful. That-- That is why it pays to take the time to understand how something works. Because then, you can build whatever crazy shit you want. And (in the end), you don't think-- you *know* the mother fucker will work. There is NO QUESTION. But, I tried out the ol' kkk for the first time. And, there were some hiccups (you gotta flip on your write lines and switch your programmer to "scripting" mode using three other dip switches). But, uh-- I'll be God damned-- it worked PERFECTLY! xD

And, well-- the next test was popping this poor, tortured at89c2051 into a fresh breadboard and giving it an oscillator (omg-- an actual oscillator to actually *use* the fuckin' thing) and power lines and some lights. And so, I went ahead and ordered some actual crystal oscillators (yeah, I didn't even have any). And, the slowest one was wayyy too fast to see any lights blinking (lol). So, I opted to try out an old circuit I designed with a 555 timer that allows flipping dip switches to switch from really low speeds to insanely high speeds. And then, finally-- I could see the lights flashing! And so, I knew kkk didn't (necessarily) need a byte ordering mode (big endian versus little endian) or anything like that. Simply assemble a program with sdcc-- and then write the bytes to an at89c2051. Uh-- was that (really) so much to ask for?

And now, to reveal my unhinged insanity on the public domain:

at89c2051/01.jpg

at89c2051/02.jpg

at89c2051/03.jpg

at89c2051/04.jpg

at89c2051/05.jpg

at89c2051/06.jpg

at89c2051/07.jpg

at89c2051/08.jpg

at89c2051/09.jpg

at89c2051/10.jpg

at89c2051/11.jpg

at89c2051/12.jpg

at89c2051/13.jpg

I noticed later-- one of my read lines' resistors was not seated when I took some of these photographs. You can see it (poking up) in front of a side shot that features the at89c2051 seated in its IC socket. That little mother fucker. Wait until I get my hands on that-- piece of shit. D:< If you envy the embossed digits labeling the load order of my USB hubs' upstream ports (lol), I found this label maker to work pretty decently (and felt like writing about it). I tried a different embossing labeler in the past (manufactured by DYMO). And, it punched a few digits-- then broke. What a piece of shit. Also (just fyi), I found that my Raspberry Pi 3 would only activate all nineteen ttl devices if I attached the hubs to a USB 3.0 "powered" hub.

If you're trying to understand why more than one ttl device shares power outputs with the breadboard's rails, it's because-- I found that some of the 74ls279 bits randomly set without input (and do better when they share their power output with the system). And, there are a couple of jumper leads just kinda-- hanging from the corners of the at89c2051 breadboard for testing purposes (I kept having to check rst's voltage with a multimeter to verify it switches to its twelve volt logic mode, so I just left those wires dangling there in case I needed them). Also, I never got around to recreating my twelve volt logic setting for use with my device's "debug mode" (kinda lost that when I tried powering the device's PWM circuit with a 3.3 volt output). So, there's just (randomly) a slider switch hanging there with a dip switch that doesn't do anything.

If you are a mad scientist like me (mwuh-ha-ha-ha-haaa!) and would like to build an Atmel at89c2051 programmer using breadboards and USB ttl devices like *I* did, then you can use this electrical schematic I crafted (hopefully, a perfect duplicate) as a guide:
at89c2051_ttl_writer.jpg
COPYING
readme

And, you can use my kooky script for Bash (kkk) to write the nineteen ttl devices:
kkk
COPYING

And, if you're wondering what this "echo_fancy" command is all about (set by kkk), you can borrow it as well if you'd like (although-- you might want to change its path from "/mic/sh" to something else):
echo_fancy
COPYING

I started Insanely Witty Stupidity's "official" Atmel at89c2051 project page (which has a link for the "blinky light" project mentioned in this article). Insanely Witty Stupidity projects (like the programmer schematic linked above) will be released under the Honest Blueprint License (hbl). I also took the liberty of developing the Honest Recipe License-- a license for sharing source code and other types of recipes. The GNU General Public License (for example) is an elaborate "catch-all" license that is presumed to allow developers to share a project. But, the document (oppressively) inhibits creators from distributing their work without hosting its "ingredients" in many unfair ways (as defined by the license).

The idea with a recipe license is to allow a developer to share a "recipe" (which is an intellectual solution to a problem)-- but NOT A FINISHED PRODUCT. And, there are no stipulations of Insanely Witty Stupidity's "honest" licenses other than-- the released work *can not* be copyrighted (which *should* be impossible in a capitalist civilization but is enforced by powerful socialists in ours). If you are a developer attempting to release source code using the hrl, you can release a separate scripting project (using Insanely Witty Stupidity's Honest Scripting License) that compiles source code using one or more targets (perhaps you would like to share an autobuild "makefile", for example). See Insanely Witty Stupidity's license page for details.

In other news (no I'm not finished yacking, yet)-- I'm still powering through chapter four of Ghosts of Glory High (version 1.1). It's been a tough re-write for that chapter. The language scarcely resembles what I started with (but tells the same story). And, I'm only about two-thirds of the way through that monstrosity. When I'm finished though, it'll be worth it. And, a reader won't feel like punching me when they're wading through them ridiculous shenanigans (lol). The language of the fourth chapter is particularly bad (I recall struggling with it for Ghosts of Glory High's initial release as well). But, it'll be smooth sailing when I'm done. And then, I plan on doing the same with chapter five. Then, I'll begin writing chapter four of "The Critical Mass of the Hybridized Rodent".

Hopefully you'll start to see some Insanely Thoughtful Article links, again. We finally got rid of the dick-head where I work. And, I used to spend like-- five to ten minutes first thing in the morning writing some thoughtful introductions when I got to work. And, that guy made things kinda weird and unpredictable the past three months. But, we'll see about that. The lady I hoped would be managing the place did not get the job. :( Instead, the lady who managed it perfectly fine before is reprising her role. And, my hopeful has taken a better job with similar perks but better pay (and hopefully better management).

The whole thing's pretty upsetting for me, honestly (I'm losing a brilliant colleague and dear friend who has always worked her butt off to make my job a lot easier). But, the good news is that things will be (somewhat) normal again where I work. And, I won't have to sweat spending thirty seconds to take a piss (lol). It really helps out Insanely Witty Stupidity when I don't have to check around a corner every ten to fifteen seconds-- just 'cause I wanna throw a little something on my server. ;) We are extremely far behind on our deliveries (thanks to the dipshit). So, I'll be battling that for the next few weeks. But then, I think I'll be able to find time to link a few articles each morning. So, keep checking Insanely Witty Stupidity's index page. And, uh-- enjoy the programmer. :D


______________________________________________

Random Fact: The name "Insanely Witty Stupidity" is a sarcastic homage to William Shakespeare's pretentious (and completely confusing) application of poetic literature in character dialog. After all-- the lack of realism portrayed by Shakespeare's character interactions makes it difficult for people to take the work seriously.

html revised 2024-04-23 by Michael Atkins.

The maintainer of insanelywittystupidity.com does not care if people duplicate this page or any part of it-- as long as this notice remains intact.