Saturday, February 25, 2023

How to tune a filament profile in Cura


Getting started with Cura, one problem you may run into fairly quickly is that when you want to switch between different types or brands of filaments, there isn't a good way to save the settings associated with that filament.  By default, Cura has the ability to save "quality profiles" but those end up becoming cumbersome.  Also, Cura does have the ability to choose between filaments, but doing so only changes a very small number of parameters, like print temperature, bed temperature, fan speed, and retraction.  There are a lot more settings that are associated with the current filament, and it would be a lot easier if you could save them so that switching filaments automatically applied the relevant values to your slicing settings.  Thankfully, you can!  But that leads to a new problem.  How do you determine the right values for each parameter?  In this post, I will lay out a simple procedure that you can follow to tune in a new filament profile so you can bring up all of those tuned settings at any time with just a few clicks.  This whole process will take an hour or two, maybe a bit more the first couple of times, but you only need to do it once per filament type, saving you a ton of time in the future.

Cura settings layers

The good news is that Cura is already designed in a way that is meant to allow for this kind of functionality.  Cura stores all of its settings in different layers.  These layers are applied on top of each other one at a time.  If a setting is left blank in a given layer, the value from the previous layer is used.  If the setting is not left blank, the value in the top-most layer overrides the values in the layers below.  These layers include (from bottom to top):

Printer > Extruder > Material > Quality > Overrides

There are likely others as well, but these are the ones you're most likely to interact with.  Each layer only includes a very small number of the settings available in Cura.  For instance, the Printer layer includes things like the build volume and motion limits, the Material layer includes temperatures, cooling, and retraction, Quality includes (by default) things like layer height, wall thickness, and print speed, and the Overrides layer is everything that you manually type into the settings window.  The reason for this is that something like the build volume shouldn't change just because you change to a different filament, or things like the temperature shouldn't change just because you want to print at a higher quality.  So, this system is actually really well done, but the problem is that for most of the layers, there's no way to assign which settings go on which layer within the GUI.  So, you're left with just the settings that Ultimaker decided to put in each layer.
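The fall-through behavior described above is easy to sketch in code. Here's a minimal illustration in Python (the layer contents are made-up examples, not Cura's actual internals or setting names):

```python
def resolve(setting, layers):
    """Return the value from the top-most layer that defines the setting.
    A setting left blank (missing) in an upper layer falls through to the
    layer below it."""
    for layer in reversed(layers):  # check the top-most layer first
        if setting in layer:
            return layer[setting]
    raise KeyError(setting)

# Layers from bottom to top: Printer > Extruder > Material > Quality > Overrides
printer   = {"build_volume": (220, 220, 250), "print_speed": 50}
extruder  = {}
material  = {"print_temp": 205, "print_speed": 60}
quality   = {"layer_height": 0.2}
overrides = {"print_temp": 210}

stack = [printer, extruder, material, quality, overrides]

print(resolve("print_temp", stack))    # manual override wins: 210
print(resolve("print_speed", stack))   # material overrides printer: 60
print(resolve("build_volume", stack))  # only set in printer, falls all the way through
```

The key point is that each layer only needs to define the handful of settings it actually cares about; everything else falls through untouched.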

Install plugins

Luckily, there are a few plugins available for Cura to allow us to do what we want.  Honestly, this should be included in Cura directly, but for now that isn't the case, so we'll need to install a couple of plugins.  First, and most importantly, is the Material Settings plugin.  Second is the Linear Advance Settings plugin.  In order for the Linear Advance Settings plugin to be effective, you will need Linear Advance to be supported by your printer's controller and enabled in your printer's firmware.  If the controller doesn't support Linear Advance, there isn't much you can do about it other than upgrade the controller board.  If your controller does support it, but the firmware doesn't, you will need to install a custom firmware build in order to enable it.  Building Marlin firmware from source is outside the scope of this guide, but if you want to look into it more, there's a good guide here.  If you can't enable Linear Advance, you can still follow the rest of this guide, just skip anything that references Linear Advance.
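If you do end up building Marlin from source, enabling Linear Advance is just a matter of uncommenting a define (shown here as it appears in Marlin 2.x; check your own version's Configuration_adv.h):

```c
// In Marlin's Configuration_adv.h:
#define LIN_ADVANCE
//#define LIN_ADVANCE_K 0.22  // optional: a default K value baked into the firmware;
                              // you'll override it per-material with M900 anyway
```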

Configure Material Settings Plugin

Now that the Material Settings plugin is installed, you can assign any of Cura's settings to the Material layer.  Under the Prepare view in Cura, click the Extruder drop-down, then the Material drop-down, and at the bottom select Manage Materials.  Or, you can just use the keyboard shortcut Ctrl+K.  In this dialog, you can create a new material if it doesn't already exist.  One weird caveat in doing so is that the "Material Type" field should exactly match one that already exists.  So, things like PLA Pro or Silk PLA should both have a material type of just "PLA", ASA should have a material type of "ABS", and TPU should have a material type of "TPU 95A" regardless of its actual shore hardness.  This is just a quirk of Cura, and it will complain if you use a material type that it doesn't already know.  The Display Name field can be anything you want, and is where you should specify materials like Silk PLA, ASA, etc.  Once you've set up the basic parameters, click on the Print Settings tab and then click the Select Settings button at the bottom.  This will open a new dialog that lists every setting that Cura supports.  This can be rather intimidating at first glance, so here are the settings I recommend; you can come back and enable any other settings you want at a later time.  You don't actually have to calibrate all of these settings for every material, it's just good to have them available if you do.  Some of these don't really belong in the Material layer, but there isn't a similar plugin for the Printer or Extruder layers, and the Quality layer doesn't allow overwriting any of the default profiles, so settings marked with an '*' are things that I basically just always set to a default value.


- Outer Wall Wipe Distance

- Top Surface Skin Layers
-- Top Surface Skin Line Width
-- Monotonic Top Surface Order * (always enable this)

- Monotonic Top/Bottom Order * (always enable this)

- Monotonic Ironing Order * (always enable this)

- Skin Edge Support Layers

- Default Printing Temperature

- Printing Temperature Initial Layer

- Initial Printing Temperature

- Build Plate Temperature

- Build Plate Temperature Initial Layer

- Flow
-- Wall Flow
-- Inner Wall(s) Flow

- Top/Bottom Flow

- Top Surface Skin Flow

- Initial Layer Flow

- Standby Temperature

- Linear Advance Factor

- Print Speed

- Enable Acceleration Control * (always enable this)

- Print Acceleration

- Initial Layer Acceleration
-- Initial Layer Travel Acceleration

- Enable Jerk Control * (always enable this)

- Print Jerk

- Initial Layer Jerk

- Retraction Distance

- Retraction Speed

- Retraction Extra Prime Amount

- Minimum Extrusion Distance Window

- Limit Support Retractions

- Combing Mode

- Max Comb Distance With No Retract

- Z Hop When Retracted

- Z Hop Height

- Fan Speed

- Initial Fan Speed

-- Regular Fan Speed at Layer

- Minimum Layer Time

- Support Overhang Angle

Build Plate Adhesion
(mostly just do this if you plan to print ABS/ASA without an enclosure so you can enable a brim)
- Build Plate Adhesion Type

- Brim Width

- Brim Distance

Dual Extrusion

Mesh Fixes

Special Modes

- Enable Coasting

- Coasting Volume

- Overhanging Wall Angle

- Overhanging Wall Speed

- Enable Bridge Settings

- Bridge Wall Coasting

- Bridge Wall Speed

- Bridge Wall Flow

- Bridge Skin Speed

- Bridge Skin Flow

- Bridge Skin Density

- Bridge Fan Speed

- Bridge Has Multiple Layers

TeachingTech Calibration Generators

YouTuber TeachingTech has created a great website for generating calibration patterns that can help greatly simplify the process of determining the right print settings, but I don't completely agree with the order of the tests, so I'll run through my process here.  Instructions for each individual test are detailed on each page of the website, so I won't go into them here.  Start with the machine basics like the Frame Check, PID Autotune, Extruder E-Steps, and First Layer.  These have nothing to do with the material, but you'll need to have those done before you can get good results with the rest of the tests.

Custom Start/End gcode

Once you've got your machine squared away and ready to test, you'll want to grab the start and end gcode from Cura and add them to the calibration test gcode.  You can do this by going to one of the tests, like Temperature, and checking the "Additional start gcode" and "Additional end gcode" boxes, which opens up extra text boxes where you can type in whatever gcode you want.  You can get this gcode from Cura by selecting Settings > Printer > Manage Printers and clicking Machine Settings.  Copy the contents of Cura's Start G-code into the website's Additional start gcode field and Cura's End G-code into the website's Additional end gcode field.  Then add 2 extra lines.  At the bottom of the Additional start gcode, add:

M900 K0.0 ;

If you have already set up a similar filament of the same type, and happen to know the Linear Advance K-factor, you can put that in here instead of 0.0.  Either way, you'll update this value later.

At the top of the Additional end gcode, add:

M400 ;

In fact, you should probably add the M400 at the top of your End G-code in Cura as well, especially if you are running a custom build of Marlin with the gcode buffer size increased from the default.  Otherwise, temperature changes are immediately applied, skipping over the gcode buffer, causing the hotend to be shut off as much as 30 seconds before the print is actually finished.  If your custom start gcode includes bed leveling commands such as G28 or G29, you can set the bed leveling option in the website drop-down to None.
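For illustration, the top of the Additional end gcode (and of Cura's End G-code) might then start out something like this; the cooldown lines are just placeholder examples, and your own end gcode will differ:

```gcode
M400        ; wait for all buffered moves to complete
M104 S0     ; only then shut off the hotend
M140 S0     ; and the bed
```

Putting M400 first guarantees the heaters stay on until the last buffered move has actually been executed.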


Temperature Tower

Now, the first test to run on any new material is the temperature tower.  Follow the instructions, and pick the best result.  Enter the result into Cura by opening Manage Materials, selecting your material from the list, and opening the Print Settings tab.  Enter the resulting temperature under Default Printing Temperature.  For bed adhesion purposes, enter a value about 10 degrees higher for Printing Temperature Initial Layer and Initial Printing Temperature.  Initial Printing Temperature will show a yellow warning if it is higher than the Default Printing Temperature, but this can be ignored.

Linear Advance

Follow the instructions for printing a Linear Advance test pattern, being sure to use the Default Printing Temperature value from the temperature tower, not the increased first layer temperature.  In Cura's Print Settings, enter the resulting K-factor into the Linear Advance Factor field.

Print Speed

Back to the TeachingTech website, open the Print Speed test tab.  Enter the print temperature from the first test into the relevant field, and enter the Linear Advance K-value into the M900 line at the bottom of the Additional start gcode field.  Then generate the test file and print it.  After examining the result and determining the top speed, enter the result into Cura's Print Settings under Print Speed.  PLA is a pretty good material to determine the actual maximum speed for the printer itself that can be used as a default in the future, and then you only really need to run this test for trickier materials like Silk PLA, PETG, or TPU if you want.  Or, at least it will give you a good baseline to start from in the future.


Acceleration

This one is really something that should be assigned to the Extruder layer, not the Material layer, so once you've run it once, you should be able to reuse the value for future materials, unless you get into things that also have a drastically different Print Speed, like TPU.  I suggest setting the Initial Layer acceleration values to something around 50-60% of the value you determine from the test.


Retraction

Retraction is pretty straightforward.  Run the test, record the results in Cura.

Flow Rate

This one is a bit complicated.  First, update the retraction distance and speed values using the previous test results.  Then, note that Cura typically prints at 3 different speeds, calculated as fractions of the Print Speed you configured in the profile: the default speed is used for things like infill and top/bottom skin, the second speed is used for walls, and the third is used for the initial layer.  To get the best results from the flow rate test, you'll want to run it at each of the three speeds, then in Cura use the result from each speed for the features that print at that speed.  For instance, use the flow test printed at the wall speed to configure the wall flow rate, and so on.  Pay close attention to the Preview window and set the Color Scheme in the top-middle to Speed to make sure that the whole model is being printed at the same speed.  If the walls are being set to a lower speed, the two places to check are that your wall speed is set to the same speed as your default speed (reset this once the tests are done), and that the minimum layer time is set high enough.

Speaking of minimum layer time, if your box ends up looking really melted and distorted, then you need to reduce the minimum layer time for the material.  The way you can dial this in is to set the value to something high like 30 seconds, then start reducing the print speed until the box is no longer melted.  Then, to determine what minimum layer time corresponds to that print speed, leave the print speed at the speed that came out clean, reduce the minimum layer time, and slice the model, but don't print it, just look at the Preview window.  Keep reducing the minimum layer time until the preview shows that the print speed has been reduced below the configured speed.  The lowest minimum layer time that doesn't change the print speed is your actual minimum layer time that you'll want to configure in the material profile.  You can add another second or two just to be safe if you want.
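The slowdown logic you're probing here can be reasoned about with a rough back-of-the-envelope model (real slicers also account for acceleration and per-feature speeds, so treat this as illustrative only; the numbers are hypothetical):

```python
def effective_speed(path_length_mm, configured_speed_mm_s, min_layer_time_s):
    """Speed the slicer will actually use for a layer, under a simplified model:
    if the layer would finish faster than the minimum layer time, the speed is
    reduced so the layer takes exactly the minimum time."""
    layer_time = path_length_mm / configured_speed_mm_s
    if layer_time >= min_layer_time_s:
        return configured_speed_mm_s          # slow enough already, no change
    return path_length_mm / min_layer_time_s  # slow down to hit the minimum time

# A small calibration-cube layer: 600 mm of path at 60 mm/s takes 10 s.
# With a 15 s minimum layer time, the slicer must slow down to 40 mm/s.
print(effective_speed(600, 60, 15))  # 40.0
# Dropping the minimum layer time to 10 s or below leaves the speed untouched,
# which is exactly the crossover you're hunting for in the Preview window.
print(effective_speed(600, 60, 10))  # 60
```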

If you do have to dial in the minimum layer time, you'll need to scale up the box in the slicer in order to perform the actual flow rate tests as outlined here.  You don't need to scale the Z axis, just X and Y.

Also, I have noticed that some of the other TeachingTech tests modify the flow rate to 97%.  I'm not sure why (one obvious case is if you cancel the print speed test early, which will leave the flow rate at something other than 100%, but some of the other tests seem to change this as well), but you want to be sure this isn't the case before starting this test.  On an Ender 3, you'll notice it says ">>97%" (or some number that isn't 100%) on the left side of the screen.  An easy way to reset this is to just power cycle the printer before starting this test, or you can go through the menu and change the flow rate.  You can also manually send

M220 S100

from the gcode terminal.  Once you've performed the flow rate tests at the 3 key print speeds, go through the material profile and configure the flow rates for the associated print features.

Other Calibration

At this point, you're probably ready to go.  There are plenty of other things that can be calibrated, but for the most part, I don't bother with further calibrations until they become necessary.  Here are a couple of them.


Cooling

Usually, I just set my cooling to 50% after layer 2, except for high-temp filaments like ASA and PC, where I reduce it to the absolute minimum value that actually spins the fan (around 15% for me, it might be different for you).  However, if you think you need to dial in your cooling more precisely, TeachingTech also has a decent model for that.  It's not on the calibration website, but you can download the model here.


Bridging

Bridging is another setting that I don't usually bother with until I really need to, because it's a pretty complex one to get right and can take a while.  For me, I go with the following settings as my baseline, and adjust as needed.

Bridge Wall Coasting: 50%
Bridge Wall Speed/Bridge Skin Speed: ~25% of the default print speed, or slower
Bridge Wall/Skin Flow: 50%
Bridge Skin Density: 150%
Bridge Fan Speed: 100%

If you want a test model, you can find several online, such as this or this.  Take your pick.


Coasting and Wipe

I don't have a good calibration print for these, but coasting and wipe are good parameters to increase if you're dealing with gaps near your seam (increase outer wall wipe) or excess ooze with filaments like PETG or Silk PLA (increase coasting, which can cause gaps, in which case also increase outer wall wipe).


If you find any other settings that you want to dial in that feel like they should be associated with the Material layer, you can always check the box later, run your tests, and save the results.  Certain other parameters, like wall thickness, infill density, layer height, support settings, etc. should be left out of the Material layer and used in Quality profiles instead.  These are the sorts of settings that are usually listed in the print instructions for a model.  For example, imagine you're printing something made of a bunch of parts, like a Voron, where those settings are specified in the build documents, and imagine that you're using 2 different brands of filament for the 2 different colors.  You want the "Voron Parts" settings stored in a quality profile and the material settings stored in the material profiles, so you can swap materials without changing the physical strength properties you get from the wall and infill settings.  Understanding what the different layers are and what their purpose is will help you use them most effectively, and in turn, give you the most benefit.  Hopefully this guide will help you get the most out of Cura, and the best results from your prints.  Happy Printing!

Tuesday, March 16, 2021

Ghidra Dark Theme (VS Dark color scheme)

I've been using Ghidra for a while now, but its default color scheme annoys me, especially when all of my other development tools and editors are running dark themes. While there is some support for recoloring certain aspects of the UI, not everything can be modified, leading to a rather inconsistent UI experience. The best guide I was able to find was this one, but even it left several dockable panes with the default white background, or even worse, black text in areas that applied the dark theme to the background colors but, for some reason, not to the text. I have been able to extend that guide a bit further, and in the process I also modified the color scheme to my preferred one, to try and match the default dark theme from Visual Studio.

As with the enigmatrix tutorial, I chose to use FlatLaf as the base for my dark theme. You can find the latest version on Maven. Once you select a version, the download link will be in the top-right, and you'll want to select "jar" from the drop-down menu. Or, you can just grab the 1.0 jar here.  Once you download the jar file, I suggest placing it into your %USERPROFILE%\.ghidra folder.  Next, you'll need to modify a couple of Ghidra's launch files so that Ghidra can find it.  First up, you'll want to add the full jar path to the CPATH lines in <GHIDRA_PATH>/support/ (launch.bat on Windows).  The CPATH variable is a colon-delimited list in the .sh file (semicolon-delimited in the .bat file), so just type a colon at the end, followed by the full path to the FlatLaf jar file.

Next, set FlatLaf as the systemlaf in <GHIDRA_PATH>/support/  Just copy this at the bottom of the file:
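As a sketch of what that line looks like, assuming Ghidra's VMARGS= convention in that file and FlatLaf's standard dark look-and-feel class (adjust the class name if you're using a different FlatLaf variant):

```properties
VMARGS=-Dswing.systemlaf=com.formdev.flatlaf.FlatDarkLaf
```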


Now open Ghidra and click Edit>Tool Options and select Tool.  Under Swing Look And Feel, select System, and check the box for Use Inverted Colors.  Click OK and close Ghidra.  Using inverted colors is an important step for converting some of the hard-coded elements that might otherwise use a white background or black text on our new dark themed backgrounds.  Next up is the color assignment.  As I said, I'm using the Visual Studio dark theme as the basis for my theme.  You're free to edit these as you like, but keep in mind that because Use Inverted Colors is enabled, you'll have to invert the RGB values that you actually want displayed (at least in most cases, some colors don't get inverted, it's really inconsistent and annoying).

First of all, you'll want to edit %USERPROFILE%\.ghidra\<GHIDRA_VERSION>\tools\_code_browser.tcd.  There are a lot of values to edit here.  This file doesn't seem to support hex values, so if you want to change any of these from what I have here, you first have to invert the hex code, and then convert the result to a 32-bit signed integer.  e.g. to get hex color #343A40, you would first invert it to #FFCBC5BF (the first FF is because you have to expand to 32-bits, so 00 inverts to FF), and then convert to signed 32-bit integer -3422785.  You can do this using the Windows 10 calculator by switching to Programmer mode and clicking the word size button above the keypad until it displays "DWORD".  Select Hex, enter your desired hex color value, and then XOR FFFFFFFF.  The value displayed next to DEC should be the value to use.
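If you'd rather script the conversion than use the calculator, here's a small helper (my own, just mirroring the invert-then-sign-extend steps above) in Python:

```python
def tcd_color(hex_color: str) -> int:
    """Convert an RGB hex color like '343A40' into the inverted, signed
    32-bit integer that the .tcd file expects."""
    inverted = int(hex_color, 16) ^ 0xFFFFFFFF      # XOR against FFFFFFFF
    # Reinterpret the unsigned 32-bit result as a signed 32-bit integer.
    return inverted - 0x100000000 if inverted >= 0x80000000 else inverted

print(tcd_color("343A40"))  # -3422785, matching the worked example above
```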

Now, we're going to edit the FlatLaf theme itself.  First of all, open up a text editor and copy the following contents into it.  Note that unlike the _code_browser.tcd edits above, this is the full contents of the file, so you don't have to go through and edit a bunch of separate sections.

Save this file (if on Windows, you may need to save it using Unix LF line endings instead of the default CRLF).  Next, rename the jar file you downloaded to .zip, or just open it directly with an archive tool like 7-zip.  Browse to com\formdev\flatlaf and replace the existing file with the one you just created.  Save the archive and rename it back to .jar if necessary.

Now you're done!  Reopen Ghidra, and you should be greeted with a nice dark theme.

There are some weird orange and yellow colors, especially in dialog headers; those are the result of inverting the hard-coded original blue colors.  Aside from that, it should all be themed.  I will say, there were a few colors in the properties files that I couldn't manage to match up to a UI element in the actual application.  It's possible Ghidra doesn't use those theme elements, or it's possible that I just missed them.  In most cases, if I couldn't find them, I left them as bright magenta (which may show up as bright green, depending on the inversion; it's not the darker purple like the selection color in the disassembly pane shown in the screenshot above), so if you happen to find something in the UI that is showing up as magenta or green, please let me know in the comments, and I'll see if I can track it down.

Saturday, July 15, 2017

The MSU-1 Volume Fiasco, Explained

If you've ever tried one of the many amazing SNES hacks utilizing the MSU-1 audio coprocessor, you may have run across information about volume levels referencing "hardware" versions and "emulator" versions of the same hack, as well as "boosted" or "non-boosted" audio files, and may have been confused by the complicated, and often conflicting information about all of these different variants.  I've explained the issue on several forums, but I wanted to go ahead and do a single, unified write-up explaining the issue, as well as the "correct" way to do things.  Hopefully this will help clear up the confusion for people new to the MSU-1.

First of all, before I dig into the full explanation, I'll just cut to the chase.  THERE IS NO LONGER ANY NEED FOR SEPARATE VERSIONS.  You only need one version, and that same version works everywhere.  Many MSU-1 hack authors have fixed their hacks and released them as a single fixed version.  However, since there are still a handful of un-fixed patches floating around, if you happen to have a patch with both "Hardware/SD2SNES" and "Emulator" versions, the correct combination is to use the "Hardware/SD2SNES" version of the patch with non-boosted audio files.  However, if you can't find non-boosted audio files, then you can also use boosted audio files with the "Emulator" version of the patch, but that will not sound as good (see Problem #3 below).  If you're not sure whether your audio files are boosted or not, try out the Hardware/SD2SNES patch in an emulator.  If the audio sounds really loud/distorted, you have boosted audio files.

Also, if you have an older-revision SD2SNES, you'll need to go into the menu and set the MSU-1 audio boost to max (see Less Hacky Workaround #1 below).

Now, to explain the issue...

First, we had higan. Audio was mixed properly, and .pcm audio files were more-or-less properly normalized. Let's call this the correct patch and correct audio files.

Problem #1:

The SD2SNES played MSU-1 audio too quietly. There was a lot of speculation as to why, but eventually ikari realized that the DAC output was high impedance, and the SNES analog mixing inputs were low impedance, which caused reduced volume. This means the problem is in the hardware, and can't be fully fixed without a board revision.
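The loading effect here is just a voltage divider: the source's output impedance and the load's input impedance split the signal. The impedance values below are made up purely for illustration (they are not the SD2SNES's actual figures):

```python
import math

def loaded_gain_db(z_out_ohms: float, z_in_ohms: float) -> float:
    """Voltage divider formed by a source's output impedance driving a
    load's input impedance: Vout/Vin = Zin / (Zout + Zin), in dB."""
    gain = z_in_ohms / (z_out_ohms + z_in_ohms)
    return 20 * math.log10(gain)

# Hypothetical numbers: a high (10 kOhm) output impedance driving a
# low (2 kOhm) input loses most of the signal...
print(round(loaded_gain_db(10_000, 2_000), 1))  # about -15.6 dB
# ...while a low-impedance buffer (like the Rev. H op-amp) driving the
# same input barely loses anything.
print(round(loaded_gain_db(100, 2_000), 1))     # about -0.4 dB
```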

Hacky Workaround #1:

SNES audio tends to be somewhere in the range of -10dBFS to -20dBFS RMS. This leaves a fair amount of headroom. Therefore, it's possible to simply amplify the .pcm files to remove that headroom, allowing them to be more or less the right volume when played on the SD2SNES. Let's call this boosted audio files.

Problem #2:

These newly boosted audio files are much too loud when played on higan. Maintaining 2 separate audio packs is not only a logistical pain in the butt, it's also a huge amount of storage and bandwidth increase.

Hacky Workaround #2:

Instead of releasing separate audio packs, just modify the .asm code so that any time you write to the MSU-1 volume register, you write a smaller value instead of the original. Patch files are small, so uploading 2 versions of the patch is much easier than 2 versions of the audio files. Through rough trial-and-error, it was mostly settled on $60 being the value used for "full volume" and $30 for "half volume" with anything else such as fade effects being adjusted relative to those values. Let's call this the "emulator version", and the original is now the "hardware/SD2SNES version". These are sometimes also referred to as the "FF version" (aka the hardware version) and the "60 version" (aka emulator version). To reiterate, the hardware/FF version is the original, "correct" patch.
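To put rough numbers on those trial-and-error register values: treating the volume register as a simple linear gain (an assumption on my part), writing $60 instead of $FF attenuates by roughly 8.5 dB, which lands in the same ballpark as the headroom that was boosted out of the files:

```python
import math

def register_gain_db(value: int, full: int = 0xFF) -> float:
    """Attenuation in dB for an MSU-1 volume register value, assuming the
    register acts as a plain linear gain of value / full-scale."""
    return 20 * math.log10(value / full)

print(round(register_gain_db(0x60), 1))  # -8.5  ("full volume" in the emulator patch)
print(round(register_gain_db(0x30), 1))  # -14.5 ("half volume")
```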

Less Hacky Workaround #1:

ikari realized that he could actually do this same audio boosting in firmware, in realtime, between reading the file and sending it to the DAC. This would allow using "correct audio" files and the "hardware/FF" patch, and still get more or less the correct audio levels. This essentially eliminated any need for HW#1, which in turn eliminated the need for HW#2.

Non-Hacky, Correct Fix #1:

SD2SNES Rev. H includes a unity-gain op-amp on the output of the MSU-1 DAC, which solves the impedance problem, fixing the volume level. Along with LHW#1, all revisions of the SD2SNES can now output the correct volume levels. HW#1 and HW#2 are completely unnecessary.

Non-Hacky, Correct Fix #2:

For those of you with an older hardware revision of the SD2SNES, it is possible to install a hardware mod which upgrades the audio output circuit to match that of Rev. H.  Rev. G hardware requires an extra step in order to "downgrade" to Rev. F first, but other than that, the process is identical for all revisions.  I outlined the process here, but since writing that post, a user by the name of borti4938 has created a simple PCB which greatly simplifies the process, so I would suggest checking that out.  Once the mod is installed, the hardware is essentially identical to Rev. H, and the previous instructions apply.  Use the proper "SD2SNES/FF" patch with properly normalized audio files, and completely disable the MSU-1 audio boost in the firmware menu.

Now, technically, with HW#2, boosted audio and the "emulator/60" patch cancel each other out, resulting in the correct levels, so why not just use that version for everything? After all, a lot of patch creators were really annoyed at having to go to all the trouble of boosting their files for HW#1, along with re-uploading everything and writing new documentation, and they really didn't want to go through all of that again. Unfortunately, that leaves us with...

Problem #3:

Actually, this is several problems. First of all, most of the boosted audio files were actually peak normalized to 0dBFS. On the one hand, thankfully this wouldn't cause any clipping, but it does mean that the audio files aren't actually normalized relative to each other. If you don't understand the difference between RMS and peak normalization, the ELI5 version is that with peak normalization, the ONLY THING that matters is the single loudest sample in the entire track. Imagine you have 2 tracks, one is really loud all the way through, and one just has a loud cymbal crash at the end, while the rest of the track is really silent. Peak normalizing these two tracks to the same level means that loud crash at the end will be the same loudness as the entire loud track, so if you listen to them side by side, the entire quiet track will be much quieter than the loud one. This is an extreme example, but if you've ever looked at a waveform visually, this is basically true of any track with a lot of really large "spikes" in volume (the "quiet" tracks) vs tracks which are very "dense" and consistently the same volume. The "spiky" tracks will end up sounding much quieter. RMS normalization accounts for this by "averaging" the volume over time, which gives a better comparison between tracks. Basically, long story short, peak normalized tracks are no longer properly normalized relative to each other.
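The difference is easy to demonstrate numerically. Here's a toy example (made-up sample data) showing that two tracks peak-normalized to the same level can have wildly different average loudness:

```python
import math

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def peak_normalize(samples, target=1.0):
    scale = target / peak(samples)
    return [s * scale for s in samples]

# A "dense" track that is consistently loud...
dense = [0.5, -0.5] * 50
# ...and a "spiky" track that is quiet except for one cymbal crash.
spiky = [0.05, -0.05] * 49 + [0.05, 0.5]

dense_n = peak_normalize(dense)
spiky_n = peak_normalize(spiky)

# Both now peak at exactly 1.0...
print(peak(dense_n), peak(spiky_n))  # 1.0 1.0
# ...but the spiky track's RMS (average) loudness is a small fraction of
# the dense track's, so side by side it sounds far quieter.
print(round(rms(dense_n), 3))        # 1.0
print(round(rms(spiky_n), 3))        # ~0.14
```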

Now, that's assuming that the tracks were peak normalized to 0dBFS. Some people didn't do that, they just "cranked up the volume", which actually resulted in clipping, permanently damaging the files. The only way to fix them is to reconvert from the original source files.

Also, some games don't actually have a single normalized level for their entire OST, and instead use a wide dynamic range. Super Metroid is probably the most extreme example I've found, but a lot of the JRPG's do as well. You completely lose this dynamic range in the boosted tracks, which really kills a lot of the impact of that dynamic range (imagine the Arrival on Brinstar track being as loud as the Ridley fight... it's just wrong).

So, you could stick with HW#2, but it's ugly, and it's still wrong for several reasons. Thankfully, I've gotten a lot of people on board with understanding this and have been working to remove and replace boosted audio packs with properly normalized ones. It's been a bit of an uphill battle, it's not 100% complete, and in some instances I've just had to do the work myself, but the hardest part was convincing people that this was really the right way to do it, and on that front, at least, I've pretty much succeeded. This means no more need for separate patches, or hacky workarounds, the ONLY workaround that needs to be mentioned is the MSU-1 boost option in the SD2SNES menu for older hardware revisions, OR the op-amp installation mod, which essentially upgrades the hardware to Rev. H. Then, all you need is the correct patch (aka "hardware/FF" version), and the correctly-leveled audio tracks, and we can go back to (mostly) pretending that this whole fiasco never happened.

Sunday, May 29, 2016

Why I should never be allowed to name software

I have this thing about coming up with funny (at least I think so), quirky, or... colorful names for software projects. I've come to the conclusion that perhaps I should not be allowed to do so. I don't really remember all of the names I've come up with, but here is the list of the ones that I do. Some of these products exist, others exist under other names, others still exist only in my mind... and should perhaps stay that way.

  • Personally, I think Chaos Monkey passed up a perfectly good opportunity to be called ClusterF*%#
  • In a similar vein, the GlusterFS distributed file system really needs a command analogous to the Unix fsck... named, of course, glusterfsck
  • I once wrote an I2C slave interface library for the AVR USI module, called USI2C.  It's pronounced you-see two-see
  • If anybody ever builds a BSD-based smartphone OS, I'd call it BSD Mobile, aka BSDM
  • PwnAFriend. I don't know what it does yet, but it sounds cool.

Monday, June 8, 2015


A few months back, I learned of the USBDriveby device developed by Samy Kamkar, which was able to infect MacOS computers by posing as a USB keyboard and mouse and executing a scripted sequence of mouse movements and key presses. His device used the Teensy 3.0 microcontroller dev board and required a micro-USB cable to plug into. In my classic fashion of never having any good ideas of my own, but seeing other people's cool ideas and thinking "I can do that better," I started thinking of ways that I could improve on the hardware, rather than utilizing a general-purpose dev board like the Teensy.

I immediately knew that I wanted to use my favorite USB microcontroller, the PIC16F1455. It comes in packages with as few as 14 pins, or as small as a QFN-16, and requires no external components beyond a pair of simple bypass capacitors, making it perfect for small, simple USB devices. It's also supported by the free USB M-Stack, which means I'm not tied down by the frustrating license stipulations of the Microchip USB stack.

The real design revelation came when I tore apart a cheap $2 DealExtreme Bluetooth dongle to find that all of the electronics, including the actual USB pads, were all on a single PCB that could be easily removed from the shell.

The tricky part was that the PCB was 0.6mm thick, and finding a manufacturer willing to produce boards at that thickness for less than $100 took some doing. Once I realized SeeedStudio would handle such a board, it was a simple matter of measuring the original board and throwing together a replacement in EAGLE.

The firmware isn't quite done yet, but I do have the device enumerating as a keyboard and mouse and can send arbitrary mouse movements and button presses, as well as keyboard key presses, so all that really remains is setting up a queue-based event processor and then feeding it the original USBDriveby script. All in all, I'm pretty happy with how it turned out, and now I'm trying to come up with other ideas for how to use this thing, since I'm probably not going to get much use out of it as a MacOS exploit. The board has a single push button and LED (plus an additional power LED), so I can probably find another purpose for it eventually.

Tuesday, May 26, 2015

Generating tiles for Google Maps API

I use Google Maps API to render the maps on my Zelda Parallel Worlds walkthrough, and as a result I needed to generate the necessary tiles for the Maps API to use.  My source image was a 4,096x4,096 image, and I needed to generate 256x256 tiles at various zoom levels, starting at fully zoomed out, where the entire map was contained in a single tile, up to however large I could reasonably render (which ended up being a whopping 16,384x16,384).  GIMP's Script-Fu functionality was perfect for the task, but I couldn't find a script that quite did what I wanted (including scaling the map to the various zoom levels), so I made my own.  I used the tiles-to-files plugin as my starting point and went from there.  The end result gives the following options:

  • Tile size is adjustable (though I've only tested powers of 2, such as 64, 128, and 256)
  • Max zoom level determines how many zoom levels should be generated.  The lowest zoom level is 0, which is a single tile in size.
  • Output file type is selectable between PNG and JPEG
  • Interpolation mode can be chosen separately for shrinking and growing.  In my case, I didn't want any interpolation when growing, since I was growing a pixel-perfect image by a factor of 2 each zoom level, so I wanted to retain the pixel-perfect aspect and just create "big pixels".
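To make the zoom arithmetic concrete, here's a quick sketch (the helper names are mine; the numbers match the 256-px tiles and 4,096-px source described above):

```c
#include <stdio.h>

/* Pixel dimensions of the full map at a given zoom level: one tile at
   zoom 0, doubling each level */
int zoom_side_px(int tileSize, int z)
{
    return tileSize << z;   /* tileSize * 2^z */
}

/* Number of tiles along each edge at a given zoom level */
int tiles_per_side(int z)
{
    return 1 << z;          /* 2^z */
}

/* Print the geometry table for a given setup */
void print_zoom_table(int tileSize, int maxZoom)
{
    for (int z = 0; z <= maxZoom; z++)
        printf("zoom %d: %5d px (%2d x %2d tiles)\n",
               z, zoom_side_px(tileSize, z),
               tiles_per_side(z), tiles_per_side(z));
}
```

With 256-px tiles, the source image fits exactly at zoom 4 (4,096 px), and zoom 6 is the 16,384-px ceiling mentioned above.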

Here's the script:

; Google Maps Tiles, V1.0
; Based on the tiles-to-files script by theilr
(define (script-fu-google-maps-tiles inImage inDrawable inTileSize inMaxZoom
                                     inFileType inShrinkMethod inGrowMethod outDir)
  (gimp-image-undo-group-start inImage)
  (let* ((fullWidth  (car (gimp-image-width  inImage)))
         (fullHeight (car (gimp-image-height inImage)))
         (tileWidth  inTileSize)
         (tileHeight inTileSize)
         (zoomWidth  tileWidth)
         (zoomHeight tileHeight)
         (tmpImage  0)
         (tmpLayer  0)
         (tileImage 0)
         (tileLayer 0)
         (selLayer  0)
         (outfname  "")
         (hcnt 0)
         (vcnt 0)
         (zcnt 0))
    (while (<= zcnt inMaxZoom)
      ; Each zoom level doubles the rendered size of the map
      (set! zoomWidth  (* tileWidth  (expt 2 zcnt)))
      (set! zoomHeight (* tileHeight (expt 2 zcnt)))
      ; Copy the entire source image into a scratch image
      (gimp-rect-select inImage 0 0 fullWidth fullHeight
                        CHANNEL-OP-REPLACE FALSE 0)
      (gimp-edit-copy-visible inImage)
      (gimp-selection-none inImage)
      (set! tmpImage (car (gimp-image-new zoomWidth zoomHeight RGB)))
      (set! tmpLayer (car (gimp-layer-new tmpImage fullWidth fullHeight
                                          RGB-IMAGE "Background" 100 NORMAL-MODE)))
      (gimp-image-add-layer tmpImage tmpLayer -1)
      (set! selLayer (car (gimp-edit-paste tmpLayer FALSE)))
      (gimp-floating-sel-anchor selLayer)
      ; Scale to the current zoom level with the appropriate interpolation
      (if (< zoomWidth fullWidth)
          (gimp-context-set-interpolation inShrinkMethod)
          (gimp-context-set-interpolation inGrowMethod))
      (gimp-layer-scale tmpLayer zoomWidth zoomHeight FALSE)
      (gimp-image-resize-to-layers tmpImage)
      ; Walk the scaled image tile by tile
      (set! hcnt 0)
      (while (< (* hcnt tileWidth) zoomWidth)
        (set! vcnt 0)
        (while (< (* vcnt tileHeight) zoomHeight)
          (gimp-rect-select tmpImage
                            (* tileWidth hcnt) (* tileHeight vcnt)
                            tileWidth tileHeight
                            CHANNEL-OP-REPLACE FALSE 0)
          (gimp-edit-copy-visible tmpImage)
          (gimp-selection-none tmpImage)
          (set! tileImage (car (gimp-image-new tileWidth tileHeight RGB)))
          (set! tileLayer (car (gimp-layer-new tileImage tileWidth tileHeight
                                               RGB-IMAGE "Background" 100 NORMAL-MODE)))
          (gimp-image-add-layer tileImage tileLayer -1)
          (set! selLayer (car (gimp-edit-paste tileLayer FALSE)))
          (gimp-floating-sel-anchor selLayer)
          ; Output files are named zoom_column_row, e.g. 2_1_3.png
          (if (= inFileType 0)
              (begin
                (set! outfname (string-append outDir "/"
                                              (number->string zcnt) "_"
                                              (number->string hcnt) "_"
                                              (number->string vcnt) ".png"))
                (file-png-save  RUN-NONINTERACTIVE tileImage tileLayer
                                outfname outfname
                                0 9 1 0 0 1 1)))
          (if (= inFileType 1)
              (begin
                (set! outfname (string-append outDir "/"
                                              (number->string zcnt) "_"
                                              (number->string hcnt) "_"
                                              (number->string vcnt) ".jpg"))
                (file-jpeg-save RUN-NONINTERACTIVE tileImage tileLayer
                                outfname outfname
                                0.95 ; JPEG quality
                                0    ; Smoothing
                                1    ; Optimize
                                1    ; Progressive
                                ""   ; Comment
                                0    ; Subsampling (0-4)
                                1    ; Baseline
                                0    ; Restart
                                0))) ; DCT
          (gimp-image-delete tileImage)
          (set! vcnt (+ vcnt 1)))
        (set! hcnt (+ hcnt 1)))
      (gimp-image-delete tmpImage)
      (set! zcnt (+ zcnt 1)))
    (gimp-image-undo-group-end inImage)))

(script-fu-register
  "script-fu-google-maps-tiles"            ; function name
  "<Image>/Filters/Tiles/_Google Maps"     ; menu label
  "Split an image into tiles suitable\
   for use with Google Maps API"           ; description
  "qwertymodo"                             ; author
  "(c) 2015, qwertymodo"                   ; copyright notice
  "25 May 2015"                            ; date created
  "RGB*"                                   ; image type
  SF-IMAGE      "Image"    0
  SF-DRAWABLE   "Drawable" 0
  SF-ADJUSTMENT "Tile Size (px)"           '(128 8 1024 1 8 0 SF-SPINNER)
  SF-ADJUSTMENT "Max Zoom Level"           '(4 0 10 1 2 0 SF-SPINNER)
  SF-OPTION     "Output File Type"         '("png" "jpg")
  SF-ENUM       "Interpolation (Shrink)"   '("InterpolationType" "cubic")
  SF-ENUM       "Interpolation (Grow)"     '("InterpolationType" "cubic")
  SF-DIRNAME    "Output Folder"            "tiles")

Tuesday, August 27, 2013

Roll-Your-Own EEPROM Burner

It's been a while since I've written anything, mostly because I've been busy actually making stuff, but I figured I'd take some time out to do some writing instead.  Something I've been working with lately is building SNES reproduction carts.  I see a lot of people buying and using Willem-based programmers to burn their EEPROMs, and from what I can tell, they're way more trouble than they're worth, and more expensive too.  I chose to go a different route and build my own programmer.  EEPROM programming is a pretty straightforward microcontroller exercise; the main trouble is finding a microcontroller with enough I/O pins.  Personally, I used a Teensy++ 2.0 development board.  One of the most common chips used for SNES reproduction is the AM29F032B, used with a TSOP-to-DIP36 breakout board.  So, for lack of anything better to do, I'm going to explain how to go about creating a programmer for the AM29F032B, though much of the information can be adapted to any EEPROM.

First of all, you'll need to build yourself an adapter in order to connect the microcontroller to the ROM.  For the sake of cleaner code, logically contiguous pins on the ROM, such as A0-A22, D0-D7, should be connected to logically contiguous pins on the microcontroller (an 8-bit microcontroller will only have 8-bit ports, so a good compromise is to utilize full ports as much as possible, e.g. A0->PORTX0, A1->PORTX1 . . . A7->PORTX7, A8->PORTY0, A9->PORTY1, etc.). For my Teensy++ adapter, it looked like this:

(I'm trying to figure out how to create a custom part in Fritzing so I can post a nice image representation of this circuit, but for now you'll have to live with a pin table)

ROM    Teensy
Vcc    Vcc
Gnd    Gnd
A0-7   D0-7
A8-15  C0-7
A16-22 F0-6
D0-7   B0-7
/CE    E7
/OE    E6
/WE    E1

Ok, so now that we have everything wired up (or, in my case, I created a socketed PCB), we can start writing code. The most basic functions are reading and writing a single byte. The function prototypes will look something like this.

uint8_t ReadByte(uint32_t address);
void WriteByte(uint32_t address, uint8_t value);

In order to understand how to implement these functions, we first need to look at the function waveforms in the datasheet.  Here's the read function:

We can ignore the actual timings right now, all we really care about is the sequence.  From the diagram, we can see that the sequence goes like this:

Set CE# high, OE# high, and WE# high (in any order, or simultaneously)
Set up the address
Set CE# low
Set OE# low
Wait for a short time
Read the data
Set CE# high and OE# high (in any order, or simultaneously)

In code, it looks like this:

uint8_t ReadByte(uint32_t address)
{
  // Set data lines as inputs, pulled high
  DATA_DDR   = 0x00;
  DATA_PORT  = 0xFF;
  // Pull all control lines high
  CS_PORT |= CS_BIT;
  OE_PORT |= OE_BIT;
  WE_PORT |= WE_BIT;
  // Set up address
  ADDR_PORT_0 = address & 0xFF;
  ADDR_PORT_1 = (address >> 8) & 0xFF;
  ADDR_PORT_2 = (address >> 16) & 0x7F;
  // Pull CS low, then OE low
  CS_PORT &= ~CS_BIT;
  OE_PORT &= ~OE_BIT;
  // Wait a short time for the data lines to settle
  _delay_us(1);
  // Read data
  uint8_t data = DATA_PIN;
  // Pull all control lines high
  CS_PORT |= CS_BIT;
  OE_PORT |= OE_BIT;
  return data;
}

As you can see, I've #defined a few values here to make the code cleaner. That's all done based on the pinout specified above.  For instance:

#define DATA_PORT    PORTB
#define DATA_PIN     PINB
#define DATA_DDR     DDRB

#define CS_PORT      PORTE
#define CS_DDR       DDRE
#define CS_BIT       (1<<7)

...and so on.  Writing is almost identical, though you wouldn't think it, looking at the waveform in the datasheet.

The reason that this looks so complicated is that Flash ROMs actually require you to write several command bytes for every byte of data you actually want to program.  Don't worry, though: we first write the code for a single-byte write cycle, and then the multi-byte command sequences are just consecutive calls to that function.  The single-byte write sequence is:

Set CE# high, OE# high, and WE# high (in any order, or simultaneously)
Set up the address
Set CE# low
Set WE# low
Set up the data
Wait for a short time
Set CE# high and WE# high (in any order, or simultaneously)

The write operation actually occurs when CE# or WE# is pulled high, which latches the data lines and then performs the write.

No code this time; it should be trivial.  Copy and paste the read function and make the necessary changes.

Next, we want to be able to program information to the chip.  As mentioned before, this is done by writing several command bytes, followed by the actual data byte.  This varies from chip to chip, but for the AM29F032B, the sequence is:

Addr   Data
0x555  0xAA
0x2AA  0x55
0x555  0xA0
addr   data

where the final "addr" and "data" are the actual address and data that you want to program on the chip.  All you have to do is call your WriteByte 4 times in a row with those addresses and data values.
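Those four calls are easy to sanity-check.  Here's a sketch of ProgramByte built on the command table above; for illustration, WriteByte is stubbed out to log bus cycles so the sketch runs standalone (on the real hardware it's the bus-cycle function developed earlier):

```c
#include <stdint.h>

/* Stub: on the real device this toggles CE#/WE# as described above.
   Here it just records each bus cycle so the sequence is visible. */
static uint32_t cycle_addr[8];
static uint8_t  cycle_data[8];
static int      cycle_count = 0;

void WriteByte(uint32_t address, uint8_t value)
{
    cycle_addr[cycle_count] = address;
    cycle_data[cycle_count] = value;
    cycle_count++;
}

/* AM29F032B byte-program command sequence, straight from the table above */
void ProgramByte(uint32_t address, uint8_t value)
{
    WriteByte(0x555, 0xAA);
    WriteByte(0x2AA, 0x55);
    WriteByte(0x555, 0xA0);
    WriteByte(address, value);
}
```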

Now, the final function we need to write is to erase the chip.  EEPROMs, including Flash ROMs, must be erased before they can be written to.  This is because the program function can only change a 1 to a 0; it can't change a 0 to a 1.  Because of that, in order to change a 0 to a 1, you have to change EVERYTHING to 1's by erasing the chip, then you can go back and program the 0's.  It's just how it is.  Anyway, erasing a Flash ROM is achieved by a command sequence.  Again, this varies from chip to chip, but for the AM29F032B, the sequence is:

Addr   Data
0x555  0xAA
0x2AA  0x55
0x555  0x80
0x555  0xAA
0x2AA  0x55
0x555  0x10

One last thing is that we need to know when the erase function has completed. There are several ways to do so, as described in the data sheet. The lazy way to do it is to continuously read any address (I usually pick 0x000) until the data returned is 0xFF. The reason for this is that during an erase procedure, the result of any read, instead of being the data at that address, is actually a status register. The status register will never equal 0xFF, but once the chip is erased, the whole chip will be all 1's, so any read should return 0xFF. Like I said, it's the lazy way to do it.
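Putting the erase sequence and the lazy poll together, a sketch might look like this.  The bus functions are replaced with a toy model of the chip so it's self-contained; the command cycles themselves come straight from the table above:

```c
#include <stdint.h>
#include <string.h>

/* Toy model of the chip: a tiny array, plus a fake "busy" period during
   which reads return a status value instead of data */
#define FAKE_SIZE 16
static uint8_t fake_rom[FAKE_SIZE];
static int busy_reads = 3;   /* pretend the erase takes 3 polls to finish */

void WriteByte(uint32_t address, uint8_t value)
{
    /* The final 0x10 cycle kicks off the erase on the fake chip */
    if (address == 0x555 && value == 0x10)
        memset(fake_rom, 0xFF, FAKE_SIZE);
}

uint8_t ReadByte(uint32_t address)
{
    if (busy_reads > 0) {
        busy_reads--;
        return 0x55;         /* arbitrary not-0xFF "status register" value */
    }
    return fake_rom[address % FAKE_SIZE];
}

/* AM29F032B chip-erase command sequence from the table above, followed by
   the lazy completion poll described in the text */
void EraseChip(void)
{
    WriteByte(0x555, 0xAA);
    WriteByte(0x2AA, 0x55);
    WriteByte(0x555, 0x80);
    WriteByte(0x555, 0xAA);
    WriteByte(0x2AA, 0x55);
    WriteByte(0x555, 0x10);
    while (ReadByte(0x000) != 0xFF)
        ;  /* still reading the status register; not erased yet */
}
```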

So now, we have 4 functions that pretty much handle everything that we need in order to burn code to our Flash ROM:

uint8_t ReadByte(uint32_t address);
void WriteByte(uint32_t address, uint8_t value);
void ProgramByte(uint32_t address, uint8_t value);
void EraseChip();

Now, you have to figure out how to actually transfer data between the PC and the microcontroller. I use RealTerm, because it has the ability to transmit binary files over a serial connection. I then set up my Teensy++ main loop with a simple serial interface that resembles a command-line application, with various commands and flags, then I use RealTerm's send function for programming, and its capture function for reading. Once I've programmed the ROM, I read it back to a file, and compare the file against the original ROM file to make sure that they match (be sure you've padded your ROM file, or else trim the file you read back to match the original file size, or you may get a spurious mismatch). Because I'm using the Teensy++'s CDC virtual-serial-port-over-USB interface, it would be entirely possible to write a full PC-side host application tailor-made for this device, but there really isn't much point, seeing as all it would be doing is sending a file to a serial port, or capturing data from that serial port. Better to just use an existing application, if it fits our needs, and RealTerm does just that.
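For illustration, here's one way the microcontroller-side command dispatcher might look.  The command letters are made up, and the EEPROM functions are replaced with a tiny in-memory stand-in so the sketch runs on its own; the real main loop would pull bytes from the CDC serial port and push the 'r' result back out over it:

```c
#include <stdint.h>
#include <string.h>

/* Tiny in-memory stand-in for the chip and the functions developed above */
#define FAKE_SIZE 16
static uint8_t fake_rom[FAKE_SIZE];
static uint8_t ReadByte(uint32_t a)               { return fake_rom[a % FAKE_SIZE]; }
static void    ProgramByte(uint32_t a, uint8_t v) { fake_rom[a % FAKE_SIZE] &= v; }
static void    EraseChip(void)                    { memset(fake_rom, 0xFF, FAKE_SIZE); }

/* One dispatcher step: 'e' erases and resets the running address, 'w'
   programs the incoming byte at the running address, 'r' returns the byte
   at the running address (which the real loop would send over serial). */
uint8_t handle_command(uint32_t *addr, uint8_t cmd, uint8_t arg)
{
    switch (cmd) {
    case 'e': *addr = 0; EraseChip();      return 0;
    case 'w': ProgramByte((*addr)++, arg); return 0;
    case 'r': return ReadByte((*addr)++);
    }
    return 0;
}
```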

[Minor Edit]
RealTerm apparently does a really terrible job of packet utilization for USB-CDC virtual serial devices, an issue which I've submitted to their bug tracker, though it hasn't generated any response, so it's unlikely to be fixed.  For that reason, I've switched to Tera Term, as it speeds up write speeds by a factor of about 20-30, which makes the difference between taking 45 minutes to burn a chip with RealTerm vs about 90 seconds with Tera Term.  This doesn't change the fact that I'm still using the same code on the microcontroller side.

Anyway, I'll probably throw up some more pictures at some point, but for the most part, I wanted to describe the process, rather than just handing out schematics and code.  This is a relatively simple feat to accomplish, and from what I've seen of a lot of the SNES reproduction makers, I feel it should be something of a rite of passage.  If you want to just go out and buy yourself a Willem, go ahead.  But be warned that nobody really wants to help repro makers with their crappy Willems.  Be a man, roll your own burner.

Here's mine:

An original SNES MaskROM, used for my initial read-only testing

The double-sided PCB design cut down on PCB size, and as a result, cost