Thinking of turning an old oscilloscope into a vector display – think Space Invaders etc. The electronics are no problem (so far). But I would like to try with a test image or two.
The electronics will take a serial data stream of X and Y and Z (beam on/beam off) data and I could roll my own test data and send it to the electronics.
But it would be nice to have an (easy?) way of producing 2D wireframe images from some free download software – is such software available? I don't really want to spend ages learning how to do a proper tech drawing. A few squares and triangles would do to start with.
The WEB seems full of stuff that seems massively too good for what I want. My starting point is CAD circa 1965.
Are you thinking of a flat image being displayed or a rotating 3D wireframe image? The flat image can be mocked up with anything that can draw a line on a plain background for testing purposes; a rotatable 3D image will take a lot more effort.
What does the data stream look like? Is it an (x, y, z) triple for each pixel of the display?
Let us say your display is 10 x 10 pixels, then you would need a list of 100 (x,y,z) items to populate it.
There are hundreds of both vector and raster drawing programs that would fill a canvas of a desired size with either a black dot or a white dot. The difficulty is in getting the format in which they save that information (gif, bmp, jpg, svg, etc.) into a format that the 'scope will accept.
You might have to roll a good part of it on your own using Python or similar. Make a simple 2×2 pixel canvas and put a black dot in one of the four positions. Work out how to extract a list of four triples from that file (three of which will have zero Z-values).
This is way more advanced than you need, but it does show the extreme simplicity of getting every pixel from an image into a Python array (for what you need to do, reading up to Figure 2 will be sufficient).
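As a sketch of that 2×2 experiment – assuming the Pillow imaging library is available (any library with per-pixel access would do) and building the test image in code rather than loading a file:

```python
# Build the 2x2 test canvas in code: white background, one black dot.
# Pillow is an assumption here; any imaging library with per-pixel
# access would serve just as well.
from PIL import Image

img = Image.new("L", (2, 2), color=255)  # "L" = 8-bit greyscale
img.putpixel((1, 0), 0)                  # black dot at x=1, y=0

# Extract one (x, y, z) triple per pixel: z=1 where the pixel is dark.
triples = []
for y in range(img.height):
    for x in range(img.width):
        z = 1 if img.getpixel((x, y)) < 128 else 0
        triples.append((x, y, z))

print(triples)  # four triples, three of them with a zero Z-value
```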
Another thought I had, but that will require work in a different direction is to pretend you want to draw your line art on a pen plotter, a laser cutter, a vinyl sign cutter, etc. The granddaddy control language of these was HPGL – pen up, pen down and move between start and end points. The difficulty with this is you would have to write something to give you the intermediate points (Bresenham).
Can't answer WRT a cad package that could do images for you, but the display electronics are interesting…
Not sure what basis you are using, and a lot depends on how fast you want to write (persistence of vision), how quickly you can turn the beam on and off, and a number of other things…
I designed a display system for a head-up display – actually a helmet-mounted display for a pilot. It used the DDA algorithm, generated the graphics with a fast processor (a DSP, back then) and fed two very fast DACs with some very, very fast drivers to prevent twirls and squiggles at line starts and ends… that was fun. We created our own PC-based graphics (CAD) tool to create the graphics that would be displayed. Some graphics were static, e.g. the artificial horizon line, which only moved with A/C attitude, and others moved all over the display, such as targets, etc.
Not helping but more reminiscences. In 1973 in my college a chap called Adrian Aylward had a homebrew computer with TTL state machine ALU not a microprocessor just 1k of core for program and RAM with a teletype and a 5in 'scope screen for vector graphics, probably the first student computer in the country.
His main game was a binary star rotating in the middle, with two spaceships (pointy triangles) which you could rotate, apply thrust to, and fire a stream of bullets from, while being pulled into the star by gravity. All that in just 1k. When 'teletennis' appeared he just programmed that in too.
I wonder what became of him. I didn't hear of another homebrew in the university until 1977 and I graduated before starting my 6800 based one..
+1, or putting it another way, how do the electronics work?
One way of creating an image is to sweep X from side to side to draw a line of horizontal dots and then sweep Y from top to bottom to draw lines of dots. This is how analogue TVs work. The image is pre-calculated.
Another is to place each dot according to its X,Y coordinate, in which case there's no need to scan. The line flows like handwriting.
In this example, my screen is 10 x 10 pixels, and I have a rectangle (in RED) tilted at 15° to print on it. Which pixels should be ON and in what order will the electronics draw them?
By convention computer graphics put 0,0 at top left.
ON pixels between corners are calculated and are in bold.
The scan approach would set Y=0, and send 0,0,0; 1,0,0; 2,0,0; 3,0,0; 4,0,0; 5,0,1; 6,0,0; 7,0,0; 8,0,0; 9,0,0;
Then Y=1, and send 0,1,0; 1,1,1; 2,1,1; 3,1,1; 4,1,1; 5,1,1; 6,1,0; 7,1,0; 8,1,0; 9,1,0;
Then Y=2, and send 0,2,1; 1,2,1; 2,2,0; 3,2,0; 4,2,0; 5,2,1; 6,2,0; 7,2,0; 8,2,0; 9,2,0;
And so on up to Y=9.
The handwriting approach would just send the ON coordinates: 5,0,1; 1,1,1; 2,1,1; 3,1,1; 4,1,1; 5,1,1; 0,2,1; 1,2,1; 5,2,1; etc.
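Both orderings can be generated from the same frame buffer. A sketch, using the lit pixels listed above for the first three rows (the rest of the rectangle is omitted for brevity):

```python
# One 10x10 frame, stored as the set of ON pixel coordinates.
WIDTH = HEIGHT = 10
on_pixels = {(5, 0), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1),
             (0, 2), (1, 2), (5, 2)}

def scan_stream():
    """Raster order: every pixel, row by row, each with its Z value."""
    return [(x, y, 1 if (x, y) in on_pixels else 0)
            for y in range(HEIGHT) for x in range(WIDTH)]

def handwriting_stream():
    """Vector order: only the ON pixels, top left to bottom right."""
    return [(x, y, 1)
            for (y, x) in sorted((y, x) for (x, y) in on_pixels)]

print(len(scan_stream()))         # 100 triples for the full frame
print(len(handwriting_stream()))  # 9 triples, one per lit pixel
```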
Practically, it's usual to pre-calculate the matrix and drive the screen's electronics from that. It is possible to calculate the X,Y coordinates needed to draw a line on the fly, but it's easier not to. (There's a need to repeat frames so people can see the image.)
A computer display driver uses precomputed images. The software draws on a matrix called a frame buffer. This is an area of memory organised like my simple 10×10 matrix, except it's extended to allow colour and intensity. Each display has a device driver that translates the image in the frame buffer into whatever format the display electronics need. Could be a JPG, or rasters for an analogue TV, HPGL pen movements, PostScript commands, HDMI, or whatever. The driver does other stuff, like scaling the image to fit and sorting out colour profiles, that isn't necessary for a simple set-up.
A rectangle is defined as the four coordinates of its corners. Scaling and rotating are done on these four coordinates. Keeps the object definitions simple.
To draw the rectangle, an algorithm calculates which pixels between each corner need to be ON to draw a line. Bresenham is famous for his way of calculating lines and circles efficiently, so it's usual to copy him. The pixel settings can be stored in a list and sent to a device that understands the list.
I would implement this in three stages:
A PC program (I'd use Python) that captures the object's initial coordinates and calculates the pixels needed to draw lines, connected to
An Arduino that reads PC created coordinates and translates/interfaces with the electronics. (Written in C, how complicated depends on what the electronics need as input)
The electronics driving the oscilloscope's X,Y and putting pixels on/off. Gut feel, two Digital to Analog Converters and an ON/OFF switch for Z would do it
However, it does depend on how easy the graphics need to be to produce. For a one-off demo, the coordinate list can be created manually by drawing it on graph paper, but this soon gets tedious!
An easily missed point is the need to refresh the display repeatedly until the human eye registers the image! It partly depends on the persistence of the oscilloscope's phosphor, but the display should hold the picture for about 35 ms before moving to the next frame.
I don't know off-hand of a program that allows shapes to be drawn and outputs the result as a simple list of XY coordinates. Most users want complex outputs like JPG. I'll have a think.
Interesting project! But can the electronics be explained please?
Something struck me after seeing your diagram: there are many online programs around that will convert images into ASCII art.
Without trying them, I do not know whether they use the full set of printable characters for the output or if you can choose a single character (equivalent to the yellow 'o' in the diagram).
Assume one of them outputs a load of 80-character lines, with some of the characters being spaces and the rest being a printable character. If he can run his shape through one of these programs, it will only need minor post-processing to turn each line into 80 'characters' of either one or zero. Remove the line breaks and there is the serial stream for the whole image, starting top left and ending bottom right.
It might be a way to experiment: turn the image into a kind of text-based oscilloscope screen.
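That post-processing really is minor. A sketch, with a hand-written stand-in for the converter's output (three short lines rather than 80-character ones):

```python
# Hand-written stand-in for an ASCII-art converter's output; a real one
# would emit much longer lines. Spaces mean beam off, anything else on.
art = [
    "     o    ",
    " ooooo    ",
    "oo   o    ",
]

# Turn every character into '1' or '0', then drop the line breaks.
stream = "".join("0" if ch == " " else "1" for line in art for ch in line)
print(stream)  # one unbroken string, top left to bottom right
```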
A further thought that struck me about making an image into a serial stream of data is a fax machine. That scans the page line by line and if it sees dark, it sends a 1 down the phone line. There are lots of computer-based fax programs around.
Thanks everyone for the info so far. The idea for all this came from a couple of websites: trmm.net/MAME/ and trmm.net/V.st/
These are put up by someone called Trammell Hudson, who has a long list of interesting projects. The V.st shows the digital-to-analogue hardware. Essentially USB data in and volts X and Y out. A file of coordinates is cycled through and sent out over USB to keep the picture refreshed. There is also an FPGA version, but I am not sure if that is complete.
Being lazy I have not yet gone into the MAME software. This appears to reproduce old arcade games on modern kit – CRT and LCD displays – but has been 'got at', aka patched, to output vector sequences. Originally some of these old games ran on real vector displays – so full circle. Essentially Atari ROMs in and vectors out of the USB port. There is also a joystick interface that 'looks' like a keyboard.
What I was after was an easy way to generate a file of X and Y coordinates from some sort of CAD system and then roll it through a home brew routine to output the vectors to USB and sit back and admire the view.
All a bit of a waste of time, but what else are hobbies for.
The bad news is that the fairly large Teensy 4.1 modules are unavailable until 2023 due to the chip shortage.
Not too difficult to generate the pixel coordinates of lines in Python3 because there's a module already.
Using corners from my rectangle example above, the code:
And the output:
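A self-contained sketch of the same idea, hand-rolling Bresenham rather than importing the module, and run over one edge of the example rectangle (the corner values (5,0) and (0,2) are assumptions, not figures from the original post):

```python
def bresenham_line(x0, y0, x1, y1):
    """Integer Bresenham: every pixel on the line from (x0,y0) to (x1,y1)."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return points

# One edge of the rectangle, with assumed corners.
print(bresenham_line(5, 0, 0, 2))
# → [(5, 0), (4, 0), (3, 1), (2, 1), (1, 2), (0, 2)]
```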
Bresenham uses fewer pixels to represent lines than my handraulic example, which is usually a good thing. I simply said pixels are ON if the red line enters a cell; Bresenham is smarter than me, only activating pixels when the red line penetrates deeply.
I mentioned Arduino as my choice of microcontroller because I'm familiar with them and their 5V electronics are fairly robust. Teensy could be a better choice: they're faster, and some boards have a built-in DAC, which could be multiplexed to keep Roger's electronics very simple.
Draw a point entity at each vertex of your shape in the CAD system. Save as a(n ASCII) dxf file. Use EzDXF in Python to extract the coords of the point entities and feed them into the code provided above.
Thanks all. I downloaded LibreCAD, drew a rectangle, exported as SVG file. Then copied SVG into an editor and it contains the co-ords in a reasonably accessible form. Not tried a dxf file yet but I guess that will do much the same and the Python code will help.
Always been a bit worried by CAD – looks complicated. So thanks all, was easier than expected.
When I finish the painting and all the other jobs I will go back to the display project – unless another project gets in the way…..
Some people seem to be missing the point of vector graphics and displaying with a plain 'scope screen. The OP is not trying to make a raster scan system nor a pixel based one.
There is a pretty common (nowadays) code standard – G-code, and programs that generate it. It is fundamentally just start point, end point, move from one to the other. The only difference from the OP's initial criteria is the 'z' coordinate of beam on/off which is a separate command. A pre-processor could adjust this if necessary, just as one uses a pre-processor for a particular CNC machine.
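A sketch of that pre-processor idea; the G-code snippet is made up for the example, and a real pre-processor would also have to handle feed rates, arcs, decimals and so on:

```python
# Hypothetical G-code fragment: G0 = rapid move (beam off),
# G1 = cutting move (beam on). Real CAM output is far messier.
gcode = [
    "G0 X0 Y0",
    "G1 X5 Y0",
    "G1 X5 Y3",
    "G0 X9 Y9",
]

def to_beam_commands(lines):
    """Translate each move into an (x, y, beam-on) triple."""
    cmds = []
    for line in lines:
        word, xw, yw = line.split()
        beam = 1 if word == "G1" else 0
        cmds.append((int(xw[1:]), int(yw[1:]), beam))  # strip 'X'/'Y'
    return cmds

print(to_beam_commands(gcode))
# → [(0, 0, 0), (5, 0, 1), (5, 3, 1), (9, 9, 0)]
```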
Not long ago I wrote my own software to generate G-code, and I wanted it to do it from a LibreOffice drawing (2D).
I found I could save a LibreOffice Draw drawing as an SVG (vector graphics) file, and then found that the SVG file is a readable text file, giving a description and coordinates of every drawn object in the 'object space'. I was able to use these in my software to generate the G-code, and it was very successful. The SVG file used a limited number of object types in any drawing, and provided X,Y coordinates to 2 decimal places, and in the end proved quite simple to parse and convert to a G-code text file (and coincidentally saved me having to use about 3 or 4 different applications to get from drawing to G-code)!
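That kind of parsing needs nothing beyond the Python standard library. A sketch, with a hand-written SVG standing in for a LibreOffice export (real exports are messier but carry the same kind of coordinate attributes):

```python
import xml.etree.ElementTree as ET

# Hand-written stand-in for a LibreOffice Draw export.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <line x1="0" y1="0" x2="50" y2="25"/>
  <line x1="50" y1="25" x2="50" y2="0"/>
</svg>"""

root = ET.fromstring(svg)
ns = {"svg": "http://www.w3.org/2000/svg"}
segments = [tuple(float(el.get(k)) for k in ("x1", "y1", "x2", "y2"))
            for el in root.findall("svg:line", ns)]
print(segments)  # → [(0.0, 0.0, 50.0, 25.0), (50.0, 25.0, 50.0, 0.0)]
```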
Since you are using an oscilloscope, I assume you would not be using a 'scanning' mode (raster?), but simply want to define one end of (say) a line with X, Y co-ordinates, then drive the CRT spot to the other end of the line, defined by a second set of X,Y co-ordinates, much as the normal scope display operates. Things like squares, circles, triangles or even characters etc could be displayed much as a Lissajous figure would be displayed.
I would think it would be possible to define a number of objects from an SVG vector drawing, then save the coordinates of these objects in memory (of some kind); then, to display any of these objects, call up the appropriate object in memory and feed it to the oscilloscope X,Y inputs.
Furthermore, the SVG file defines the objects from a specified origin, and you can, in effect, vary the offsets from this origin to define the position of the object in the 'drawing space' (in this case, your scope display), so by stepping these offsets you can move the drawn object across the scope.
Also since you have the object defined in terms of X1, Y1: X2, Y2, etc, it is fairly easy to scale these co-ordinates to increase or decrease the apparent size of the object drawn, which would create the illusion of the object moving towards or away from the viewer.
Depending on the hardware, it may be possible to 'parse' an SVG file containing the required data in real time and feed it to the scope, but as I say – hardware dependent – it may be too slow.
If you can, I would suggest creating a simple drawing in LibreOffice Draw and saving it as an SVG file (it may also be possible in the Windows drawing app, but I know nothing about Microsoft stuff). You can then read this SVG file using a text editor and see what you can learn. (For example, I wanted my origin to be the lower left corner of the drawing, but found that LibreOffice SVGs seem to use the lower right, so I had to compensate for this when I parsed the file for my G-code, or it all appeared in mirror image.)
I'm sure there is a standard for how SVG files define objects, but when I looked it up online, I realised that life was too short to read, digest and understand, so I just 'did it myself' by looking at an SVG file.
Sounds like an interesting project – hope this provides some help or direction.
Umm, but Roger needs a pre-processor to drive his electronic interface to an oscilloscope. Something like GRBL is pretty close, and I'm sure it could be made to work, but my feeling is Roger will get on better by keeping it simple with what he understands. One thing GRBL might have trouble with is repeating moves inside 30 ms frames so the lines drawn on the oscilloscope last long enough to be seen by a human.
Writing a pre-processor to drive machines that step, or can be stepped, I think it's easier to work in pixels than vectors. Can anyone describe a vector solution? It's definitely possible.
I hope Roger reports back! Could be he has to try more than one approach.
Thanks everyone. Essentially the PC or laptop or Pi that runs the MAME software continually sends the vector buffer to keep the screen refreshed. The data rate is said to be about 1.5 million vectors/second, X and Y each being 12-bit numbers. There appears to be some attempt to sort the vectors into some sort of order to economise on movement time.
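Those figures leave a generous per-frame budget; quick arithmetic (the ~30 ms frame time is taken from the refresh discussion earlier in the thread):

```python
# Back-of-envelope frame budget from the quoted V.st figures.
vectors_per_second = 1_500_000  # quoted rate
frame_ms = 30                   # ~30 ms between refreshes
budget = vectors_per_second * frame_ms // 1000
print(budget)  # → 45000 vectors available per refresh
```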
I did consider GRBL and G-code but thought that would be too slow, and anyway MAME allegedly does the job. An Arduino 'loop' seems ideal. Going straight from LibreCAD to a file > Python > USB seems pretty simple.
At the moment I am at the WD40 stage on getting the front panel and knobs off this old 'scope as well as painting, mowing grass etc etc.
My special thanks to Grindstone Cowboy – I think we should (have) all misspent our youth….
Just to see if my oscilloscope can do an XY display I knocked up this simple proof-of-concept. It draws a single diagonal line by generating X,Y coordinates with a loop. Roger's version will be cleverer!
The microcontroller is a Nucleo F429ZI, complete overkill for this, except that the chip has two built-in DACs, so no need for me to breadboard a circuit. One DAC does X, the other does Y, and they can be connected straight into my oscilloscope. The output voltage can be between 0 and 3.3V.
The demo C program increments X from 0 to 1.0V in 0.1V steps, and Y is the inverse 1.0 – X with a 1 second delay between each step. The code is:
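The original listing is C for the Nucleo; the stepping logic is simple enough to sketch in Python for illustration. On the board, each pair would be written to the two DAC outputs with the one-second delay between steps:

```python
# X climbs 0 -> 1.0 V in 0.1 V increments; Y mirrors it as 1.0 - X.
# On the Nucleo each (x, y) pair would go to the two DACs, with a
# one-second delay between steps; here we just print the voltage table.
steps = [(round(i * 0.1, 1), round(1.0 - i * 0.1, 1)) for i in range(11)]
for x, y in steps:
    print(f"X = {x:.1f} V, Y = {y:.1f} V")
```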
With the oscilloscope in normal YT mode (T is timebase), the output waveforms are X (yellow) and Y (blue):
Putting the scope into XY mode, which maps channel A against channel B, produces the expected 45° diagonal line:
The line is dotted because at 0.1V per step my 'pixels' are too far apart, i.e. coarse. Stepping at 0.01V would be much better.
So Roger's approach works in principle. All Roger has to do is the hard part – driving the oscilloscope trace with a list of coordinates read by a microcontroller from a PC! Hours of fun ahead…
Not sure if it's any help, but if you use a 2D CAD package, e.g. FreeCAD or QCad, and save as a DXF file, it's ASCII and you can extract the coordinates as a text string. I used to do this for geological mapping before GIS became available.
Barry