this is the second section of the uxn tutorial!
in this section we start exploring the visual aspects of the varvara computer: we talk about the fundamentals of its screen device so that we can start drawing on it!
we also discuss working with shorts (2-bytes) besides single bytes in uxntal.
if you haven't done it already, i recommend you read the previous section at uxn tutorial day 1
before jumping right into drawing to the screen, we need to talk about bytes and shorts :)
even though uxn is a computer that works natively with 8-bit words (bytes), there are several occasions in which the amount of data that can be stored in one byte is not enough.
when we use 8 bits, we can represent 256 different values (2 to the power of 8). at any given time, one byte will store only one of those possible values.
in the previous section, we talked about a case where this amount is not enough in uxn: the number of bytes that the main memory holds, 65536.
that number corresponds to the values that can be represented using two bytes, or 16 bits, or a "short": 2 to the power of 16. that quantity is also known as 64KB, where 1KB corresponds to 1024 or 2 to the power of 10.
besides expressing addresses in main memory, today we will see another case where 256 values is not always enough: the x and y coordinates for the pixels in our screen.
for these and other cases, using shorts instead of bytes will be the way to go.
how do we deal with them?
counting from right to left, the 6th bit of a byte that encodes an instruction for the uxn computer is a binary flag that indicates if the short mode is set or not.
whenever the short mode is enabled, i.e. when that bit is 1 instead of 0, the uxn cpu will perform the instruction given by the first 5 bits (the opcode) but using pairs of bytes instead of single bytes.
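for instance, the ADD instruction assembles to the byte 18 (as we can see in the assembled code below); setting that flag bit gives 38, which is ADD2:

ADD ( 18, or 0001 1000 in binary: short mode off )
ADD2 ( 38, or 0011 1000 in binary: short mode on )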
the byte that is deeper inside the stack will be the "high" byte of the short, and the byte that is closer to the top of the stack will be the "low" byte of the short.
in uxntal, we indicate that we want to set this flag by adding the digit '2' to the end of an instruction mnemonic.
let's see some examples!
first of all, let's recap. the following code will push number 02 down onto the stack, then it will push number 30 (hexadecimal) down onto the stack, and finally add them together, leaving number 32 in the stack:
#02 #30 ADD
this would be the final state of the stack:
32 <- top
in the previous day we mentioned that the literal hex rune (#) is a shorthand for the LIT instruction. therefore we could have written our code as follows:
LIT 02 LIT 30 ADD ( assembled code: 80 02 80 30 18 )
now, if we add the '2' suffix to the LIT instruction, we could write instead:
LIT2 02 30 ADD ( assembled code: a0 02 30 18 )
instead of pushing one byte, LIT2 is pushing the short (two bytes) that follows in memory, down onto the stack.
we can use the literal hex rune (#) with a short (four nibbles) instead of a byte (two nibbles), and it will work as a shorthand for LIT2:
#0230 ADD
now let's see what happens with the ADD instruction when we use the short mode.
what would be the state of the stack after executing this code?
#0004 #0008 ADD
that's right! the stack will have the following values, because we are pushing 4 bytes down onto the stack, ADDing the two of them closest to the top, and pushing the result down onto the stack:
00 04 08 <- top
now, let's compare with what happens with ADD2:
#0004 #0008 ADD2
in this case we are pushing the same 4 bytes down onto the stack, but ADD2 is doing the following actions:
* taking the two bytes closest to the top (00 08) as one short, 0008
* taking the next two bytes (00 04) as another short, 0004
* adding both shorts together and pushing the result (000c) down onto the stack as a pair of bytes
the stack ends up looking as follows:
00 0c <- top
we might not need to think too much about the per-byte manipulations of arithmetic operations: normally we can think of them as doing the same operation as before, but using pairs of bytes instead of single bytes, with their order unchanged.
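for instance, here's a quick sketch (not from the original examples) of how the subtraction instruction behaves in short mode:

#000a #0004 SUB2 ( stack: 00 06 <- top )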
in any case, it's useful to keep in mind how they work for some behaviors we might need later :)
let's talk now about the DEO (device out) instruction we discussed in the previous day, as its short mode implies something special.
the DEO instruction needs a value (1 byte) to output, and an i/o address (1 byte) in the stack, in order to output that value to that address.
DEO ( value address -- )
this instruction has a counterpart: DEI (device in).
the DEI instruction takes an i/o address (1 byte) from the stack, and it will push down onto the stack the value (1 byte) that corresponds to reading that input.
DEI ( address -- value )
what do you think that DEO2 and DEI2 would do?
in the case of the short mode of DEO and DEI, the short aspect applies to the value to output or input, and not to the address.
remember that one byte is already enough to cover the 256 i/o addresses, so using a short for them would be redundant: the high byte would always be 00.
considering this, the following are the behaviors that we can expect:
the DEO2 instruction needs a value (1 short) to output, and an i/o address (1 byte) in the stack, in order to output that value to that address. therefore it needs a total of 3 bytes in the stack to operate.
on the other hand, the DEI2 instruction needs an i/o address (1 byte) in the stack, and it will push down onto the stack the value (1 short) that corresponds to that input.
in the following section we will see some examples where we'll be able to use these instructions.
the 'write' port of the console device that we used last time has a size of 1 byte, so we can't really use these new instructions in a meaningful way with it.
the system device is the varvara device with an address of 00. its output ports (starting at address 08) correspond to three different shorts: red (r), green (g), and blue (b).
in uxntal examples we can see its labels defined as follows:
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
we will ignore the first elements for the moment, and focus on the color components.
the varvara screen device can only show a maximum of four colors at a time.
these four colors are called color 0, color 1, color 2 and color 3.
each color has a total depth of 12 bits: 4 bits for the red component, 4 bits for the green component, and 4 bits for the blue component.
we can define the values of these colors by setting the r, g, b values of the system device.
we could write that as follows:
( hello-screen.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]

( main program )
|0100
( set system colors )
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2
how would we read what those literal shorts mean?
we can read each of the colors vertically, from left to right: the first nibble of each short belongs to color 0, the second to color 1, the third to color 2, and the fourth to color 3.
* color 0: r=2, g=0, b=2 (a dark purple)
* color 1: r=c, g=1, b=c
* color 2: r=e, g=c, b=e
* color 3: r=9, g=0, b=5
if we run the program now we'll see a dark purple screen, instead of the black we had before.
try changing the values of color 0, i.e. the first column, and see what happens :)
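as a sketch of one possible change (arbitrary values, not from the tutorial), the following palette would make color 0 white and leave colors 1, 2 and 3 black:

#f000 .System/r DEO2
#f000 .System/g DEO2
#f000 .System/b DEO2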
as a recap: we mentioned that the screen device can only show four different colors at a given time, and that these colors are numbered from 0 to 3. we set these colors using the corresponding ports in the system device.
now let's discuss the screen device and start using it!
in uxntal programs for the varvara computer you will be able to find the labels corresponding to this device as follows:
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]
the inputs that we can read from this device are:
* width: a short with the width of the screen in pixels
* height: a short with the height of the screen in pixels
and the output ports of this device are:
* x and y: shorts with the coordinates where we want to draw
* addr: a short with the address in main memory of the sprite we want to draw
* pixel: a byte that, when written, draws a single pixel
* sprite: a byte that, when written, draws a tile (sprite)
as with the system device, we'll ignore the vector port for the moment.
the screen device has two overlayed layers of the same size, the foreground and the background.
whatever is drawn over the foreground layer will cover anything that is drawn in the same position in the background layer.
in the beginning the foreground layer is completely transparent: a process of alpha blending makes sure that we can see the background layer.
the first and simpler way to draw into the screen is drawing a single pixel.
in order to do this we need to set a pair of x,y coordinates where we want the pixel to be drawn, and we need to set the 'pixel' byte to a specific value to actually perform the drawing.
the x,y coordinates follow conventions that are common to other computer graphics software:
* the origin (0,0) is at the top left corner of the screen
* the x coordinate grows to the right
* the y coordinate grows downwards
if we wanted to draw a pixel in coordinates ( 8, 8 ), we'd set its coordinates in this way:
#0008 .Screen/x DEO2 #0008 .Screen/y DEO2
alternatively, we could first push the values for the coordinates down onto the stack, and output them afterwards:
#0008 #0008 .Screen/x DEO2 .Screen/y DEO2
a question for you: if we wanted to set the coordinates as ( x: 4, y: 8 ), which one of the shorts in the code above should you change to 0004?
sending a single byte to .Screen/pixel will perform the drawing in the screen.
the high nibble of that byte, i.e. the hexadecimal digit at the left, will determine the layer in which we'll draw:
* 0: background layer
* 4: foreground layer
the low nibble of the byte, i.e. the hexadecimal digit at the right, will determine its color.
the 8 possible combinations of the 'pixel' byte that we have for drawing a pixel are:
* 00, 01, 02, 03: colors 0 to 3 in the background
* 40, 41, 42, 43: colors 0 to 3 in the foreground
let's try it all together! the following code will draw a pixel with color 1 in the foreground layer, at coordinates (8,8)
#0008 .Screen/x DEO2
#0008 .Screen/y DEO2
#41 .Screen/pixel DEO
the complete program would look as follows:
( hello-pixel.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]

( main program )
|0100
( set system colors )
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2

( draw a pixel in the screen )
#0008 .Screen/x DEO2
#0008 .Screen/y DEO2
#41 .Screen/pixel DEO ( fg layer, color 1 )
woohoo!
remember you can use F1 to switch between zoom levels, and F3 to take screenshots of your sketches :)
the values we set to the x and y coordinates stay there until we overwrite them.
for example, we can draw multiple pixels in a horizontal line, setting the y coordinate only once:
( set y coordinate )
#0008 .Screen/y DEO2

( draw 6 pixels in a horizontal line )
#0008 .Screen/x DEO2
#41 .Screen/pixel DEO
#0009 .Screen/x DEO2
#41 .Screen/pixel DEO
#000a .Screen/x DEO2
#41 .Screen/pixel DEO
#000b .Screen/x DEO2
#41 .Screen/pixel DEO
#000c .Screen/x DEO2
#41 .Screen/pixel DEO
#000d .Screen/x DEO2
#11 .Screen/pixel DEO
note that we have to set the color for each pixel we draw; that operation signals the drawing and has to be repeated.
we can define a macro to make this process easier to write:
%DRAW-PIXEL { #41 .Screen/pixel DEO } ( -- )
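as a quick usage sketch, once the macro is defined we could draw a foreground color 1 pixel at coordinates (16,16) like this:

#0010 .Screen/x DEO2
#0010 .Screen/y DEO2
DRAW-PIXEL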
we will not cover repetitive structures yet, but this is a good opportunity to start aligning our code towards that.
even though the x and y coordinates of the screen device are intended as outputs, we can also read them as inputs.
for example, in order to read the x coordinate, pushing its value down onto the stack, we can write:
.Screen/x DEI2
taking that into account, can you tell what would this code do?
.Screen/x DEI2 #0001 ADD2 .Screen/x DEO2
you guessed it right, i hope!
that set of instructions increments the screen x coordinate by one :)
they seem handy, so we could save them as a macro as well:
%INC-X { .Screen/x DEI2 #0001 ADD2 .Screen/x DEO2 } ( -- )
here's another question for you: how would you write a macro ADD-X that allows you to increment the x coordinate by an arbitrary amount you put in the stack?
%ADD-X { } ( increment -- )
adding 1 to the value at the top of the stack is so common that there's an instruction for achieving it using less space, INC:
INC ( a -- a+1 )
INC takes the value from the top of the stack, increments it by one, and pushes it back.
in the case of the short mode, INC2 does the same but incrementing a short instead of a byte.
our macro for incrementing the x coordinate could be then written as follows:
%INC-X { .Screen/x DEI2 INC2 .Screen/x DEO2 } ( -- )
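as an aside, an analogous macro for the y coordinate (not needed in the example below, but following the same pattern) would be:

%INC-Y { .Screen/y DEI2 INC2 .Screen/y DEO2 } ( -- )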
using these macros we defined above, our code could end up looking as follows:
( hello-pixels.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]

( macros )
%DRAW-PIXEL { #41 .Screen/pixel DEO } ( -- )
%INC-X { .Screen/x DEI2 INC2 .Screen/x DEO2 } ( -- )

( main program )
|0100
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2

( set initial x,y coordinates )
#0008 .Screen/x DEO2
#0008 .Screen/y DEO2

( draw 6 pixels in a horizontal line )
DRAW-PIXEL INC-X
DRAW-PIXEL INC-X
DRAW-PIXEL INC-X
DRAW-PIXEL INC-X
DRAW-PIXEL INC-X
DRAW-PIXEL
nice, isn't it? the operations now look clearer! and if we wanted to have this line available for use in other positions, we could define a macro for it:
%DRAW-LINE { } ( -- )
try writing the macro and using it in different positions of the screen!
now we'll see how to leverage the built-in support for "sprites" in the varvara screen device in order to draw many pixels at once!
the varvara screen device allows us to use and draw tiles of 8x8 pixels, also called sprites.
there are two possible modes: 1bpp (1 bit per pixel), and 2bpp (2 bits per pixel).
1bpp tiles use two colors, and they are encoded using 8 bytes; using one bit per pixel means that we can only encode whether each pixel uses one color or the other.
2bpp tiles use four colors and they are encoded using 16 bytes; using two bits per pixel means we can encode which one of the four available colors the pixel has.
we will be storing and accessing these tiles from the main memory.
a 1bpp tile consists of a set of 8 bytes that encode the state of its 8x8 pixels.
each byte corresponds to a row of the tile, and each bit in a row corresponds to the state of a pixel from left to right: it can be either "on" (1) or "off" (0).
for example, we could design a tile that corresponds to the outline of an 8x8 square, turning on or off its pixels accordingly.
11111111
10000001
10000001
10000001
10000001
10000001
10000001
11111111
as each of the rows is a byte, we can encode them as hexadecimal numbers instead of binary.
it's worth noting (or remembering) that groups of four bits correspond to a nibble, and each possible combination in a nibble can be encoded as a hexadecimal digit.
based on that, we could encode our square as follows:
11111111: ff
10000001: 81
10000001: 81
10000001: 81
10000001: 81
10000001: 81
10000001: 81
11111111: ff
in uxntal, we need to label and write into main memory the data corresponding to the sprite. we write the bytes going from top to bottom of the sprite:
@square ff81 8181 8181 81ff
note that we are not using the literal hex (#) rune here: we want to use the raw bytes in memory, and we don't need to push them down onto the stack.
to make sure that these bytes are not read as instructions by the uxn cpu, it's a good practice to precede them with the BRK instruction: this will interrupt the execution of the program before arriving here, leaving uxn in a state where it's waiting for inputs.
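schematically, the end of such a program looks as follows; this is the same layout used in the complete examples below:

( ...drawing instructions... )
BRK
@square ff81 8181 8181 81ff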
in order to draw the sprite, we need to send its address in memory to the screen device, and we need to assign an appropriate sprite byte.
to achieve the former, we write the following:
;square .Screen/addr DEO2
a new rune is here! the literal absolute address rune (;) lets us push down onto the stack the absolute address of the given label in main memory.
an absolute address is 2 bytes long, and it is pushed down onto the stack with LIT2, which the assembler includes when using this rune.
because the address is 2-bytes long, we output it using DEO2.
similar to what we saw already with the pixel, sending a byte to .Screen/sprite will perform the drawing in the screen.
the high nibble of the 'sprite' byte will determine the layer in which we'll draw, just like when we were drawing using the 'pixel' byte.
however, in this case we'll have other possibilities: we can flip the sprite in the horizontal (x) and/or the vertical (y) axis.
the eight possible values of this high nibble, used for drawing a 1bpp sprite, are:
* 0: background, no flip
* 1: background, flipped horizontally
* 2: background, flipped vertically
* 3: background, flipped horizontally and vertically
* 4: foreground, no flip
* 5: foreground, flipped horizontally
* 6: foreground, flipped vertically
* 7: foreground, flipped horizontally and vertically
if you observe carefully, you might see some pattern: each bit in the high nibble of the sprite byte corresponds to a different aspect of this behavior.
the following shows the meaning of each of these bits in the high nibble, assuming that we are counting the byte bits from right to left, and from 0 to 7:
* bit 7: 1bpp mode (0) or 2bpp mode (1)
* bit 6: background layer (0) or foreground layer (1)
* bit 5: flip vertically (1) or not (0)
* bit 4: flip horizontally (1) or not (0)
as an example, when the 'sprite' high nibble is 0, that is 0000 in binary, it means that all the flags are off: that's why it draws a 1bpp (0) sprite in the background (0), neither flipped vertically (0) nor horizontally (0).
a high nibble of 1, i.e. 0001 in binary, has the last flag on, so that's why it's flipped horizontally, and so on.
the low nibble of the 'sprite' byte will determine the colors that are used to draw the "on" and "off" pixels of the tiles.
note that 0 in the low nibble will clear the tile.
additionally, 5, 'a' and 'f' in the low nibble will draw the pixels that are "on" but will leave the ones that are "off" as is: this will allow you to draw over something that has been drawn before, without erasing it completely.
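for instance, assuming a sprite address has already been set like before, a sketch of drawing with one of these 'transparent' low nibbles would be:

#45 .Screen/sprite DEO ( foreground; low nibble 5 draws the "on" pixels and leaves the "off" ones as they are )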
let's do this! the following program will draw our sprite once:
( hello-sprite.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]

( main program )
|0100
( set system colors )
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2

( set x,y coordinates )
#0008 .Screen/x DEO2
#0008 .Screen/y DEO2

( set sprite address )
;square .Screen/addr DEO2

( draw sprite in the background )
( using color 1 for the outline )
#01 .Screen/sprite DEO

BRK

@square ff81 8181 8181 81ff
the following code will draw our square sprite with all 16 combinations of color:
( hello-sprites.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]

( macros )
%INIT-X { #0008 .Screen/x DEO2 } ( -- )
%INIT-Y { #0008 .Screen/y DEO2 } ( -- )
%8ADD-X { .Screen/x DEI2 #0008 ADD2 .Screen/x DEO2 } ( -- )
%8ADD-Y { .Screen/y DEI2 #0008 ADD2 .Screen/y DEO2 } ( -- )

( main program )
|0100
( set system colors )
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2

( set initial x,y coordinates )
INIT-X INIT-Y

( set sprite address )
;square .Screen/addr DEO2

#00 .Screen/sprite DEO 8ADD-X
#01 .Screen/sprite DEO 8ADD-X
#02 .Screen/sprite DEO 8ADD-X
#03 .Screen/sprite DEO

8ADD-Y INIT-X
#04 .Screen/sprite DEO 8ADD-X
#05 .Screen/sprite DEO 8ADD-X
#06 .Screen/sprite DEO 8ADD-X
#07 .Screen/sprite DEO

8ADD-Y INIT-X
#08 .Screen/sprite DEO 8ADD-X
#09 .Screen/sprite DEO 8ADD-X
#0a .Screen/sprite DEO 8ADD-X
#0b .Screen/sprite DEO

8ADD-Y INIT-X
#0c .Screen/sprite DEO 8ADD-X
#0d .Screen/sprite DEO 8ADD-X
#0e .Screen/sprite DEO 8ADD-X
#0f .Screen/sprite DEO

BRK

@square ff81 8181 8181 81ff
note that in this case, we have a couple of 8ADD-X and 8ADD-Y macros to increment each coordinate by 0008: that's the size of the tile.
because the square sprite is symmetric, we can't really see the effect of flipping it.
here are the sprites of the boulder/rock and the character of darena:
@rock 3c4e 9ffd f962 3c00
@character 3c7e 5a7f 1b3c 5a18
i invite you to try using these sprites instead to explore how to draw them flipped in the different directions.
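as a starting point for that exploration, here is a sketch that reuses the 8ADD-X macro defined above to draw the rock in its four orientations, side by side in the background with color 1:

;rock .Screen/addr DEO2
#01 .Screen/sprite DEO 8ADD-X ( no flip )
#11 .Screen/sprite DEO 8ADD-X ( flipped horizontally )
#21 .Screen/sprite DEO 8ADD-X ( flipped vertically )
#31 .Screen/sprite DEO ( flipped in both directions )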
in 2bpp sprites each pixel can have one of four possible colors.
we can think of it this way: in order to assign these colors, we encode one out of four states in each of the pixels of the sprite.
each of these states can be encoded with a combination of two bits, and these states can be assigned different combinations of the four system colors by using appropriate values in the screen 'sprite' byte.
a single 2bpp tile of 8x8 pixels needs 16 bytes to be encoded. these bytes are ordered according to a format called chr.
to demonstrate this encoding, we are going to remix our 8x8 square, assigning one of four possible states (0, 1, 2, 3) to each of the pixels:
00000001
03333311
03333211
03332211
03322211
03222211
01111111
11111111
we can think of each of these digits as a pair of bits: 0 is 00, 1 is 01, 2 is 10, and 3 is 11.
in this way, we could think of our sprite as follows:
(00) (00) (00) (00) (00) (00) (00) (01)
(00) (11) (11) (11) (11) (11) (01) (01)
(00) (11) (11) (11) (11) (10) (01) (01)
(00) (11) (11) (11) (10) (10) (01) (01)
(00) (11) (11) (10) (10) (10) (01) (01)
(00) (11) (10) (10) (10) (10) (01) (01)
(00) (01) (01) (01) (01) (01) (01) (01)
(01) (01) (01) (01) (01) (01) (01) (01)
the chr encoding requires some interesting manipulation of those bits: we can think of each pair of bits as having a high bit in the left and a low bit in the right.
we separate our tile into two different squares, one for the high bits and the other for the low bits (below, high bits on the left, low bits on the right):
00000000 00000001
01111100 01111111
01111100 01111011
01111100 01110011
01111100 01100011
01111100 01000011
00000000 01111111
00000000 11111111
now we can think of each of these squares as 1bpp sprites, and encode them in hexadecimal as we did before:
00000000: 00   00000001: 01
01111100: 7c   01111111: 7f
01111100: 7c   01111011: 7b
01111100: 7c   01110011: 73
01111100: 7c   01100011: 63
01111100: 7c   01000011: 43
00000000: 00   01111111: 7f
00000000: 00   11111111: ff
in order to write this sprite into memory, we first store the square corresponding to the low bits, and then the square corresponding to the high bits. each of them, from top to bottom:
@new-square 017f 7b73 6343 7fff 007c 7c7c 7c7c 0000
we can set this address in the screen device the same as before:
;new-square .Screen/addr DEO2
the screen device will treat this address as a 2bpp sprite when we use the appropriate color byte.
let's see how to use the sprite byte in order to draw 2bpp tiles!
the high nibble for 2bpp sprites will allow us to choose the layer we want them drawn in, and the flip direction.
the eight possible values for this nibble are:
* 8: background, no flip
* 9: background, flipped horizontally
* a: background, flipped vertically
* b: background, flipped horizontally and vertically
* c: foreground, no flip
* d: foreground, flipped horizontally
* e: foreground, flipped vertically
* f: foreground, flipped horizontally and vertically
note that these eight values all have a leftmost bit in 1: this bit signals that we will be drawing a 2bpp sprite. the other three bits of the nibble behave as described above in the 1bpp case.
the low nibble will allow us to choose among many combinations of colors assigned to the different states of the pixels.
the following code will show our sprite in the 16 different combinations of color. there's some margin in between the tiles in order to appreciate them better:
( hello-2bpp-sprite.tal )

( devices )
|00 @System [ &vector $2 &pad $6 &r $2 &g $2 &b $2 ]
|20 @Screen [ &vector $2 &width $2 &height $2 &pad $2 &x $2 &y $2 &addr $2 &pixel $1 &sprite $1 ]

( macros )
%INIT-X { #0008 .Screen/x DEO2 } ( -- )
%INIT-Y { #0008 .Screen/y DEO2 } ( -- )
%cADD-X { .Screen/x DEI2 #000c ADD2 .Screen/x DEO2 } ( -- )
%cADD-Y { .Screen/y DEI2 #000c ADD2 .Screen/y DEO2 } ( -- )

( main program )
|0100
( set system colors )
#2ce9 .System/r DEO2
#01c0 .System/g DEO2
#2ce5 .System/b DEO2

( set initial x,y coordinates )
INIT-X INIT-Y

( set sprite address )
;new-square .Screen/addr DEO2

#80 .Screen/sprite DEO cADD-X
#81 .Screen/sprite DEO cADD-X
#82 .Screen/sprite DEO cADD-X
#83 .Screen/sprite DEO

cADD-Y INIT-X
#84 .Screen/sprite DEO cADD-X
#85 .Screen/sprite DEO cADD-X
#86 .Screen/sprite DEO cADD-X
#87 .Screen/sprite DEO

cADD-Y INIT-X
#88 .Screen/sprite DEO cADD-X
#89 .Screen/sprite DEO cADD-X
#8a .Screen/sprite DEO cADD-X
#8b .Screen/sprite DEO

cADD-Y INIT-X
#8c .Screen/sprite DEO cADD-X
#8d .Screen/sprite DEO cADD-X
#8e .Screen/sprite DEO cADD-X
#8f .Screen/sprite DEO

BRK

@new-square 017f 7b73 6343 7fff 007c 7c7c 7c7c 0000
try flipping the tiles!
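for example, a sketch of one flipped variant (any of the color combinations from the program above would work): drawing the tile in the background, flipped horizontally, with color combination 1:

#91 .Screen/sprite DEO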
the screen.tal example in the uxn repo consists of a table showing all possible (256!) combinations of high and low nibbles in the sprite byte.
screenshot of the screen.tal example, that shows a sprite colored and flipped in different ways.
compare them with everything we have said about the 'sprite' byte!
nasu is a tool by 100R, written in uxntal, that makes it easier to design and export 2bpp sprites.
besides using it to draw with colors 1, 2, 3 (and erasing to get color 0), you can use it to find your system colors, to see how your sprites will look with the different color modes (aka blending modes), and to assemble assets made of multiple sprites.
you can export and import chr files, which you can include in your code using a tool like hexdump.
i recommend you give it a try!
the last thing we'll cover today has to do with the assumptions varvara makes about its screen size, and some code strategies we can use to deal with them.
in short, there's no standard screen size!
by default, the screen of the varvara emulator has a size of 512x320 pixels (or 64x40 tiles).
however, and for example, the virtual computer also runs on the nintendo ds, with a resolution of 256x192 pixels (32x24 tiles), and on the teletype, with a resolution of 128x64 pixels (16x8 tiles).
as programmers, we are expected to decide what to do with these: our programs can adapt to the different screen sizes, they might have different modes depending on the screen size, and so on.
additionally, we can change the varvara screen size by writing to the .Screen/width and .Screen/height ports.
for example, the following code would change the screen to a 640x480 resolution:
#0280 .Screen/width DEO2 ( width of 640 )
#01e0 .Screen/height DEO2 ( height of 480 )
note that this would only work for instances of the varvara emulator where the screen size can actually be changed, e.g. because the virtual screen is a window.
it would be important to keep in mind the responsiveness aspects that are discussed below, for the cases where we can't change the screen size!
originally, the way of changing the screen size in uxnemu implied editing its source code.
if you downloaded the repository with the source code, you'll see that inside the src/ directory there's uxnemu.c, with a couple of lines that look like the following:
#define WIDTH 64 * 8
#define HEIGHT 40 * 8
those two numbers, 64 and 40, are the default screen size in tiles, as we mentioned above.
you can change those, save the file, and then re-run the build.sh script to have uxnemu working with this new resolution.
as you may recall from the screen device ports mentioned above, the screen allows us to read its width and height as shorts.
if we wanted to, for example, draw a pixel in the middle of the screen regardless of the screen size, we can translate to uxntal an expression like the following:
x = screenwidth / 2
y = screenheight / 2
for this, let's introduce the MUL and DIV instructions: they work like ADD and SUB, but for multiplication and division:
MUL ( a b -- a*b )
DIV ( a b -- a/b )
using DIV, our translated expression for the case of the x coordinate, could look like:
.Screen/width DEI2 ( get screen width into the stack )
#0002 DIV2 ( divide over 2 )
.Screen/x DEO2 ( take the result from the stack and output it to Screen/x )
if what we want is to divide over or multiply by powers of two (like in this case), we can also use the SFT instruction.
this instruction takes a number and a "shift value" that indicates the amount of bit positions to shift to the right, and/or to the left.
the low nibble of the shift value tells uxn how many positions to shift to the right, and the high nibble expresses how many bits to shift to the left.
in order to divide a number over 2, we'd need to shift its bits one space to the right.
for example, dividing 10 (in decimal) over 2 could be expressed as follows:
#0a #01 SFT ( result: 05 )
0a is 0000 1010 in binary, and 05 is 0000 0101 in binary: the bits from 0a were shifted one position to the right, and a zero was brought in as the leftmost bit.
to multiply by 2, we shift one space to the left:
#0a #10 SFT ( result: 14 in hexadecimal )
14 in hexadecimal (20 in decimal), is 0001 0100 in binary: the bits from 0a were shifted one position to the left, and a zero was brought in as the rightmost bit.
when in short mode, the number to shift is a short, but the shift value is still a byte.
for example, the following will divide the screen width over two, by using bitwise shifting:
.Screen/width DEI2 #01 SFT2
in order to keep illustrating the use of macros, we could define HALF and HALF2 macros, either using DIV or SFT.
using DIV:
%HALF { #02 DIV } ( number -- number/2 )
%HALF2 { #0002 DIV2 } ( number -- number/2 )
using SFT:
%HALF { #01 SFT } ( number -- number/2 )
%HALF2 { #01 SFT2 } ( number -- number/2 )
and use any of them to calculate the center:
.Screen/width DEI2 HALF2 .Screen/x DEO2
.Screen/height DEI2 HALF2 .Screen/y DEO2
note that the HALF2 macro using SFT2 would require one byte less than the one using DIV2. this may or may not be important depending on your priorities :)
as an exercise for you, i invite you to write the code that would achieve some or all of the following:
once you have it, i invite you to do the same, but using an image composed of multiple tiles (e.g. 2x2 tiles, 1x2 tiles, etc).
besides covering the basics of the screen device today, we discussed these new instructions:
* DEI (device in)
* INC (increment)
* MUL (multiply)
* DIV (divide)
* SFT (bitwise shift)
we also covered the short mode, which indicates to the cpu that it should operate with words that are 2 bytes long.
in uxn tutorial day 3 we start working with interactivity using the keyboard, and we cover in depth several uxntal instructions!
however, i invite you to take a break, and maybe keep exploring drawing in the uxn screen via code, before continuing!
if you enjoyed this tutorial and found it helpful, consider sharing it and giving it your support :)
uxn tutorial deprecated appendix a
most recent update on: 12022-01-06
text, images, and code are shared with the peer production license