Overview
You really have a ton of options. I'll summarize my views on how instructions get translated, but I'll also provide a few of the options I had when I was getting started.
My Take
First off, it's easiest to think in terms of binary input. Let's say you have a 16-bit microprocessor. (That is, the instructions are encoded in 16 binary bits.) Consider an assembly operation SET that places a number into a register. For example:
SET(R1, 12) // Stores 12 into register 1
Let's arbitrarily choose (the choice really is arbitrary, even in standard architectures) to translate this SET instruction into the following 16-bit binary value I:
0001 0001 0000 1100
Basically, I just made up a convention. Here's how I break it down. I choose to let bits I[15:12] (with bit 15 as the most significant bit) represent a particular instruction, and I let the integer 1 correspond to the instruction SET. Given that convention, I can then say that for a SET instruction, bits I[11:8] identify the register. (Obviously that means I can only have 16 registers: 2^4 = 16.) Finally, I let bits I[7:0] carry the data I want to store in the given register. Let's look at SET(R1, 12) again in binary (I separate each group of four bits by spaces for clarity):
if I = 0001 0001 0000 1100
I[15:12] = 0001 (binary) = 1 (decimal) = SET instruction
I[11:8] = 0001 (binary) = 1 (decimal) = R1 since I[15:12] correspond to SET.
I[7:0] = 0000 1100 (8-bit binary) = 12 (decimal) = value to store in R1.
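To make the field layout concrete, here is a minimal Python sketch of the made-up encoding above. The opcode value, field widths, and function names are all my own illustration of the convention just described, not any standard ISA:

```python
# Field layout (an arbitrary convention, as described above):
#   I[15:12] opcode, I[11:8] register, I[7:0] immediate value.

OP_SET = 0x1  # opcode 1 = SET (our arbitrary choice)

def encode_set(reg, value):
    """Pack SET(reg, value) into a 16-bit word."""
    assert 0 <= reg < 16 and 0 <= value < 256
    return (OP_SET << 12) | (reg << 8) | value

def decode(instr):
    """Split a 16-bit word back into its three fields."""
    opcode = (instr >> 12) & 0xF
    reg    = (instr >> 8) & 0xF
    value  = instr & 0xFF
    return opcode, reg, value

word = encode_set(1, 12)
print(f"{word:016b}")   # 0001000100001100
print(decode(word))     # (1, 1, 12)
```

The shifts and masks here are exactly what the bit selectors in hardware do: each field is just a fixed slice of the 16-bit word.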
As you can see, everything else within a microprocessor becomes very simple. Let's say you store 4 lines of instructions in RAM. You have a counter attached to a clock; the counter steps through the lines in RAM. When the clock "ticks", a new instruction comes out of the RAM. (That is, the next instruction comes out of RAM - though "next" becomes less straightforward once you introduce JUMP statements.) The output of RAM goes through multiple bit selectors. You select bits I[15:12] and send them to a Control Unit (CLU), which determines which instruction is being executed - i.e. SET, JUMP, etc. Then, depending on which instruction is found, you can enable registers to be written, registers to be added, or whatever else you choose to include in your architecture.
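That counter-plus-RAM-plus-control-unit loop can be sketched in a few lines of Python. This is a toy model, not real hardware: the opcode value (1 = SET), the 16-register file, and the function name are all my own illustration, and there are no jumps:

```python
OP_SET = 0x1

def run(ram, steps):
    regs = [0] * 16           # 16 registers, as the 4-bit field allows
    pc = 0                    # the "counter attached to a clock"
    for _ in range(steps):    # each iteration is one clock tick
        instr = ram[pc]       # RAM output for the current line
        opcode = (instr >> 12) & 0xF   # bits I[15:12] go to the control unit
        if opcode == OP_SET:
            reg = (instr >> 8) & 0xF
            regs[reg] = instr & 0xFF   # control unit enables a register write
        pc += 1               # next instruction (no jumps in this sketch)
    return regs

# Two SET instructions sitting in "RAM":
program = [0x110C, 0x12FF]    # SET(R1, 12); SET(R2, 255)
print(run(program, 2)[1:3])   # [12, 255]
```

Adding a new instruction to the architecture is just another `elif` branch here - which mirrors adding another control signal in the real circuit.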
Now thankfully, the arbitrary conventions for binary values of machine instructions have already been chosen for you (if you want to follow them). This is exactly what an Instruction Set Architecture (ISA) defines - for example MIPS, HERA, etc. Just for clarity: the actual implementation you create when you design the circuitry and whatnot is called the micro-architecture.
Learning Resources
Texts
The Harris and Harris book is one of the most famous texts for undergraduate computer architecture courses. It is a very simple and useful text. The entire thing is available as a free PDF hosted by some random school. (Download it quick!) I found it super helpful. It goes through basic circuits and topics in discrete mathematics, and by the time you get to chapter 7, building a microprocessor is a piece of cake. It took me ~3 days to complete a 16-bit microprocessor after having read that book. (Granted, I had a background in discrete math, but that's not super important.)
Another super helpful and very standard book is the Hennessy and Patterson book, also available in PDF form from some random school. (Download it quick!) The Harris and Harris book is a simplification based on this book. This book goes into a lot more detail.
Open Source Microprocessors
There are a ton of open source microprocessors out there. It was super helpful for me to be able to reference them when I was building my first microprocessor. The ones with Logisim files are especially nice to play with because you can view them graphically and click around and mess with them. Here are a few of my favorite sites and specific microprocessors:
4-bit:
16-bit:
Open Cores - I don't really get this site. I applied for an account but they haven't really gotten back to me... Not a big fan, but I imagine if you have an account it must be awesome.
Tools
Logisim
As mentioned previously, Logisim is a great resource. The layout is entirely graphical, and you can very easily see what is going on bit-wise at any point in time by selecting a wire. It's written in Java, so it should run on whatever machine you want it to. It's also an interesting historical perspective on graphical computer programming languages.
Simulation
In Logisim, you can simulate actual software being run. If you have a compiler that targets the ISA you are implementing, then you can simply load the binary or hex file into Logisim RAM and run the program. (If you don't have a compiler, it's still possible - and a great exercise - to write a four-line assembly program and translate it by hand yourself.) Simulation is by far the coolest and most gratifying part of the entire process! :D Logisim also provides a CLI for doing this more programmatically.
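Here's a sketch of doing that hand translation programmatically: assembling a few lines of the made-up SET instruction from earlier into a Logisim RAM image. Logisim memory images are plain text files beginning with a `v2.0 raw` header followed by hex words; the helper name and the example program are my own:

```python
def assemble_set(reg, value):
    """Assemble SET(reg, value) under the made-up encoding (opcode 1 = SET)."""
    return (0x1 << 12) | (reg << 8) | value

# A four-line program, translated "by hand":
program = [
    assemble_set(1, 12),    # SET(R1, 12)
    assemble_set(2, 34),    # SET(R2, 34)
    assemble_set(3, 56),    # SET(R3, 56)
    assemble_set(4, 78),    # SET(R4, 78)
]

# Logisim's memory image format: a "v2.0 raw" header, then hex words.
image = "v2.0 raw\n" + " ".join(f"{w:04x}" for w in program) + "\n"
print(image)
# v2.0 raw
# 110c 1222 1338 144e
```

Save that string to a file and load it into a RAM component in Logisim (right-click the RAM, "Load Image..."), and your circuit is running a program you assembled yourself.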
HDL
The more modern way of designing micro-architectures is through the use of a Hardware Description Language (HDL). The most famous examples are Verilog and VHDL. These are often (confusingly!) modeled after sequential languages like Ada and C/C++. Still, this is by far the preferred design method because verification of a model/design is much better defined. In my opinion, it's also much easier to reason about a textual representation than a graphical one. Just as programmers can organize code poorly, hardware developers can organize the graphical layout of a micro-architecture poorly. (Although that argument can certainly be applied to HDL too.) In general, an HDL makes it easier to document a design textually and to build it more modularly.
If you're interested in learning this, there are tons of undergraduate hardware courses with open curricula and lab work that teach using an HDL to describe circuits and micro-architectures. You can find these simply by googling. You can also try to learn HDL through the next step - tools that convert C/C++ code to HDL. If you're interested, Icarus Verilog is a good open source compiler and simulator for Verilog.
Simulation
Using a tool like Icarus Verilog, you can also easily simulate a real program being run from a binary. You simply wrap your microprocessor in another Verilog module (a testbench) that loads a file or a string into RAM through some bus. Piece of cake! :D
HLS
In recent years, High Level Synthesis (HLS) has also gained a significant foothold in the market. HLS is the conversion of C/C++ code into actual circuits. This is pretty incredible, because existing C/C++ can often (but not always) be converted into hardware.
(I say not always because not all C/C++ code is synthesizable. In a circuit, all of the logic exists and operates at once; in software, we think of code as executing sequentially. That sequential mindset is a terrible one to be in if you are trying to design hardware!!)
But as you might guess, this ability is incredibly useful for accelerating certain kinds of code in hardware, such as matrix operations or math in general. It's relevant to you because you can use HLS tools to see how, say, a C implementation of a dot product might be translated into HDL. I personally feel this is a great way to learn.
Simulation
HLS simulation is just as easy as simulating HDL, because the high-level code is simply converted into HDL first. Then you can simulate it and run tests on it exactly as I explained above.