Firstly, I'm new to Support Vector Machines, so apologies if I'm going about this problem the wrong way. I'm trying to implement a very simple SVM from scratch which uses the linear (identity) kernel to classify linearly separable data into one of two classes. As an example of the sort of data I'll be using, consider the plot below, seen in this document:

Using the points (1, 0), (3, 1) and (3, -1) as support vectors, we know that the following must hold when calculating the decision plane (screenshotted from the same document). Augmenting each support vector with a bias component of 1, so that s1 = (1, 0, 1), s2 = (3, 1, 1) and s3 = (3, -1, 1), the multipliers must satisfy:

α1 (s1·s1) + α2 (s2·s1) + α3 (s3·s1) = -1
α1 (s1·s2) + α2 (s2·s2) + α3 (s3·s2) = +1
α1 (s1·s3) + α2 (s2·s3) + α3 (s3·s3) = +1

Evaluating the dot products and rearranging gives Lagrange multipliers of -3.5, 0.75 and 0.75 respectively.
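Since the three support vectors give three linear equations in the three unknown multipliers, one concrete way to see the algebra is to solve that 3x3 system directly. Below is a minimal sketch in Java (the class and method names are my own, not from any library) that builds the Gram matrix of the bias-augmented support vectors and solves it with Gaussian elimination:

```java
public class LagrangeSolver {

    // Solve A x = b by Gaussian elimination with partial pivoting.
    static double[] solve(double[][] A, double[] b) {
        int n = b.length;
        // Work on an augmented copy so the caller's arrays are untouched.
        double[][] M = new double[n][n + 1];
        for (int i = 0; i < n; i++) {
            System.arraycopy(A[i], 0, M[i], 0, n);
            M[i][n] = b[i];
        }
        for (int col = 0; col < n; col++) {
            // Partial pivoting: pick the row with the largest entry in this column.
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
            double[] tmp = M[col]; M[col] = M[pivot]; M[pivot] = tmp;
            // Eliminate this column from all rows below the pivot.
            for (int r = col + 1; r < n; r++) {
                double f = M[r][col] / M[col][col];
                for (int c = col; c <= n; c++) M[r][c] -= f * M[col][c];
            }
        }
        // Back-substitution.
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = M[i][n];
            for (int c = i + 1; c < n; c++) s -= M[i][c] * x[c];
            x[i] = s / M[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        // Support vectors augmented with a bias component of 1.
        double[][] s = { {1, 0, 1}, {3, 1, 1}, {3, -1, 1} };
        double[] targets = { -1, 1, 1 };   // class labels
        int n = s.length;
        double[][] K = new double[n][n];   // Gram matrix of dot products
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int d = 0; d < s[i].length; d++)
                    K[i][j] += s[i][d] * s[j][d];
        double[] alpha = solve(K, targets);
        System.out.printf("%.2f %.2f %.2f%n", alpha[0], alpha[1], alpha[2]);
        // prints: -3.50 0.75 0.75
    }
}
```

This only works once the support vectors are already known; with all training points in play, the multipliers come from a constrained quadratic programme rather than a plain linear solve.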
I understand how this algebra works on paper, but I'm unsure of the best approach when it comes to implementation. So my question is as follows: how are an SVM's Lagrange multipliers calculated in practice? Is there an algorithm I'm missing which can determine these values for arbitrary linearly separable support vectors? Should I use a standard maths library to solve the linear equations (I'm implementing the SVM in Java)? Would such a library be too slow for large-scale learning? Note that this is a learning exercise, so I'm not just looking for a ready-made SVM library.
Any other advice would be much appreciated!
EDIT 1: LutzL made a good point that half the problem is actually determining which points are to be used as the support vectors, so to keep things simple assume for the purpose of this question that they have already been computed.
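Under that assumption, once the multipliers are known the rest is mechanical: the weight vector of the decision plane is the alpha-weighted sum of the bias-augmented support vectors, and classifying a point is a dot product plus a sign check. A short Java sketch (class and method names are my own) using the values from the example above:

```java
public class DecisionFunction {

    // Weight vector: sum of alpha_i times each bias-augmented support vector.
    static double[] weights(double[][] sv, double[] alpha) {
        double[] w = new double[sv[0].length];
        for (int i = 0; i < sv.length; i++)
            for (int d = 0; d < w.length; d++)
                w[d] += alpha[i] * sv[i][d];
        return w;
    }

    // The last component of w is the bias b; classify by the sign of w.x + b.
    static int classify(double[] w, double[] point) {
        double s = w[w.length - 1];
        for (int d = 0; d < point.length; d++) s += w[d] * point[d];
        return s >= 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        double[][] sv = { {1, 0, 1}, {3, 1, 1}, {3, -1, 1} };
        double[] alpha = { -3.5, 0.75, 0.75 };
        double[] w = weights(sv, alpha);               // -> (1, 0, -2), i.e. the plane x = 2
        System.out.println(classify(w, new double[]{4, 0}));  // prints 1
        System.out.println(classify(w, new double[]{0, 0}));  // prints -1
    }
}
```

For this example the weight vector comes out as (1, 0) with bias -2, so the decision boundary is the vertical line x = 2, which matches the geometry of the three support vectors.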