From what is said in the paper, for this statistical machine learning (Bayesian) approach, you do not have a training and a test data set.
Your X matrix corresponds to the set of input masks on your SLM (each column represents one input mask) and the Y matrix represents the corresponding output intensity patterns.
After choosing some parameters (such as the number and the type of "randomness" of the input masks, and the number of macropixels on your SLM and CCD, which set the size of the transmission matrix you want to reconstruct), you send the input masks to the SLM and measure the corresponding output intensity patterns. From these, you construct the X and Y matrices that you feed to the algorithm, which returns the transmission matrix.
In your paper above, Learning and Avoiding Disorder in Multimode Fibers, my understanding is that you measured the TM in the pixel basis. It can then be converted to the mode basis by multiplying with the corresponding theoretically/analytically estimated mode basis matrices (input and output). The resulting TM in the mode basis is not diagonal, although it should be due to the short and straight MMF. The deep learning framework then converts the theoretically estimated mode basis matrices into realistic ones that account for the misalignment in the system. Once these new basis matrices are used, the TM in the mode basis becomes diagonal.
Yes, that is it.
Please correct me if I am wrong, but in that case why don't you simply compute the SVD of the pixel-basis TM and use the left and right singular vectors as the realistic modes that include the misalignment, instead of deep learning? The deformation-free principal modes are calculated that way, aren't they?
While it would work in some situations, there are a few reasons why it is not always a good solution.
First, when the fiber is short and straight you assume that the matrix is close to diagonal, but it is usually not exactly diagonal; it may even not be diagonal at all, for reasons you did not suspect beforehand (like defects introduced during fabrication). So the SVD will force the TM to be diagonal even when it is not, which would lead to non-physical results.
Secondly, what happens when you want to study fibers for which the TM is not diagonal (long or disordered fibers)? In that case, the SVD will not work at all, as it can only find a diagonal TM. Our approach is based only on an energy conservation criterion and still works (we demonstrated this in the supplementary materials).
Then, even in a perfect fiber, you can still have coupling between degenerate modes; the SVD may not be able to discriminate the modes, leading to inaccurate behavior.
Another point, which you already raised, is that our approach allows decoupling the effects of aberrations, which are well described by Zernike polynomials, from the effects due to propagation in the fiber.
I was told by a professor from Yale
Is it Hui Cao? Doug Stone?
Not sure I understand your point about interference between pairs of FFT components (and by the way, modes do not correspond to FFT components).
But you can quite easily try numerically to see if it works.
We have a simple simulation code in Python in which you can simulate mode matrices and transmission matrices:
I was told that if I do the SVD directly on the intensities obtained from different inputs (basically not applying the 4-phase or off-axis technique to extract the complex field pattern, but just using the intensity vectors as columns of the "intensity TM"), the number of SVs will be roughly twice the number of modes.
Never heard of that. It may be true though, but I do not know any paper discussing that point.
My question is: after obtaining the TM, should I apply SVD on it and try to reduce it to a leaner version?
It is not a bad idea, and something that I sometimes do.
You can do the SVD, keep only the singular values/vectors corresponding to the modes, and come back to the pixel basis. To know where to cut, you look at the singular values and you should see a drop.
However, in our experiments, I noticed that if we use enough random input vectors, this step was not necessary as we already efficiently mitigated the effect of noise. Of course, that may not be true for another experiment so it is worth trying anyway.
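To make this filtering step concrete, here is a minimal sketch in Python. The sizes and the number of modes are made-up numbers, and a random matrix stands in for your measured TM; in practice you locate the cut by looking for the drop in the singular values.

```python
import numpy as np

# Hypothetical example sizes: TM measured on 256 output x 128 input pixels,
# for a fiber supporting 30 propagating modes (made-up numbers)
N_out, N_in, N_modes = 256, 128, 30
TM = np.random.randn(N_out, N_in) + 1j * np.random.randn(N_out, N_in)

# Singular value decomposition of the measured TM
U, s, Vh = np.linalg.svd(TM, full_matrices=False)

# Keep only the singular values/vectors above the drop
# (here we assume the drop was located at index N_modes)
# and come back to the pixel basis
TM_filtered = U[:, :N_modes] @ np.diag(s[:N_modes]) @ Vh[:N_modes, :]
```

The filtered matrix has the same shape as the original one but a rank equal to the number of kept modes.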
Should the TM for a multimode fibre ideally have a rank of N (the number of macro-pixels on the SLM)? I think the basis vectors of both the left and right singular matrices are the same fibre modes, so the TM would be a square matrix of rank N.
The rank of the matrix cannot be higher than the number of propagating modes. Physically, light can only propagate through those modes. If the rank of the TM is higher, that means that there is more orthogonal channels to transmit information than the number of modes, which is not possible.
However, for the rank of the matrix to be equal to the number of modes, which is the case if you do things correctly, the number of input pixels should be high enough, i.e. much higher than the number of modes.
This is because, if you do not have good spatial resolution, some modes, typically the highest-order ones, will not be sampled correctly and their contributions will be reduced or lost.
Let's take one question at a time.
Is it absolutely critical to use an orthogonal basis in the input?
No, it's not!
If you do, the advantage is that you directly measure the TM: if X is the matrix representing the stack of input vectors, and X is unitary (i.e. the input vectors form an orthogonal basis with the same norm), then the stack Y of outputs directly gives an estimation of the TM in this basis.
If your inputs are random, then X is a random matrix. The first consequence is that, because the inputs are not orthogonal, you need more of them. So if you have N input pixels, you need enough input vectors so that X is of rank N; the more the better, as it mitigates the effect of noise (typically, 5 x N is a good choice).
More importantly, the stack of outputs Y does not directly give you an estimation of the TM! You need to convert the data into an orthogonal basis, typically the pixel basis. To do so, you use X⁺, the pseudo-inverse of X:
TM = Y x X⁺
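As a sketch of this step (with made-up sizes, and a simulated TM standing in for the actual measurement):

```python
import numpy as np

# Hypothetical sizes: N input pixels, 5*N random input vectors (see above)
N, N_out = 64, 100
n_inputs = 5 * N

# Ground-truth TM, just for this sketch
TM_true = np.random.randn(N_out, N) + 1j * np.random.randn(N_out, N)

# Random (non-orthogonal) input vectors stacked as columns of X,
# here random phase masks
X = np.exp(2j * np.pi * np.random.rand(N, n_inputs))

# Measured output fields
Y = TM_true @ X

# Retrieve the TM in the pixel basis using the pseudo-inverse of X
TM_est = Y @ np.linalg.pinv(X)
```

Since X has full row rank, X times its pseudo-inverse is the identity, and the estimation matches the true TM up to numerical precision (and, in a real experiment, up to the measurement noise).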
We have detailed this step in our last paper:
It is in the Supplementary section S1.4, we will update the version soon and it will be in Appendix B of the final paper.
Sorry, I did not have time to post my slides before, I just uploaded them here.
If you want to know more about the aberration correction, I wrote a small highlight about our related article here.
You can also find the defense of Maxime Matthès who did this work during his PhD here.
The paper will be published next Monday in PRX.
Just use a random complex Gaussian matrix
TM = np.random.randn(N_out,N_in)+1j*np.random.randn(N_out,N_in)
As I no longer use Matlab, I cannot help you with this software.
As for Python code, you need to understand that there is no code that you can simply copy and paste and would work out of the box, as part of the code depends on your hardware, in this case, mostly the camera.
However, I wrote some modules that can be useful for part of the code,
import numpy as np

N_macropixels = ...     # number of macro-pixels on your SLM
N_phases_to_test = ...  # number of phase values you want to test on each macro-pixel
phases_to_test = np.linspace(0, 2*np.pi, N_phases_to_test, endpoint=False)
phase_vector = [0]*N_macropixels  # initial guess, phase = 0 on each macro-pixel

for i_pixel in range(N_macropixels):
    best_value = None
    best_phase = 0
    for phase in phases_to_test:
        # try a new phase value for the i_pixel-th macro-pixel
        phase_vector[i_pixel] = phase
        # display the new SLM mask
        display_phase_vector(phase_vector)
        # get the value of the target intensity
        value = get_target_intensity()
        # if the value we get is the best so far, store it and the corresponding phase
        if best_value is None or best_value < value:
            best_value = value
            best_phase = phase
    # after trying all the phases for one macro-pixel, keep the one giving the best value
    phase_vector[i_pixel] = best_phase
display_phase_vector() is a function you have to write to display the image on the SLM. You can use SLMpy and Layout for that; the idea is that you divide your SLM into macro-pixels and assign to each macro-pixel the phase corresponding to its entry in phase_vector.
get_target_intensity() is a function you have to write to get the intensity at the target location. The actual code depends on your camera, but basically you want to grab an image, sum the intensity values around a given target location, and return this value.
Please read the information on the Github page.
The minimal example I provided actually loads a two-image sequence.
# Binary amplitude image (0 or 1)
bitDepth = 1
imgBlack = np.zeros([DMD.nSizeY, DMD.nSizeX])
imgWhite = np.ones([DMD.nSizeY, DMD.nSizeX])*(2**8-1)
imgSeq = np.concatenate([imgBlack.ravel(), imgWhite.ravel()])

# Allocate the onboard memory for the image sequence
DMD.SeqAlloc(nbImg = 2, bitDepth = bitDepth)
# Send the image sequence as a 1D list/array/numpy array
DMD.SeqPut(imgData = imgSeq)
And I also wonder whether it is the right way to write code like I showed above when I want to display a single image on the DMD for a single loop iteration.
No, it is not; read my answer. You need to load your sequence once before displaying the images continuously.
I am not sure what you are trying to do, but it does not seem to be an efficient way to display sequences of images:
Instead of loading your entire sequence once, which is only two images, you load a sequence of one image at every single iteration of your loop.
The typical way is to load the entire sequence, and then run it.
DMD.SeqAlloc(nbImg = 2, bitDepth = bitDepth)
DMD.SeqPut(imgData = imgs.ravel())
DMD.Run(loop = True)
Your sequence of two images will then run continuously. In order to grab your images synchronously, you need to trigger your camera with your DMD (use the cable that comes with the DMD; look at the documentation). The trigger signal can be customized using the arguments of SeqTiming (namely
That would be the most efficient way to use your system. Keep in mind that the DMD frame rate can be very fast, so if you use, for instance, a 10 kHz refresh rate, most likely your camera will not follow the pace. Adjust
There is no quick solution to your problem. It is not that easy to get a transmission matrix of a medium with given physical properties.
Most likely, you will have to perform simulations, for instance using Comsol, or using other types of approaches, such as the Recursive Green's Function method.
Without simulations, you can delve into random matrix theory and manually tweak the properties of your matrix to match some physical properties (see https://arxiv.org/abs/cond-mat/9612179 for instance).
The first-order approach is to simply take a complex random matrix and match the total transmission (by normalizing the matrix) to the one you expect from a medium with the given properties. But you will miss all the correlation effects.
Again, there is no short answer; the fastest solution is probably to perform simulations with existing software such as Comsol.
First, it is important to understand that ALP4lib is simply a wrapper around the .dll (C/C++ API) provided by Vialux. So errors can come from the Python part, which I wrote, or from the .dll, the black box from Vialux.
The requested memory is not available (full?).
This one actually comes from the .dll. I checked the API description pdf; it simply says that "the memory requested is not available".
This could happen if you allocate too much memory without freeing some. If it happens the first time you allocate memory, something seems wrong.
My best bet would be to unplug/plug back the device and try again.
What is your bit depth? What is the rest of the code? Did you manage to make the DMD work another way (with the Vialux codes)?
How can we use K^\dag instead of K^-1? I understood that we use K^\dag to construct the TM in the presence of noise, but I cannot see why K^\dag operates like K^-1 and performs the same phase conjugation, nor why KK^\dag(O_foc) is called the time reversal operator.
There are two questions here:
I have acquired the TM and focused light through a scattering medium by using E_in (input electric field) = K^\dag * E_out (desired pattern). I can focus on one point and on two points arranged horizontally, but I cannot focus on two points arranged vertically, on five points, or on a pattern such as an F shape. If you have tried to focus on multiple spots vertically, did you see a similar failure to focus?
I am not sure what you are asking. In a random scattering medium, there is no difference between vertical and horizontal, so it should be the same. But, because phase conjugation does not compensate for the losses (unlike inversion, it does not amplify the weak signals), if you try to focus on two points that are not statistically equivalent, you may focus only on the one where you have more signal. For instance, if the thickness of your medium is not homogeneous and you try to focus on two points where the thickness is different, you will not have the same brightness. At both points you put the contributions back in phase, but, because the average transmission is lower where the medium is thicker, the components you put back in phase are of lower amplitude. So you have one weak spot and one bright spot. That may be what happened.
Another thing: because phase conjugation is not "perfect" like inversion, it also adds some noise elsewhere than at the focus point, so if you increase the number of points you want to focus on, you decrease the signal-to-noise ratio. At some point, you will not see anything.
My problem turned out to be with the incident angle. With an angle of less than 10 degrees the issue was solved.
I did not think about that one. The data-sheet usually states to keep the incident angle below a given value, typically around 15 degrees.
Before, I noticed an extra brighter spot jump onto the position of the 1st order at phase values above Pi and increase the power at the spot (filtered through a pinhole).
Not sure I understand that part.
Coming back to your initial issue:
I have power fluctuating and increasing all the way to nearly 2Pi (1000 value).
I guess you got rid of the fluctuations, but I would expect the actual correct Pi value to be around the maximum pixel value, so around 1000 seems legit, right?
That makes sense: near the maximum wavelength at which the SLM is supposed to work, you approach the limit of the phase modulation.
I do not know that brand at all, I will just extrapolate from what I already encountered, probably not everything is relevant in your case.
I made up a simple line grating with a pitch of 60 um (6 pixels) that diffracts the zero order to the 1st order when the grating is on. I then changed the level of phase modulation on all lines (increasing the value in the csv file) and observed increasing power at the 1st-order spot.
That is not a bad idea; however, I would do it a bit differently. By using a grating, you end up with a spatially square signal, which has many harmonic components in the Fourier domain, hence multiple diffraction orders.
As you have a phase SLM, I would use a phase ramp instead. Over N pixels, linearly increase the pixel value between 0 and V_max, and then increase V_max gradually. Repeat this ramp to make it periodic, of course.
If V_max corresponds to 2pi, you have a tilted plane wave with, ideally (if everything were perfect and the pixels infinitely small), only one spot (the +1 order) in the Fourier plane.
Well, you will always have intensity in the 0-order due to the limited diffraction efficiency and filling fraction, but the other orders should be very small.
That being said, your experiment should work too; it is just not as easy (visually) to find a maximum of intensity in the +1 order as to find a minimum elsewhere.
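A quick numerical sketch of the ideal case (made-up sizes; a 1D ramp for simplicity):

```python
import numpy as np

# Hypothetical parameters
N_slm = 256        # number of pixels along one line of the SLM
period = 16        # ramp period in pixels
V_max = 2 * np.pi  # value to be increased gradually in the real experiment

# Periodic linear phase ramp between 0 and V_max
ramp = V_max * (np.arange(N_slm) % period) / period
field = np.exp(1j * ramp)

# Normalized intensity in the Fourier plane (1D sketch)
spectrum = np.abs(np.fft.fft(field))**2 / N_slm**2
```

When V_max is exactly 2pi, essentially all the energy sits in the +1 order (the frequency bin N_slm/period); when V_max is smaller, energy leaks back into the 0th and other orders, which is what you monitor in the calibration.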
So, what can go wrong?
First, the calibration is usually given for one wavelength, meaning that increasing the pixel value linearly increases the phase proportionally at this wavelength; for another wavelength, the relation is not linear anymore. That is probably not the cause, but it is worth keeping in mind.
Another thing is that SLMs do not have a flat response, meaning that if you send a constant value, you do not get a plane wave. Companies usually provide a correction mask, which also depends on the wavelength.
Moreover, the value for pi also depends on the wavelength. The phase shift is proportional to 2pi/lambda * delta_n, with delta_n the birefringence induced by a given voltage. So for the same voltage (pixel value), you have much less phase modulation at 1550 nm than in the visible. So if the pi value is 500 at 775 nm, it will be 1000 at 1550 nm.
So most likely your experiment worked, and you found the pi value to be around 1000.
I may be wrong, but that would be my first guess.
I am confused about the orientation of the SLM. Should we block the zero order of the SLM and work with the first order? If so, I will need to put an aperture in the Fourier plane that allows just the +1 order to pass through. Is that the right way to do it?
Well, that is entirely up to you and depends on your needs.
On a typical SLM, the filling fraction is about 90-95%. If you work at the 0th order, meaning that you do not do anything special, and if you have a smooth phase mask to display (high spatial frequencies can be challenging due to pixel-to-pixel coupling), you end up with 5-10% of the light unmodulated (light for which the SLM acts as a simple mirror).
If it is OK for your experiment, no need to go further.
Then, if you want to remove that unmodulated light, you go to the first order. To do so, you add a phase ramp to your mask. The modulated light is shifted in the Fourier plane, where you put an aperture to reject the rest. The unmodulated light sees the SLM as a mirror, so it stays in the 0th order and is filtered out.
Important note: when I say +1 order, it does NOT refer to the diffraction due to the pixel pitch (which makes the pixel array act as a grating). It is the diffraction introduced by the phase ramp you add to the modulation.
What do you mean by your laser being stable? It can be stable in intensity but still have phase fluctuations or frequency drift. If the drift in frequency is larger than the spectral correlation width of your medium, the speckle will change (a scattering medium gives rise to different speckles for frequencies separated by more than the spectral correlation width).
Another factor is the interferometric stability: the arms should be of the same length, isolated from vibrations and temperature changes, and shielded from airflow (so inside a box) to prevent phase fluctuations between the two arms.
The fact that the speckle does not change visually is irrelevant. I guess you measured the decorrelation as a function of time; what does it look like? If you did not do it, you should start there to quantify the decorrelation.
Good to hear you found a solution!
I would be happy to hear about it.
It seems that there is a lot of information in your question; let me try to see if I got everything right.
You use a two-wavelength system. If I understand correctly, you want to measure the amplitude and phase at both wavelengths to lift the phase ambiguity. If so, we can treat the two wavelengths independently and simplify the problem by considering only one.
Then, your issue is that you have spherical references instead of off-axis plane waves, right? So if you send a plane wave as your signal, you should see concentric ring-shaped fringes, with the spacing between them decreasing as you go away from the center.
But I do not understand what you say about the numerical aperture, can you clarify?
I do not know much about telecentric systems, I trust you when you say that you need spherical waves.
In the code I provided, we select the information that carries the phase information by taking only the -1 (or +1 would work too) diffraction order. It is easy as it corresponds to selecting a window in the Fourier plane around the spatial frequency of the spatial beating between the two waves.
Now, in the case of a spherical wave, it is not that easy, is that the core of your problem?
There is a technique called point diffraction interferometry that consists in placing a neutral density filter with a pinhole in front of the wavefront to analyse. You observe fringes corresponding to the interference between the spherical wave created by the pinhole and the attenuated wavefront you want to analyse. If the original wavefront is approximately collimated, you get circular fringes similar to what you describe.
I tried quickly to look in articles about this technique to look for a method for a quantitative phase reconstruction but I did not find anything yet.
However, I am still interested in finding a solution to this problem!
I have no time right now, but next week I can try to make some simulations to see if there is a not-too-complicated solution. Maybe using a transformation to go back to a simpler system.
I would like to know more about your system but I do not have access to the article of which you gave the link. Can you send it to me by mail?
I was expecting both LP modes and LG modes to have circular geometries and similar profiles.
That is where your problem is. You forgot that modes can be degenerate.
In a nutshell, if a mode is not degenerate, it will necessarily share the same symmetry as the system. But when you have a group of degenerate modes, the group as a whole should share the symmetry, not the modes individually. In the case of a multimode fiber, you can easily check that if you rotate the modes in a given subspace (i.e. corresponding to the same propagation constant) by any arbitrary angle, they still form a basis of that subspace. However, each LP mode associated with a degenerate propagation constant is not circularly symmetric.
For instance, the two LP11 modes oriented horizontally and vertically are orthogonal modes with the same propagation constant. The same profiles oriented at +45 and -45 degrees are too.
If you play with the phase, you can find linear combinations of degenerate modes that look totally different from the ones you started with.
In the case of graded-index fibers, it is interesting to notice that the size of the groups of degenerate modes increases when the propagation constant decreases, so you can find more combinations of modes.
I am not familiar with LG modes for MMFs, but it seems that they are a mode basis for graded-index fibers but not for step-index fibers.
To check whether both representations are equivalent, simply project each LG mode onto all the LP modes. If everything is right, for each LG mode, the only non-zero values should correspond to LP modes with the same propagation constant. This would show that the two representations are both valid.
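In code, this projection test could look like the following, assuming you store each mode as a flattened, normalized column of a numpy array (the helper name is mine):

```python
import numpy as np

def projection_matrix(LG, LP):
    """Project each LG mode (a column of LG) onto all the LP modes (columns of LP).

    Both arrays have shape (N_pixels, N_modes), with complex mode profiles
    flattened into columns. Entry [i, j] is the overlap <LP_i | LG_j>.
    """
    return LP.conj().T @ LG
```

For each column of the result (one LG mode), the non-zero entries should only sit on rows corresponding to LP modes with the same propagation constant, i.e. the projection matrix should be block diagonal with one block per degenerate group.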
In short, no.
The total scattering matrix, composed of the transmission matrices and reflection matrices from both sides, is unitary if the scattering medium is non-absorbing, but the transmission matrix itself, even the complete one, is not.
If the transmission matrix were unitary, that would mean that all the energy you send from one side goes to the other one. This is not possible, as a significant part of the light is reflected back.
Yes, the correlation coefficient is one standard measure of the similarity between two images.
What do you mean by error? In what context? What data do you have?
If you have two speckle images and you want to know how close they are, you can just calculate the quadratic error:
which is 0 if the two speckles are the same,
Or calculate the correlation between the two images
which is 100% if the two speckles are the same.
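In Python, assuming I1 and I2 are your two speckle images as numpy arrays, these two measures can be sketched as follows (the exact normalization of the quadratic error is a choice):

```python
import numpy as np

def quadratic_error(I1, I2):
    """Normalized quadratic error; 0 if the two speckles are identical."""
    return np.sum((I1 - I2)**2) / np.sum(I1**2)

def correlation(I1, I2):
    """Pearson correlation coefficient; 1 (i.e. 100%) if the speckles are identical."""
    a = I1 - I1.mean()
    b = I2 - I2.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
```

Note that the correlation is insensitive to a global scaling or offset of the intensity, which is often what you want when comparing camera images taken at different exposures.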
I am not sure I understand your normalization.
You estimated that mean(abs(k_ij)) should be smaller than gamma, but why try to set mean(abs(k_ij)) = gamma?
First, by setting mean(mean(abs(k_ij))) = gamma, it is still possible that the output energy for one column is bigger than N, which would not be possible without gain.
(Losslessness seems to imply mean(abs(k_ij)) = gamma, but I do not think the converse is true.)
Moreover, as you say, it is usually much lower.
When you reach the multiple scattering regime, (i.e. when you can apply the random matrix theory), a significant part of the light (> 50%) is backscattered and thus not transmitted.
Basically, the way I usually normalize the matrix is by setting the mean transmission T = mean(abs(K)**2) = mean(s**2)/N to the value I want.
For example, to simulate a medium with a transmission of 10%, I would use:
K = K/normFactor
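For completeness, here is how I would compute normFactor in this example (a sketch with made-up sizes; the target transmission of 10% sets the scaling):

```python
import numpy as np

# Hypothetical sizes
N_out, N_in = 128, 128
K = np.random.randn(N_out, N_in) + 1j * np.random.randn(N_out, N_in)

# Target mean transmission T = mean(abs(K)**2)
T_target = 0.10

# Scale so that mean(abs(K/normFactor)**2) equals T_target
normFactor = np.sqrt(np.mean(np.abs(K)**2) / T_target)
K = K / normFactor
```

After this step, the mean transmission of the matrix is exactly the target value.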
I am not an expert in Random Matrix Theory but here are my remarks:
I think that there is a mistake in your first normalization.
You say that you normalized to the biggest absolute value, as you assume no amplification possible.
First of all, the maximum possible energy transmission is the maximum singular value, not the maximum matrix element squared.
If you normalize using the maximum singular value, you end up with a maximum transmission equal to one.
In general, this is not the case; it is usually much lower.
The RMT is not a microscopic theory, in the sense that the results do not come from a theory that takes into account the physics of the system.
The [0,2] range has no physical meaning; we just want to compare to a known distribution.
The normalization of the singular values brings them into this range.
The normalization sNorm1 = s/sqrt(m) as in Edelman & Wang seems correct for an initial distribution of the elements of K with a standard deviation equal to one.
Then, if you take
normFactor = std(K)
you end up with the expected [0,2] range.
The normalization from my paper is actually wrong, it should be
sNorm2 = s*sqrt(m)/sqrt(sum(s**2))
I corrected it in my thesis but the typo stayed in the paper.
With this normalization, the standard deviation of the initial distribution of the elements of K does not have to be one, so there is no need for the first normalization (with normFactor).
Please note that the Marcenko-Pastur law (or the quarter-circle law in this particular case) is not specific to random scattering and is not even true in the general case.
It is true only when the input/output are uncorrelated and when only a small fraction of the input/output channels are recorded.
If you want to simulate the total transmission matrix (all input/output channels) you would end up with the bimodal distribution.
However, in practical applications, we are usually very close to the Marcenko-Pastur distribution, as the incomplete channel control and the optical losses kill the bimodal distribution.
(Transition between the two models is treated here, but it is a bit hardcore: journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.063901)
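You can check the corrected normalization numerically (a sketch; for a square uncorrelated Gaussian matrix, a histogram of sNorm2 should follow the quarter-circle law rho(s) = sqrt(4 - s**2)/pi on [0, 2]):

```python
import numpy as np

# Square matrix with uncorrelated Gaussian entries (made-up size)
n = m = 512
K = np.random.randn(n, m) + 1j * np.random.randn(n, m)  # element std need not be one

s = np.linalg.svd(K, compute_uv=False)

# Corrected normalization: works whatever the element variance of K
sNorm2 = s * np.sqrt(m) / np.sqrt(np.sum(s**2))
```

By construction, the sum of sNorm2**2 equals m, and the largest normalized singular value sits close to the edge of the quarter-circle support at 2.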
Thanks for the info, I will test it and try to write something about it on the website.
The change of basis you need to perform depends on the basis you use for the measurements, so only you can know.
If you use a Hadamard basis, then you need to multiply by the inverse of the Hadamard matrix (which is conveniently a Hadamard matrix) to retrieve the matrix in the pixel basis.
What you measure is T' = T \times Ha with Ha the Hadamard matrix and T the transmission matrix.
So you need the operation T = T' \times (Ha)^-1 to get T.
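A sketch of this change of basis (using the Sylvester construction for Ha; the sizes are made up). Note that Ha @ Ha.T = N * I, so (Ha)^-1 = Ha.T / N and no explicit matrix inversion is needed:

```python
import numpy as np

N = 16  # hypothetical number of SLM macro-pixels (a power of 2)

# Sylvester construction of the Hadamard matrix
Ha = np.array([[1]])
while Ha.shape[0] < N:
    Ha = np.block([[Ha, Ha], [Ha, -Ha]])

# Hypothetical ground-truth TM (8 output pixels), just for this sketch
T = np.random.randn(8, N) + 1j * np.random.randn(8, N)

# What you measure with Hadamard inputs: T' = T @ Ha
T_prime = T @ Ha

# Back to the pixel basis: T = T' @ (Ha)^-1 = T' @ Ha.T / N
T_pixel = T_prime @ Ha.T / N
```

Within numerical precision, T_pixel coincides with the original matrix in the pixel basis.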
That depends on what kind of transmission matrix you want to simulate and what for.
The easiest matrix is a Gaussian complex random matrix:
T = randn(n,m)+j*randn(n,m)
This matrix does not conserve energy, nor does it reproduce the effects of mesoscopic correlations.
However, it is valid in most cases: if you consider a subpart of a full transmission matrix (which is usually the case when only one polarization is controlled/measured and with a limited numerical aperture) and when the discretization of the field is such that the elements correspond to different speckle grains (and thus are considered uncorrelated).
If you want to simulate the full transmission matrix of a slab or a disordered wire, then the singular values of the matrix have to follow the bimodal distribution. You would then have to generate a diagonal matrix with values that follow this distribution and multiply it on both sides by random unitary matrices.
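A sketch of that recipe. The way I sample the bimodal distribution here, tau = 1/cosh^2(x) with x uniform, is one standard parametrization; x_max is a made-up value that sets the mean transmission:

```python
import numpy as np

def random_unitary(n, rng):
    """Haar-distributed random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the column phases for the Haar measure

rng = np.random.default_rng(0)
N = 64       # hypothetical number of channels
x_max = 3.0  # assumption: sets the mean transmission of the medium

# Transmission eigenvalues following the bimodal law: tau = 1/cosh^2(x), x uniform
x = rng.uniform(0, x_max, N)
tau = 1 / np.cosh(x)**2

# Full TM: random unitary matrices on both sides of the diagonal of sqrt(tau)
U = random_unitary(N, rng)
V = random_unitary(N, rng)
T = U @ np.diag(np.sqrt(tau)) @ V.conj().T
```

The singular values of T are, by construction, the square roots of the sampled transmission eigenvalues, and none of them exceeds one.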
Again, I cannot go in much details if I do not know more about what you want to do.
Sorry for the delay, I missed this post.
Well, one thing that can happen is the following:
You do have a phase only modulator, but due to the finite numerical aperture of every optical setup, you always end up with some amplitude modulation, at least for the high spatial frequencies.
Consider the following. You display a checkerboard on the SLM. I assume that the SLM works in pure phase-only modulation, which means that the squares have either a 0 phase or a pi phase. If I use two lenses as a 4-f system, what should I see in the image plane? If everything were perfect, I should see nothing with a CCD camera, as it should be a phase-only image similar to the one in the SLM plane. But if you do the experiment, you will see black lines at the edges of the squares, while the insides of the squares are all white. What happens is that the lenses you use do not have a numerical aperture equal to 1. When you display something with sharp edges, the corresponding high spatial frequencies are diffracted at large angles and do not enter your lenses (or whatever optical system you have). So in your image plane, you do not have exactly the same image as in the SLM plane, as you (involuntarily) filtered out the high spatial frequencies.
The short story is, when you have high spatial frequencies in a phase only image, such as sharp edges, you will always have a bit of amplitude modulation.
Now, back to the Fresnel lenses. In such a pattern, you should ideally not have sharp edges. The phase should gradually go from 0 to 2pi, then from 2pi to 4pi (so 0 to 2pi again), etc. BUT, if you did not calibrate the SLM correctly, meaning that you do not know accurately the pixel value corresponding to 2pi, you will have sharp edges. If, for example, the value you use for 2pi really corresponds to 1.8pi, the phase will go gradually from 0 to 1.8pi, then from 2pi to 3.8pi, so you will have a sharp jump between 1.8pi and 2pi. These sharp edges will give rise to amplitude modulation in a finite numerical aperture system.
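You can convince yourself of this effect (sharp phase edges turning into amplitude modulation) with a quick 1D simulation; the sizes are made up, and the low-pass filter stands in for the finite numerical aperture:

```python
import numpy as np

# 1D sketch: a sharp phase step filtered by a finite numerical aperture
N = 256
phase = np.zeros(N)
phase[N // 2:] = np.pi           # sharp 0 -> pi edge, as on a checkerboard
field = np.exp(1j * phase)       # pure phase object, uniform amplitude

# Low-pass filter in the Fourier plane (the finite NA of the imaging system)
spectrum = np.fft.fftshift(np.fft.fft(field))
cutoff = 20                      # hypothetical NA cutoff, in frequency bins
mask = np.abs(np.arange(N) - N // 2) < cutoff
filtered = np.fft.ifft(np.fft.ifftshift(spectrum * mask))

# The amplitude is no longer uniform: a dark line appears at the edge
amplitude = np.abs(filtered)
```

Away from the edge the amplitude stays close to one (with some ringing), but at the phase jump it dips towards zero, exactly the black line you see on the camera.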
1) Already replied in another thread. You measure the matrix in the basis you use for the excitation. If you use a basis different from the canonical one, you need to perform a change of basis afterward.
2) Indeed, leaving the reference will add a background to the focusing, hence reducing the signal-to-noise ratio. The reference is useful for measuring the phase; once you have measured the matrix, you do not need it anymore. If you can remove it, you will get better focusing.
With a phase-only modulator, you cannot directly switch off the pixels. But you can, for example, send a high-spatial-frequency grating (a sequence of 0 and pi phases) on the reference part, which will be filtered out in the Fourier plane, to get rid of it.
Yes, using equation 7 is the way to go.
As explained in the paper, we sent Hadamard vectors to the SLM (with +1/-1 pixels corresponding to 0/pi phases).
The matrix is measured in the Hadamard basis, meaning that the first output complex field measured corresponds to the first Hadamard vector in input, not to the first pixel of the SLM.
Of course, you need to perform a change of basis to obtain the TM in the pixel basis (this is the matrix you want).
However, the output basis is always the pixel basis of the CCD, as you directly measure the field on each pixel; no conversion is needed for the output basis.
Once the matrix is recovered, you do not need to convert any input or output pattern into the Hadamard basis.
At this point you should have the matrix in the pixel basis (linking one pixel of the SLM to one pixel of the CCD); the Hadamard basis is just for the measurement.
There are many things that can go wrong, and it is difficult to guess which one.
Considering the tests you did, it is more likely to be something in the code.
In such a situation, as in a standard programming issue, the ultimate solution for debugging is to start the code from scratch again...
Ok, one last thing I can suggest:
Try to do the sequential algorithm to focus light through the scattering medium.
See more here : https://www.wavefrontshaping.org/post/id/30
Basically, when you do so, the optimal phase mask that focuses light onto one CCD pixel corresponds to the phase conjugate of the line of the TM associated with that pixel (up to a constant phase).
You can then measure the TM and check whether the phase of that line matches (at least approximately) the phase conjugate of the mask obtained from the sequential focusing algorithm.
You should then see if a conjugation is wrong, or if it corresponds to a column instead of a line (meaning a transpose operation is missing), etc.
Besides that, rewriting the code is the only thing that worked for me when I was in such a situation.
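As a sanity check of this equivalence, here is a toy simulation (random TM row, hypothetical sizes): running sequential phase stepping on one output pixel recovers a mask that focuses almost as well as exact phase conjugation of the corresponding TM line.

```python
import numpy as np

# Toy check: the mask found by the sequential algorithm should approximate
# the phase conjugate of the TM line of the target output pixel.
rng = np.random.default_rng(1)
n = 32
t = rng.normal(size=n) + 1j * rng.normal(size=n)    # TM line of the target pixel

phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
mask = np.zeros(n)
for k in range(n):
    # Step the phase of pixel k, keep the others fixed, keep the best value.
    trial = np.tile(mask, (phases.size, 1))
    trial[:, k] = phases
    I = np.abs(np.exp(1j * trial) @ t) ** 2
    mask[k] = phases[np.argmax(I)]

I_seq = np.abs(np.exp(1j * mask) @ t) ** 2          # sequential result
I_conj = np.sum(np.abs(t)) ** 2                     # ideal phase conjugation
print(round(I_seq / I_conj, 2))                     # close to 1
```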
First of all, what do you mean by "determined the actual TM via singular value decomposition"?
In the paper you cite, the singular value decomposition performed was not done in the context of phase conjugation.
It was a test to check whether we obtained the expected distribution.
In the same spirit, the filtering that removes the effect of the non-flat reference was there to clean up its effect on the distribution; it is neither necessary nor desirable for a focusing experiment.
Equation 16 is indeed what you need to perform phase conjugation and focus light.
K is the transmission matrix you measured (no singular value decomposition, no filtering).
You then take the transpose conjugate K^\dag (both operations are important) and multiply it by your target.
If you want to focus light on one small spot, your target is a one at a given pixel and zeros everywhere else.
This operation is equivalent to taking one column of K^\dag, i.e. the conjugate of one line of K.
Then, because you only have phase modulation, you keep only the phase of this vector and display it on the SLM.
The efficiency (signal to noise ratio at the focal point) with phase only modulation is ~78% of what you would have with phase and amplitude modulation (cf eq. 29).
If it does not work, it can be due to the way you measured the matrix, or more likely to a small mistake in the treatment of the matrix, i.e. a missing transpose operation, an error displaying the phase mask on the wrong pixels, etc.
As a simple check, you can try to measure the transmission matrix without the scattering medium (free space) and see if you can find the phase mask (some kind of lens shape) that focuses light onto a tight spot on your camera.
Another trick is to use different numbers of pixels on the SLM and on the camera. This way, the matrix is not square, and it is easier to check whether your matrix is transposed compared to what it should be.
Yes, liquid crystal SLMs were originally designed for amplitude modulation. If it is a twisted nematic SLM, put a polarizer and an analyzer in a crossed polarization configuration before and after the SLM. If it is a parallel-aligned SLM, simply excite the SLM at 45 degrees with respect to the phase modulation axis and put an analyzer at -45 degrees after the SLM.
Again, amplitude modulation is the original function of these SLMs; you will find a lot of information about it online and in the literature.
Let me make sure I understand your questions:
1/ You want to send N images cyclically, meaning you want to send 1,2,3,...,N,1,2,3,...,N, etc. with a given - accurate, I guess - frame rate.
To easily send any sequence of images on the SLM, you will find the information in the tutorial section: "How to control a SLM with matlab using Psychtoolbox".
However, if you need a precise frame rate, this tutorial will not be enough. A first solution would be to use Psychtoolbox to create a movie from the images you want to send and then play this movie in an infinite loop at a given frame rate. You will find more information here:
I am not an expert on this toolbox; look in the help and in the forum, there may be other solutions.
Another important thing to know when using Matlab for synchronization or time measurement: there are many ways to measure time, but even if the number returned is given with fine precision, the actual accuracy is usually very low. To get an accurate time value, I recently found this link:
(you need to compile the .c file by typing mex hat.c - of course you need a compiler for Matlab)
2/ I guess I need more information about the hardware and the way to communicate with it. Sometimes you can send a TTL pulse using either a port of your computer (RS232, USB...) or dedicated hardware. Again, that depends on your device.
However, a sync signal sent from Matlab would not be as accurate as a hardware synchronization, because of the latency of the computation. The best solution is to have a hardware output trigger on the SLM itself. You can ask the manufacturer of your SLM whether it is possible to add an output trigger signal; I did that once with a Holoeye device, and the company kindly accepted to add the output trigger for a reasonable price.
3/ I do not use Labview, so I am not the best person to help you with that. Maybe someone else on the forum knows a direct and clean way to control the SLM with Labview. One possibility (probably not optimal) is to run Matlab code inside Labview, so you could reuse the code from the tutorial in a Matlab node in your Labview program (see the Labview help for more details).
You use a phase modulator, so when you illuminate it with light at the working polarization and change the value of the pixels, you cannot directly switch off the amplitude after the SLM; you can only modulate the phase.
That being said, as I already replied to your previous post, you can use polarizers to obtain amplitude modulation. At the working polarization (usually horizontal), you change the phase \phi by tuning the value of the pixels. At the orthogonal polarization, you change neither the phase nor the amplitude. If the light arrives at 45 degrees, after modulation by the SLM you have a phase difference \phi between the horizontal and vertical polarizations. If you project back onto a +/-45 degree polarization, you get a field whose amplitude goes as cos(\phi/2). There you have your amplitude modulation.
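This can be verified with a short Jones calculus sketch: input at 45 degrees, a phase \phi added on the horizontal axis only, and an analyzer back at +45 degrees give a transmitted amplitude of |cos(\phi/2)|.

```python
import numpy as np

# Jones-calculus sketch of the polarizer trick for amplitude modulation.
def output_amplitude(phi):
    e_in = np.array([1.0, 1.0]) / np.sqrt(2)   # 45-degree input polarization
    slm = np.diag([np.exp(1j * phi), 1.0])     # phase phi on the H axis only
    p45 = np.array([1.0, 1.0]) / np.sqrt(2)    # analyzer at +45 degrees
    return p45 @ (slm @ e_in)

for phi in (0.0, np.pi / 2, np.pi):
    amp = abs(output_amplitude(phi))
    print(round(amp, 3), round(abs(np.cos(phi / 2)), 3))  # the two agree
```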
I am just saying that if you have a frequency drift in an interferometer, how much the interference pattern changes depends on the path length difference.
The phase shift between the two arms of an interferometer depends on the path length difference, the wavelength and the optical index. If the path length difference is large, you will be more sensitive to a small frequency drift.
Does that make sense?
What do you call unstable? Do you see the fluctuations changing in real time on a camera, or at higher frequencies?
It is possible that you have a drift of the central frequency.
After modulation/filtering, you may make paths interfere with larger optical path length differences than before. If the frequency drifts, the relative phase between these paths will change too.
An easy test is to build a Michelson interferometer, increase the path difference between the two arms, and check whether the stability changes.
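To put numbers on this (hypothetical values: a 100 MHz frequency drift, in air), the phase drift between the arms is 2*pi*n*dL*d_nu/c, so it grows linearly with the path length difference dL:

```python
import numpy as np

# Fringe shift induced by a small frequency drift, for several path
# length differences (all numbers are illustrative).
c = 3e8                      # speed of light, m/s
n_idx = 1.0                  # refractive index (air)
d_nu = 100e6                 # 100 MHz drift of the laser frequency

for dL in (1e-3, 0.1, 1.0):  # path length differences in meters
    d_phi = 2 * np.pi * n_idx * dL * d_nu / c
    print(f"dL = {dL:5.3f} m  ->  phase drift = {d_phi:.4f} rad")
```

With a millimeter path difference the drift is negligible, but at 1 m it already exceeds 2 rad, i.e. a clearly visible fringe motion.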
Another thing to check is the AR coating of the SLM: if you work at the wrong wavelength, you may have parasitic reflections that come into play and interfere with the signal.
I did not understand exactly what all this is for and, to be honest, I am not ready to read the thesis for this.
However, about the Δk issue, the text seems pretty clear to me. When you have a grating k*x with k = 4 or 8 and x in pixel units, then k has the dimension of inverse pixels, so I would use 4, not 1/4.
About the formula itself, I have some doubts. If Mholo, Marray and Δl are in pixel units and Δk in inverse pixels, then A is not in pixel units; it is dimensionless...
First of all, the Pluto SLM is a phase-only SLM; you cannot directly display amplitude grating masks.
Then, if I understand correctly, your question is how to link simulations to experiments?
This is purely a geometrical problem that you seem to have already solved.
You calculate the FFT using Matlab/Python and you end up with an image where each pixel corresponds to a spatial frequency (going from -f_s/2 to f_s/2, where f_s is the spatial sampling frequency; for more information, look at the documentation of the function you use).
Then your formula is correct, x ~ lambda*f*f_x: just apply this coordinate transformation to obtain the position in spatial coordinates.
You have everything you need. If you have a 1920 x 1080 mask, you calculate the FFT and, using the above formula, you obtain the image on the CCD.
If you want a given pattern in your CCD plane, use the inverse operation to find the correct amplitude and phase mask to apply in the SLM plane. As you cannot modulate both amplitude and phase, but only phase, a first approximation is to send just the phase of the calculated field. This is far from optimal; for a better solution, you can use a Gerchberg-Saxton algorithm (see wikipedia.org/wiki/GerchbergSaxton_algorithm).
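A minimal Gerchberg-Saxton sketch in Python (numpy only; the target pattern and the sizes are hypothetical, and the plain FFT stands in for the propagation to the Fourier plane):

```python
import numpy as np

# Gerchberg-Saxton: find a phase-only SLM mask whose far field (FFT)
# approximates a desired intensity pattern (a small off-center square here).
rng = np.random.default_rng(0)
n = 128
target_amp = np.zeros((n, n))
target_amp[40:48, 60:68] = 1.0                    # desired far-field amplitude
target_amp /= np.linalg.norm(target_amp)

slm_amp = np.ones((n, n)) / n                     # uniform illumination
phase = rng.uniform(0, 2 * np.pi, (n, n))         # random initial phase

for _ in range(50):
    far = np.fft.fft2(slm_amp * np.exp(1j * phase))
    far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                         # impose phase-only constraint

# Fraction of the energy that ends up inside the target region
far = np.fft.fft2(slm_amp * np.exp(1j * phase))
I = np.abs(far) ** 2
eff = I[40:48, 60:68].sum() / I.sum()
print(round(eff, 2))   # a large fraction of the light hits the target
```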
One tip: usually the pattern you want to see in the Fourier plane is quite close to the center (the origin) of your FFT image. You may need to increase the number of output points of your FFT (zero padding) to zoom in and resolve the details of the image you want (look at the documentation of the FFT function).
Thanks for your answer; I got some stimulating notes on ResearchGate. I post the link just in case someone has the same problem:
Finally, I solved this issue by synchronizing the SLM with a camera.
I have used a few Holoeye SLMs and there are some issues with phase stability. Last time I contacted them (Nov 2013), they said they were able to improve the stability without being able to get rid of the issue entirely.
However, these fluctuations seem to be due to the refresh process of the SLM, and one can trigger the camera with the SLM so as not to see them.
You should be able to use the output trigger of the SLM to do so. Contact Holoeye if you have some issues with it.
Other brands (Hamamatsu, BNS) do not have such instabilities. That does not mean Holoeye SLMs are not as good: they all have their drawbacks and benefits.
I would be happy to help but I would need more information.
What is a GCH?
Where is your HWP, between the SLM and the cube, or before the cube?
What is your grating? Is it the mask you display on the SLM?
I am not sure I see how it can work with a PBS.
You are supposed to feed your SLM with one given polarization (horizontal, I think), because only at this polarization do you get a clean phase modulation. Light reflected off your SLM has the same polarization (I think that is how it works for the Holoeye LETO).
So your PBS can only provide H or V polarization, and the incident and reflected light of your SLM have the same polarization. I do not see a way to have your modulated beam reflected by the PBS instead of transmitted, but I may be wrong. You would need to give me more information.
Anyway, before using the SLM, you should characterize the complex modulation as explained in my tutorial. It is the safest way to be sure that everything works as it should.
About the diffraction pattern: if you send an ideal square signal between the phases 0 and pi/2, you still have a strong 0th order because of the DC background. This is due to the fact that the average of your signal is not zero (0.5+0.5j in this case). If you calculate the Fourier transform, you find half the energy in the 0th order and roughly a quarter in each of the +1/-1 orders. That is what you seem to observe.
If you take instead a grating with a square wave between 0 and pi, or between -pi/2 and pi/2, the average value is 0 and the 0th order vanishes; the energy goes into the +1/-1 orders (and the higher odd orders).
I enclose an image of the power spectra (intensity of the FFT) for a square wave with phase values 0 and pi/2 (left) and for a square wave with phase values -pi/2 and pi/2 (right). You can see that, in the first case, half the energy is in the 0th order, while in the second it is not.
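The two cases can be reproduced numerically (arbitrary grating period); the normalized zero-order power is 0.5 for the 0..pi/2 grating and 0 for the 0..pi one:

```python
import numpy as np

# Power spectra of binary phase gratings, reproducing the 0th-order argument.
n = 256
square = (np.arange(n) // 8) % 2                  # 0/1 square wave, period 16 px

for phases, label in (((0, np.pi / 2), "0 .. pi/2"), ((0, np.pi), "0 .. pi")):
    field = np.exp(1j * np.where(square == 0, phases[0], phases[1]))
    spec = np.abs(np.fft.fft(field)) ** 2 / n**2  # normalized power spectrum
    print(label, "-> zero-order power:", round(spec[0], 2))
# prints 0.5 for the 0..pi/2 grating and 0.0 for the 0..pi grating
```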
I think the answer depends on the type of SLM you use (I assume that it is a liquid crystal SLM).
For a modulator that is supposed to give phase-only modulation for a given input polarization, manufacturers usually specify a maximum usable angle of incidence. For instance, I just checked the technical notes for the Hamamatsu X10468 series: they say to use an angle "less than 10 degrees". Because the system is calibrated for normal incidence, the tolerance is quite small. If you increase that angle, you get a mix of amplitude and phase modulation, and the phase modulation is no longer linear.
On the other hand, I think things are different with older models (for instance the Holoeye LC-R 2500 that I used a few years ago). These models do not give phase-only modulation right away; you have to use a polarizer before and an analyzer after the SLM and find the configuration (combination of polarizations) that gives the best phase modulation with the lowest amplitude modulation. My guess is that with these devices, if you work at a larger angle, you may still find a suitable combination of polarizations for a good phase modulation, even if it is not the same one as for normal incidence.
When the modulation is no longer phase-only, you may still use a technique like the one presented in the Vellekoop et al. paper (dx.doi.org/10.1364/AO.47.002076), in which a combination of four pixels is used to achieve phase-only modulation.
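The underlying idea of grouping pixels can be sketched with just two phase-only pixels whose fields are averaged (e.g. by low-pass filtering in the Fourier plane). Note this is a simplified two-pixel illustration, not the exact four-pixel scheme of that paper:

```python
import numpy as np

# Two phase-only pixels whose fields add coherently can synthesize any
# amplitude between 0 and 1 at any phase (simplified "superpixel" idea).
def superpixel(target_amp, target_phase):
    """Phases of two unit-amplitude pixels whose average field equals
    target_amp * exp(1j * target_phase), with 0 <= target_amp <= 1."""
    delta = np.arccos(target_amp)          # half the phase splitting
    return target_phase + delta, target_phase - delta

for a in (0.0, 0.5, 1.0):
    p1, p2 = superpixel(a, 0.3)
    field = 0.5 * (np.exp(1j * p1) + np.exp(1j * p2))
    print(round(abs(field), 3))            # 0.0, 0.5, 1.0
```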
I never tried to quantify the effect of the angle. It should be possible to do the calculation for a given configuration of the liquid crystals, but I never did it. Again, it will depend on the SLM you are using.
Hope it helps,