
# Matrix - Science topic

Questions related to Matrix
Question
y = Ax + e, where y is the observed vector, A is the observation matrix, and x is the signal to be estimated; y has dimension k and x has dimension m with k < m, so x must be recovered from fewer measurements than its own dimension. How should the matrix A be chosen?
Strictly speaking, the matrix must fulfil the restricted isometry property (RIP) (http://en.wikipedia.org/wiki/Restricted_isometry_property). This is most easily achieved by generating A randomly - for example with entries drawn from a standard normal distribution. The size of the matrix matters: ideally you want it as "low/wide" as possible, but there is a limit to how much it can compress. You can find expressions for this limit in, for example, http://www-stat.stanford.edu/~candes/papers/CompressiveSampling.pdf (Section 3.4).
That being said, the restricted isometry property is not considered that relevant anymore since it is a very conservative requirement. In practice you can reconstruct compressively sensed signals from considerably fewer measurements than what the RIP dictates. See for example: http://people.maths.ox.ac.uk/tanner/papers/DoTa_PUT.pdf
In practice, you can also encounter much more structured matrices A, dictated by underlying hardware that the measurement equation models. This could for example be the random demodulator: http://users.cms.caltech.edu/~jtropp/papers/TLDRB10-Beyond-Nyquist.pdf
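For a concrete picture of the random-Gaussian construction suggested above, here is a minimal numpy sketch; the sizes m, k and the sparsity s are made up for illustration, and the sparse-recovery step itself (basis pursuit, OMP, ...) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

m, k, s = 200, 60, 5          # signal length, number of measurements, sparsity (illustrative)

# Gaussian measurement matrix; scaling by 1/sqrt(k) keeps column norms near 1,
# the usual convention in RIP analyses.
A = rng.standard_normal((k, m)) / np.sqrt(k)

# A synthetic s-sparse signal to be sensed.
x = np.zeros(m)
support = rng.choice(m, size=s, replace=False)
x[support] = rng.standard_normal(s)

y = A @ x                      # k measurements of an m-dimensional signal (k < m)
print(y.shape)                 # (60,)
```

Recovering x from y would then be handed to a sparse solver; the point here is only the shape of the measurement process.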
Question
I need to determine only the first natural frequency of an elastic beam, in order to proceed with the optimization of certain parameters of the beam. Because the matrices of the proposed model are large and in symbolic form, determining the frequency is very slow. Therefore, I need a way to quickly determine only the first eigenvalues of an nxn matrix in symbolic form.
Aleksander:
If all you want is the smallest eigenvalue and the associated eigenvector, then the inverse power method suggested by others is a straightforward iterative approach that costs only O(n^2) flops per iteration (due to the matrix-vector multiply at each iteration). I suspect you are interested in such a 'cheap' method because finding all the eigenvalues and eigenvectors of an nxn matrix costs O(n^3) flops (as occurs when calling eig() in MATLAB). Since you mentioned that you are keen to find the smallest eigenvalues (i.e., multiple eigenpairs), I wanted to add that another iterative method that gives estimates of multiple eigenvalues, also at O(n^2) flops per iteration, is the Lanczos method. It is easily invoked in MATLAB via the command eigs(). It will find estimates of the "well-separated" eigenvalues of a matrix (including the largest and smallest).
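The inverse power method mentioned above is short to sketch; a hedged numpy version (it inverts A once up front for simplicity, whereas a production code would reuse an LU factorization and solve at each step):

```python
import numpy as np

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Estimate the smallest-magnitude eigenvalue of A and its eigenvector.

    After the one-time inversion, each iteration is a single O(n^2)
    matrix-vector product, as discussed above.
    """
    n = A.shape[0]
    Ainv = np.linalg.inv(A)        # sketch only; prefer an LU solve in real code
    v = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(max_iter):
        w = Ainv @ v               # one O(n^2) matrix-vector product per iteration
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new  # Rayleigh quotient estimate of the eigenvalue
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

A = np.diag([5.0, 2.0, 0.3])       # smallest eigenvalue is 0.3
lam, v = inverse_power_method(A)
print(lam)                          # ~0.3
```

This only illustrates the iteration; the symbolic-matrix setting of the question would still require substituting numeric parameter values first.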
Question
Does anyone have a good source for correcting for spatial autocorrelation when comparing a species assemblage (site x species matrix) to a geographic distance matrix? I know a Mantel test will tell me how correlated the variables are, but how do you correct for this effect in subsequent analyses?
One simple solution is to use multiple regression on distance matrices (MRM), with geographic distance as a predictor (or covariate) of plot-plot compositional dissimilarity (uses permutation to test significance). That way, with suitable sampling so that environmental and geographic distances aren’t excessively correlated, the distance effect can be partialled out when testing the influence of environment on composition, either by including it as a covariate, or possibly by modelling the residuals of a dissimilarity ~ distance regression against other variables.
Lichstein 2007
If you had some 1-dimensional representation of species composition at sites, you could use PCNM (a spatial eigenvector method) as a covariate in the same way (it is basically an ordination axis calculated from a geographic distance matrix).
Borcard & Legendre 2002
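The Mantel test itself is a small permutation loop; a minimal numpy sketch with synthetic site data (illustrative only; for real analyses, R packages such as vegan or ecodist are the usual route):

```python
import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=0):
    """Permutation test of the correlation between two distance matrices.

    Rows and columns of D1 are permuted together to build the null
    distribution of the correlation.
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)          # upper triangle, no diagonal
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[p][:, p][iu], D2[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy data: compositional dissimilarity built to correlate with geographic distance.
rng = np.random.default_rng(1)
pts = rng.random((12, 2))                        # site coordinates
D_geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
noise = rng.normal(0.0, 0.05, D_geo.shape)
D_comp = D_geo + (noise + noise.T) / 2           # keep the matrix symmetric
r, p = mantel_test(D_comp, D_geo)
print(round(r, 2), p < 0.05)
```

Partialling the distance effect out of subsequent analyses (the MRM approach above) would replace the simple correlation with a regression on several distance matrices, permuting in the same way.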
Question
In general it is assumed to be positive semi-definite. What are the consequences of assuming the covariance matrix is positive definite?
It is not "assumed" positive semi-definite; it is positive semi-definite. It is the expectation of the outer product, E[xx^T] (assuming all means are zero), where x is the vector of observed variables. If you multiply that fore and aft by some vector a, one gets a^T E[xx^T] a = E[(a.x)^2], and say what you like, that is most certainly non-negative.
So no "assuming".
If you drop the word "assuming" from your question, "what are the consequences of the covariance matrix being positive definite" can be answered. It says "what does it mean for the covariance matrix to be positive definite".
I don't know; I can't think of anything. It usually is positive definite. All you are ruling out is the possibility of one of its eigenvalues being exactly 0, i.e. of the matrix being singular.
You can get a singular matrix in several ways. For example by having two rows exactly equal. That would be two random variables whose covariances with themselves and each other and all the other variables are exactly the same. They'd almost always be exactly the same value.
Or you could have an entire row of zeros .. uh, no. That's a variable that has zero variance about its mean, among other things, so maybe not!
The most general statement seems to me to be that there is some fixed linear combination of all the variables that is uncorrelated with each of the variables individually. We know that can happen: it happens whenever two different variables have the same covariances with everything, for example. Then x1 - x2 fills the bill. One needs the correlation between x1 and x2 to be 1, so they would be the same almost all the time - differing at most on a set of trials of measure zero.
--
Ah. Stupid me. My plane flight let me think of an answer. The covariance matrix is not definite iff it has a zero root, iff E[(a.x)^2] = 0 for some a. But that means that the variance of a.x is zero (since I assumed all the x_i had mean zero to make things easy - just change the origin so it is so - and hence the mean of a.x is zero). Having variance zero means that a.x is some single number k (here k = 0, since all the x_i have mean 0), which means that x lies in the affine plane a.x = k (almost always).
So that's that: the covariance matrix fails to be positive definite exactly when the observations x lie in some affine plane almost always.
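The "observations lie in an affine plane" characterisation above is easy to see numerically; a small numpy demo with synthetic data in which the third variable is an exact linear combination of the first two:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 observations of 3 variables; the third is exactly x1 - x2,
# so every observation lies in the plane x1 - x2 - x3 = 0.
X = rng.standard_normal((200, 2))
X = np.column_stack([X, X[:, 0] - X[:, 1]])

C = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(C)
print(eigvals)            # all >= 0 up to round-off, with one eigenvalue ~ 0

a = np.array([1.0, -1.0, -1.0])
print(a @ C @ a)          # ~ 0: the variance of a.x vanishes
```

The singular direction a is exactly the normal of the plane the data lie in, matching the argument above.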
Question
Is there any good reference to the conditions on matrices A,B,C,D such that
the equation of the form:
XCX+XD-AX-B=0
has any solution over a general finite field F?
If affirmative, is there any description of all the solutions ?
Also, is there any formula for the number of solutions ?
Specifically, are there any known conditions for a unique solution and a formula for the unique solution ?
This question seems to be asked periodically in RG.
The answer is more or less yes, there are. The most useful characterization I saw was that a certain extension of the domain space had to be invariant under a certain linear operation. As far as I recall, one turns the problem from nxn to 2nx2n by choosing a set of basis vectors of an n-D subspace of the 2n-D extension, the bottom n ordinates of which are just (1,0,0,...), (0,1,0,...), (0,0,1,...), and so on. Then one invents a transform on this n-D space embedded in 2n-D space that takes two applications to rotate it back into the same n-D space again, if it ever does do that. Something like (x,y) -> (y,Bx). That turns out to be isomorphic to a transform on the original space (consider the action on the bottom n ordinates) that solves the equation.
You have (X C^{1/2} - A C^{-1/2})(C^{1/2} X + C^{-1/2} D) = B - A C^{-1} D, btw.
It might be nicer to write it as (X - A C^{-1})(C X + D) = B - A C^{-1} D.
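For intuition, the 1x1 case of XCX + XD - AX - B = 0 over a small prime field can simply be brute-forced; a hypothetical Python sketch (the coefficients below are made up):

```python
# Scalar (1x1) case of X C X + X D - A X - B = 0 over GF(p): count the
# solutions of c*x^2 + (d - a)*x - b = 0 (mod p) by exhaustive search.
def solutions_gf(p, a, b, c, d):
    return [x for x in range(p) if (c * x * x + (d - a) * x - b) % p == 0]

# Over GF(7) with c=1, d=0, a=0, b=2: x^2 = 2 (mod 7) has roots 3 and 4.
print(solutions_gf(7, 0, 2, 1, 0))   # [3, 4]
```

Even this scalar case shows that existence and the number of solutions depend on the field (x^2 = 2 has no root over GF(5), for instance), which is why conditions on A, B, C, D are needed in general.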
Question
How can I use HIFOO to find a minimal-norm static output feedback? Specifically, if A, B, C are the state-space matrices of a continuous-time linear system, how can I find, using HIFOO, a matrix K such that A-BKC is stable and has minimal norm?
Also, if the system is a discrete-time system, can I use HIFOO in the same way?
It's easier for me to type in Latex. Please refer to the attached file for some comments.
Question
Is there any way to approximate an arbitrary square matrix by an idempotent matrix?
The answer to your question (in general) is no.  Just look at 1 x 1 matrices. The only 1 x 1 idempotent matrices are 0 and 1, which clearly can't approximate the matrix 42 (for example) using any reasonable sense of the word "approximate".
I'm confident that the answer is again no for larger matrices as well. Consider the 2x2 matrix A whose diagonal entries are 0 and whose off-diagonal entries are a large number, say M. It's a straightforward problem to classify all 2x2 real idempotent matrices. After that's done, it's easy to see that the Frobenius norm of the difference between any real 2x2 idempotent matrix and A is large (on the order of M or larger, I believe). Using any matrix norm to define a notion of closeness between A and a 2x2 idempotent matrix will yield a similar result.
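For the special case of symmetric matrices there is at least a natural candidate: diagonalise and round each eigenvalue to the nearest of {0, 1}, which (for the Frobenius norm) yields the closest orthogonal projection. A hedged numpy sketch, covering only this symmetric case, not general square matrices:

```python
import numpy as np

def nearest_projection(A):
    """Candidate nearest symmetric idempotent (orthogonal projection) to a
    symmetric A in the Frobenius norm: keep A's eigenvectors and round each
    eigenvalue to the nearest of {0, 1}."""
    w, V = np.linalg.eigh(A)
    w_rounded = (w > 0.5).astype(float)     # round each eigenvalue to 0 or 1
    return V @ np.diag(w_rounded) @ V.T

A = np.array([[0.9, 0.2],
              [0.2, 0.1]])
P = nearest_projection(A)
print(np.allclose(P @ P, P))   # True: P is idempotent
```

Consistent with the answer above, the approximation error here is governed by how far the eigenvalues sit from {0, 1}; for the large-M example it is on the order of M, so "approximation" indeed fails.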
Question
I want to do a long-term experiment with photocatalytic TiO2 under outdoor conditions, and I am looking for a matrix that does not inhibit TiO2 photocatalysis very much and, on the other hand, is itself stable against the photocatalysis. Does anyone have experience with that?
TiO2/β-SiC foam-structured photoreactor for continuous wastewater treatment
β-SiC foams as a promising structured photocatalytic support for water and air detoxification
Question
In formula (7) the min operator looks to be applied only to the matrix B_j1, but in my understanding it should be applied to the whole formula. Am I correct?
The minimum is applied to the whole formula on the right-hand side; the expression is a cost function, and the result is the minimum over all the terms on the right. In fact, if you read the paragraph that follows equation 7, you will see mention of the fact that this cost function depends on all the constituent terms, not just the matrix B.
Question
Hi.
I used PCA to extract the principal components of a set of 5 variables. The eigenvalue of the first component is 1.98, and for the second it is 0.98, so I retain only the first PC (because it is >1). The loadings matrix (prcomp.object$rotation) is:

Standard deviations:
1.8964529 0.9809027 0.5126452 0.4047367 0.1211575

Rotation:
          PC1        PC2        PC3        PC4         PC5
v1  0.4854578 -0.1426120 -0.3791216 -0.7605055 -0.14795533
v2 -0.4461265 -0.3337826 -0.8067325  0.1899892 -0.05144943
v3 -0.3395503 -0.7385043  0.3986846 -0.3292091  0.26830763
v4 -0.4364794  0.5355879 -0.1179392 -0.4308451  0.56841368
v5 -0.5094048  0.1897576  0.1805279 -0.3025383  0.76182615

It strikes me a bit that the loadings of all the variables on PC1 (the only component to be retained) are quite low (<0.51). How should I interpret this result? Isn't PCA appropriate here? Or is it just normal (and the correlation between the original variables and PC1 is just low)? Thanks!

If your aim is to reduce observed data, then PCA is a good choice. However, with just 5 variables you are probably looking for latent factors, and therefore you should perform exploratory (or, best, confirmatory) factor analysis. Another question: what is the difference between a factor with eigenvalue 1.01 and one with 0.98? The cutoff is extremely arbitrary, and using the Kaiser criterion to decide is inappropriate. Try more sophisticated methods such as parallel analysis; this would resolve the problem of the number of factors.

Question
I am about to collect data for my project, which is about wound healing. I will use Oasis as a collagen-based matrix. I looked in the literature and didn't get enough information.

I got so far what you listed to me, I appreciate that; however, I didn't find new things in the literature. Thank you Ourania Castana very much.

Question
Hi, I want to build the Fock matrix in Gaussian. First of all, I create the Z-matrix using the Avogadro software, and then I run Gaussian with iop(5/33=3), but I don't know how to find this matrix. Sometimes "Fock matrix is not symmetric: symmetry in diagonalization turned off" appears in the log file. How do I find the Fock matrix when this appears? Thanks in advance.

Hello, the Fock matrix is printed in the output at each SCF cycle. You can find it by searching for "Fock matrix (alpha)".

Question
Where can we use composites that have a higher storage modulus than the matrix? Please help.

Viscoelasticity is one of the important parameters for characterization of the processing and use properties of polymeric materials. For polymer blends or inorganic particle-filled polymer composites, the relationship between structure and properties tends towards more complexity owing to the formation of an interface between components, as well as between the fillers and the matrix. The viscoelastic parameters of polymer materials, such as storage modulus, loss modulus and mechanical damping, may be measured using a dynamic mechanical analysis instrument. In addition, dynamic mechanical measurements over a range of temperatures provide valuable insight into the structure, morphology and properties of polymeric blends and composites. A lot of dynamic mechanical analyses of polymeric blends and composites have been done.

Question
Is there a relation between the rank of a matrix polynomial and its eigenvalues? Somewhat like what exists between the rank of a matrix and its eigenvalues.

Question
Can we have a matrix and determinant whose elements are other matrices? If this exists, then how can we find the eigenvalues of such matrices?

Determinants need commutativity, which fails for matrices. However, there are exceptions: Hermitian 3x3 matrices over the quaternions or octonions have a determinant and eigenvalues.
These can be considered as matrices whose entries are matrices themselves, since left multiplication by quaternions or octonions can be viewed as 4x4 or 8x8 matrices. Best regards, Jost

Question
In structural soil interaction, is it possible for the non-diagonal elements of the mass matrix to be zero? Under what condition is it possible? Thanks in advance.

When your trial functions are orthogonal to each other.

Question
I have a 3D heat transfer model. I want to get the state space of the 3D model, but the resulting matrix is very large. Does anyone know how to reduce the matrix, or how to convert the 3D model to a 2D model so that I get a smaller matrix? Thank you very much for your help.

In COMSOL Multiphysics you can import and export 2D CAD drawings in the DXF file format, a common format for 2D CAD drawings. The DXF import is available for 2D geometries and for work planes in a 3D model.

Question
I need to invert a large singular matrix.

First, a singular matrix doesn't have an inverse! So you will have to find a good approximation to an inverse, in the sense of something that gives close to the identity when you multiply it by your matrix. I would recommend looking at the book "Numerical Recipes" by Press et al. It has computational codes, and very nicely written, concise overviews of each possible algorithm that you could use. It does not give you all of the details, but it does give you roughly the right amount of information to make an informed choice of which method to use. However, if by "large singular matrix" you mean extremely large, then you may have to use a different set of methods (e.g., see the comment above).

Question
I'm making use of AUC values obtained from the Weka tool as a result. I found that the tool uses the trapezoidal approximation method, but it is not yet very clear to me.

Dear Rahman, AUC is the Area Under the ROC curve.
1) First make a plot of the ROC curve using the confusion matrix.
2) Normalize the data so that the X and Y axes are in unity; you can divide the data values by the maximum value of the data.
3) Use the trapezoidal method to calculate the AUC.
4) The maximum value of the AUC is one. If you achieve an AUC value near one, then your method is statistically sound. For example, AUC > 0.9 is a good one; AUC < 0.5 is a random or poor one. All the best :)

Question
How do I normalize any matrix? Is it column-wise or row-wise normalization? Is it right that we have to divide each column of a matrix by the square root of the sum of each element of that column?

Dear Vanita Pawar, if you want to get a matrix with unit norm, then the normalization method depends on the norm. If you want to get the so-called normal form of a matrix, the method of obtaining it is described in textbooks on linear algebra.

Question
Is it possible to create a geographical distance matrix without the pop coords worksheet for the Mantel test using GenAlEx software? I am using GenAlEx for my codominant microsatellite data. In GenAlEx tutorial 3, ex. 3.3, the Mantel test for isolation by distance mentions a worksheet named "pop coords" which contains UTM values; this worksheet is activated for the geographic distance matrix. Is it possible to create the geographical distance matrix without the pop coords worksheet? If not, how do I create that sheet?

Hello, the enclosed text files may be useful to you.

Question
Second-phase particles can have coherent/semi-coherent/non-coherent bonding with the metal matrix. Is there a way to get a feeling for the second-phase coherency using EBSD? Can KAM analysis around the second phase show us anything in this regard? Can interface coherency be analyzed by Geometrically Necessary Dislocation measurements? Of course, one challenge would be the EBSD resolution, which is a few tens of nm.
TEM is expensive and slow, but if you really want accuracy, it is much more accurate. We have written some papers on both CBED and HRTEM; compare:

Question
Using the Neel mechanism, we hope that the size can be fairly well controlled so that a predictable relaxation rate results. The particles would probably end up in a matrix of surfactant or something similar. We hope that the bulk susceptibility (blocking/DC) will be at least 2.

Dear Remi Cornwall, referring to your question, size-controlled nanoparticles can be made by using the gel/droplet method, where one can dissolve the particles in the desired surfactant for stability and homogeneity. Much literature is available on size-controlled nanoparticles. Best, Ayan

Question
I have found that sometimes the inverse and the pseudoinverse of a covariance matrix are the same, whereas this is not always true. Is there any relation between the pseudoinverse and nonsingularity?

Consider the 3x3 diagonal matrix D = diag(2, 1, a), where a is a small positive number. Then D^(-1) = diag(1/2, 1, 1/a). Notice that 1/a can be very large if a is very close to zero. Now, computationally, we never work with real numbers; rather, we work with floating-point numbers (essentially, real numbers modulo base changes, truncations, rounding, ...). Suppose that your machine has an error tolerance eps; that is, when eps >= a >= 0, your machine sets a = 0. In this case, D^(-1) no longer exists, since 1/0 causes all manner of logic issues. On the other hand, the pseudoinverse (typically the unique Moore-Penrose inverse) is well-defined, with pinv(D) = diag(1/2, 1, 0) for the given example. In theory, for any square matrix A, pinv(A) = A^(-1) when A is invertible; when A is not invertible, pinv(A) inverts the invertible portion of A and suppresses the zero-eigenvalue component of A.

More specifically, every square (nxn), real or complex matrix A has an SVD (singular value decomposition) A = U S V, where U and V are unitary (real and orthogonal when A is real; further, U = V* when A is Hermitian, and U = V^T when A is real and symmetric). The matrix S is diagonal with nonnegative diagonal entries in decreasing order, s1 >= s2 >= ... >= sk > 0; the rank of A is k. When k is less than n, pinv(A) = V* pinv(S) U*, where pinv(S) is the diagonal matrix diag(1/s1, 1/s2, ..., 1/sk, 0, ..., 0). It turns out that there are very good algorithms for finding the SVD of real, symmetric matrices, which include covariance matrices. These algorithms were developed in the 1960s by Gene Golub and others (see, for example, Golub and Van Loan's book on matrix computations). So, to summarize: when A is invertible, the pseudoinverse (and every other respectable generalized inverse) is, in theory, the same as the inverse of A. When A is not invertible (or when A is within a small distance of a matrix that is not invertible AND the null space of that nearby matrix is not close to orthogonal to another eigenspace), the pseudoinverse is much more robust than the inverse. The pseudoinverse can be shown to solve an inconsistent Ax = b by finding the least-squares "solution" that minimizes the 2-norm of Ax - b; and it solves a consistent Ax = b with multiple solutions by finding the unique solution x of minimum 2-norm. Hope that helps.

Question
I have a matrix and I have shown its output element by element in FortranForm. The problem is that I want to put an "&" continuation character in the output, because when I copy the data into Fortran code, Fortran warns that a line exceeds 2048 characters. So I wanted to put an ampersand character in each output of the matrix elements in FortranForm.
Please help.

You could use a format as follows:

      100 format(6(1x,f12.6),a1)
      do i=1,nrow
        write(10,100) (mat(i,j),j=1,6), achar(38)
      enddo

or:

      100 format(6(1x,f12.6),'&')
      do i=1,nrow
        write(10,100) (mat(i,j),j=1,6)
        write(10,100) (mat(i,j),j=7,12)
      enddo

achar(38) is the ampersand. The following program will print part of the character table:

      program achartable
      do i=1,200 ; write(*,100) i, achar(i) ; enddo
      100 format(i3,1x,a1)
      end

Please give your feedback.

Question
I would like to test dissimilarities among different groups in a dist matrix. For instance, if I have a dist matrix with three factors a, b, c, functions like adonis and betadisper in R will test the aa, bb and cc combinations. However, I am interested in testing the ba, ca, and cb combinations. How can I test for these combinations?

       a   a   a   b   b   b   c   c   c
   a   aa
   a   aa  aa
   a   aa  aa  aa
   b   ba  ba  ba  bb
   b   ba  ba  ba  bb  bb
   b   ba  ba  ba  bb  bb  bb
   c   ca  ca  ca  cb  cb  cb  cc
   c   ca  ca  ca  cb  cb  cb  cc  cc
   c   ca  ca  ca  cb  cb  cb  cc  cc  cc

Why don't you try the nonparametric Multi-Response Permutation Procedure - MRPP (Mielke PW Jr, Berry KJ (2001) Permutation methods: a distance function approach. Springer Series in Statistics. Springer Verlag, Berlin)? It is very convenient to do in PC-ORD. MRPP tests whether the distances among sample units within a group are significantly smaller than would be expected if they were randomly assigned to other groups. The reported test statistic T describes the separation between the samples in different groups; the more negative T is, the stronger the separation. The chance-corrected within-group agreement (A) describes the homogeneity within the group compared to the random expectation; A increases with increasing similarity among the samples assigned to one group. Good luck!
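As a complement to MRPP, the between-group blocks (ba, ca, cb) can be pulled directly out of the distance matrix for separate testing; a minimal numpy sketch with made-up labels (this extracts the blocks only, it is not the MRPP statistic itself):

```python
import numpy as np

def between_group_distances(D, labels, g1, g2):
    """Extract the block of pairwise distances between two groups
    from a full distance matrix D with one label per row/column."""
    labels = np.asarray(labels)
    return D[np.ix_(labels == g1, labels == g2)].ravel()

# Toy distance matrix for 6 sites, two per group.
rng = np.random.default_rng(0)
X = rng.random((6, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
labels = ["a", "a", "b", "b", "c", "c"]

ba = between_group_distances(D, labels, "b", "a")
print(ba.shape)   # (4,) : 2 b-sites x 2 a-sites
```

The resulting vectors of between-group distances can then be fed into whatever permutation test is appropriate for the question at hand.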
Question
If I have a symmetric vector representing a radial distribution of data (the Line Spread Function (LSF) of an optical system, in my case, where I know a priori that the PSF has circular symmetry), how can I create a circularly symmetric matrix (representing the PSF of the system) from this vector? And vice versa? For example, I have a vector representing a 1D Gaussian, and I want the matrix of a 2D Gaussian. Thank you in advance!

For the 2D Gaussian from the 1D version it is "easy", since the Gaussian function is separable: G(x,y) = G(x)*G(y). So, if g1d is a column vector of the Gaussian, then g2d = g1d*(g1d.') will give the 2D version. For a non-separable radially symmetric function, it can be done with a simple 1D interpolation. Let f1d be the 1D function with respect to the radial value rho, and let x and y be the two vectors in the column and row dimensions. Then the following lines in Matlab will construct the 2D version of f1d with radial symmetry:

[Mx,My] = meshgrid(x,y);
Mro = sqrt(Mx.^2+My.^2);
f2d = interp1(rho, f1d, Mro,'lin',nan);

Good luck.

Question
If the wave function of a quantum system is known, how do I obtain the dipole transition matrix elements?

You have to calculate the mean value <psi_i| mu E(t) |psi_j> = <psi_i| mu n |psi_j> E(t), where mu is the dipole moment (a vector), E the electric field, psi_i the initial state, psi_j the final state, and n the direction of the electric field. Consider the hydrogen atom subjected to an electric field parallel to the z direction. The dipole moment in the z direction in spherical coordinates reads mu = e r cos(theta) (e being the charge of the electron, r and theta the spherical coordinates of the electron position). If we want to calculate the dipole transition element between two pure states, one has to separate the integrals over r, theta and phi (the eigenstates being the product of a function of r with a spherical harmonic, which is a function of theta and phi). For more complex problems, for example many-electron atoms or molecules, one has to use ab initio calculations (Hartree-Fock, for example). If you have a particular problem (vibrational transition, electronic transition, ...), please specify it so I will be able to help you.

Question
Can anyone send me, or tell me where I may find, the piezoelectric, dielectric and elastic property matrices of the following compositions, or any other compositions of PVDF copolymers and terpolymers? P(VDF-TrFE) 56/44 mol% - P(VDF-TrFE) 68/32 mol%, P(VDF-HFP) 85/15 mol%, P(VDF-CTFE) 88/12 mol% and P(VDF-TrFE-CFE) 68/32/9 mol%. Thank you!

I can try to help you with the following composition, P(VDF-TrFE) copolymer 75/25%:
- Young's modulus (MPa): machine direction, 950 ± 20%; transverse direction, 1500 ± 20%
- Tensile strength at break (MPa): machine direction, 90 ± 15%; transverse direction, 30 ± 15%
- εr: about 9.4 ± 10%
- tan δ: about 0.014 ± 10% (0.1 kHz - 10 kHz)
Hope this helps!

Question
Any suggestions/resources are appreciated.

It depends on what you mean by the derivative. If each entry is a function of some variable, just differentiate the entries. For the square root, you could start by diagonalising, then taking the appropriate root of the diagonal entries. (But remember, you'll get lots of roots: for a two-by-two matrix, there will be four square roots, one for each choice of sign on each element, and so on.)

Question
How do I solve the following, where [KL] is the linear stiffness matrix and [KNL] is the non-linear stiffness matrix?
[M]{d2D/dt2} + [G]{dD/dt} + ([KL]+[KNL]){D} = {0}

Now either use finite difference approximations for the time derivatives, or use an ODE solver to find solutions.

Question
a c 0 ... 0 1
b a c 0 ... 0
0 b a c ... 0
.   .  .  . .
0 ... 0 b a c
1 0 ... 0 b a

This is a good question with more than one answer. You may find the following article helpful: Y. Eidelman, I. Gohberg, V. Olshevsky, Eigenstructure of order-one-quasiseparable matrices, Linear Algebra and its Applications 405, 2005, 1-40. See Section 1.4, p. 6 for an overview; an introduction to tridiagonal matrices starts in Section 1.1, p. 2 (see the examples, starting on page 3).

Question
I have 12 constructs meant to be measuring different aspects of the same construct - will I have to remove some of these, or are there other remedies?

Hi Jennifer, I believe you mean 12 items measuring a single (latent) construct. Generally, when AMOS reports in such a way, it is likely you have encountered a Heywood case in your dataset. It is very likely that this is caused by outliers, extreme violation of normal distribution or, sometimes, multicollinearity. My guess is that you may want to run factor analysis using:
1. Extraction method: maximum likelihood; rotation: any oblique (e.g. direct oblimin).
2. Extraction method: principal axis factoring; rotation: any oblique (e.g. direct oblimin).
Look at the factor loadings and scan for any 'impossible' value, i.e. a loading > 1.0. If there is any, almost certainly your dataset suffers from Heywood cases. You can later check and treat for outliers, which is very time-consuming. Alternatively, you can opt for PLS-SEM instead of covariance-based SEM (e.g. AMOS). The link below gives you an idea of how PLS-SEM works. Best wishes, Saiyidi

Question
This is for stable isotope analysis.

Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne, France. Contact for analysis requests and related information: the analysis office (Tel. +33 4 37 42 36 36 - bda@sca.cnrs.fr).

Question
Hi all, I am trying to prepare a computational model of a particular protein (matrix metalloproteinase 1 (MMP1)) found in the extracellular matrix of humans. However, I am struggling to find a reliable source for the measurement of water density in this environment. If anyone knows of an article with these values, and/or a description of the ion-solvent composition, I would be extremely grateful. Many thanks.

Hi Anthony, I don't know if this will help you, but Matrigel is considered to be a good stand-in for in vivo ECM properties. I don't have the references at hand, but I do remember that the water content of Matrigel is somewhere around 30-40% w/w.

Question
All possible combinations should be used.

Thanks, sir.

Question
The problem has been fixed! Thanks.

My PhD student uses the R package adegenet (more details here: http://adegenet.r-forge.r-project.org/).

Question
By "solution", I mean a pseudoinverse. By "exact", I mean a formula, not a fit (such as SVD) or a decomposition.

C. C. MacDuffee apparently was the first to point out that a full-rank factorization of a matrix A leads to an explicit formula for its Moore-Penrose inverse, A^{+}. Let A be any complex matrix of order m by n and let A = FG be a rank factorization. Then it can easily be verified that F^{+} = (F^{*}F)^{-1}F^{*} and G^{+} = G^{*}(GG^{*})^{-1}, and then A^{+} = G^{+}F^{+}, where A^{+} denotes the Moore-Penrose inverse and F^{*} is the conjugate transpose of F.
Question 8 answers Or why are good dispersion and bad distribution of filler good for electrical conductivity? Thanks in advance Relevant answer Answer Distribution is the way the particles fill the space, whereas dispersion is the way these particles are agglomerated or not. With a good distribution, each particle is as far as possible from its nearest neighbour, so that the space is homogeneoulsy filled with particles. With a good dispersion, all particles have the same shape and size, as small as possible, as no agglomerates exist. Therefore, it is quite possible to have good distribution but poor dispersion, or poor disribution and good dispersion, see attached Figure. If particles are conducting and if you want a high conductivity, you shoud indeed prefer the situation in which particles can make a conducting path by touching each other, but agglomerates should be avoided as many particles would be useless because behaving as dead ends for the conducting path. Alain Question 5 answers We have known that the asymptotic determinant of the covariance matrix when its order goes to infinity. I am wondering whether there is the close form in the finite case? Thanks, Relevant answer Answer This was dealt with to some extent by Hansen in Journal of Econometrics Volume 141, Issue 2, December 2007, Pages 597–620 with article entitled Asymptotic properties of a robust variance matrix estimator for panel data when T is large Question 2 answers >> A = sym('A', [2 4]) % symbolic matrix without having to define its elements. A = [ A1_1, A1_2, A1_3, A1_4] [ A2_1, A2_2, A2_3, A2_4] >> A' % transpose of A ans = [ conj(A1_1), conj(A2_1)] [ conj(A1_2), conj(A2_2)] [ conj(A1_3), conj(A2_3)] [ conj(A1_4), conj(A2_4)] By default Matlab sets symbolic elements as 'Complex Number'. >> syms x y real % x y declared a 'Real' >> f=[x y] f = [ x, y] >> f' ans = x y I need a way to declare the elements in the matrix as 'Real'. 
Relevant answer
Setting up this problem in MATLAB is complicated and you can lose a lot of time. My advice is to use Python instead of MATLAB, with the NumPy and SciPy libraries. Python could give you more options and benefits.

Question (5 answers)
My Y-axis values are overlapping (image attached below). It seems I need an .m2p file.
Commands I used:
do_dssp -s md.tpr -f md.trr -o dssp.xpm
xpm2ps -f dssp.xpm -di scale.m2p -do scale.m2p -o dssp
scale.m2p:
; Command line options of xpm2ps override the parameters in this file
black&white = no ; Obsolete
titlefont = Times-Roman ; A PostScript Font
titlefontsize = 20 ; Font size (pt)
legend = yes ; Show the legend
legendfont = Times-Roman ; A PostScript Font
legendlabel = ; Used when there is none in the .xpm
legend2label = ; Used when merging two xpm's
legendfontsize = 14 ; Font size (pt)
xbox = 2.0 ; x-size of a matrix element
ybox = 2.0 ; y-size of a matrix element
matrixspacing = 20.0 ; Space between 2 matrices
xoffset = 0.0 ; Between matrix and bounding box
yoffset = 0.0 ; Between matrix and bounding box
x-major = 20 ; Major ticks on x axis every .. frames
x-minor = 5 ; Id. Minor ticks
x-firstmajor = 0 ; First frame for major tick
x-majorat0 = no ; Major tick at first frame
x-majorticklen = 8.0 ; x-majorticklength
x-minorticklen = 4.0 ; x-minorticklength
x-label = ; Used when there is none in the .xpm
x-fontsize = 16 ; Font size (pt)
x-font = Times-Roman ; A PostScript Font
x-tickfontsize = 10 ; Font size (pt)
x-tickfont = Helvetica ; A PostScript Font
y-major = 20
y-minor = 5
y-firstmajor = 0
y-majorat0 = no
y-majorticklen = 8.0
y-minorticklen = 4.0
y-label =
y-fontsize = 16
y-font = Times-Roman
y-tickfontsize = 10
y-tickfont = Helvetica
Relevant answer
Increase the value of ybox in the .m2p file.
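On the earlier question about declaring symbolic matrix entries as real: the Python route suggested in the answer above is direct with SymPy. Give every entry a real=True assumption and the conj(...) terms disappear from the transpose. A sketch (the entry names mirroring MATLAB's A1_1 convention are my choice):

```python
import sympy as sp

# build a 2x4 symbolic matrix whose entries are assumed real
A = sp.Matrix(2, 4, lambda i, j: sp.Symbol(f'A{i+1}_{j+1}', real=True))

# for real entries the conjugate transpose (A.H) equals the plain transpose (A.T),
# so no conj(...) terms appear
print(A.T == A.H)  # True
```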
Question (4 answers)
The equation of the system is dx(t)/dt = A(t)x(t), where A is a 2x2 matrix. In Floquet-Lyapunov theory it is said that using two independent initial conditions we can calculate the matrix Phi(t), which is the state transition matrix. Let A = [0 exp(t); 0 1]. By taking the independent initial conditions [1;0] and [0;1], how can we calculate the Phi matrix?
Relevant answer
This idea works equally well in the n-dimensional case: if s_1, ..., s_n are linearly independent vectors in R^n, then solve dx/dt = Ax with each initial condition x(0) = s_j to get a solution x_j(t). Let S be the matrix with the columns [s_1, ..., s_n] and let X(t) be the matrix with the columns [x_1(t), ..., x_n(t)]. Then Phi(t) = X(t)S^{-1}.

Question (5 answers)
First of all, thanks (Afendras) for your help. I really appreciate it. I want to apply resultant matrices in the Legendre basis to avoid change of basis, and to apply resultant matrices in a non-monomial basis such as the Legendre or Chebyshev basis without conversion between bases. Can you please suggest some papers or other helpful information? Thanks
Relevant answer
Hi, the Legendre basis is an orthogonal polynomial basis. It is easy to compute a resultant matrix ... once you have a product formula. I have some notes (not very well written) on the subject; I can provide you this document if you need it. I wrote code only for the Chebyshev basis (Maple code, but very old). I attach a draft for the Chebyshev basis (aborted work), but I can send you the one for the Legendre basis once I am back in my office on Wednesday. My original interest was to use the extended Euclidean algorithm to compute Padé approximants in the Legendre basis. By the way, if you are interested in collaborating on that subject, I am interested, since I have spent a lot of time getting a clear view of it. Do you have an application in mind?
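The Phi(t) = X(t)S^{-1} recipe from the Floquet answer above can be checked numerically for the 2x2 example A = [0 exp(t); 0 1]. A sketch with SciPy, using the standard basis as initial conditions so that S = I; for this particular A one can also integrate by hand, giving Phi(t) = [[1, (e^{2t}-1)/2], [0, e^t]]:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # dx/dt = A(t) x with A(t) = [[0, e^t], [0, 1]]
    A = np.array([[0.0, np.exp(t)],
                  [0.0, 1.0]])
    return A @ x

def phi(t_end):
    # integrate once per independent initial condition; here S = I, so Phi = X
    cols = []
    for e in np.eye(2):
        sol = solve_ivp(rhs, (0.0, t_end), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)   # Phi(t) = X(t) @ inv(S)

print(phi(1.0))   # should match [[1, (e^2-1)/2], [0, e]]
```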
Question (4 answers)
In fact, I downloaded it and I face a problem with the output of [label_vector, instance_matrix] = libsvmread('../data');. It usually gives me an empty matrix, and I usually get the same error, although the input matrix for libsvmwrite is a sparse matrix (double). Please could you help me as fast as you can. Notes: I'm working in MATLAB on Windows 7, on a multi-class classification problem. Thank you in advance.
Relevant answer
If you're interested in a multi-class problem, use this useful code on MATLAB exchange:

Question (10 answers)
Let a matrix contain a11 = 2^2+a^2; how can I make the output look like a11 = 2.d0**2+a**2? A Mathematica user can solve this problem.
Relevant answer
Since I do not know what type of elements are in your matrix, I cannot define a general rule for transforming them. But the ideas I have given you so far should help you construct the rule or rules that are needed. Good luck

Question (2 answers)
We must determine Cr in the matrix "air"; we can collect samples of dust with high- and low-volume samplers. What are the best techniques for sampling, pretreatment and analysis?
Relevant answer
Hi, in our PMLab project (see my profile) we sampled Cr(VI) in PM10 with a low-volume sampler. Our analytical partners at the University of Hasselt in Belgium used an analytical method based on: "Determination of hexavalent chromium in ambient air: A story of method induced Cr(III) oxidation", Kristof Tirez et al., Atmospheric Environment 45 (2011) 5332-5341. From our technical report: fine dust is collected on an alkaline (NaHCO3) impregnated ashless cellulose filter (Whatman 41, 47 mm – 20 µm) during 23 hours (at a flow of 2.3 m3/h). After extraction of the filter in an alkaline medium, Cr(VI) is separated from Cr(III) by means of IC. The detection of Cr(VI) is performed by on-line coupling with ICP-MS.
Be aware of conversions between the two Cr species; use (separate) filters spiked with Cr(III) and Cr(VI), in concentrations according to the expected levels in PM.

Question (5 answers)
Hi guys, I have a problem in my recent research. It can be sketched as follows: how to theoretically choose a positive \alpha such that the matrix D = B*B^T/\alpha + B*inv(A)*B^T is nonsingular or well-conditioned? Here B is an m-by-n matrix (n >= m) and A is an n-by-n nonsingular matrix.
Relevant answer
I think that you need more conditions on B. For example, if B has all zero entries, then D = 0 for any alpha, so there is no alpha that meets your criteria in that case. I have found that a useful way to find the conditions you need to obtain the results you desire (D nonsingular or well-conditioned for some value of alpha) is to try fairly non-restrictive conditions and then look for counterexamples. The counterexamples can provide insight into what other conditions are needed.

Question (1 answer)
(SHHY) Algorithm: the standard Hussein, Hind and Yahya method for removing salt-and-pepper image noise.
Relevant answer
Dear friend, it is used to reduce the complexity of computations. I hope it helps you. Best regards, Dr. Indrajit Mandal

Question (3 answers)
I want to make a double-layered system consisting of a hydrogel and an electrospun matrix. How do I place a hydrogel on top of an electrospun fibrous matrix?
Relevant answer
If you are using electrospun PLLA or a similar material, it may need wetting before the gel adheres uniformly. To do this, place the scaffold in 20% ethanol for a few seconds and drop the gel on it while it is still wet. Some materials may need 40% ethanol.

Question (12 answers)
My question is quite simple. I am implementing a greedy algorithm over the coefficients of a matrix, selecting the best coefficient according to some criterion. The matrix size is 50x1000. In every iteration a single coefficient is selected for operations.
Before the algorithm starts, out of the 50,000 coefficients I have 25,000 candidates to process. In the first iteration I have to select the best coefficient among the 25,000, and in successive iterations one from 24,999, 24,998, 24,997, and so on. So, as you see, the algorithm is quite time-consuming, because in each iteration I have to calculate the benefit of all the coefficients and select the best one, and this benefit computation is itself a matrix operation that is also time-consuming. Do you have any idea how to reduce the computation time of the algorithm? My implementation language is Java. Any help or ideas are appreciated.
Relevant answer
Please give us your problem description as a model. Why do we have to assume what you mean? Why do we have to answer like "Maybe you can ...", "Maybe you want ...", "Maybe you mean ...", etc.? An exact model description lets us give you an absolutely relevant answer to your problem.

Question (4 answers)
I'm looking for an efficient, free (open source/GPL/etc.) matrix manipulation library for the .NET framework (v4.5 would be best).
Relevant answer
See http://numerics.mathdotnet.com/Matrix.html for further information.

Question (2 answers)
I have a matrix X with n*f dimensions and a matrix A with f*f dimensions. I need to calculate, for each row of X, X(i,:)*A*X(i,:)', where X(i,:)' is the transpose of X(i,:). Because of speed issues I don't want to use a loop; is there any way to do this multiplication without a loop in MATLAB?
Relevant answer
MATLAB can do it directly using X*A*X'; the values you need are on its diagonal. To avoid forming the full n-by-n product, you can compute just the diagonal with sum((X*A).*X, 2).

Question (8 answers)
I have developed mass and stiffness matrices for my problem; the natural frequencies and mode shapes are also calculated, and the damping ratio is given. How can I develop the damping matrix? Which equation should I use? If I use the equation 2*si*w*M, will I get the correct damping matrix?
Relevant answer
Dear Reseem, you mentioned you have a damping ratio and wish to form the damping matrix. The nature of damping in structural systems is very complex and not properly understood. Some experts suggest that damping is hysteretic in nature and frequency independent, and propose the use of a complex stiffness defined by K* = K(1 + 2βi), where i = √(-1) and β = the loss factor. Others suggest that damping is viscous in nature and that the resistance due to damping is given by c(du/dt), where c is the damping coefficient. The damping ratio is then given by r = c/(2Mω). The idea of a single damping ratio is strictly correct for a SDOF system, for it appears that it depends on mass alone. In order to simplify the modal analysis of damped MDOF systems, so-called proportional or Rayleigh damping is assumed, whereby C = αM + βK; that is, damping is dependent on both the mass M and the stiffness K. It can then be shown that the damping ratio varies with frequency and is given by r_i = (α + βω²)/(2ω), where in modal analysis ω = ω_i, the ith natural frequency. The basic assumption of proportional damping is that α and β are constants, so that the damping ratio varies with frequency. I believe you should adopt this damping model, with the remaining task being how to estimate α and β. This can be done experimentally by determining the damping ratios of a structure at two separate frequencies and solving for α and β. However, for structures made up of more than a single type of material, where the different materials provide drastically differing energy-loss mechanisms in various parts of the structure, the distribution of damping forces will not be similar to the distribution of the inertial and elastic forces; in other words, the resulting damping will be nonproportional.
A nonproportional damping matrix that represents this situation may be constructed by applying procedures similar to those discussed above for proportional damping matrices, with a proportional matrix being developed for each distinct part of the structure and the combined system matrix then being formed by direct assembly (Clough and Penzien 1995, page 242). I advise Reseem to consult this textbook by Clough and Penzien for more information on the synthesis of damping matrices.
Clough, R. W. and Penzien, J. (1995) Dynamics of Structures, Computers and Structures, Berkeley, CA

Question (1 answer)
Is it related to the rank of the Jacobian matrix? I know in continuous time this is true, but I am not sure about discrete time. Any references and links are welcome.
Relevant answer
Reachability and observability are very well studied for discrete-time systems. You can find your answer in the book "Linear System Theory" by Wilson J. Rugh, second edition, pp. 462-472. Having studied this chapter you will find your answer.

Question (1 answer)
I want to use hybrid functions of orthogonal functions for approximation, and face the problem of finding the operational matrices of integration, product and delay of hybrid functions (i.e. block-pulse with Taylor, Chebyshev, Hermite, B-spline, Bernstein, ...).
Relevant answer
• Optimal control of linear delay systems via hybrid of block-pulse and Legendre polynomials. Journal of the Franklin Institute, Volume 341, Issue 3, May 2004, Pages 279-293. H. R. Marzban, M. Razzaghi.
• Numerical solutions of optimal control for time delay systems by hybrid of block-pulse functions and Legendre polynomials. Applied Mathematics and Computation, Volume 184, Issue 2, 15 January 2007, Pages 849-856. Xing Tao Wang.
• Hybrid functions approach for optimal control of systems described by integro-differential equations. Applied Mathematical Modelling, Volume 37, Issue 5, 1 March 2013, Pages 3355-3368. S. Mashayekhi, Y. Ordokhani, M. Razzaghi.
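Returning to the Rayleigh damping discussion above: given measured damping ratios at two separate frequencies, α and β follow from a 2x2 linear solve of z_i = (α + βω_i²)/(2ω_i). A Python sketch with made-up numbers (the frequencies, ratios, and the toy M and K below are assumptions for illustration, not values from the thread):

```python
import numpy as np

# measured damping ratios z1, z2 at frequencies w1, w2 (rad/s) -- assumed values
w1, z1 = 2 * np.pi * 2.0, 0.02
w2, z2 = 2 * np.pi * 10.0, 0.05

# solve  z_i = a/(2*w_i) + b*w_i/2  for the Rayleigh constants a, b
M2 = np.array([[1 / (2 * w1), w1 / 2],
               [1 / (2 * w2), w2 / 2]])
a, b = np.linalg.solve(M2, np.array([z1, z2]))

# damping matrix for (toy) mass and stiffness matrices whose modes sit at w1, w2
M = np.eye(2)
K = np.diag([w1**2, w2**2])
C = a * M + b * K
```

Between w1 and w2 the resulting damping ratio dips below both measured values; outside that band it grows, which is the usual caveat with Rayleigh damping.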
Question (6 answers)
I have measured the scattering parameters of a carbon matrix at GHz frequencies; the results indicate that the dielectric constant is very high.
Relevant answer
Dear Hong, see my paper: M. Essone Mezeme, S. El Bouazzaoui, M. E. Achour, C. Brosseau, "Uncovering the intrinsic permittivity of the carbonaceous phase in carbon black filled polymers from broadband dielectric relaxation", J. Appl. Phys., 109, 2011, 074107(1)-074107(11).

Question (6 answers)
Does anyone know some applications of the smallest or largest singular value (and the corresponding singular vector) of an M-matrix? I have seen some in the synchronization and desynchronization of coupled chaotic dynamical systems, where these quantities of the M-matrix are needed. Any further descriptions or references are much appreciated.
Relevant answer
You may also see this paper for an application, which uses the eigenvalue rather than the singular value.

Question (9 answers)
I need to convert a similarity matrix into a vector, i.e., a variable that represents the matrix.
Relevant answer
The question is a bit unclear! Do you mean "symmetric matrix" or "similarity matrix"? And what do you really mean by converting a matrix to a vector (a variable that represents the matrix)? Do you want to vectorize the matrix? In MATLAB you can say V = A(:);. Note that an n*n symmetric matrix contains only n*(n+1)/2 independent variables instead of n^2. So if your matrix is symmetric you can vectorize it into a vector of size n*(n+1)/2, almost half the size needed for a general non-symmetric matrix. Again, Roberth, clarify your question and I will answer it better. Hope that helps

Question (13 answers)
I have a .txt file that contains information in the following pattern: the data is separated in the form 255,205,0 102,235,39 206,89,165 ... (that is, 3 uint8 integers separated by commas, and the groups of 3 separated by whitespace).
There are a total of 30*60 = 1800 triplets of numbers separated by commas, and the triplets are separated by spaces. Basically, these are the pixel intensities of the 3 channels of an RGB image. I need to store them in a 2-dimensional array such that the first element of each triplet goes into the 1st column, the second into the 2nd column and the third into the 3rd column. Effectively, at the end of this operation I should have a 2-dimensional matrix of size 1800x3. Please help me with this. I have attached a sample text file. Thanks in advance.
Relevant answer
@Hossein: numpy.genfromtxt won't work; the input data is not formatted correctly for this type of read. You could, however, reformat the text file to allow for simpler reads, e.g.:
f = open('Image Data.txt')
d = f.read()
f.close()
import re
d = re.sub(' ', '\n', d)  # replace each whitespace character with a newline
f = open('Image Data Formatted.txt', 'w')
f.write(d)
f.close()
Now you can easily read the tuples into Python with numpy:
my_data = numpy.loadtxt(fname='Image Data Formatted.txt', delimiter=',')
my_data.shape  # (1800, 3)

Question (6 answers)
I am doing an in vitro matrix degradation assay with Entamoeba histolytica and I observe different behavior of the pathogen towards two different components of the ECM, viz. fibronectin and collagen type I. The pathogen infects the intestine and in severe infections can breach the mucosal lining of the gut and escape to the liver, causing liver abscess; still, infected persons can also be asymptomatic carriers, while some people show severe infection, with the infection flaring up and causing massive tissue damage. So, can the ECM composition trigger such a response, which can vary from one person to the other?
Relevant answer
Dear Michel, do you have any information about the possibility of varying ECM composition between individuals?
Do you think the ECM can vary with age, between genders, and from individual to individual?

Any suggestion for an open-source matrix library?
Question (54 answers)
I am finalizing my algorithm in MATLAB, so I need to port the code to C++ in order to test it on large-scale problems using PC clusters. Do you have any suggestions for an open-source linear algebra matrix library? Reliability, memory- and computation-efficient implementation, and flexibility in usage are requested.
Relevant answer
One popular solution for C++ is the Eigen library (http://eigen.tuxfamily.org/). It uses expression templates to achieve high performance while maintaining a simple interface. You don't have to understand what expression templates are in order to use the library efficiently. BLAS and LAPACK are the most common choice. If you want a high-level C++ interface for these, you might be interested in Armadillo (http://arma.sourceforge.net/). Make sure to configure Armadillo to use BLAS and LAPACK for optimal speed (it also works without additional libraries, but may have reduced performance).

Question (4 answers)
For example, consider a shaft as shown in the picture, and consider the element types shown below.
Relevant answer
Hi, this question is very general and basic. You can find your answer in finite element textbooks.

Question (4 answers)
In many practical problems we get a large sparse matrix. Can we find the determinant of this matrix efficiently?
Relevant answer
Please show us your large sparse matrix so we can discuss it. Because there are many different types of sparse matrices, we cannot give a general answer.

Question (2 answers)
I am modeling a pipe. I have to calculate a mass matrix and a stiffness matrix using the finite element method. Every node has two degrees of freedom.
Relevant answer
Damping is usually complex to model, as it is not inherent in the other properties. The most basic models use Rayleigh (or proportional) damping.
These models assume that your damping matrix C is a linear combination of your mass and stiffness matrices: C = a*M + b*K. These models are not at all physical and can only be calibrated, via a and b, for specific frequency bands. Besides, the use of a tends to overdamp low frequencies. Regarding structural vibrations, one prefers the modal damping matrix, which is diagonal in the system's mode subspace and associates to each mode its damping ratio, which can be experimentally measured. Physically speaking, you are not providing much information about your target application, so we cannot see what role the introduction of damping plays in your model. For pipes, most damping may come from local junctions, so a uniformly distributed damping associated with your material may not yield acceptable behavior.

Question (1 answer)
Structurally decentralized fixed mode.
Relevant answer
Could you please explain what you mean by 'generic rank' and a 'structural matrix'?

Question (7 answers)
c = all(b >= -100*eps); b is a matrix I already have. What can you say about matrix c? 2.2204e-16 is the value of eps that the command window shows.
Relevant answer
The machine epsilon (short: eps) is the distance from 1 to the next larger floating-point number, i.e. the smallest relative spacing that a floating-point program like MATLAB can represent. Try this:
>> format long e
>> x=1; y=x+eps;
>> y-x
ans =
2.220446049250313e-016
>> x=1; y=x+eps/2;
>> y-x
ans =
0
You see that y-x = 0: MATLAB cannot recognise a difference from 1 of less than eps:
>> eps
ans =
2.220446049250313e-016

Question (2 answers)
I have Ax = Iy, where A is a matrix and I the identity matrix.
Relevant answer
If y is a matrix, just use the command rref([A y]). I must say that I use Octave instead of MATLAB, but I believe the commands and structures are the same.

Question (6 answers)
We found the following rather simple result.
We suspect it was proven decades ago, but we couldn't turn up this fact after several searches. (If it is known, please provide a reference.) Let B be a square matrix over a finite commutative ring with unity. Then det(B) equals zero or is a zero divisor if and only if the set of row vectors of B is dependent.
Relevant answer
Otherwise, look at the beautiful book by Brown, "Matrices over Commutative Rings", chapter 4 (where the notion of rank is discussed). In chapter 5 you find the aforementioned theorem of McCoy. Another good book on the topic is McDonald, "Linear Algebra over Commutative Rings", first chapter.

Constrained generalized inverse of a non-square matrix?
Question (14 answers)
Suppose that we have a system of linear equations given by: Ax = b, A \in R^{m,n}, subject to: x_min < x < x_max. How can we obtain the minimum norm solution respecting the constraint "without optimization"?
The answer given by Peter T. Breuer needs some correction. Namely, when one searches for the smallest norm among convex combinations of vertices, one cannot restrict to combinations of only those vertices that are closest to the origin. An easy counterexample comes from the linear segment in R2 with the following two vertices: (1, 3) and (1, -1). The vertex closest to zero is (1, -1), but the element of the segment closest to 0 is (1, 0), and its representation as a convex combination of vertices needs both endpoints of the segment.
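This counterexample is easy to verify numerically with a brute-force scan over convex combinations of the two vertices (a quick sketch; the grid resolution is arbitrary):

```python
import numpy as np

# segment between v1 = (1, 3) and v2 = (1, -1): which convex combination
# (1-t)*v1 + t*v2 lies closest to the origin?
v1, v2 = np.array([1.0, 3.0]), np.array([1.0, -1.0])
t = np.linspace(0.0, 1.0, 100001)
pts = np.outer(1 - t, v1) + np.outer(t, v2)
i = np.argmin(np.linalg.norm(pts, axis=1))
print(t[i], pts[i])   # t = 0.75, point (1, 0): both vertices carry nonzero weight
```

The minimizer uses weights 1/4 and 3/4, confirming that the closest-to-origin vertex alone does not suffice.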
Question
I have got a data matrix generated on some plant species as proportions from certain quantitative measurements. Since such data (proportions, percentages, probabilities) are generally known to be skewed with unequal variances, I intend to transform mine before ANOVA. I understand that the arcsine or logit transform can be the best for such data. However I am constrained by the fact that the proportions obtained by me do not only lie between 0 and 1. Many of them include values above 1 (i.e. percent increase, giving values greater than 100%). How best (with reasons) can the data be transformed prior to the proposed analysis?
You may not need to transform your data at all. Proportion data bounded by 0 and 1 can definitely exhibit skewness. With these kinds of data, values at the extremes (for example, >0.8 and <0.2) don't respond to transforms very well. You can test this for yourself by arcsin transforming extreme values. I would generally avoid transforming data before analyses without good reason.
For the kind of proportional change in your data, you don't have any boundaries to your data. So, you might have a normal distribution. Have you looked at a histogram of your data? You could also just go ahead and run your ANOVA and check the normality of the model residuals.
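Two points above, how the arcsine (angular) transform treats extreme proportions and why it cannot handle values above 1, can be checked in a few lines (a NumPy sketch with illustrative numbers only):

```python
import numpy as np

# the angular (arcsine-square-root) transform, defined only for p in [0, 1]
p = np.array([0.01, 0.05, 0.50, 0.95, 0.99])
ang = np.arcsin(np.sqrt(p))
print(ang)                       # extremes are stretched relative to the middle

print(np.arcsin(np.sqrt(1.5)))   # nan: a "150%" proportion is out of range
```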
Question
I have a problem related to matrix inversion.
Matrix inversion is a good example of conservatism in science. Unless you have very large sparse matrices (e.g. > 1000 x 1000 with most matrix elements zero), there is only one recommendable method: Penrose pseudo-inversion, which works for arbitrary m x n matrices and in all cases has a meaningful result, which for invertible matrices reduces to the usual matrix inverse. Most books on linear algebra don't mention the method. Mathematica implements it so that it also works for complex-valued matrices; its name there is PseudoInverse. The method is a simple derivative of the deep and insightful singular value decomposition, also known as the SVD. You have to study it if you want to know the state of the art in inverting matrices. With the classical methods (such as the one mentioned in the first answer), you always have a problem getting accurate results for matrices with nearly linearly dependent rows or columns. In the many cases in which I made comparisons, pseudo-inversion was always much faster and much more accurate than LU decomposition or Cholesky decomposition.
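The behavior described above, the pseudoinverse agreeing with the ordinary inverse when one exists and still giving a meaningful minimum-norm least-squares answer when it does not, is easy to see with NumPy's SVD-based pinv (a small sketch with made-up matrices):

```python
import numpy as np

# pinv reduces to the ordinary inverse when the matrix is invertible...
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(np.linalg.pinv(A), np.linalg.inv(A)))  # True

# ...and still returns a sensible answer for a singular matrix,
# namely the minimum-norm least-squares solution of Sx = b
S = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1, ordinary inverse undefined
b = np.array([1.0, 2.0])
x = np.linalg.pinv(S) @ b
print(x)                                  # [0.2 0.4], and S @ x reproduces b
```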
Question
13 answers
In the finite-dimensional case, the spectrum of a matrix is precisely its set of eigenvalues. In fact, these concepts are used in several methods, for example spectral methods for the solution of PDEs and inversion problems.
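As a minimal illustration of this correspondence, the spectrum of a finite-dimensional matrix is just its set of eigenvalues:

```python
import numpy as np

# a small symmetric example: spectrum = set of eigenvalues
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
spectrum = np.linalg.eigvals(A)
print(np.sort(spectrum))   # [1. 3.]
```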
What has present day spectrometry inherited from the low-level gamma ray scintillation methods introduced by Leonidas D. Marinelli, beginning in 1950?
Question
In 1950 Marinelli invented and developed a "twin" scintillator method for dosimetry and spectrometry of fast neutrons, and applied it to the measurement of the cosmic-ray neutron background (Patent 27795-703, June 11, 1957). By 1953 he had extended the sensitivity of the method to the measurement of the natural K-40 content of man and animals. In 1958 he presented the Janeway Lecture on "Radioactivity and the Human Skeleton." In 1969 he published "Localization of scintillations in gamma-ray cameras by time-of-flight techniques: Linear resolutions attainable in long fluorescent rods." In 1970 he published "Time-of-flight gamma-ray camera of large dimensions," with the objective of "measuring time intervals in the nanosecond region. This suggests the use of time-of-flight techniques in the localization of scintillations within large fluors and hence the estimate of the distribution of low-level radioactivity in vivo. We devote special attention to the solution of Fredholm's equation of the first kind, linking the scintillation matrix to the radioactivity distribution matrix to develop the kernel of the equation (and hence the physical collimation) most appropriate to the objective."
In a slightly different way, we utilise low-level scintillation counting for estimation of the U, Th and K-40 content of samples collected in a grid pattern from a survey area. We also have in situ measurements of all these data. We once found that the lab data showed a reversal of U and Th with respect to the in situ values in some samples, an inhomogeneity indicative of a different geological phenomenon.
Question
I got the scheme graph as attached. Can I determine the molecular weight of the polymer? I don't understand whether the data is correct or not.
I am not sure whether these signals are real or not; they seem to be noise. I have also found this type of signal both with and without sample. Since the absolute intensity is very low, the signals are also very weak. In MALDI-MS a real signal's intensity should be about 3 times higher than the noise. I think you can try ferulic acid as a matrix; it can work for high-molecular-weight compounds. Furthermore, a polymer spectrum never looks like this; we should get a continuous, tree-like series of signals. See the attached paper; it may help you.
Question
Different anions do not have the same effects on bacteria. How do I choose the functional group? Are there any articles I can consult?
Hi, I'm not an expert in this field, but I found this article. Hope this is useful.
Question
I have a hollow tapered vertical tower. I have developed the stiffness matrix considering translational and rotational degrees of freedom, using the function defined in Bathe's book. But since I need to use a tuned liquid column damper at the top, I have to develop the stiffness matrix in such a way that only translational motion along the horizontal is considered. Can you suggest any method? I couldn't find a suitable one. I also tried in ANSYS using the BEAM54 element, restricting ROTZ and UY, but my frequencies do not match the previously calculated frequency.
Hi Raseem,
You can give a try to MASS21 and COMBIN14 elements.
Question
I don't have an analytical expression for the periodic solution at which I compute the monodromy matrix, but I have proved that it exists via the Gaines and Mawhin continuation theorem.
Thanks to all of you, I have finally solved the problem.
Question
Actually, I have already downloaded and installed the markovchain package in R, but I have difficulty following the developer's instruction guide. I am estimating the transition probabilities of loan data across different states of delinquency.
Dear Rudy,
can you further explain your problem or give more details about the data? In so doing, it will be easier to help you.
Best regards,
Luca
Question
I have established an SMC/EC 3D co-culture system. Now I am trying to do immunofluorescence on the co-culture. I am having trouble: the ECs form a uniform monolayer on top of the SMCs, and the SMCs are in a collagen matrix, so I need to do anti-alpha-actin IF of the SMCs. How can I do it? I can fix the sample and use Triton X for permeabilization, but it would disrupt the ECs' adhesions. Also, if I use collagenase and make cell suspensions, it might interfere with the cell morphology. Is there a way to do IF without disturbing both cell types? I am using µ-slides from ibidi.
Question
In doping, which matters most: increasing the metal percentage in the SC matrix, phase control of the SC, varying the metal on the SC, or something else?
The quantity and type of doping will depend on what type of semiconductor (p- or n-) you want to produce and to what end. In other words, the optimal property you are seeking depends on what/where you intend to use the doped SC. Hope this helps.
Question
What are the advantages of using a Pearson correlation matrix instead of a polychoric correlation matrix in factor analysis?
I am writing an article and I need convincing reasons to use the Pearson correlation matrix; I have not found an answer anywhere.
With my best wishes
In other words, Pearson correlations are the most commonly used, especially when you have normal distributions. Polychorics are for when you have binary or very oddly distributed variables.
Question
As I understand the working of Gauss quadrature and the calculation of global displacements in the FE method:
Gauss quadrature is used to quickly calculate the values in the stiffness matrix, which is much easier to do in the element's local coordinate system (xi and eta).
On using single-point integration for this purpose, a much softer value is returned in the global stiffness matrix, which is later solved for displacements.
So, finally, there will be non-zero nodal displacements and thus a strain is induced in the finite element. Why is it said that there is no strain energy in the hourglass mode?
I have read some articles which explain this using the concept of integration points. According to them, the integration point has not moved, so there will be no strain. But aren't we calculating all values at the nodes in the global equation [K][X]=[F] rather than at the integration points?
In the FE method the various integrations are performed numerically using Gauss quadrature, and the number of Gauss points used dictates the degree of the integrand that can be integrated exactly. If fewer points are used, the method yields an approximation. The standard conforming elements used in most if not all commercial FE packages are described as over-stiff, in the sense that in a coarse mesh the displacements resulting from a force-driven problem will be less than the true values. They will, however, converge to the correct values with mesh refinement. To provide some counter to this over-stiff behaviour, FE software offers reduced integration schemes which use fewer points than required for exact integration. This 'fiddle' leads to better numerical results for some types of behaviour, as the stiffness is reduced. Unfortunately, however, it also induces other unwanted behaviours that can sometimes upset the results of an analysis: the hourglassing issue. An hourglass mode might be considered as a spurious kinematic mode or mechanism, in that it would not occur in reality or with exact integration. A mechanism is a mode of deformation that requires no forces to drive it. Hourglass modes occur at the single-element level and may or may not propagate to the entire mesh, depending on the type and degree of the element you use.
So returning to your question: the stiffness matrix is formed by integration at the Gauss points but, as you point out, the displacements are based on nodal values. So if in your FE model you load it such that the nodal displacements cause no strain at the Gauss points, then that mode of displacement will induce no strain energy in the element, and therefore the element will have no stiffness to resist this load. With full integration there are no such hourglass modes.
To develop your understanding you might look at, for example, the plate-membrane (plane-stress) elements, in both four- and eight-noded forms, with reduced and full integration. Try single-element tests with applied nodal displacements and forces to excite or drive the element in one of its hourglass modes.
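The single-element test suggested above is easy to reproduce numerically. Below is a minimal sketch (a single bilinear plane-stress quad on the unit square, with illustrative material constants; all names are placeholders): the element stiffness assembled with the full 2x2 Gauss rule has exactly three zero-energy modes (the rigid-body modes), while the one-point reduced rule adds two spurious hourglass modes.

```python
import numpy as np

# Material and geometry (illustrative): unit-square bilinear quad, plane stress
xy = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
E, nu = 1.0, 0.3
D = E/(1 - nu**2) * np.array([[1, nu, 0], [nu, 1, 0], [0, 0, (1 - nu)/2]])

def stiffness(gauss_rule):
    """Assemble the 8x8 element stiffness for a list of (xi, eta, weight)."""
    K = np.zeros((8, 8))
    for xi, eta, w in gauss_rule:
        # shape-function derivatives w.r.t. (xi, eta)
        dN = 0.25 * np.array([[-(1-eta), -(1-xi)],
                              [ (1-eta), -(1+xi)],
                              [ (1+eta),  (1+xi)],
                              [-(1+eta),  (1-xi)]])
        J   = dN.T @ xy                  # 2x2 Jacobian
        dNx = dN @ np.linalg.inv(J).T    # derivatives w.r.t. (x, y)
        B = np.zeros((3, 8))
        B[0, 0::2] = dNx[:, 0]
        B[1, 1::2] = dNx[:, 1]
        B[2, 0::2] = dNx[:, 1]
        B[2, 1::2] = dNx[:, 0]
        K += w * np.linalg.det(J) * B.T @ D @ B
    return K

g = 1/np.sqrt(3)
full    = [(-g, -g, 1.), (g, -g, 1.), (g, g, 1.), (-g, g, 1.)]  # 2x2 rule
reduced = [(0., 0., 4.)]                                        # 1-point rule

for name, rule in [("full 2x2", full), ("reduced 1-pt", reduced)]:
    n_zero = int(np.sum(np.abs(np.linalg.eigvalsh(stiffness(rule))) < 1e-8))
    print(name, "zero-energy modes:", n_zero)
# full 2x2 -> 3 (rigid-body modes); reduced 1-pt -> 5 (3 rigid + 2 hourglass)
```

The two extra zero eigenvalues of the reduced-integration stiffness are precisely the hourglass modes: nodal displacement patterns that produce zero strain at the single Gauss point and hence carry no strain energy.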
Question
2. I am working on UHTCs (e.g. ZrB2-SiC composites)
I think the micro-hardness method must be accompanied by other mechanical tests, such as tensile strength, shear stress or sometimes wear testing, to obtain a comprehensive result for analysing the different aspects of phase deformation and the change of mechanical/physical properties of CMCs.
In addition, doing some post-analysis on SEM and optical images to determine the porosity percentages, and relating them to the mechanical properties, would be great.
Question
In the attached paper by Francois Leyvraz, I could not get past eqs. (6-7b) for the following reason: if you differentiate eq. (5) with respect to t, you obtain dot(R)^T R + R^T dot(R) = 0 or, using the nomenclature of the paper, Omega_b + dot(R)^T R = 0; Omega_l does not appear. Moreover, if R is antisymmetric (as it must be), that does not imply that dot(R) is also antisymmetric. The product of an arbitrary matrix with an antisymmetric one does not have to be antisymmetric...
Firstly, by taking the transpose of Omega_b = R^T dot(R) you obtain dot(R)^T R, so that the first equation gives Omega_b + Omega_b^T = 0, i.e. Omega_b is antisymmetric as claimed. The same trick can be used for Omega_l after multiplying eq. (6) by R from the left and R^-1 from the right.
Second, R is not antisymmetric, it is an orthogonal matrix. Nor is dot(R) antisymmetric in general; it satisfies:
dot(R)^T = -R^-1 dot(R) R^-1
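Both points are easy to verify numerically. A quick sanity check, using one convenient family of rotations (rotation about the z-axis, chosen purely for illustration) and a finite-difference derivative:

```python
import numpy as np

def Rz(t):
    """Rotation about the z-axis (just one convenient family R(t))."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

t, h = 0.7, 1e-6
R    = Rz(t)
Rdot = (Rz(t + h) - Rz(t - h)) / (2*h)   # central-difference derivative

Omega_b = R.T @ Rdot
print(np.allclose(Omega_b + Omega_b.T, 0, atol=1e-6))   # True: antisymmetric
print(np.allclose(R.T @ R, np.eye(3)))                  # True: R is orthogonal
# the identity dot(R)^T = -R^-1 dot(R) R^-1 also holds:
print(np.allclose(Rdot.T, -np.linalg.inv(R) @ Rdot @ np.linalg.inv(R), atol=1e-6))
```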
Question
I need to determine  Pb in food samples by VARIAN 240FS AAS. What Is the best matrix modifier for the determination of Pb by VARIAN 240FS AA fast sequential atomic absorption? What type of chemical modifiers would you suggest?
The best modifier for the determination of Pb by the VARIAN 240FS AA is KMnO4.
Question
Deleted!
@Mohamed: Sir, what you are saying is correct for a system of linear equations (fsolve), but here the equations are a system of matrix equations. @Hazim: I downloaded the book, sir; I will check it out and let you know. Thank you, sir :)
Question
For a linear measurement equation l=Ax+v and a linear equality constraint Cx+d=0, where l is the measurement vector, A is the design matrix, x is the parameter vector to be estimated, v is the measurement error which is zero-mean with known covariance matrix R, I known how to estimate x using the Lagrange multiplier, the question is how to calculate the covariance matrix associated with this estimate?
If the covariance matrix of v is C_v = s^2 P^-1 and, setting N = A^T P A, the unconstrained solution is x_0 = N^-1 A^T P l and the constrained one is
x = x_0 - N^-1 C^T (C N^-1 C^T)^-1 (C x_0 + d).
Covariance propagation gives the covariance matrix of x as
C_x = s^2 [N^-1 - N^-1 C^T (C N^-1 C^T)^-1 C N^-1].
v is estimated by v = l - A x, and s^2 by s^2 = v^T P v / f,
where f is the number of degrees of freedom.
For the detailed derivation see the attached file
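As an illustrative sketch of these formulas (random A, C, d and l, and a unit weight matrix; all sizes and names here are placeholders, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 4                          # illustrative dimensions
A = rng.standard_normal((m, n))       # design matrix
P = np.eye(m)                         # weight matrix, P = s^2 * Cv^-1
C = rng.standard_normal((1, n))       # linear constraint C x + d = 0
d = np.array([0.5])
l = rng.standard_normal(m)            # measurements

N    = A.T @ P @ A
Ninv = np.linalg.inv(N)
x0   = Ninv @ A.T @ P @ l                    # unconstrained estimate
W    = np.linalg.inv(C @ Ninv @ C.T)
x    = x0 - Ninv @ C.T @ W @ (C @ x0 + d)    # constrained estimate

# covariance propagation (the factor s^2 is estimated separately as v^T P v / f)
Cx = Ninv - Ninv @ C.T @ W @ C @ Ninv

print(np.allclose(C @ x + d, 0))      # True: the constraint is satisfied
```

Note that Cx comes out symmetric and positive semidefinite, as a covariance matrix must be; the constrained covariance is never larger than the unconstrained N^-1.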
How can I solve a system of matrix equations?
Question
For example, if I have a system of matrix equations like this:
(A*X*B) - (C*Y*D) - (I*X'*G) = Q1,  [EQ1]
(E*X*F) + (G*Y*H) = Q2.  [EQ2]
All the matrices are of size n*n. A, B, C, D, E, F, G, H, I, Q1, Q2 are known matrices (all invertible), and X and Y are the unknown matrices. I tried to use the vectorization technique, but because X' and X appear together in EQ1 I am unable to do that. Can anyone suggest a method to solve it (preferably using MATLAB)? Thanks in advance.
First of all, X' should be specified. If it is the transpose of X, then we have two linear equations in matrices with two unknown matrices, so a solution exists; otherwise it is a question of the consistency of the system. Assuming X' is the transpose of X: since all matrices are non-singular and of the same order, equations 1 and 2 reduce to
(C^-1*A*X*B*D^-1) - Y - (C^-1*I*X'*G*D^-1) = C^-1*Q1*D^-1
(G^-1*E*X*F*H^-1) + Y = G^-1*Q2*H^-1.
Eliminating Y between these two equations, we can solve the single equation
(C^-1*A*X*B*D^-1) - (C^-1*I*X'*G*D^-1) + (G^-1*E*X*F*H^-1) = C^-1*Q1*D^-1 + G^-1*Q2*H^-1,
using the fact that X = (x_ij) and X' = (x_ji).
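For what it is worth, the vectorization route the questioner tried does still work if the X' term is handled with the commutation matrix K_n, which satisfies vec(X^T) = K_n vec(X). A hedged NumPy sketch (all matrices are random stand-ins; M plays the role of the matrix called I in the question, renamed to avoid clashing with the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # small illustrative size

# Random known matrices (stand-ins for A..H, I, Q1, Q2 in the question)
A, B, C, D, E, F, G, H, M = (rng.standard_normal((n, n)) for _ in range(9))

# Ground-truth X, Y used only to build consistent right-hand sides
Xt, Yt = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Q1 = A @ Xt @ B - C @ Yt @ D - M @ Xt.T @ G
Q2 = E @ Xt @ F + G @ Yt @ H

vec = lambda Z: Z.flatten(order='F')   # column-stacking vec operator

# Commutation matrix K with vec(X.T) = K @ vec(X)
K = np.zeros((n*n, n*n))
for i in range(n):
    for j in range(n):
        K[j + i*n, i + j*n] = 1.0

# vec(A X B) = kron(B.T, A) vec(X); the X.T term picks up the extra factor K
top = np.hstack([np.kron(B.T, A) - np.kron(G.T, M) @ K, -np.kron(D.T, C)])
bot = np.hstack([np.kron(F.T, E),                        np.kron(H.T, G)])
S   = np.vstack([top, bot])
rhs = np.concatenate([vec(Q1), vec(Q2)])

sol = np.linalg.solve(S, rhs)          # S is generically nonsingular
X = sol[:n*n].reshape(n, n, order='F')
Y = sol[n*n:].reshape(n, n, order='F')

print(np.allclose(A @ X @ B - C @ Y @ D - M @ X.T @ G, Q1))  # should print True
print(np.allclose(E @ X @ F + G @ Y @ H, Q2))                # should print True
```

The same construction carries over to MATLAB with kron and a commutation (vec-permutation) matrix built the same way.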
Question
I have transferred the 2D slices into a single 3D matrix.
It seems to be a volume rendering problem. There are some free implementations on the web for volume rendering.
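If the slices are already in memory as 2D arrays, stacking them into a single 3D volume is a one-liner in NumPy (the sizes below are purely illustrative):

```python
import numpy as np

# Hypothetical stack of 2D slices (e.g. read from image files) -> one 3D volume
slices = [np.random.rand(128, 128) for _ in range(40)]
volume = np.stack(slices, axis=2)    # axis 2 = slice direction
print(volume.shape)                  # (128, 128, 40)
```

The resulting 3D array can then be handed to a volume-rendering tool; VTK and ParaView, for example, both provide volume rendering of such regular grids.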
Question
A [4,4] matrix A is divided into 4 blocks an of size [2,2]; then how does an entry Axx map to a block entry an(y,z)?
Suppose A = (a_ij) is n x n and A = (B_pq), where each block B is an m x m matrix and m divides n. If n = m*N, then p and q run from 1 to N. Write i = m*I + s and j = m*J + t with 1 <= s, t <= m (taking s = m, with I reduced by one, when the remainder is zero, and similarly for t). Then a_ij is the (s, t) entry of the block B_{I+1, J+1}.
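The index arithmetic above reduces to integer division and remainder. A small check for the 4x4 / 2x2 case in the question (the helper name is hypothetical):

```python
import numpy as np

def block_entry(i, j, m):
    """Map 1-based element indices (i, j) of A to (block row I, block col J,
    row s, col t inside that block), all 1-based, for m x m blocks."""
    I, s = divmod(i - 1, m)
    J, t = divmod(j - 1, m)
    return I + 1, J + 1, s + 1, t + 1

# Check against direct slicing on a 4x4 matrix split into 2x2 blocks
A = np.arange(16).reshape(4, 4)
m = 2
i, j = 3, 2                         # element a_32 (1-based)
I, J, s, t = block_entry(i, j, m)
B = A[(I-1)*m:I*m, (J-1)*m:J*m]     # block B_IJ
print(B[s-1, t-1] == A[i-1, j-1])   # True
```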
Question
The eigenvalues of εA+(1-ε)B (0<ε<1) lie inside the convex hull formed by all the eigenvalues of A and B together.
Is the above statement true or false? (I guess it is true.)
How can it be proved?
I am a little surprised about the precise question here: are you talking about matrices with real eigenvalues only? Or do you assume positive definiteness?
For Hermitian matrices a lot of work has been done in recent years; one would have to check the literature for this.
Also, I'd like to see a reference for the theorem (Weyl?) mentioned by Peter T Breuer.
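Without such assumptions the statement is false in general: the two nilpotent shift matrices give a quick counterexample. Both A and B below have only the eigenvalue 0, so the convex hull of their eigenvalues is the single point {0}, yet the midpoint ε = 1/2 has eigenvalues ±1/2:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])   # nilpotent: eigenvalues 0, 0
B = np.array([[0., 0.], [1., 0.]])   # nilpotent: eigenvalues 0, 0

M = 0.5*A + 0.5*B                    # eps = 1/2
print(np.linalg.eigvals(A))          # [0. 0.]
print(np.linalg.eigvals(M))          # eigenvalues +/- 0.5, outside the hull {0}
```

(For Hermitian A and B the convex-hull statement does hold, which is consistent with the remark above about Hermitian matrices.)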
Question
If the S matrix is given for a general optical waveguide, I need to know the effective index and the loss associated with it. The real part of the complex propagation constant gives the effective index, and the imaginary part gives the attenuation coefficient.
See Lecture 15 at the following link.  It is the first or second topic as I recall...
Question
Suppose that A is a matrix which is marginally stable, and that K stabilizes A in the sense that A+K is asymptotically stable. Is it then true that A+εK must also be asymptotically stable for arbitrary ε > 0? How can this be proved?
It seems that something like this can also be useful to get conditions:
If
1) A is marginally stable,
2) A+K is a stability matrix,
and either K^T*A + A^T*K or K^T*K is positive definite (brief notation: "> 0"),
then
B(epsilon) = A + epsilon*K
is a stability matrix for all epsilon in (0,1).
Define
C(epsilon) = (A + epsilon*K)^T * (A + epsilon*K)
= A^T*A + (epsilon^2)*K^T*K + epsilon*(K^T*A + A^T*K) > 0
for epsilon in (0,1), since:
A^T*A >= 0 (symmetric, positive semidefinite);
C(1) > 0 (symmetric, positive definite) - otherwise A+K would be singular, and then it could not be a stability matrix;
C(epsilon) > 0 for epsilon > 0, since A^T*A >= 0, (epsilon^2)*K^T*K > 0 and epsilon*(K^T*A + A^T*K) >= 0 (the sum of a positive definite matrix and two positive semidefinite ones).
The eigenvalues of C(epsilon) are real and non-negative (C(epsilon) is symmetric and at least positive semidefinite) and strictly increasing in epsilon, since
dC(epsilon)/d(epsilon) = 2*epsilon*K^T*K + K^T*A + A^T*K > 0 for epsilon non-negative,
so its eigenvalues are real, non-negative and grow with epsilon.
One can also relax K^T*A + A^T*K positive definite to positive semidefinite if, in addition, K is nonsingular, since then K^T*K is positive definite and C(epsilon) > 0 for epsilon > 0.
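Whatever the general answer, individual cases are easy to probe numerically. A quick check for one concrete example (A marginally stable with eigenvalues ±i, K = -I, so A+K is asymptotically stable; here A+εK has eigenvalues -ε ± i, hence is Hurwitz for every ε > 0 - this verifies one instance, it is not a proof of the general claim):

```python
import numpy as np

A = np.array([[0., 1.], [-1., 0.]])   # marginally stable: eigenvalues +/- i
K = -np.eye(2)                        # A + K has eigenvalues -1 +/- i

for eps in np.linspace(0.05, 1.0, 20):
    lam = np.linalg.eigvals(A + eps*K)
    assert lam.real.max() < 0         # A + eps*K is Hurwitz for this example
print("A + eps*K is Hurwitz for all sampled eps in (0, 1]")
```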
Question
Is this matrix positive definite or positive semidefinite?
[ 2 0 0 0
0 2 0 0
0 0 2 0
0 0 0 0 ]
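For a diagonal matrix the eigenvalues are simply the diagonal entries, so this can be read off directly: the eigenvalues are {2, 2, 2, 0}, all non-negative but not all positive, so the matrix is positive semidefinite and not positive definite (it is singular). A direct numerical check:

```python
import numpy as np

A = np.diag([2., 2., 2., 0.])
lam = np.linalg.eigvalsh(A)            # ascending order
print(lam)                             # [0. 2. 2. 2.]
print(bool(np.all(lam >= 0)))          # True  -> positive semidefinite
print(bool(np.all(lam >  0)))          # False -> not positive definite
```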
Question
such that A R^2 + B R + C = 0,
where A, B, C are matrices of n x n order. Its solution is needed in order to obtain a minimal matrix.
As suggested by Dr. Peter above, take the equation as R^2 + B R + C = 0 (premultiplying by A^-1 if necessary). Find the eigenvalues and eigenvectors of B^2 - 4C and diagonalize it as D = P^-1 (B^2 - 4C) P, where P is the matrix of eigenvectors. Compute the diagonal matrix D1 such that D1 x D1 = D, i.e. D1 = D^(1/2). Form the matrix Q = P D1 P^-1. Then R = (-B + Q)/2 or (-B - Q)/2. (Note that this quadratic-formula construction requires B to commute with Q, which holds for instance when B and C commute.)
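A numerical sketch of this recipe (using a deliberately commuting pair B, C - both are polynomials in the same matrix - so that the matrix quadratic formula applies; the particular values are illustrative):

```python
import numpy as np

# Commuting example: with S = [[0,1],[1,0]], B = 5I + S and C = 6I + 2S
B = np.array([[5., 1.], [1., 5.]])
C = np.array([[6., 2.], [2., 6.]])

Disc = B @ B - 4*C                     # matrix discriminant B^2 - 4C
w, P = np.linalg.eig(Disc)             # eigenvalues w, eigenvector matrix P
Q = P @ np.diag(np.sqrt(w)) @ np.linalg.inv(P)   # Q with Q^2 = B^2 - 4C

R = 0.5 * (-B + Q)                     # one of the two roots
print(np.allclose(R @ R + B @ R + C, 0))   # True
```

If B and C do not commute, the residual of this construction is generally non-zero and a different method (e.g. solving the associated linearized eigenproblem) is needed.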
Question
In quantitative measurements using thick targets by the relative method, how does one correct for stopping power when the proton beam has a different range in the standard and in the sample? The standard and the sample also differ slightly in matrix. Since the beam stops completely in the targets, how should I apply a stopping-power or range correction to the data when the samples are thick?
There is a standard formula for the thick-target correction in which you can take into account the ratio of the stopping powers of the standard and the sample. One can calculate the stopping power of the elements with the help of the TRIM program.
Question
I want to prepare the Arlequin input file for my 0/1 matrix data (dominant marker). The data set is huge. I want to record the numbers of haplotypes between individuals of one population, between populations of one group, and so on. But how can I find the haplotypes in my huge data set? Is there any software or suitable way of doing it?
Dear Najmeh,
For your question, you can use the DnaSP software, because with it you can easily find haplotypes, and the program can also create an input file for Arlequin.
All the best,
Serkan
Question
I am trying to develop a polymer matrix patch. Can anyone please explain what the minimum and maximum polymer concentrations (%) would be in a transdermal polymer matrix patch?