
Modeling - Science topic

Questions related to Modeling
Question
4 answers
I am a biologist who is trying to become more quantitative as well as more model savvy. One excellent tool I have stumbled upon is the SimBiology toolbox for MATLAB, where you can graphically lay out your model. This has been very useful, but I have come up against a problem: I cannot use a conditional statement.
My goal is to make a bacterial growth model such that when there is sugar present, the bacterial population increases while if there is no sugar present, the cells start to die. Does anyone have experience with this kind of "if-then" statement in this program?
I am also open to any recommendations on modeling in general, or other EASY programs that can be used to make models.
Relevant answer
Answer
Here are a few tools that you might want to consider: ModelBuilder (http://model-builder.sourceforge.net/), Cain (http://cain.sourceforge.net), and XPPAUT (http://www.pitt.edu/~phase/). I am not sure how hard or easy they are to use. I don't know what sort of modeling approach you use (ODE? stochastic?), so I am not sure how relevant these tools might be to what you wish to do. All are open-source, and the first two are available in Ubuntu repositories. Since you talk about bacteria in a sugar(-free) environment, are you aware of http://www.ncbi.nlm.nih.gov/pubmed/19186128 ? NetLogo (ccl.northwestern.edu/netlogo/) might offer similar capabilities; in any case, NetLogo is fun to play with.
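Outside any particular toolbox, the "if sugar then grow, else die" logic can also be written directly as a switched ODE. Below is a minimal Python/SciPy sketch (all parameter values and thresholds are made up for illustration); SimBiology itself would express the same switch through events or rule conditions rather than a literal if-statement.

from scipy.integrate import solve_ivp

def rhs(t, y, mu=0.8, d=0.3, k=0.5):
    B, S = y                      # bacteria, sugar
    if S > 1e-9:                  # sugar present: growth, sugar consumption
        return [mu * B, -k * B]
    return [-d * B, 0.0]          # sugar exhausted: death

sol = solve_ivp(rhs, (0, 20), [0.01, 1.0], max_step=0.01)
print(sol.y[0, -1])               # final bacterial population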
Question
4 answers
Currently, I'm doing physicochemical studies of ionic liquids (ILs) and MDEA aqueous mixtures. I have density, viscosity, and surface tension data for the ternary mixtures. Do you have any idea or suggestion on how to relate the ternary mixtures (IL-MDEA-water) to each pure compound (IL, MDEA, and water) using a model or by setting up an empirical equation?
Relevant answer
Answer
A simple yet effective method for representing the density (but also dielectric constants, surface tensions, and absolute viscosities) of mixtures is the Jouyban-Acree model. It allows calculation of the physical properties of mixtures at various temperatures using pure-substance properties and parameters regressed from experimental data.
A model for the density of binary and ternary mixtures is presented in the article "Mathematical representation of the density of liquid mixtures at various temperatures using Jouyban-Acree model": http://nopr.niscair.res.in/bitstream/123456789/18069/1/IJCA%2044A%288%29%201553-1560.pdf
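For what it is worth, here is a hedged sketch of the binary form of the Jouyban-Acree density model as I understand it from that article (please verify the exact form and the regressed A_j constants against the paper; the numbers below are purely illustrative).

import numpy as np

def jouyban_acree_density(f1, rho1, rho2, T, A):
    """ln(rho_mix) = f1*ln(rho1) + f2*ln(rho2) + (f1*f2/T) * sum_j A_j*(f1 - f2)**j."""
    f2 = 1.0 - f1
    ln_rho = f1 * np.log(rho1) + f2 * np.log(rho2)
    ln_rho += (f1 * f2 / T) * sum(a * (f1 - f2) ** j for j, a in enumerate(A))
    return np.exp(ln_rho)

# hypothetical fractions, pure-component densities (kg/m3), temperature (K), constants
print(jouyban_acree_density(f1=0.3, rho1=998.0, rho2=1040.0, T=303.15, A=[5.0, -2.0, 1.0]))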
Question
13 answers
In mathematical and computational models, when you estimate or assume the values of some parameters, there is always a strong tendency for your model not to predict an accurate result, or even to generate errors.
How do you correct the errors generated by your model, especially during a predictive process? And how do you ensure that your computer model generates an accurate result?
Your answers will be highly appreciated
Regards
Relevant answer
Answer
Try looking at the work of Kennedy & O'Hagan on Bayesian Calibration of Computer Experiments, or a short review I wrote: http://eprints.soton.ac.uk/146605/1/Black-box_calibration_for_complex_systems_simulation.pdf . The idea in these works is to model unknown effects with what is, essentially, a statistical model of the difference between your model and your data.
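To make the idea concrete, here is a toy sketch (not the full Kennedy & O'Hagan machinery) in which a Gaussian process is fitted to the simulator-data discrepancy and then added back to new simulator predictions; the simulator and data here are invented purely for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(x):                               # stand-in for your computer model
    return np.sin(x)

rng = np.random.default_rng(0)
x_obs = np.linspace(0, 5, 20).reshape(-1, 1)
y_obs = np.sin(x_obs).ravel() + 0.2 * x_obs.ravel() + 0.05 * rng.normal(size=20)

discrepancy = y_obs - simulator(x_obs).ravel()  # data-minus-model residual
gp = GaussianProcessRegressor().fit(x_obs, discrepancy)

x_new = np.array([[2.5]])
corrected = simulator(x_new).ravel() + gp.predict(x_new)   # calibrated prediction
print(corrected)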
Question
5 answers
The numerical modeling of dam breaks based on the shallow water equations has some similarities with the flow produced by a gate valve in an open channel (upstream head water, downstream kinematic wave propagation of the turbulent front, etc.), although the scales are not the same. Can dam-break models be used as the reference method for modeling water flow with sediment transport in a sewer collector? If not, which models can be used for this purpose? The aim of my study is to model the sediment dynamics under the flushing effect of a radial gate valve in a sewer and to evaluate the efficiency of the valve.
The collector is free-surface, 3 m high and 2 m wide.
Relevant answer
Answer
Recently, dam-break models have been used extensively to model flushing in sewer systems (Carravetta et al. 2000, Campisano et al. 2004, 2007, Creaco et al. 2009). In the case of radial gate valves, the correct implementation of the upstream boundary condition should be important, but I think that the sudden and complete removal of the gate could be a sufficient modelling choice if the opening is very fast.
With reference to the sediment movement, a non-equilibrium formulation (Cozzolino et al. 2014) is probably better.
To the best of my knowledge, HEC-RAS SHOULD NOT BE USED for this type of rapid transient problem; state-of-the-art shallow-water (De Saint-Venant) models should be used instead.
Best regards
References
- Campisano A., Creaco E., Modica C. (2004) Experimental and numerical analysis of the scouring effects of flushing waves on sediment deposits. Journal of Hydrology 299, 324-334.
- Campisano, A., Creaco, E., Modica, C. (2007). Dimensionless approach for the set-up of flushing gates in sewer channels. ASCE Journal of Hydraulic Engineering 133, 964–972.
- Carravetta A., Del Giudice G., Di Cristo C. (2000) Una valutazione dell'efficienza dei sifoni di cacciata. IDRA2000 - XXVII Convegno di Idraulica e Costruzioni Idrauliche (in Italian)
- Cozzolino L., Cimorelli L., Covelli C., Della Morte R., Pianese D. (2014) Novel numerical approach for 1D variable density shallow flows over uneven rigid and erodible beds, ASCE Journal of Hydraulic Engineering 140, 254-268.
- Creaco E., Bertrand-Krajewski J.-L. (2009) Numerical simulation of flushing effect on sewer sediments and comparison of four sediment transport formulas. IAHR Journal of Hydraulic Research 47, 195-202.
Question
7 answers
I'm trying to model the sediment dynamics in a sewer collector under the effect of the hydraulic forces (flushing energy) produced by a gate valve. The aim of this research is to develop a 2D model which solves the Navier-Stokes equations, so this model will be a coupling of several sub-models (hydrodynamics, sediment behavior, turbulent flow, etc.). Which approach can help me model this two-phase (liquid/solid) turbulent flow? Do you have any idea about the type of software that could be used and its applications?
Relevant answer
Answer
If you are good at coding, I suggest you implement the two-phase flow in SPH; I think it could be a good solution.
In DualSPHysics some researchers are working on that front. I suggest the work of Georgios Fourtakas; he is a partner of mine and has done a lot of work on modeling soil movement.
What should be the percentage of support vectors from the entire training dataset in order to decide upon good model performance?
Question
9 answers
From a training dataset of 84 instances I have got 38 instances as support vectors. Is my model performing well? Is there any underfitting?
Relevant answer
Answer
Do you consider a two-class problem? And what is the dimensionality of your data? Typically, if your input data is n-dimensional, you should expect to find about n+1 support vectors for binary problems. In multiclass problems, the number of support vectors can easily grow up to the total number of data points. However, if you obtain rather few support vectors (38 is not that many) but observe acceptable accuracy, you should consider yourself lucky! The fewer support vectors there are, the faster your final program.
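If it helps, the support-vector fraction is easy to check programmatically; here is a small scikit-learn sketch with synthetic data standing in for your 84 instances.

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=84, n_features=10, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X, y)

n_sv = clf.support_vectors_.shape[0]
print(f"{n_sv} support vectors out of {len(y)} samples ({100 * n_sv / len(y):.0f}%)")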
Question
10 answers
Does anyone have expertise in modeling micro hydropower in simulink or maybe have some advice on good reading material?
Relevant answer
Answer
Question
21 answers
I was thinking about the COST-231 Hata model for the path loss to model the LTE signal strength in the metro. Am I wrong?
Relevant answer
Answer
I think you need to use the indoor formula for COST 231, but it all depends upon the number of walls and the structure of the metro.
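For reference, here is a small sketch of the outdoor COST-231 Hata expression (f in MHz, roughly 1500-2000 MHz; d in km). For the indoor/tunnel metro case you would add wall- and tunnel-specific penetration losses on top (e.g. the COST-231 multi-wall terms), which are scenario-dependent and omitted here.

import math

def cost231_hata(f_mhz, d_km, h_base=30.0, h_mobile=1.5, metropolitan=False):
    # mobile antenna correction for small/medium cities
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile - (1.56 * math.log10(f_mhz) - 0.8)
    C = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base)
            - a_hm + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + C)

print(cost231_hata(1800.0, 0.5))   # path loss in dB for a hypothetical LTE link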
Question
11 answers
Every sensor behaves differently in different environments. How do you objectively compare them?
Relevant answer
Answer
If I understand correctly, you have two modes of detection (a laser and a Wi-Fi fingerprint technique; I do not know the latter).
It is normal that two modes of investigation give you two different kinds of results.
You need to analyze these two signals mathematically in order to extract the relevant information.
Does this answer your question?
Question
3 answers
I am a PhD student looking at developing models for virtual collaborations in some specific areas such as manufacturing. I wonder if there is any model validation method I could use to validate the models I develop.
Relevant answer
Answer
Inasmuch as you are developing instructional design models in the virtual world with a focus on manufacturing, I would say there are Universal Design for Learning (UDL) standards. I like the UDL methodology for designing courses, but there are other instructional design/authoring approaches, such as the ADDIE project design. The biggest issue is making sure everything meets the standards of the Americans with Disabilities Act (ADA) of 1990.
Question
67 answers
A million dollar research grant was issued to reject null hypothesis X. Unlucky researcher A could not find statistical evidence to reject X. With his test, he found a non-significant p value of 0.1. You still believe in the alternative hypothesis and replicate Researcher A's study. You, too, find a p value of 0.1. What is your conclusion? How does this finding influence your beliefs about the null and alternative hypothesis?
Relevant answer
Answer
As I am not a statistician I leave the interpretation of the question from this point of view to the professionals (there are already great answers). Yet, I got to point out a few general issues:
First: "the million dollar research grant" certainly was not issued "to reject null hypothesis X" (well, maybe it indeed was after all, as embarrassing as that might be...), but to find and investigate evidence, whether the hypothesis could be considered false. Second: Always remember that an isolated p-value doesn't tell anything about the practical importance of an effect, as p depends on the sample size. And because p isn't what we are actually interessted in (namely P(H_0 | x) != P(x | H_0) = p).
Whether the repeated finding of "an insignificant p value" would change anything about my beliefs about anything would depend mostly on the quality of the study design and the reported (raw) data (if it actually *is* reported...), not on the outcome of some null-hypothesis-signficance-test nonsense. Why is a p-value of < 0.05 significant, why p >= 0.05 not? Right: It's an *arbitrary* decision. And thus are the binary reject-accept conclusions drawn from such tests: arbitrary.
The idea behind Fisher's original concept of a p-value wasn't to "reject" or "accept" a hypothesis, but he thought of it as a “… rough numerical guide of the strength of evidence against the null hypothesis.” (R. A. Fisher) In Fisher's framework there was no concept of an (disjunct) alternative hypothesis or power/error rates. In Fisher's thinking, a small enough p could justify repeating an experiment, but wasn't evidence enough to reject or accept anything.
Neyman-Pearson on the other hand where interested in minimizing the long-term error rate of *repeated* decisions. Think quality-control and the the like. This kind of thinking is - by design - fundamentally *not* applicable to single studies, but only to a series of repetitions: “... no test based upon a theory of probability can by itself provide any valuable evidence of the truth or falsehood of a hypothesis. But we may look at the purpose of tests from another viewpoint. Without hoping to know whether each separate hypothesis is true or false, we may search for
rules to govern our behavior with regard to them, in following which we insure that, in the long run of
experience, we shall not often be wrong.” (J. Neyman and E. Pearson, 1933)
Mixing both concepts together leaves us with the useless, yet seemingly objective null-hypothesis-signficance testing as it is forced onto scientists these days... The only sensible advice to such a question I can thus provide is: Don't be a slave to the p value or any mechanically applied statistical testing procedure!
“But given the problems of statistical induction, we must finally rely, as have the older sciences, on replication.”
— Cohen, 1994
“If Fisher and Neyman–Pearson agreed on anything, it was that statistics should never be used mechanically.”
— Gigerenzer, 2004
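Purely as a mechanical footnote to the replication point: pooling the two independent replications with Fisher's combined probability test shows that two p = 0.1 results together carry somewhat more evidence against the null than either alone, with all of the caveats discussed above still applying.

from scipy.stats import combine_pvalues

stat, p_combined = combine_pvalues([0.1, 0.1], method="fisher")
print(stat, p_combined)   # chi-square statistic on 4 df and the pooled p-value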
Further reading:
- Belief in the law of small numbers (Tversky & Kahneman, 1971)
- Statistical inference: A commentary for the social and behavioural sciences (Oakes, 1986)
- Things I have learned (so far). (Cohen, 1990)
- The Philosophy of Multiple Comparisons (Tukey, 1991)
- p Values, Hypothesis Tests, and Likelihood (Goodman, 1993)
- The earth is round (p < .05). (Cohen, 1994)
- P Values: What They Are and What They Are Not (Schervish, 1996)
- Toward Evidence-Based Medical Statistics. 1: The P Value Fallacy (Goodman, 1999)
- From Statistics to Statistical Science (Nelder, 1999)
- Calibration of p Values for Testing Precise Null Hypotheses (Sellke et al., 2001)
- Misinterpretations of Significance (Haller & Krauss, 2002)
- It's the effect size, stupid (Coe, 2002)
- Mindless statistics (Gigerenzer, 2004)
- The Null Ritual (Gigerenzer, Krauss and Vitouch, 2004)
- Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing (Hubbard & Lindsay, 2008)
Question
2 answers
Trajectory tracking control of dynamic nonholonomic systems with unknown dynamics.
Relevant answer
Answer
Nonholonomic mechanics describes the motion of systems constrained by non-integrable constraints i.e., constraints on the system velocities that do not arise from constraints on the configurations alone.
The books by Monforte (Jorge Cortés Monforte, "Geometric, Control and Numerical Aspects of Nonholonomic Systems", Springer, Lecture Notes in Mathematics, Volume 1793, New York, 2002) and Bloch (Bloch, A. M., "Nonholonomic Mechanics and Control", Springer, Interdisciplinary Applied Mathematics, Volume 24, New York, 2003) provide detailed studies of nonholonomic systems and their control and stabilization, including aspects such as controllability and accessibility, motion and trajectory planning, optimal control, trajectory tracking, and point and set stabilization.
However, your question is about trajectory tracking for a nonholonomic system with unknown dynamics. For any feedback control, whether for stabilisation or for tracking/setpoint regulation, an approximate model as close as possible to the dynamics of the actual physical system is required. A certain portion of this model's dynamics could be unknown (or neglected, e.g. for order reduction), or uncertainties in the model parameters could be present. I am not sure the dynamics can be completely unknown for control of such a system.
Question
7 answers
What are the main components of a good undergraduate computational physics program? Which resources if any are available for new faculty engaged in such initiatives?
Relevant answer
Answer
Today's world has changed into a "signals world", i.e. one described indirectly through mathematics. With the advent of computers and microcontrollers for doing all kinds of work related to pure science, medical science, engineering, etc., mathematical modeling is the only way to understand a subject in detail together with its physical phenomena. Using MATLAB simulation techniques, a better understanding can be achieved by undergraduate students, the future doctors, engineers, and scientists.
Question
12 answers
Does anyone have experience of modeling a PMSG in the d/q reference frame? I built one simple model according to the governing equations. The inputs are Vd, Vq, and omega; the outputs are id and iq. I set values for the input variables which correspond to a zero d-axis current in the steady state of the generator, so I was hoping a zero id value would be obtained. But the results were not what I wanted. I checked the equations and everything; it seemed like nothing was wrong. Has anyone had this problem before? Please tell me how to solve it.
Relevant answer
Answer
Wei,
I have attached a modified model with a simple mechanical governor which ensures the system is able to settle down into steady-state operation. I have created a states file called gdxFinal which gets saved at the end of each simulation run and which is loaded at the next run to ensure the states are saved. It appears to work OK. I have saved it as *.mdl, as I have version 2013, which saves as *.slx.
Try it and see how it goes. It uses Vd = 500*sqrt(2) and Vq=0. In this case Iq <=>0.
Check the mechanical part, as I have modified Te to now be correct and multiplied by 3/2*p.
In steady state the states are
[157.0796; 43.5977; 59.9076; -11.0792]
[Wr; Teng; Iq; Id]
It is possible to choose Vd and Vq such that Id =0 but you can do this from the steady state equations by hand.
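For anyone who wants to cross-check their block diagram against a plain numerical integration, here is a hedged sketch of the usual PMSG d/q equations (motor sign convention; generator conventions flip some signs, so compare with your own derivation). All machine parameters below are made up.

import numpy as np
from scipy.integrate import solve_ivp

Rs, Ld, Lq, lam, p = 0.05, 8e-3, 8e-3, 0.9, 4    # illustrative machine parameters only

def dq_model(t, x, vd, vq, we):
    id_, iq = x
    did = (vd - Rs * id_ + we * Lq * iq) / Ld
    diq = (vq - Rs * iq - we * (Ld * id_ + lam)) / Lq
    return [did, diq]

# constant dq voltages and electrical speed, integrated until the currents settle
sol = solve_ivp(dq_model, (0, 1.0), [0.0, 0.0], args=(0.0, 300.0, 2 * np.pi * 50))
id_ss, iq_ss = sol.y[:, -1]
Te = 1.5 * p * (lam * iq_ss + (Ld - Lq) * id_ss * iq_ss)   # torque with the 3/2*p factor
print(id_ss, iq_ss, Te)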
Question
1 answer
I am trying to model pinned supports for a rectangular RC floor slab in Abaqus. Does anyone know a good way of doing so? I am new to Abaqus software.
Relevant answer
Answer
I want to model a pin in a hole. If you find out how, please help me too.
Question
8 answers
I have a model and want to know if it can be calibrated for the economy of Pakistan.
Relevant answer
Answer
I'm not sure that would be a good idea as there may be a divergence between wages earned and the income per capita. I am trying to back my way into it, if I use the data from 1960-1999, before they changed the accounting base I get the following in a 3SLS model:
C = constant + b1(GNP) + b2(C(t-1))
I = constant + b1(GNP(t)-GNP(t-1))
GNP = b1C + b2I + b3NX
They actually include government spending in net exports, so the system should still be robust.
Equation system, 3slswithNX
Estimator: Three-Stage Least Squares
Equation 1: 3SLS, using observations 1961-1999 (T = 39)
Dependent variable: C
Instruments: C_1 const d_GNP NX
coefficient std. error z p-value
-----------------------------------------------------------
const 16086.3 12630.4 1.274 0.2028
GNP 0.870531 0.167848 5.186 2.14e-07 ***
C_1 −0.104563 0.221156 −0.4728 0.6364
Mean dependent var 282293.1 S.D. dependent var 157057.7
Sum squared resid 6.23e+10 S.E. of regression 39968.46
R-squared 0.933577 Adjusted R-squared 0.929887
Equation 2: 3SLS, using observations 1961-1999 (T = 39)
Dependent variable: I
Instruments: C_1 const d_GNP NX
coefficient std. error z p-value
-------------------------------------------------------
const 31203.4 8497.08 3.672 0.0002 ***
d_GNP 2.07156 0.482688 4.292 1.77e-05 ***
Mean dependent var 63427.82 S.D. dependent var 30565.14
Sum squared resid 2.40e+10 S.E. of regression 24829.10
R-squared 0.322759 Adjusted R-squared 0.304455
Equation 3: 3SLS, using observations 1961-1999 (T = 39)
Dependent variable: GNP
Instruments: C_1 const d_GNP NX
coefficient std. error z p-value
-------------------------------------------------------
C 1.57050 0.626783 2.506 0.0122 **
I −1.67497 2.89913 −0.5778 0.5634
NX −0.0918786 0.391882 −0.2345 0.8146
Mean dependent var 337158.8 S.D. dependent var 192532.1
Sum squared resid 1.57e+11 S.E. of regression 63486.72
At the moment this seems fairly robust, but I am going to play with it a little more. This would mean the MPC is .87, resulting in a multiplier of around 7.5, so Pakistan is pretty dependent on consumption, which should eventually affect choices about children and education. What I want to do is see if I can work back now from the aggregates to the Solow/new growth theory stuff so that we can develop a metric to calibrate the agent model to the macro statistics.
Let me know if this triggers any ideas, I will be playing around with this next week and see where I get.
Question
4 answers
I've been using the J-test for model selection, but apparently it's not a good measure when models have different degrees of freedom. To my understanding, the Akaike information criterion (AIC) or Bayesian information criterion (BIC) could be used, as they take the degrees of freedom into account as well. I believe either of these two criteria can simply be used in the Indirect Inference method, as we have the maximized value of the likelihood function, but how about in the Method of Simulated Moments? Can I just use the negative of the Argmin function, or is it more sophisticated than that?
I’m looking forward to hearing your suggestions for model selection (on the use of AIC, BIC, or any other techniques) in Indirect Inference and Method of Simulated Moments.
Relevant answer
Answer
If you have the same data when trying to choose the best model, you may use any suitable goodness-of-fit (GoF) method to select the best model, as the degrees of freedom do not change much from one model to another. I would use some statistics of the residual errors, which also help to avoid serial bias (e.g. the Durbin-Watson statistic).
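As a small bookkeeping sketch of the quantities mentioned above (AIC/BIC from a maximised log-likelihood, which covers the Indirect Inference case, plus the Durbin-Watson statistic on residuals); the numerical values are placeholders.

import numpy as np
from statsmodels.stats.stattools import durbin_watson

def aic(loglik, k):           # k = number of estimated parameters
    return 2 * k - 2 * loglik

def bic(loglik, k, n):        # n = number of observations
    return k * np.log(n) - 2 * loglik

residuals = np.random.randn(100)          # stand-in for your model residuals
print(aic(-250.0, 5), bic(-250.0, 5, 100), durbin_watson(residuals))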
Question
3 answers
I need to know how to determine the discharge from the groundwater model, so that the optimum pumping rate can be suggested to remediate the contaminated groundwater using the pump-and-treat method.
Relevant answer
Answer
I agree with Peyman Babikhani. In fact, that is the only way to estimate the inflow and outflow of any groundwater system if you use MODFLOW. The only thing is that the accuracy depends on the number of zones assumed during modelling. If you do the zonation based on groundwater conditions, you will get good results. All the best, and feel free to contact me at rsaran@annauniv.edu if you have any queries.
Question
2 answers
I am attaching my procedure of unit conversion; I am confused with units of doping density.
Relevant answer
Answer
Dear Rawat, for bulk semiconductor materials the doping is measured by the doping density, which is the number of dopant atoms per cubic centimeter of the material. It can also be given by the ratio of the dopant atoms to the native atoms. In principle, by doping the material one obtains a solid solution, and the doping can then be defined as the ratio of the masses of the two materials in the solution.
What you named the molar fraction is just the ratio of the dopant atoms to the native atoms, as I mentioned before. It is the mixing ratio.
In the case of very thin-film materials, you want to define the number of dopant atoms in the width of the ribbon per unit ribbon length as the doping density. I would propose calling it the doping density for a certain ribbon width per unit ribbon length. In this case your calculations will be right and meaningful.
Wish you success.
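If it is useful, here is a tiny sketch of the bulk-material ratio mentioned above, converting a doping density in cm^-3 into a dopant-to-host atomic (molar) fraction; silicon is used as a hypothetical host, so substitute your own material data.

AVOGADRO = 6.022e23          # atoms/mol

def atomic_fraction(N_dopant_cm3, density_g_cm3=2.33, molar_mass_g_mol=28.09):
    n_host = density_g_cm3 * AVOGADRO / molar_mass_g_mol   # host atoms per cm^3
    return N_dopant_cm3 / n_host

print(atomic_fraction(1e18))   # roughly 2e-5 for silicon at 1e18 cm^-3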
Question
10 answers
Diffuse reflection boundary condition (bc) for the Boltzmann equation is widely used. However, it is not easy to find results on the derivation of this bc and more generally on bc for the Boltzmann equation. I should be interested in any references you may have pertaining to this question.
Relevant answer
Answer
Hi Pierre
I think the boundary condition might be the only unsolved puzzle in rarefied gas dynamics; formal derivation is rare, and your work with Stéphane Brull and L. Mieussens is a nice one, as well as the one from Aoki (if I recall correctly). Also please have a look at our JFM paper "Assessment and development of the gas kinetic boundary condition for the Boltzmann equation" where Henning and I have assessed the accuracy of some kinetic BCs.
I think the work of FREZZOTTI, A. & GIBELLI, L. 2008 "A kinetic model for fluid wall interaction" is very physical, where the BC is modelled by the Enskog-Vlasov type equation so that the repulsive and attractive forces are built in.
In addition, please also have a look at recent works from different perspectives:
T. Liang, Q. Li, and W. Ye. “A physical-based gas–surface interaction model for rarefied gas flow simulation”. Journal of Computational Physics, 352: 105-122, 2018.
T. Liang, Q. Li. " Accurate modeling of Knudsen diffusion in nanopores using a physical- based boundary model", Journal of applied physics, 126, 084304, 2019.
Question
2 answers
Which models can be used for simulating EIS of PEM fuel cells?
Relevant answer
Answer
Question
6 answers
Some researchers declare that location is a strategic problem, while routing is a tactical problem in the location-routing problem. They also point out that routes can be re-calculated again and again, whereas locations are usually fixed for a much longer period. Therefore, they claim that it is inappropriate to integrate location and routing in the same planning framework.
Relevant answer
Answer
Logistics costs often represent a large portion of total costs. To reduce them, depot location and vehicle routing are crucial decisions.
Location is a strategic decision problem, while routing is a tactical/operational decision problem. Most of the time the two decisions are tackled separately, which results in an increase in total cost (i.e. sub-optimal decisions).
You can develop an algorithm whose principle is to alternate between a depot location phase and a routing phase, exchanging information on the most promising edges.
Question
5 answers
I haven't come across any other dance being modeled for automation. Can anyone cite me some papers?
Relevant answer
Answer
There is choreography modelling software called "Life Forms"; maybe it could be of some use to compare with your own development?
Merce Cunningham experimented with it and used it to compose some dances.
Good luck!
Daniel Olsson
Question
4 answers
I used bootstrapping to draw my observations from a population, together with a fixed structure of a multivariate model. In order to alleviate sampling bias, I intend to estimate the model, say, 1000 times and somehow infer the parameter estimates from the 1000 different fits.
I'm thinking of using the mean of these 1000 estimates as my estimate for each parameter and using their standard deviation as the parameter's standard error. Is this recommended? Is there literature utilizing this method?
Relevant answer
Answer
You can also use the bootstrap estimates to construct uncertainty intervals which will give you some information on the range of parameter values.
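A minimal sketch of both summaries (a bootstrap standard error plus a percentile interval) on a stand-in array of 1000 bootstrap estimates:

import numpy as np

rng = np.random.default_rng(0)
boot_estimates = rng.normal(loc=2.0, scale=0.3, size=1000)   # stand-in for your 1000 fits

point = boot_estimates.mean()
se = boot_estimates.std(ddof=1)                               # bootstrap standard error
ci_low, ci_high = np.percentile(boot_estimates, [2.5, 97.5])  # 95% percentile interval
print(point, se, (ci_low, ci_high))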
Question
10 answers
We have two sets of differential equations, one for neuron dynamics and one for astrocyte Ca2+ oscillations, but with different time scales (Di Garbo 2009). Is it correct that, to simulate these sets simultaneously, you must rescale one set by a factor of 1000 in the integrator?
Relevant answer
Answer
There are dimension reduction methods for time scales systems (slow-fast system) which can be useful for this kind of problems. Try, for instance,  
P. Auger, R. Bravo de la Parra, J.C. Poggiale, E. Sánchez, L. Sanz, Aggregation methods in dynamical systems variables and applications in population and community dynamics, Physics of Life Reviews, 5(2):79-105, 2008.
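As a side note on the question itself: rather than hand-rescaling one subsystem by a factor of 1000, a stiff/implicit integrator usually copes with the two time scales directly. A toy fast-slow sketch (not the Di Garbo equations) with SciPy's LSODA:

from scipy.integrate import solve_ivp

eps = 1e-3                       # time-scale separation (fast variable ~1000x faster)

def fast_slow(t, y):
    x, z = y                     # x: fast (neuron-like), z: slow (Ca-like)
    dx = (-x + z) / eps
    dz = -z + 1.0
    return [dx, dz]

sol = solve_ivp(fast_slow, (0, 10), [0.0, 0.0], method="LSODA")
print(sol.y[:, -1])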
Question
13 answers
I'm putting together a list of ongoing modeling works (from different research areas) which try to understand and predict the behavior of an interesting phenomenon. Just a few examples to kick off the discussion:
- In environmental systems, global climate change modeling seems to be a perpetual challenge; making climate models accurate enough for quantitative prediction continues to bedevil the field.
- In obesity and nutrition, the current literature provides over 100 statistical equations to estimate basal metabolic rate (BMR) as a function of different attributes (e.g. age, weight, height, etc.), yet understanding how BMR is precisely modeled based on those attributes is an interesting area of research.
There seem to be tons of examples in biology, psychology, economics, engineering, etc, but what are the 'publicly interesting' challenges that you would like to add here?
Relevant answer
Answer
@Fernando Pimentel
You may be interested in this.
"Early-warning signals of topological collapse in interbank networks"
This is a graph-based modelling of the inter-banking structures that shows early signs of bank failure.
Question
2 answers
My research is based on knowledge management for construction. I am aiming specifically at Building Information Modeling (BIM). What information do you think would be relevant for me to collect through a survey if I want to tie it in with lean construction? Any suggestions on how I should proceed? Thank you.
Relevant answer
Answer
You may find the following paper useful:
Information management for concurrent engineering: research issues
B Prasad, R S Morenc, R M Rangan in Concurrent Engineering: Research and Applications (1993)
The paper outlines major requirements facing concurrent engineering (CE).
Question
11 answers
I have made a 3D interpolation in SGeMS | Stanford Geostatistical Modeling Software.
The target grid used was a masked 3D grid; after interpolation, the results show a grey rectangular box which blocks the view of the target grid (see attachment).
Does anybody know how to remove the grey rectangle box?
Relevant answer
Answer
One thing that you could try is the "preferences" tab of the visualisation panel. First select your object grid in "Preferences for", then tick "Use Volume explorer" and enter the range of values that you would like to be transparent (you might need to hit <Enter> when you change the transparent ranges).
Hope this helps,
Question
2 answers
Please help me with some meso scale modelling of concrete.
Relevant answer
Answer
It is impossible to simulate crack propagation with a meso-scale model that uses beam elements. Besides, you have to be extra careful about element size and fracture energy. Personally, I don't recommend a meso-scale model for crack propagation. Please read my article "Flexural and Interfacial Behavior of FRP-Strengthened Reinforced Concrete Beams"; you can find a detailed discussion of meso-scale modelling and crack propagation simulation there.
Question
1 answer
Using different models other than the SWAT model.
Relevant answer
Answer
 
It depends on your goal. Are you looking for a distributed model or a lumped one? An event-based or a continuous model?
Your goal, the available data, and the scenario whose change you are considering in the model determine the choice of model.
What aspects of operational modelling can be formulated by mereotopology?
Question
3 answers
Mereotopology has already been used for modelling the assembly process of products. I am interested in using it for defining the operational structure of products and for developing a formal definition of the design process.
Relevant answer
Answer
If the polynomial expression can be incorporated into the operational model, why not? Play around with the Chebyshev and Boubaker Polynomial schemes. It makes some sense.
Question
5 answers
By multiscale models I mean models which range across multiple spatial and/or temporal scales. I would be particularly interested if there is a methodology describing how to integrate submodels from different spatial/temporal scales. No specific area was mentioned (e.g. computational systems biology) because I would be interested in the general theory common to most multiscale models, if such a theory exists.
Relevant answer
Answer
Here is a review of multiscale modeling methods that may be of interest:
Question
7 answers
By grey system theory I mean the one defined by Deng Julong in 1982: 'As far as information is concerned, the systems which lack information, such as structure message, operation mechanism and behaviour document, are referred to as Grey Systems.' (Deng Julong)
A lot of research in grey system theory has been done since 1982. There are books and articles, but performing all the calculations manually is not effective, especially when some changes have to be introduced repeatedly.
Relevant answer
Answer
This link has grey system software for downloading for free.
Question
2 answers
I am looking to validate the EPIVIT model for in-season virus spread in potato. The original model is in a very old computer language (Pascal) and might be difficult to acquire.
Has anyone worked with this model? Any leads on where I can find an updated code?
Relevant answer
Answer
Pascal still works; what is wrong with it?
Question
2 answers
I'm trying to model a deep excavation near an existing tunnel, using TNO DIANA.
Relevant answer
Answer
Dear Ehsan,
Please refer to the below website:
Also, you can have a look at the attached document. Further, the YouTube link below about the finite element analysis program (Introduction to DIANA) is very useful.
Best of luck.
Haider
Question
2 answers
I am using COMSOL 4.3.
Relevant answer
Answer
After you run the simulation, go to the Results node, right-click the velocity (or any other parameter), and you will get the streamline option. You should also disable the surface option, which is the default.
Question
1 answer
I will be conducting research on "Revisiting Republic Act No. 9512: An Act to Promote Environmental Awareness Through Environmental Education: Basis for the Development of an Environmental Education Model". How are such models developed?
Relevant answer
Answer
You can try:
1. Wisconsin’s Model Academic Standards for Environmental Education: http://standards.dpi.wi.gov/sites/default/files/imce/standards/pdf/envired.pdf ;
2. PROCEDURES FOR DEVELOPING AN ENVIRONMENTAL EDUCATION CURRICULUM:  http://unesdoc.unesco.org/images/0013/001304/130454eo.pdf
Best Regards
Question
1 answer
Suppose that a model developed in Ansys with free-free boundary conditions is validated against free-free experimental results. Is it correct to then use the model with fixed boundary conditions and still expect the results to be realistic?
Relevant answer
Answer
One of the purposes of using models is indeed for its prediction ability once it has been experimentally validated in a specific case.
What has to be assessed is that the features that have been validated are sufficient for the prediction application.
Speaking about linear structural dynamics, if the dynamic behavior of a component has been validated, its behavior once coupled should be correct.
What has to be assessed in such a case is that the new interface area is properly numerically converged. The levels of strain in a free area and in a clamped area are not the same, and they do not require the same level of mesh refinement to obtain numerical convergence.
Question
2 answers
With special concern to multimodel selection, linear regression and theoretic-informational approaches (e.g. AICc).
Relevant answer
Answer
It depends completely on the statistics of the system to be modelled. For a linear Gaussian multivariate model there is no problem since least square methods provide the accuracy of the model internally. For non-Gaussian models, especially for time varying models or stable statistical distributions it is much more difficult.
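Just to make the AICc bookkeeping explicit: the small-sample correction term blows up as n approaches k + 1, which is the practical lower bound on how few observations a candidate model can be fitted and compared with. A tiny sketch with placeholder numbers:

import numpy as np

def aicc(loglik, k, n):
    """AICc = AIC + small-sample correction; k = parameters, n = observations."""
    aic = 2 * k - 2 * loglik
    return aic + 2 * k * (k + 1) / (n - k - 1)   # undefined/huge when n <= k + 1

print(aicc(loglik=-42.0, k=3, n=20))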
Question
11 answers
I want to study the dynamic behavior of a complex power system, including some HVDC links. In which simulation software can I study this?
Relevant answer
Answer
Hi Dear Mohammad
I think DIgSILENT PowerFactory (14.1.3)  is the best for your simulation
Question
1 answer
Usually a constant-head or constant-flux boundary is used in the literature; both cases neglect the inland groundwater level oscillation caused by precipitation and evaporation.
Relevant answer
Answer
In many groundwater models, such as MODFLOW, you can use the recharge package to incorporate the effect of precipitation and evaporation (a positive rate for a precipitation boundary condition and a negative rate for an evapotranspiration boundary condition).
Question
2 answers
I wonder if anyone knows how to model springs in Abaqus with a nonlinear stiffness that is also temperature-dependent?
My aim is to model bond deterioration between reinforcement and concrete at high temperature.
Relevant answer
Answer
Hey, I would really appreciate it if you could share your experience in thermal simulation using spring or connector elements.
Can anyone help with controlling for covariates and types of sums of squares?
Question
12 answers
In the R vegan package, the functions 'rda', 'cca', and 'capscale' can have a conditioning matrix in the formula to control for the effect (`partial out') of some covariates before next step. These functions are for modeling multivariate data. Also, I have learned that type II and III sums of squares are better for testing the significance of one factor while controlling for the levels of the other factors (for example, the explanation on the types of SS here: http://mcfromnz.wordpress.com/2011/03/02/anova-type-iiiiii-ss-explained/). However, the author also cautioned that if there is a significant interaction, type II is inappropriate while type III can still be used, but the interpretation on the effect of one variable is difficult. In my data, there is only one dependent variable Y, and four independent variables A, B, C, and D. As C and D do not have a significant effect on Y, I can drop them from my model. It is also known that there is a significant interaction AxB. So is there a way I can still tell the effect of B after controlling for the effect of A and AxB somehow? Or, how should I interpret the effect of B in the presence of a significant AxB interaction?
Relevant answer
Answer
Dear Zhao, first try to understand the functional dependence between your dependent variable (Y) and the independent variables A and B. If it is non-linear, better use non-linear regression fitting. If you have to apply non-linear statistics, start first with A and define the maximal variability it can describe; then do the same with variable B. This way you will learn which of them has the greater impact on the variability of your dependent variable (Y). Finally, build a multiple regression model using A and B, possibly for predictive purposes. Similarly, if the relations between your variables are linear, you can apply the same approach, even though linear multiple regression models formally "separate" the impacts. I never trust such an automated judgment, because the result can depend even on the order of A and B in your equation. Of course, before playing with statistics, it is better to learn more about the causal relations between your variables. Regards, Natalya
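As a complement, here is a hedged sketch (ordinary univariate ANOVA with statsmodels, not vegan's rda/cca/capscale) of how Type II and Type III sums of squares are requested; for Type III, sum-to-zero contrasts are the usual recommendation. The toy data are invented.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 30),
    "B": np.tile(np.repeat(["b1", "b2"], 15), 2),
})
# main effect of A plus an A:B interaction, on top of noise
df["Y"] = rng.normal(size=60) + (df["A"] == "a2") * 1.0 + ((df["A"] == "a2") & (df["B"] == "b2")) * 0.8

fit = smf.ols("Y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(fit, typ=2))   # Type II sums of squares
print(anova_lm(fit, typ=3))   # Type III sums of squares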
Question
4 answers
There are straightforward analogies between electrical, mechanical, acoustic, thermal, hydraulic, and fluid systems which are intuitive and useful. Sometimes we use them when we teach mechanical, thermal, and fluid systems. Is it possible to build a neural network using analogs? For example, resistors can be analogous to the weights of a neural network, etc.
Relevant answer
Answer
Question
1 answer
What are most recent techniques and models being used for urban growth modeling?
Relevant answer
Answer
Hi Anita,
Recently I read a book called "GIS for Smart City"; it describes very well how GIS tools have been applied to urbanization in Indian states, and you could check it out. The implementation of GIS helps balance the advantages and disadvantages of large city sizes.
Question
6 answers
Are there any other models that can do spatial allocation in a model other than cellular automata?
Relevant answer
Answer
Hi Anita,
I do agree with Barbara Quintela.
NetLogo is a very effective tool and I think it will take care of all your concerns.
Apart from being easy to learn, it has a good set of documentation and a good amount of help is available. I have adopted it for my research work and the results are remarkable. The models developed in NetLogo can very well be expanded on spatial as well as temporal scales. You can develop models in NetLogo that are scientific, and you can also conduct experiments by means of BehaviorSpace.
Question
4 answers
Multinomial or ordered choice: which one is applicable?
Relevant answer
Answer
The multinomial logistic regression model is the most suitable one for your case.
Question
20 answers
I am trying to model a surface by using software
Relevant answer
Answer
Design-Expert, version 11.7.
Question
1 answer
The enzyme is obtained from a wild bacterium. Its crystallographic structure has not been determined, but the structures of similar enzymes have been determined. How can I study the effect of a ligand on the enzyme structure using computational tools?
Relevant answer
Answer
Dear Miss Hadizadeh,
Actually, you have two problems with your protein. First, you need molecular modelling software (such as MODELLER, etc.) for modelling your protein. Then you need molecular dynamics software such as Amber, GROMACS, etc.
These packages can compute the effect of ligands on the protein structure using RMSD, do_dssp, and similar analyses. If you need more information about the details, contact me using the e-mail below:
Question
24 answers
I do a lot of modelling and system analysis. The best medium for that is paper; however, it would be handy to have a piece of software to build these diagrams on a computer for publication, presentations, or teaching. Up until now I have used vector image software such as Inkscape or Adobe Illustrator.
Relevant answer
Answer
There are a number of software packages that you can use to design causal loop diagrams. Here are a few examples:
Vensim (free): http://vensim.com
AnyLogic (free trial): http://www.anylogic.com/
I personally prefer Vensim to design a CLD.
Question
7 answers
I wonder if there is a menu-driven software package which can perform variance partitioning (Borcard et al., 1992, Borcard and Legendre, 1994) in order to separate the importance of spatial dependence and spatial autocorrelation for community distribution data?
Relevant answer
Answer
Yes, most of the time R is the answer to our problems... I tried running away from it for a long time but eventually I had to surrender.
Anyway, in R there are some "menu driven" user interfaces that have been created, but they are very limiting. What made me overcome some of my fears was an interface called RStudio. It is quite simple and only has a few menus that make a lot of simple tasks more intuitive and user-friendly (like importing and exporting data, loading packages and reading help info on different commands).
Moreover, it also helps because you can "rehearse" all the steps you need for your analysis by writing them in a .txt file (you can even add some text explanations to tell to your future self what you were doing with that code). Then you just change the ".txt" extension to ".R", open the file in RStudio and it will be there right under the R console and you can run code simply by selecting portions of this txt file and pressing "run". Almost like a menu! :)
Question
4 answers
How can I determine the stress intensity factor from the simulated sandstone particles?
Relevant answer
Answer
Have you obtained the SIF using PFC? Personally, I think it may not be suitable for SIF calculation.
Question
29 answers
This is something I have been pondering lately after attending a number of related academic and industry-led events, yet no definition is ever made clear: The term 'Big Data' has become a very popular buzz word, yet researchers in many scientific and mathematical fields have been analysing and mining large datasets for many years (eg. satellite data, model data). Does it then, refer to big data in a social sciences or business context, or rather, does it more correctly refer to the increasingly accessible, ubiquitous, real-time nature of the multitude of datasets we are now exposed to (e.g. data from sensors, WSNs, crowdsourcing, Web 2.0)? Or indeed both?
Relevant answer
Answer
Michael Stonebraker currently is writing an interesting Blog@CACM series on the different aspects of Big Data. The first four parts are already available:
In these posts he addresses all the different facets of Big Data you are asking about.
Question
5 answers
For a prospective occupational cohort where everyone is exposed to one or more chemical agents, examining BMI at follow-up compared to a specific chemical exposure at baseline, is it necessary to control for baseline BMI? Is it better to model change in BMI or BMI at follow-up? There is no unexposed group -- just cohort members unexposed to some agents versus others. All analyses are within-cohort.
Relevant answer
Answer
A paper by Glymour et al (2005) in the Am J Epidemiology suggests that the OLS regression adjusting for baseline BMI can be problematic. The econometrics model using differences in differences works okay at the individual level.
The way we model this longitudinally in my field is that we reshape the data so that we consider these to be repeated observations that are taken within a group-level variable (here, individuals) that structures the variance (basically, the independence assumption is violated pretty severely). Thus, we use a multi-level model to account for this variance structure. We then include a variable for "change in BMI over time" (basically just wave, though you could also use time in study). We finally adjust for exposure to chemicals and interact that with the variable that estimates the change over time (slope) to get an estimate of: 1) the relationship between exposure and baseline BMI (selection - if it's randomized, this is usually null) 2) secular change over time (among those not exposed - using BMI this will likely increase) and 3) the relative impact of change over time among those who are exposed (this is your 'causal' effect of the chemicals). You can also obviously include covariates that are either measured at baseline or that change over time in this way.
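In code, the multi-level setup described above might look like the following statsmodels sketch on long-format data (one row per person per wave); all column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

def fit_bmi_model(df: pd.DataFrame):
    """df columns (hypothetical): id, bmi, wave (0 = baseline, 1 = follow-up), exposed (0/1)."""
    # random intercept per individual; the wave:exposed term is the exposure-related
    # difference in BMI change over time (the 'causal' contrast described above)
    model = smf.mixedlm("bmi ~ wave * exposed", data=df, groups=df["id"])
    return model.fit()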
Please suggest the best workstation for modeling studies.
Question
5 answers
Hello all, I want to purchase a workstation for modeling studies. Could anyone suggest some software names and a suitable workstation package for docking and molecular modeling in one? Thanks.
Relevant answer
Answer
Hi, look at these workstations from Dell. They may be costly, but the service is very good. Maybe you can also try IBM workstations.
http://www.dell.com/in/business/p/precision-t7600/pd
http://www.dell.com/in/business/p/precision-t5600/pd
http://www.dell.com/in/business/p/precision-t3600/pd
Question
13 answers
I am working on a protein that has a functional site on a surface loop. Papers on docking studies that I went through do not report any docking at surface loops, although internal loops are reported. I am likely to dock ligands at the loop and predict the motion of the protein. Will docking at a surface loop be reliable (because one of the residues is a glycine)? Can anyone suggest any articles that report such kinds of docking.
Relevant answer
Answer
The rigid docking in this case could be unreliable, but a general answer is not possible.
Is the loop under observation known to be extremely flexible? If so, you should consider obtaining more conformations of the loop (e.g. with molecular dynamics plus clustering, or NMR experiments).
Otherwise you could consider changing your approach, for example to metadynamics-based flexible docking.
Question
9 answers
There are many temperature-dependent insect studies that model distributions of development time (not to be confused with development rate) by applying the popular Weibull function (which is intensively described by Wagner in his 1984 paper “Modelling Distributions of Insect Development Time: A Literature Review and Application of the Weibull Function").
However, most published studies or web-based integrated pest management programs stop short or provide little information regarding the incorporation of this work into phenology simulation models. Are there acceptable methods for applying modelled distribution of development time for different insect life-cycle stages to phenological/voltinism simulation models or is the practice kept separate to avoid introducing even greater variability to an already complicated predicting process?
Relevant answer
Answer
The reason why the Weibull function is used in lieu of normal or logistic distribution models must simply be that the observed biological data are fitted better by the Weibull model. The frequency distribution of insect development time often deviates somewhat from a normal distribution. If a normal or logistic model fits the biological data better, it is better not to use the Weibull model. Please refer to Dr. Sharov's Population Ecology online lectures (http://home.comcast.net/~sharov/PopEcol/lec8/combine.html).
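One hedged way to close the gap mentioned in the question is simply to fit the Weibull to the observed development times of each stage and then draw individual development times from it inside the phenology/voltinism simulation. A small SciPy sketch with invented data:

import numpy as np
from scipy.stats import weibull_min

dev_times = np.array([11.5, 12.0, 12.4, 13.1, 13.3, 14.0, 14.2, 15.0, 15.8, 17.1])  # days
c, loc, scale = weibull_min.fit(dev_times)           # shape, location, scale
print(c, loc, scale)

# draw individual development times for, say, 1000 simulated insects of this stage
simulated = weibull_min.rvs(c, loc=loc, scale=scale, size=1000)
print(simulated.mean())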
Question
2 answers
For example, models using/suggesting data about food, exercise, etc. I am not interested in individual models for recommendation food (recipe or ingredient-base approaches, e.g.), rather a holistic approach for complete healthcare (self) management.
Question
4 answers
I am currently working on the task of creating information systems of tidal flooding. Can anyone recommend software to make the modeling application?
Relevant answer
Answer
It's about modelling the water-spread area for future forecasts. I use ArcGIS .shp time series. I need to build both a web-based and a PC-based application for that purpose. Thanks for your time.
Question
2 answers
The current model uses differential equations as its basis. But could replacing these with a spiking neural network based model be a good next step?
Relevant answer
Answer
It depends on the application; a few papers are now coming out that use spiking neural networks for classification problems.
Question
3 answers
I want to model a classical project scheduling problem with a Cmax objective function. For example, the first and the second activity can start at the beginning of the project, but I want to constrain the model so that the 1st and 2nd activities never run at the same time; I don't know in advance which one is scheduled earlier. Moreover, in my model, only these two activities use the same resource. This resource is bounded, hence the model is a special case of the RCPSP. How can I handle the resource constraint in my model?
Relevant answer
Answer
If you really only have one pair that can't run simultaneously, solve your model twice, once with 1 constrained to precede 2 and once vice versa. If you have a lot of pairs, model your problem as a mixed integer program with binary variables controlling which job in a pair finishes before the other starts.
-cat
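A minimal big-M sketch of the second suggestion, with a binary variable deciding which of the two activities goes first (PuLP is used here only as an illustrative solver interface; the data are invented):

import pulp

p1, p2 = 3, 4                    # processing times of activities 1 and 2
M = 100                          # big-M: any valid upper bound on the planning horizon

prob = pulp.LpProblem("two_activities", pulp.LpMinimize)
s1 = pulp.LpVariable("s1", lowBound=0)
s2 = pulp.LpVariable("s2", lowBound=0)
Cmax = pulp.LpVariable("Cmax", lowBound=0)
y = pulp.LpVariable("y", cat="Binary")        # y = 1 if activity 1 precedes activity 2

prob += Cmax                                   # objective: minimise the makespan
prob += s1 + p1 <= s2 + M * (1 - y)            # if y = 1, activity 1 finishes before 2 starts
prob += s2 + p2 <= s1 + M * y                  # if y = 0, activity 2 finishes before 1 starts
prob += Cmax >= s1 + p1
prob += Cmax >= s2 + p2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(s1.varValue, s2.varValue, Cmax.varValue)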
Question
3 answers
I used the GMSYS software and I had to put in the density contrasts for the different rock blocks between the station and sea level (the gravity base level); I have seen this in the manual. I am wondering if this is the correct way, or do I need to put a zero value for the topography, because we are supposed to have already corrected the data for the topography and the slab below the station when the Bouguer anomaly is used for modeling.
Relevant answer
Answer
If I have understood your question: for GMSYS, as far as I know, you can also put in the full density of the geologic unit (not the contrast). In modelling with GMSYS 2.75D, the topography of your profile is flat, meaning that all points have the same elevation. The block above your profile stations represents the air and has a zero density.
I hope I have answered your question!
Best regards!
D Boubaya
Question
2 answers
I tried to apply the Sobol 2002 and 2007 sensitivity analyses to an LSM to evaluate, for the different parameters, the first-order and total Sobol indices, Si and ST respectively. But I always find non-significant results (e.g. ST = 0.000000009 and Si = 0.999999222 for all the parameters). At the beginning I thought it was a Monte Carlo dimensionality problem, so I repeated the experiment with a huge ensemble size (N = 600000 for only 2 parameters) and found the same results. I'm sure that my codes are correct, because when I use the Ishigami and Sobol benchmarks I obtain significant results (e.g., in the attached paper link on page 11). Could anyone help me understand this problem? Thanks in advance for all your answers and comments.
Relevant answer
Answer
The First-Order Sensitivity Index S_i can be assessed according to the paper:
Sobol, I, (1993).
“Sensitivity estimates for nonlinear mathematical models.”
Mathematical Modeling and Computational Experiment, 1, 407-414.
and the total-effects sensitivity Index S_Ti can be computed according to the paper:
Homma, T. and Saltelli, A., (1996).
“Importance measures in global sensitivity analysis of nonlinear models.”
Reliability Engineering and System Safety, 52, 1-17.
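If you want an independent cross-check of your own implementation, the SALib package computes both indices from Saltelli samples; a toy two-parameter sketch is below (swap the stand-in function for a wrapper around your LSM). If SALib returns sensible, well-separated S1/ST values here but not on your model output, the issue is likely in how the LSM output is aggregated or paired with the samples.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {"num_vars": 2, "names": ["a", "b"], "bounds": [[0, 1], [0, 1]]}

def model(a, b):                      # stand-in for the land surface model
    return np.sin(a) + 2.0 * b ** 2

X = saltelli.sample(problem, 1024)
Y = np.array([model(a, b) for a, b in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])             # should be well separated, unlike 1e-9 vs 0.999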
Question
4 answers
I need one that has realistic models of muscle and vascular tissues.
Relevant answer
Answer
Dear Dennis!
As a human anatomist, with a special interest in human dissection, I am particularly interested in your question and the several answers and contributions it may give rise to. Thank you!
I particularly enjoyed reading Prof. Wolfgang H. Muss, because he accurately offered the best answer to your question, in the sense that nothing substitutes for "the real thing"... (von Hagens' modelling perfection currently uses real human body parts through modern conservation techniques, such as plastination or the clearing technique, with permanent technical improvements of these original German methods).
No artificial, man-made model can correctly substitute for the natural perfection of our human body, either for treatment or for medical training/teaching purposes... In this sense, and contrary to what has been the modern trend of most medical schools across the world, our Anatomy Department of the Lisbon NOVA Medical School has made efforts to maintain routine human dissection classes both for undergraduates and for postgraduates in Medicine, and we have developed new embalming techniques, without formaldehyde, that allow us to share our knowledge internationally and with less damage to the health of those who work at the anatomy lab (as compared to the older formaldehyde times...).
Nothing compares to the real thing.
And it is never too late, to restart human dissection as a fundamental tool to Human Medical studies.
PS - I attach the link to one of our laparoscopic training courses with cadaveric material.
Question
1 answer
I want to decompose an image into multiple scale bands using the TVL1 model in MATLAB. I have the source code for the TVL1 model, but how can I use it for decomposition into multiple bands? Can anyone help me in this regard?
Relevant answer
Answer
I have the same question now; could you share your code for it? I want to use it for image denoising. Thank you!
Question
2 answers
I am looking for an explicit formula that weighs the predictions from each database into a combined one.
Relevant answer
Answer
Thank you for the suggestion. Indeed, we have been working with model-averaging methods in the field of credit risk management, and they have proved valid. I am now looking for competing methods to benchmark against. Do you know of applications of the mixed modelling approach you suggest in predictive problems?
Question
1 answer
I already know "FEMM", which is 2D and static.
Relevant answer
Answer
Hi,
Some time ago I had to deal with finite element methods and I used FreeCAD to build 3D-models and Elmer to solve the equations. In my opinion I think FreeCAD is good and is very easy to use, but I guess it depends on the complexity of your model. I had some problems with Elmer for very detailed 3D structures, so again it depends of the complexity of the system to solve. You might want to use Gmsh or another mesh generator instead of using what is built-in Elmer.
You can still start with FreeCAD and decide which solver to use afterwards. I don't have a great experience, but I hope this can help.
FL
Question
1 answer
I'm looking to find a copy of FITEQL but after extensive searching can't seem to find it anywhere online. Any help would be appreciated.
Relevant answer
Answer
I am looking for it too; if you find it, I would appreciate it if you shared it with me.
Regards
Question
4 answers
I want to be able to do something like this: x = [0:1:50].
Relevant answer
Answer
Save an array ('tut') with the first column x = [0:1:50] and the following columns containing the disturbances.
In the Simulink block 'From Workspace' write 'tut' and run the simulation; the first column will be taken as the simulation time. Check it.
Tell me if I am wrong, have misunderstood your question, or if my answer is unclear.
Can anyone recommend sources of algorithms and numerical methods for modelling and simulation?
Question
3 answers
I am looking for a good source of algorithms and numerical methods for modeling and simulation, mainly oriented towards structural bioinformatics. I have found the book "Biological Modeling and Simulation: A Survey of Practical Models, Algorithms, and Numerical Methods" by Russell Schwartz. I would like to know of other sources. I prefer a programming-language-agnostic presentation of the algorithms, or Python, Fortran, or C oriented examples.
Relevant answer
Answer
Maybe you can try this rather classical book: Numerical Recipes in C - The art of scientific computing http://www2.units.it/ipl/students_area/imm2/files/Numerical_Recipes.pdf
Question
92 answers
I have legacy code that is written in Fortran 77 and I am looking for an f77 compiler. I have been using gfortran, but it doesn't work well.
Relevant answer
Answer
Question
3 answers
We have designed a conceptual framework that could be useful to address the problem of documenting, storing, and executing models and algorithms created by ecologists and environmental managers. The attached file shows the most important functions.
Relevant answer
Answer
The framework looks comprehensive. It should, however, be accompanied by a set of rules and procedures. Rik
Question
3 answers
-
Relevant answer
Answer
This is an interesting question, but it has two aspects.
(1) The mathematical or logical construction techniques and approaches. These aspects are well covered by the literature and textbooks, but the (a) model, (b) parameterisation, and (c) validation all seem to appear in different articles, with a limited overview of approaches in any one succinct place.
(2) The use of these methods in real life. There appears (to me at least) to be no overview of how to use the models to make real-life decisions. Having sat on a number of review/steering groups for modelling-related projects, I find that the biggest thing lacking is not the creation of the model, but the creation of a model that is most useful for answering the question.
Question
3 answers
The UML Class diagram is a little more extensive, I mean it is more flexible in things like multiplicity, generalization, etc. But in the end it looks like a very detailed ER diagram, so I wonder if a relationship between these diagrams exists?
Relevant answer
Answer
Dear Reinaldo,
I have taught both types of diagrams and am also teaching them at the moment. As Sany said, in class diagrams classes also have operations, so there are three compartments: class name, attributes, and operations (in an ERD you only see the first two). Even when it comes to relationships/associations and cardinalities/multiplicities there are subtle differences in the rules and notation, so one should study them carefully.
A class is a result of the object-oriented philosophy, where each object has attributes, behavior, and state; in older approaches these were covered in separate diagrams. Although the ERD is an older diagram, it is still very important for designing relational databases, which remain the most common form, as opposed to object-oriented or object-relational databases.
So each of these diagrams has a different purpose. The class diagram is useful for designing applications written in an object-oriented programming language such as Java or C#, especially when the application maintains its own data, although this does not always have to be the case. The ERD is limited to databases; if you want to use it together with an application, you would need to create an interface, use middleware drivers, and so on. Best of luck in your work with these and other diagrams,
Emre 
Dr. E 
Question
1 answer
Solar PV performance is affected by various factors such as cell temperature, irradiation, dust, etc. Many methodologies have been developed numerically and by measurement. What are the latest trends in this area of research?
Relevant answer
Answer
I invite you to read our recent publications in Solar Energy.
Question
1 answer
I am using a multi-agent approach for modeling Wireless Sensor Networks. Are there any OMNeT++ extension modules for multi-agent systems?
Question
44 answers
How to develop hybrid model?
Relevant answer
Answer
A deterministic model is a model where:
1 - the material properties are well known, i.e. deterministic; none of them is random;
2 - the applied loads are also deterministic.
A stochastic model, on the other hand, has:
1 - random properties, e.g. the Young's modulus is a random variable with a uniform distribution on [E1, E2] or a normal distribution (with a given mean and standard deviation);
2 - random applied loads, e.g. wind load or an earthquake (vibration of random amplitude and displacement).
The hybrid model is a "mixture" of the deterministic and the stochastic. Its treatment is quite similar to that of the stochastic model: the presence of a single random variable in the model necessitates stochastic treatment.
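A minimal Monte Carlo sketch of the difference, using a cantilever tip deflection with a hypothetical uniform distribution for the Young's modulus (all numbers are made up for illustration):

```python
# Minimal sketch (assumption): deterministic vs. stochastic treatment of a
# cantilever tip deflection d = P*L^3 / (3*E*I), with hypothetical numbers.
import numpy as np

P, L, I = 1000.0, 2.0, 8.0e-6        # load [N], length [m], second moment [m^4]

def deflection(E):
    return P * L**3 / (3.0 * E * I)

# Deterministic model: E is a single known value.
E_det = 210e9
print("deterministic deflection:", deflection(E_det))

# Stochastic model: E is a random variable, here uniform on [E1, E2];
# Monte Carlo sampling gives the distribution of the response.
rng = np.random.default_rng(0)
E_samples = rng.uniform(190e9, 230e9, size=100_000)
d_samples = deflection(E_samples)
print("mean, std of deflection:", d_samples.mean(), d_samples.std())
```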
Question
1 answer
The aim of the question is to gather your opinions on the models/proposals/frameworks used to measure/define the quality level of a modeling language for requirements engineering. So far I have a few ideas, such as QM4MM to evaluate the maturity of the metamodels of these languages, or SEQUAL (and the works of Krogstie).
Relevant answer
Answer
At the last RE conference in Rio, there was a workshop on comparing requirements modelling approaches. It was the third edition; the first two were held at MODELS and focused on comparing modelling approaches.
A common case study has been modelled in a wide variety of modelling languages, which makes the comparisons a little easier. The comparisons made during the workshop, and published in the proceedings, are based on a set of comparison criteria covering a large set of features. Comparing modelling approaches is sometimes tricky, especially when they address different issues or have a different focus.
It might be interesting to start by reading their work. See http://cserg0.site.uottawa.ca/cma2013re/ for more information about the workshop.
Question
10 answers
Simulation of water treatment processes constitutes an important research theme. In comparison with real work at the industrial scale, what is the real place of simulation in developing water treatment technology?
Relevant answer
Answer
Simulation provides a convenient platform for changing conditions and observing the results. For instance, you could change process conditions such as pH and see how the treatment efficiency changes. This will help you identify the process conditions that are best for efficient treatment. Otherwise, you may have to conduct a large number of experiments to find the best conditions.
But there is a risk: the accuracy of the simulation outcomes depends on the accuracy of the model you have used to simulate. There are ways to create models and to check a model's accuracy, i.e. how accurately it replicates the process. Any standard textbook on modelling should explain the fundamentals.
You could initially use simulation to identify the best conditions, then test them at the industrial level and see whether they work. As I mentioned before, this lets you avoid tedious and sometimes expensive experiments. Hope this helps :)
Question
1 answer
How do you estimate or measure the impact of computational simulations and mathematical modelling on infectious disease research in America, Europe, Africa, Asia and the Middle East? Is there anything like computational epidemiological modelling metrics?
Relevant answer
Answer
Dear Dr. Olugbenga Oluwagbemi, computational complexity and mathematical modeling are two different things. There are established ways to determine the computational complexity of algorithms. For mathematical modeling, different techniques can be applied, and which one suits best has to be determined. As for the question of different countries: the parameters are different and must be considered in the mathematical model, or one has to assign a feedback function in the model.
Question
26 answers
I am trying to study DNA (ligand)-protein (receptor) interaction by molecular docking, hence I need a .pdb file for the DNA ligand.
I would highly appreciate your valuable suggestions/advice.
Relevant answer
Answer
You can try the 3D-DART web server (http://haddock.science.uu.nl/services/3DDART/) to model the 3D structure of a DNA molecule. You can also use Discovery Studio Visualizer 3.0 for the same purpose.
If any problem occurs, you can contact me.
Question
3 answers
In a directed network of agents passing knowledge to each other, how fast does the information in a node grow with the number of edges incident on it? Linearly or exponentially with the number of edges? On one hand, the knowledge of the node agent seems to grow with the sum of the knowledge shared through the incoming edges; on the other hand, each item of information shared through one edge might recombine with each item of information coming in through other edges to form new items of information. For example, if agent A tells me that there is a traffic jam near the shopping center and agent B tells me that this morning there are big sales there, I acquire a third piece of information by combining what agents A and B told me: the traffic jam is caused by the sales and won't stop until the sales are over. Any ideas on which model better suits reality? Sorry if this sounds like a rather naïve question, but since it is related, yet not central, to my research, I did not get to do a literature search on it.
Relevant answer
Answer
A while ago I wrote some papers on computational semiotics that are peripherally relevant. The "Get Stuck" theorems show (1) that no matter what finite representational mechanism you use, if you are trying to represent more and more complex phenomena you will get stuck (the size of the description exceeds your computational capacity to compute with it), and (2) that even if humans are adding to a base of knowledge, eventually the knowledge will become too cumbersome to change, mainly because there will be too many interconnections.
reference:
Cognitive Technology: Instruments of Mind: 4th International Conference, CT ...
edited by Meurig Beynon, Chrystopher L. Nehaniv, Kerstin Dautenhahn, Springer LNAI 2117 (2001)
The relevance is that a faster increase in knowledge can lead to getting stuck sooner, so it isn't always the most desirable behavior.
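A toy sketch contrasting the two growth hypotheses from the question, purely additive versus additive plus pairwise recombination, as a function of the in-degree k (assuming one item per edge; any realistic model would be more subtle):

```python
# Toy sketch (assumption): how the number of knowledge items at a node might
# scale with its in-degree k under the two hypotheses in the question.
from math import comb

def additive(k, items_per_edge=1):
    # each incoming edge contributes its items; growth is linear in k
    return k * items_per_edge

def combinatorial(k, items_per_edge=1):
    # incoming items plus every pairwise recombination; growth is ~k^2,
    # and higher-order combinations would push it further toward exponential
    n = k * items_per_edge
    return n + comb(n, 2)

for k in (1, 2, 4, 8, 16):
    print(k, additive(k), combinatorial(k))
```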
Question
4 answers
I have the reflectance profiles of soil samples from 360-2500 nm in 1 nm intervals. I want to model the soil chemical attributes using these reflectance data, but 1 nm intervals are cumbersome to work with. So far I have been using the reflectance every 10 nm as predictors in my models. My question is whether there is some way to maximize the strength of the relationship between the reflectance and the soil chemical attributes without regressing each 1 nm reflectance value against each soil attribute. My thinking is that since reflectance values are positively correlated with proximity in wavelength, there must be some resampling methodologies.
Relevant answer
Answer
Hi.
So you do a manual MLR? Why don't you use PLS or PCR instead? Then you don't have to choose the wavelengths.
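Following that suggestion, here is a minimal scikit-learn sketch of PLS regression on the full-resolution spectra; the array shapes and random data are stand-ins for the actual measurements:

```python
# Minimal sketch (assumption): PLS regression on full-resolution spectra,
# hypothetical shapes: 100 soil samples x 2141 reflectance values (360-2500 nm).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((100, 2141))      # stand-in for the measured reflectance spectra
y = rng.random(100)              # stand-in for a soil attribute, e.g. organic C

pls = PLSRegression(n_components=10)            # tune n_components by CV
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

The latent components handle the strong correlation between neighbouring wavelengths, so there is no need to subsample to 10 nm beforehand.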
Question
7 answers
Do partial differential equations have a lot to do with electrochemistry?
Relevant answer
Answer
A partial differential equation, namely Fick's second law, is the basis for the treatment of most time-dependent diffusion problems in electrochemistry.
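As a rough illustration, here is a minimal explicit finite-difference sketch of Fick's second law in one dimension, with the surface concentration stepped to zero as in a Cottrell-type experiment; the diffusion coefficient and grid are hypothetical:

```python
# Minimal sketch (assumption): explicit finite-difference solution of Fick's
# second law dC/dt = D * d2C/dx2 for a planar electrode, with the surface
# concentration stepped to zero at t = 0 (Cottrell-type experiment).
import numpy as np

D = 1e-9             # diffusion coefficient [m^2/s]
L = 1e-4             # simulated depth [m]
nx = 200
dx = L / nx
dt = 0.4 * dx**2 / D          # respects the explicit stability limit dt <= dx^2/(2D)

C = np.ones(nx)               # normalized bulk concentration everywhere
C[0] = 0.0                    # surface concentration forced to zero

for step in range(2000):
    lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    C[1:-1] += dt * D * lap[1:-1]     # update interior points only
    C[0] = 0.0                        # electrode surface boundary condition
    C[-1] = 1.0                       # bulk boundary condition

flux = D * (C[1] - C[0]) / dx         # proportional to the current density
print("surface flux after", 2000 * dt, "s:", flux)
```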
Question
2 answers
Using SRTM data, I want to make different elevation zones with ArcMap. Can any expert guide me through a simple procedure in ArcMap for preparing a number of polygons with certain elevation ranges in my area of interest?
Relevant answer
Answer
It takes only two steps. First, convert the SRTM data into contour lines using the 'Contour' tool; you can set your desired elevation range/contour interval. Then use the 'Feature To Polygon' tool to convert the contour lines into polygons.
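Outside ArcMap, the same reclassify-then-polygonize idea can be sketched with rasterio and numpy; the file name and class breaks below are assumptions:

```python
# Minimal sketch (assumption): classify an SRTM GeoTIFF into elevation zones and
# extract one polygon feature per zone; file name and class breaks are hypothetical.
import numpy as np
import rasterio
from rasterio.features import shapes

with rasterio.open("srtm_aoi.tif") as src:
    dem = src.read(1)
    transform = src.transform

breaks = [0, 500, 1000, 1500, 2000, 9000]           # zone boundaries in metres
zones = np.digitize(dem, breaks).astype(np.int32)   # zone id per pixel

polygons = [
    {"zone": int(value), "geometry": geom}
    for geom, value in shapes(zones, transform=transform)
]
print(len(polygons), "polygon parts")
```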
Question
2 answers
Steam turbine transfer function modelling.
Relevant answer
Answer
I am not sure what you are modeling: the mechanical part, the electrical part, or both.
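If only the mechanical part is needed, a common textbook simplification (an assumption here, not necessarily what the questioner wants) is a first-order lag between valve position and mechanical power, G(s) = 1 / (1 + T_ch*s). A minimal scipy sketch with a hypothetical time constant:

```python
# Minimal sketch (assumption): textbook first-order model of a non-reheat steam
# turbine, G(s) = 1 / (1 + T_ch*s), between valve position and mechanical power.
# T_ch (steam chest time constant) is a hypothetical value here.
import numpy as np
from scipy import signal

T_ch = 0.3                                  # steam chest time constant [s]
turbine = signal.TransferFunction([1.0], [T_ch, 1.0])

t, p_mech = signal.step(turbine)            # response to a unit valve step
print("time to ~63% of final power:", t[np.searchsorted(p_mech, 0.63)])
```

Reheat turbines are usually modelled with an extra lead-lag stage, but the structure of the sketch stays the same.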
Question
10 answers
In designing classifiers (using ANNs, SVM, etc.), models are developed on a training set. But how to divide a dataset into training and test sets? With few training data, our parameter estimates will have greater variance, whereas with few test data, our performance statistic will have greater variance. What is the compromise? From application or total number of exemplars in the dataset, we usually split the dataset into training (60 to 80%) and testing (40 to 20%) without any principled reason. What is the best way to divide our dataset into training and test sets?
Relevant answer
Answer
Generally, k-fold cross-validation (e.g. 10-fold cross-validation) is the best. For a minimal dataset, LOO (leave-one-out) should be preferred.
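A minimal scikit-learn sketch of the usual compromise, a held-out test set plus k-fold cross-validation on the training portion; the synthetic data and classifier choice are placeholders:

```python
# Minimal sketch (assumption): keep a held-out test set, tune/validate with
# 10-fold cross-validation on the training part only (synthetic data here).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 80/20 split; stratify keeps class proportions the same in both parts
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)
cv_scores = cross_val_score(clf, X_train, y_train, cv=10)   # 10-fold CV
print("CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

clf.fit(X_train, y_train)
print("held-out test accuracy:", clf.score(X_test, y_test))
```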
Question
6 answers
Studying the karst aquifer behavior by using Spring Discharge Time series
Relevant answer
Answer
Hello, Ibraheem.
I used time series analyses (autocorrelation and cross-correlation) to investigate the behaviour of karst springs in the Classical Karst in Slovenia (Europe). This is a statistical approach which is suitable for regional characterization of karst aquifers. Here is a link to one of my articles on this subject (see the references in this paper for others): http://carsologica.zrc-sazu.si/downloads/392/Kovacic.pdf
Regards,
Gregor
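As a rough illustration of that approach, here is a minimal numpy sketch of the autocorrelation of a daily discharge series and its cross-correlation with rainfall; both series are synthetic stand-ins, and the lag at which the ACF drops below about 0.2 is often read as the memory effect of the aquifer:

```python
# Minimal sketch (assumption): autocorrelation of a daily spring-discharge series
# and its cross-correlation with rainfall; both series are synthetic stand-ins.
import numpy as np

def autocorr(x, max_lag):
    x = (x - x.mean()) / x.std()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

def crosscorr(x, y, max_lag):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.dot(x[:n - k], y[k:]) / n for k in range(max_lag)])

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 5.0, size=730)                                 # 2 years of daily rainfall
discharge = np.convolve(rain, np.exp(-np.arange(60) / 15.0))[:730]   # slow recession

acf = autocorr(discharge, max_lag=120)
memory_effect = int(np.argmax(acf < 0.2))            # first lag with ACF < 0.2
print("memory effect (days):", memory_effect)
print("rain->discharge lag of max CCF:", int(np.argmax(crosscorr(rain, discharge, 60))))
```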
Question
25 answers
I’m trying to install a free Windows Fortran compiler to run Abaqus subroutines. I found a couple of compilers, gcc and open64, but they do not seem to be Windows compatible (I couldn't install them). If you have any alternative Fortran compiler, please let me know. In addition, an explanation of how to link the compiler to Abaqus would be helpful (I mean what to write in the environment variables and so on).
Relevant answer
Answer
As a student, you can use the Intel Fortran Compiler (ifort) for free (and ifort definitely works with Abaqus). If you want to use gfortran on Windows, you have to install MinGW. But I'm not sure whether gfortran works with Abaqus.
Question
4 answers
Does anybody know of an example of a hybrid of Lagrangian relaxation and Benders decomposition in MIP?
Relevant answer
Answer
The following papers might be interesting in this context, although they might not fully answer your question.
Question
3 answers
I’m writing a report about modelling. I’d like to have your opinion on something.
Which are the best references to quote for these two paragraphs?
1- In the classical general linear model the response variable is continuous and it follows a normal distribution (linear regression analysis, ANOVA, ANCOVA), with the main estimation method of least squares.
I have only quoted a book: “McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Vol. 37). CRC Press.”
2- There is a recent increase of interest in the application of statistical modeling to medicine, biomedicine, public health, and biology.
I’d like to quote something similar to the tutorial/review by Bolker: “Bolker, B. M., Brooks, et al. (2009). Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology & Evolution, 24(3), 127-135.”
Relevant answer
Answer
Not that I have better references, but there are minor flaws in the first sentence:
it is not the response variable that follows a normal distribution but the residuals! Further, this is not a mandatory prerequisite; it is an *assumption* under which the estimates are (most) *reasonable* and the p-values have a well-defined meaning. It would be even more correct to state that the expectations about the residuals (errors) are modelled using the normal probability distribution. This is not identical to the statement that the residuals actually have a frequency distribution with the shape of the normal distribution.
Apart from this, I would think of citing R. A. Fisher for general linear models, since these were largely his invention (apart from Legendre and Gauss). McCullagh & Nelder is better if you want to point to generalIZED linear models, although the unification of the models for the exponential dispersion family was published by Nelder & Wedderburn (1972).
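A small sketch of the point about residuals, on synthetic data (numpy/scipy only; the data are made up to show that the raw response can look non-normal while the residuals are fine):

```python
# Minimal sketch (assumption): the normality assumption is checked on the
# residuals of the fitted linear model, not on the raw response values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, size=200)   # response itself is far from normal

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("Shapiro-Wilk p, raw response:", stats.shapiro(y).pvalue)          # typically small
print("Shapiro-Wilk p, residuals   :", stats.shapiro(residuals).pvalue)  # typically large
```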
Question
2 answers
I need to simulate a gas distribution network; in particular, I need to perform a steady-state analysis of this network.
Relevant answer
Answer
Dear Anirbid Sircar,
Can we use the CFD tool Fluent to simulate a gaseous fluid under steady-state conditions?
Question
17 answers
Hello,
there are plenty of codes on the net, commercial and open-source.
Generally it ends up like this: PhD students develop some promising open-source code; projects get funded; a start-up may be created, and that is the end of the open-source code, since no community has grown around it. The Comsol software was probably born like this. Or the student gets the title, and the code falls asleep.
Why isn't there a central place to coordinate all of these: openEMS, openFOAM(-extra), FreeCAD, Cluster-DEM, openCASCADE/SALOME (arguably the closest to what open source should be) and many more?
Lack of a homepage maintainer? No academic publications to live on? No, but probably a good reputation to earn.
Thanks
Relevant answer
Answer
Really good point you made there, Lukas. I agree with Agnieszka that the major reason why we don't end up with really good open-source simulation code (not only for multiphysics simulation) is that people from different branches don't work together.
About ten years ago I managed and directed a huge research program in Germany, funded by Fraunhofer with 3.5 million euros, whose explicit purpose was to develop multiscale (or multiphysics) simulation software. Nine research institutions and more than 30 scientists and engineers were involved, and the idea was to end up with a product that could be used by non-experts. You would think that with that huge load of knowledge, the intelligent people involved, and that much money for three years, you should be able to write the best software in the world. But what do you think happened? There were lots of arguments among the project "partners" about who really "owns" the software in the end (there was a lot of distrust and malevolence), and there were partners who were only interested in performing a bunch of experiments, not really in developing software, although they had stated otherwise in the proposal to get the project money in the first place. Then there was a group of scientists at an institute with some good programmers (C++, parallelization on their own institute's supercomputer, GUI work) who did not want to share source code with the other partners and only provided binaries for the others to alpha- and beta-test (which annoyed the others: "I am a scientist, not a beta tester for them"). Other groups in the project were obviously not capable of developing software (they were just engineers) and did not use the project money to hire people who could, but instead bought commercial licences for a garbage "multiphysics" package from the company Accelrys. I think they spent about 300 k€ on a one-year licence only to discover that nothing worked with their "ab-initio" simulations of ceramic structures. The hotline could not help, of course, other than suggesting things like "set this parameter to 1.5 or that one to 3.7".
To make a long story short, there were too many people involved, too many different interests, too much dishonesty, too many lies (institutes that just write a proposal to get money somehow, with no real interest in the things promised in the proposal). In the end, no real software package was developed that could be used by anyone but an expert, and only one of the nine institutes still has the working code today (and I have no idea whether they are using it for anything). When it became obvious that there wouldn't be a common software tool for all "project partners", the others started to develop their own little packages for their own little purposes, and the idea of bringing everything together was basically given up during the project; it just didn't work because there was no common will to really do it. The individual interests of the participants were stronger. I think things like this happen all the time: the next PhD student in a research group starts over with the same developments that were done by five PhD students during the previous ten years and then abandoned when they got their PhDs and left the group.
Since my experience with this three-year project and its 3.5 million € budget, I have only been developing my own code, with capable students every so often, gradually improving the features and the Qt-based GUI attached to it. Whenever I think this code is ready for general distribution I will post it on my website; however, the problem will then be to make it known and get it distributed. For many people working in academia, the main reason for distributing a software package for free is mostly building up a reputation until they have made it and got their professorship. From that moment on they usually lose interest in maintaining code, because it is often only a means to an end on the rough route to a permanent professorship.
Question
5 answers
Moreover, I would really appreciate any ideas on this. It is a semester project and I have no idea about it; besides, there is not much published literature, and I have difficulty coming up with an idea.
Relevant answer
Answer
Check this out. ETABS, Integrated building design software. Hope this helps.
How useful will this approach (or very similar ideas) be to the development of a mature systems biology?
Question
10 answers
http://pysb.org/ Python framework for Systems Biology modeling.
Relevant answer
Answer
Dear Martin, as I am new on this site I am not sure whether your question was directed to me. If it was, I would ask which approach you are referring to. Please give me more specifics. Best, Ariel
Question
6 answers
I have a problem with modeling the PSF as a function of defocus (depth). So far I just know that the shape of the PSF is similar to the aperture, but scaled. Here my aperture is not a circular one, so a (scaled) Gaussian kernel is not good.
The attached image shows the shape of the apertures; white means transparent. Currently I just load the image and use imresize in Matlab to get the size I need, but I guess that is too naive.
So could someone teach me how to get a better model of a PSF whose size changes in accordance with depth (defocus)?
Thanks!
Relevant answer
Answer
Thanks Eldad! You are right, measuring is the best way. In fact, in the original paper they used a set of measured PSFs at different scales. I asked this question only because I have also seen people do the simulation without measurement, and I don't know how accurate their simulations are. For me, your suggestion is clearly better. Thank you!
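For readers who still want to simulate rather than measure, one standard Fourier-optics approach (an assumption here, not what the original paper did) is to build the pupil function from the aperture mask, add a quadratic defocus phase that grows with depth, and take |FFT|² as the PSF. A rough numpy sketch, with a toy square aperture standing in for the real coded mask:

```python
# Rough sketch (assumption): defocused PSF from a (coded) aperture via Fourier
# optics: PSF = |FFT(pupil * exp(i * defocus_phase))|^2, where the defocus phase
# grows with the distance from the focal plane. Aperture here is a toy square mask.
import numpy as np

N = 256
yy, xx = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho2 = xx**2 + yy**2

aperture = (np.abs(xx) < 0.4) & (np.abs(yy) < 0.4)   # stand-in for your mask image

def psf(defocus_waves):
    """defocus_waves: peak defocus aberration W20 in units of wavelengths."""
    phase = 2 * np.pi * defocus_waves * rho2          # quadratic defocus phase
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()                                # normalize to unit energy

for w20 in (0.0, 1.0, 3.0):                           # increasing defocus ~ depth
    print("W20 =", w20, "-> PSF peak =", psf(w20).max())
```

Unlike a plain imresize of the mask, this keeps the diffraction structure of the PSF and lets its size grow continuously with the defocus parameter.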
Question
6 answers
I am currently thinking about what deliverables to request from the students in a new software engineering / software development project. Although the project will involve a significant implementation part, the software engineering part is crucial. To ensure that the students design and model their software system well, I would request the students to model their system to be developed using a variety of notations such as :
During the requirements phase :
• An *activity diagram* to provide a high-level view of the behavior and flow of the software system to be developed.
• Either *use cases* or *user stories* to capture the different usage scenarios that the system should be able to handle.
During the analysis phase :
• A conceptual UML *class diagram* for describing the main concepts of the system to be developed.
• Either an *ORM* (Object Role Modeling) or *ER* (Entity Relationship) diagram to describe the data that the system will need to handle.
During the design phase :
• A detailed UML *class diagram* for describing the static structure of the system in terms of classes and operations.
• UML *sequence diagrams* for describing the dynamic behavior of the system.
• A *relational schema* of the database that the system will be using.
Of course apart from the above many other kinds of diagrams exist such as goal diagrams, feature diagrams, agent models, state machine diagrams, formal specification schemas, object diagrams, deployment diagrams, package diagrams, component diagrams, architectural models, interaction diagrams, message sequence charts, design patterns, and many more.
In your opinion, what kind of diagrams should or could be part of such a project and which diagrams are less important?
Relevant answer
Answer
In my class, students design and model their software system using:
- For requirements: activity diagrams and use case diagrams, with a good specification of each use case (name, precondition, normal sequence, exceptions, postcondition). During this phase we also ask them for a software prototype, at least the UI, because it helps them understand the users' needs better.
- For analysis and design: a class diagram and sequence diagrams.
In my experience, asking them to model without prototyping is too difficult for them.
Question
7 answers
What are the existing models and their limitations? Other than EIO-LCA, is there anything being used at a large scale?
Relevant answer
Answer
Dear Binita,
the impact of the electricity generation process depends strongly on the national energy mix. In the SimaPro code there are various possibilities to evaluate the impact of different electricity generation scenarios when performing an LCA analysis. If you are interested in electricity consumption in buildings, you can download my paper from ResearchGate:
F. Asdrubali, C. Baldassarri, V.Fthenakis: “Life Cycle Analysis in the construction sector: guiding the optimization of conventional Italian buildings”, Energy and Buildings, 64 (2013), 73-89.
Question
6 answers
Nitrogen is a tricky element to study in the field due to soil variability and the fact that N is very mobile. Experienced folks have told me that, using some models, they have not been able to get realistic findings close to actual values. How can one make better simulations of N response in cereals, say in a trial over three seasons?
Relevant answer
Answer
Did you think of using optical sensors such as a SPAD meter or a GreenSeeker?
A SPAD (http://www.geneq.com/catalog/en/spad-502.html) is not really good for predicting, but prediction is hardly possible in any case.
A GreenSeeker has more potential (see e.g. http://www.plantstress.com/methods/Greenseeker.PDF).
Question
2 answers
For investigating resonant changes or displacement variation of the beam.
Relevant answer
Answer
If you need an eigenfrequency analysis, you should definitely change the geometry, not just the density.
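A quick sketch of why: for the first bending mode of a cantilever, f1 = (1.875²/2π)·sqrt(EI/(ρAL⁴)), so length and thickness enter with much stronger powers than density, which only appears under a square root. The dimensions and material values below are hypothetical:

```python
# Quick sketch (assumption): first bending eigenfrequency of a cantilever,
# f1 = (1.875^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)); geometry (L, thickness) enters
# with a much stronger power than density, which only appears under a square root.
import numpy as np

def f1_cantilever(E, rho, L, b, h):
    I = b * h**3 / 12.0          # second moment of area of a rectangular section
    A = b * h
    return (1.875**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

E, rho = 169e9, 2330.0           # silicon-like values (hypothetical choice)
base = f1_cantilever(E, rho, L=200e-6, b=20e-6, h=2e-6)
print("baseline f1 [Hz]:", base)
print("density +20%    :", f1_cantilever(E, 1.2 * rho, 200e-6, 20e-6, 2e-6))
print("length  -20%    :", f1_cantilever(E, rho, 0.8 * 200e-6, 20e-6, 2e-6))
```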