
Computational Materials Science Lab

Texas A&M University College of Engineering

Research

Co-Ni-Ga HTSMAs

In recent years, CoNiGa has emerged as a new FSMA system that is a potential alternative to NiMnGa alloys [19]. In addition, some compositions exhibit high martensitic transformation temperatures, which makes CoNiGa a potential high-temperature SMA system. In this project, we investigated the effect of composition on the transformation behavior and phase stability of Co-Ni-Ga-based alloys through experimental and computational means. A major outcome of the research is the discovery that the ratio of valence electrons to atoms (e/a) and the so-called magnetic valence influence the transformation behavior in these systems.
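As an illustrative sketch, the e/a ratio is simply the composition-weighted average of per-element valence electron counts. The electron counts below (Co: 9, Ni: 10, Ga: 3) follow the usual convention for these alloys, and the example composition is hypothetical, not taken from the paper:

```python
# Valence electron counts per element (usual convention for Heusler-type alloys)
valence = {"Co": 9, "Ni": 10, "Ga": 3}

def e_over_a(composition):
    # composition: {element: atomic fraction}, fractions summing to 1
    assert abs(sum(composition.values()) - 1.0) < 1e-9
    return sum(valence[el] * x for el, x in composition.items())

# Hypothetical near-stoichiometric composition Co50Ni25Ga25
ea_ratio = e_over_a({"Co": 0.50, "Ni": 0.25, "Ga": 0.25})
```

Scanning a composition series through this function is how an e/a trend line, like the one discussed above, is typically constructed.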


Please consult the following paper for more information about this fascinating system:

The effect of electronic and magnetic valences on the martensitic transformation of CoNiGa shape memory alloys

Computational Design of High Strength Steels

PI: Raymundo Arroyave, Ibrahim Karaman

There is a need for a new generation of AHSS based on plain-carbon and low-alloy steels that have very low production costs and that retain, or further increase, the strength achieved by first-generation AHSS via microstructural and micro-alloying control, perhaps at the expense of some of the ductility obtained in the second generation.

The main goal of this task is to develop alloying and heat treating guidelines for the design of TRIP-assisted multiphase steels composed of a ferrite matrix with dispersed bainite and relatively high fractions of stabilized retained austenite.


Computational Design

(1) Intercritical Annealing Process

Estimate the upper and lower bounds of the retained austenite volume fraction.

(2) Bainitic Isothermal Treatment

Determine the effect of alloying and heat treatment on the (a) phase stability, (b) volume fraction, and (c) transformation rate of both bainite and retained austenite.

(3) Ultrafine-Grained Alloy

Investigate the stability of retained austenite under mechanical loading.

Theoretical Models

Figure: Phase diagrams for the alloy Fe-0.32C-1.42Mn-1.56Si. Compared with the experiments (the “x” points on the right), the thermodynamic and kinetic models predict the lower and upper bounds of the phase transition.

Thermodynamic and kinetic models were implemented. Compared with the empirical results, the theoretical predictions provide the upper and lower bounds of the carbon enrichment in retained austenite after the BIT treatment.

The optimum heat-treatment temperatures in the two-step heat treatment can be predicted.

The red area indicates the maximum volume fraction of retained austenite at room temperature that can be reached by the heat treatment. The carbon content and the fractions of the other phases can also be estimated from the theoretical analysis.

Optimization Calculation

In designing and developing a new material, the composition and the proper heat treatments are critical. To optimize the microstructure of TRIP steel, the theoretical models are used as the decision maker that evaluates each candidate alloy, and a Genetic Algorithm (GA) is coupled to them.


A GA is a computer-based algorithm that imitates natural evolution. Using random selection and directed evolution, the extrema of the search domain can be approached rapidly. Moreover, running the calculation over many “generations” helps avoid converging on local extrema in the domain.
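The GA loop described above (selection, crossover, mutation over generations) can be sketched in miniature. This is illustrative only: in the actual work the fitness evaluator is the thermodynamic/kinetic TRIP-steel model, whereas here a stand-in objective with a known maximum at x = 0.25 is used, and all population sizes and rates are made-up values:

```python
import random

def fitness(x):
    return -(x - 0.25) ** 2  # hypothetical objective, maximal at x = 0.25

def evolve(pop_size=40, generations=60, sigma=0.05, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Random selection: tournament of two, keep the fitter individual
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        new_pop = []
        for _ in range(pop_size):
            # Crossover: blend two parents; mutation: small Gaussian noise
            child = 0.5 * (rng.choice(parents) + rng.choice(parents))
            new_pop.append(min(1.0, max(0.0, child + rng.gauss(0.0, sigma))))
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
```

The mutation width plays a role analogous to the similarity threshold discussed below: too little diversity and the population collapses prematurely onto one candidate.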

 
Different Convergence Conditions

In a GA, the similarity of the chromosomes is an important factor in defining convergence of the calculation. In these two pictures, two similarity thresholds (1%, left, and 5%, right) are used in two calculations. In the 1% case the calculation converges easily and restarts, so the number of effective calculations is higher than in the 5% case. This also means that the 5% case is better at avoiding entrapment by local optima during the calculation.

 
 


Publications:

  1. R. Zhu, S. Li, I. Karaman, R. Arroyave, Multi-phase microstructure design of a low-alloy TRIP-assisted steel through a combined computational and experimental methodology, Acta Materialia, 2012.
  2. S. Li, R. Zhu, I. Karaman, R. Arroyave, Thermodynamic analysis of two-stage heat treatment in TRIP steels, preprint.
  3. S. Li, R. Zhu, I. Karaman, R. Arroyave, A kinetic model for simulating the bainitic isothermal transformation, preprint.
  4. S. Li, R. Zhu, I. Karaman, R. Arroyave, A genetic algorithm approach for designing the microstructure of TRIP steel, preprint.

Constraint Satisfaction Problem Approach to Materials Design and Discovery

In order to come closer to materials design and discovery without relying on exhaustive computational/experimental approaches, we use the constraint satisfaction problem (CSP) approach, which employs efficient search algorithms to evaluate a design space against user-defined constraints, such as phase stability. Currently, the CSP approach has been coupled with the Thermo-Calc software to search high-entropy alloy systems as well as liquid-metal dealloying systems. The CSP algorithm is provided by the Design Systems Lab in Mechanical Engineering, and the work on liquid-metal dealloying systems is a collaboration with the Demkowicz Group in Materials Science and Engineering.
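A toy version of such a constraint-satisfaction search can be sketched as follows. Here a simple rule-based predicate stands in for the Thermo-Calc phase-stability query, and the near-equiatomic acceptance window is an assumption made only for illustration:

```python
from itertools import product

def single_phase(xa, xb, xc):
    # Hypothetical stand-in for a phase-stability constraint: accept only
    # near-equiatomic compositions (the real workflow queries Thermo-Calc).
    return all(abs(x - 1.0 / 3.0) < 0.10 for x in (xa, xb, xc))

step = 0.05
feasible = []
# Enumerate a ternary composition grid; the third fraction is determined
# by the constraint that atomic fractions sum to one.
for xa, xb in product([i * step for i in range(21)], repeat=2):
    xc = 1.0 - xa - xb
    if xc < 0:
        continue  # not a valid composition
    if single_phase(xa, xb, xc):
        feasible.append((round(xa, 2), round(xb, 2), round(xc, 2)))
```

A real CSP solver prunes the search space rather than enumerating it exhaustively; the point here is only the "filter a design space against constraints" structure.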

Results of a single-phase search with the CSP in a near-equiatomic ternary (green is HCP, blue is BCC)

Control of Variability in the Performance of Selective Laser Melting (SLM) Parts through Microstructure Control and Design

Additive Manufacturing (AM) is a set of emerging manufacturing technologies with unique characteristics, such as the ability to produce parts with arbitrarily complex geometries directly from a digital model, without the need for custom and expensive tooling.

However, a number of challenges remain to be addressed before AM realizes its full potential. First is the need for better predictive models that correlate processing parameters, such as energy deposition rate and scan speed, to part performance. This is a complicated task, since the reported microstructures of AM-fabricated alloys differ quite significantly from as-cast alloys of the same composition. These microstructural differences influence performance and depend greatly on the thermal history of the part during the AM process.

The majority of existing AM modeling efforts do not consider the evolution of the microstructure during melting, solidification, and re-heating; these approaches thus lack the intermediate link, i.e., the microstructure, correlating process parameters to performance. These models also usually ignore the inherent variability in process parameters and material properties, and are incapable of identifying the key factors that influence variability in performance. The consensus in the AM community is that the low repeatability of metal AM parts is the ‘Achilles Heel’ that hampers the widespread adoption of AM as a viable manufacturing method. Quantifying, and subsequently reducing, variability in AM parts is a vital requirement in the process of material and part certification.

This project presents an experimental and physics-based modeling framework to predict the microstructure evolution of NiTi Shape Memory Alloys (SMAs) during the Selective Laser Melting (SLM) process, with associated uncertainties, and to enable verification and quantification of the sources that contribute to the overall variability in performance. Once microstructural evolution can be predicted as a function of AM process parameters, it will be possible to use the existing knowledge base and modeling tools on microstructure-property/performance relationships to predict the performance of parts made from these materials, closing the missing link between processing parameters and performance.

As part of this project, we focus on phase field modeling of microstructure evolution, particularly the rapid solidification of NiTi Shape Memory Alloys (SMAs) during the SLM process. The proposed approach is briefly described below:

  1. A three-dimensional finite element model which uses SLM process parameters as inputs will be developed to predict the thermal history of the part. This time and position dependent thermal history will be used to estimate the cooling rates and thermal gradients.
  2. A non-isothermal multi-phase field modeling framework will be developed based on the phase-field model with finite interface dissipation approach suggested by Zhang and Steinbach [1]. The predicted thermal history will be used as boundary conditions to the phase-field model to predict the microstructure evolution during SLM.
  3. The further evolution of the solidified material subject to reheating by subsequent beam passes will be investigated through the same multi-phase field framework.
  4. The predicted microstructures will be compared against experimental characterization in order to adjust model parameters that are difficult to measure (including interfacial energies) and to validate the overall modeling approach.
[1] Zhang, L., and Steinbach, I., 2012, “Phase-field model with finite interface dissipation: Extension to multi-component multi-phase alloys,” Acta Materialia, 60(6), pp. 2702-2710.
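Step 1 above can be illustrated with a minimal 1-D explicit finite-difference sketch of a cooling history, from which a cooling rate can be extracted. The real model is a 3-D finite element simulation of the SLM melt pool; the diffusivity, grid, and localized "laser" temperature spike below are all placeholder values:

```python
# Placeholder material/grid values (NOT NiTi properties)
alpha = 1e-5          # thermal diffusivity, m^2/s
dx, dt = 1e-4, 2e-4   # grid spacing (m), time step (s)
assert alpha * dt / dx**2 <= 0.5  # explicit-scheme stability limit

n = 50
T = [300.0] * n
T[n // 2] = 1600.0    # localized heat spike standing in for the laser, K

history = []          # temperature at the spike vs. time
for _ in range(200):
    Tn = T[:]
    for i in range(1, n - 1):
        # 1-D heat equation, forward Euler in time, central in space
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    T = Tn
    history.append(T[n // 2])

# Mean cooling rate at the spike over the simulated window, K/s
cooling_rate = (history[0] - history[-1]) / (200 * dt)
```

In the actual workflow, such position- and time-dependent histories would then feed the phase-field model as boundary conditions.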

Design/Optimization Under Uncertainty using Bayesian inferences

1) Computational Research on Alloy Design and Plastic Flow Behavior of TRIP Steels coupled with Estimation of the Model Parameters by Bayesian Approach

Transformation-Induced Plasticity (TRIP) steels are a group of low-alloy steels that provide an appropriate combination of strength and fracture toughness due to the high strain hardening arising from strain-induced martensitic transformation (SIMT) during plastic deformation. These properties make these high-strength steels desirable for the automotive industry, offering lower weight and higher safety for vehicles. TRIP steels are characterized by their multi-phase microstructure, which includes ferrite, bainite, retained austenite, and martensite. Retained austenite, as a dispersed phase in the ferritic matrix, plays an important role in obtaining high strain hardening in this group of steels. In this regard, the main characteristics of this phase have been found to be its grain size, carbon concentration, and volume fraction.

Fig. 1

Fig. 2

Our studies about TRIP steels can be categorized in three different areas:

  • Heat treatment design
  • Modelling of plastic flow behavior
  • Bayesian parameter calibration of the plastic flow model

– Heat Treatment Design:

It has been shown that austenite can be retained in the microstructure at room temperature using a two-step heat treatment consisting of intercritical α+𝛾 annealing (IA) and bainitic isothermal transformation (BIT). As shown in Fig. 3, the different stages of the heat treatment lead to the formation of various phases in the final microstructure. This process can be followed in the diagram of temperature versus austenite carbon content in Fig. 4.

Fig. 3

Fig. 4

We applied the Thermo-Calc software to extend the common two-step processing to three steps for the Fe-0.32C-1.56Si-1.42Mn alloy (Figs. 5 and 6) in order to suppress the formation of martensite during heat treatment, which is detrimental to mechanical properties. According to our predictions, this processing results in the formation of about 22% austenite, which can considerably improve the mechanical properties of the given alloy.

Fig. 5

Fig. 6

– Modelling of plastic flow behavior:

In this model, the shear stress of each microstructural phase is defined in terms of the contributions of different hardening mechanisms, including the Peierls force, solid-solution strengthening, long-range back stress, dislocation strengthening, and precipitation strengthening. The Rivera model [Modeling Simul. Mater. Sci. Eng. 22:015009, 2014], based on a thermo-statistical theory of plasticity, has been used in this research to predict the dislocation-density evolution (dislocation hardening mechanism) of TRIP steels during plastic deformation. This model is applied to each phase in the microstructure, and the plastic flow behavior is subsequently estimated using an iso-work approximation. The deformation-induced martensitic transformation has also been modeled based on the theoretical method of Haidemenopoulos [Mater. Sci. Eng. A 615:416-23, 2014].
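The iso-work approximation mentioned above can be sketched as follows: each strain increment is split between the phases so that both do equal plastic work, and the macroscopic stress follows from a rule of mixtures. The per-phase power-law flow curves and every parameter value below are hypothetical placeholders, not the thermo-statistical model used in the work:

```python
def stress_ferrite(eps):
    return 300.0 + 600.0 * eps ** 0.4   # MPa, hypothetical hardening law

def stress_austenite(eps):
    return 500.0 + 900.0 * eps ** 0.5   # MPa, hypothetical hardening law

def iso_work_curve(f_aus=0.2, d_eps=1e-3, steps=200):
    eps_f = eps_a = 0.0
    curve = []
    for k in range(1, steps + 1):
        sf, sa = stress_ferrite(eps_f), stress_austenite(eps_a)
        # Iso-work split of the macroscopic strain increment:
        #   sf * d_f = sa * d_a   and   (1 - f_aus) * d_f + f_aus * d_a = d_eps
        d_f = d_eps / ((1.0 - f_aus) + f_aus * sf / sa)
        d_a = d_f * sf / sa
        eps_f += d_f
        eps_a += d_a
        # Macroscopic stress from the rule of mixtures
        sigma = (1.0 - f_aus) * stress_ferrite(eps_f) \
              + f_aus * stress_austenite(eps_a)
        curve.append((k * d_eps, sigma))
    return curve

curve = iso_work_curve()
```

The softer phase absorbs the larger share of each strain increment, which is why the iso-work mixture lies below a simple equal-strain (Voigt) average.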

– Bayesian parameter calibration of the plastic flow model:

Estimation of model parameters is a fundamental matter in science and engineering that is often overlooked. The advent of high-speed computers has drawn more attention to Bayesian approaches for the analysis of model parameters, particularly those based on Markov chain Monte Carlo (MCMC) methods. It has been shown that MCMC-based Bayesian approaches can yield better parameter calibrations for multi-level models than other likelihood-based techniques.

We therefore applied the standard MCMC Metropolis-Hastings algorithm to calibrate the sensitive parameters of the plastic flow model, sampling from an adaptive proposal distribution for the posterior probability density function of the parameters. In this approach, the model is trained with different experimental data sequentially or simultaneously to estimate the parameters and their uncertainties. The initial prior probability distributions of the parameters and the likelihood functions are determined from data in the literature. The Metropolis-Hastings ratio is used as the acceptance/rejection criterion for randomly proposed samples, so as to generate a set of samples that represents the posterior probability distribution of the parameters. In sequential data training, these posteriors serve as the prior distributions for the next round of training.

The details of our methodology are shown in Fig. 7, and the calibrated parameters and their uncertainties are listed in Table 1. To plot the stress-strain curves and their uncertainties after calibration, the concept of "propagation of error" is used to propagate the parameter uncertainties obtained from the posterior distributions to the overall error of the model, i.e., the uncertainty of the stress at any given strain. Stress-strain curves and their uncertainty bands after sequential and simultaneous calibration are shown for all experimental conditions in Fig. 8(a) and (b). The black lines correspond to the plausible mean parameter values obtained after calibration, and the blue and (blue + green) shaded areas correspond to the 68% and 95% Bayesian confidence intervals of the model predictions, respectively.
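The Metropolis-Hastings loop described above can be sketched in miniature: one scalar parameter, synthetic data, and a fixed-width random-walk proposal (the actual work uses an adaptive proposal over several plastic-flow parameters). Everything below, including the linear "model" and noise level, is an illustrative assumption:

```python
import math
import random

random.seed(0)
xs = (0.1, 0.2, 0.3, 0.4, 0.5)
true_theta = 2.0
# Synthetic "measurements" of a linear model y = theta * x with noise
data = [true_theta * x + random.gauss(0.0, 0.1) for x in xs]

def log_likelihood(theta, sigma=0.1):
    # Gaussian likelihood of the data given the model prediction theta * x
    return sum(-0.5 * ((y - theta * x) / sigma) ** 2 for x, y in zip(xs, data))

theta, samples = 0.5, []
for _ in range(5000):
    proposal = theta + random.gauss(0.0, 0.2)  # random-walk proposal
    # Metropolis-Hastings acceptance (symmetric proposal -> Metropolis rule)
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal
    samples.append(theta)

# Discard a burn-in period before summarizing the posterior
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

With a flat prior, as here, the acceptance ratio reduces to the likelihood ratio; an informative prior would add its log-density difference to the comparison.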

Fig. 7

Table 1

Fig. 8 (a-b)

 

2) Bayesian Calibration of the Precipitation Model implemented in MatCalc

The first step in solving the optimization (inverse) problem from performance back to composition and processing is the calibration of the rigorous physical models used for optimal experimental design and subsequent model refinement. The precipitation model is one of these models, connecting chemistry and processing to the micro-mechanical model. While the NiTiHf thermodynamic database is being developed through experiments and Thermo-Calc optimizations, we decided to develop and calibrate a binary NiTi precipitation model using the existing NiTi database. This facilitates the implementation and calibration of the ternary precipitation model in the next step of the project.

The most sensitive parameters for calibration were identified through forward analysis of the model: the matrix/precipitate interfacial energy, the diffusion correction, the nucleation site density, the nucleation constant, and the shape factor (D, disk diameter / h, disk height). The Markov chain Monte Carlo (MCMC) Metropolis-Hastings algorithm was applied for the parameter calibration. In this approach, prior knowledge, including the initial parameter values and their ranges obtained from the literature or MatCalc defaults, together with the experimental data, is fed to the Matlab MCMC toolbox. Non-informative (e.g., uniform) probability density functions (PDFs) are used as parameter priors, since no statistical information was found for the given model parameters, and nine experimental data points were extracted from Panchenko’s work. In the MCMC toolbox, n samples of the parameter vector are generated by a random walk. Note that the model output is a vector with three elements (Ni content of the matrix, volume fraction of precipitates, and mean precipitate size) that is compared with the corresponding vector of experimental results in order to calibrate the above-mentioned parameters. At the end of this process, the parameter samples define the posterior PDFs of the parameters, whose means and covariances give the optimal parameter values and their uncertainties, respectively. These calibrated parameters and their uncertainties are then used to determine the uncertainty of the model outputs via propagation of uncertainty.
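The propagation-of-uncertainty step can be sketched as a first-order (delta-method) calculation. The two-parameter model f below is a hypothetical stand-in for the precipitation model, and the parameters are assumed independent (a full treatment would use the posterior covariance):

```python
import math

def f(a, b):
    return a * math.exp(-b)  # placeholder model output

def propagate(a, b, var_a, var_b, h=1e-6):
    # Numerical partial derivatives via central differences
    dfda = (f(a + h, b) - f(a - h, b)) / (2 * h)
    dfdb = (f(a, b + h) - f(a, b - h)) / (2 * h)
    # Independent parameters: var_f = (df/da)^2 var_a + (df/db)^2 var_b
    return dfda ** 2 * var_a + dfdb ** 2 * var_b

# Hypothetical posterior means and variances for the two parameters
var_out = propagate(a=2.0, b=1.0, var_a=0.04, var_b=0.01)
```

The square root of the propagated variance gives the output uncertainty that would be reported alongside each calibrated model prediction.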

Forward analysis of the NiTi precipitation model showed that there may be a relationship between the matrix/precipitate interfacial energy and two of the model inputs, i.e., the aging temperature and the nominal Ni composition. For this reason, we decided to calibrate the above-mentioned model parameters against each experimental data point individually to find the optimum value of the interfacial energy in each experimental case. The other sensitive parameters were also included in the calibration to account for their possible correlation with the interfacial energy. One of these calibrations is explained as an example below.

After MCMC sampling, the correlation between each pair of parameters was plotted to see how a change in one affects the other in the optimal parameter space. One of these plots is shown in Fig. 9a. Regions with a high density of sample points (such as the red region in this figure) indicate convergence of the parameters to their optimum values. According to Fig. 9b, the posterior PDF of the interfacial energy shows two peaks. The peak marked with the blue arrow appears to contain the optimum value of this parameter, while the red arrow probably indicates a spurious peak, since it lies near the initial value chosen for the parameter; it can result from the inability of the MCMC technique to escape a local trap at the beginning of sampling. To find the transition point to the optimal peak, the cumulative mean of the samples was plotted, as shown in Fig. 9c. After around 8,000 sampling points for the interfacial energy there is a sudden change in the trend, which corresponds to this transition. Therefore, the first 8,000 generations were treated as the burn-in period (left of the red dotted line) and removed before calculating the optimum parameter values and their uncertainties. Fig. 9d verifies that the spurious peak is eliminated after removal of the first 8,000 generations. Tables 2 and 3 list the calibrated parameters and their corresponding model outputs with uncertainties for one calibration; Table 2 indicates a very good agreement between the model results and the experimental data.
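The burn-in diagnosis via the cumulative (running) mean can be sketched on a synthetic chain. The two mode locations, the noise level, and the 800-sample burn-in below are all made up for illustration; in practice the cut point is read off from the kink in the running-mean plot, as described above:

```python
import random

random.seed(7)
# Synthetic chain: stuck near a spurious mode at 0.10, then jumps to 0.30
chain = [0.10 + random.gauss(0.0, 0.02) for _ in range(800)]
chain += [0.30 + random.gauss(0.0, 0.02) for _ in range(3200)]

# Running (cumulative) mean: a kink marks the escape from the local trap
running_mean, total = [], 0.0
for i, s in enumerate(chain, 1):
    total += s
    running_mean.append(total / i)

burn_in = 800                     # cut point read off from the kink
post = chain[burn_in:]
estimate = sum(post) / len(post)  # posterior mean after burn-in removal
```

Without the cut, the full-chain mean is biased toward the spurious mode; after removal it recovers the converged mode.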

Fig. 9 (a-d)

Table 2

Table 3

If the interfacial-energy values obtained from calibrating the model against each experiment are plotted against aging temperature and nominal Ni content, a polynomial surface can be fitted, as shown in Fig. 10. After inserting the fitted surface equation into the model, we calibrated the other four model parameters against all nine experimental data points together, using the same approach described above. The calibrated parameters and the model outputs corresponding to each experimental case are reported in Tables 4 and 5, respectively. Although the model results do not fit the data exactly, most of the data lie within the 95% Bayesian confidence intervals of the model results.

Fig. 10

Table 4

Table 5

It should be noted that the discrepancy function between the mean model results and the experimental data (including natural uncertainty, missing physics, and data uncertainty) can be obtained through a Gaussian process, which can correct and improve our model. In addition, a co-kriging approach will be used to take advantage of the physical model and a surrogate model at the same time by finding the correlation between the two models in the design space.

 

3) Parameter Calibration in Phase Field Modeling

Phase field modelling is one of the most powerful approaches for simulating microstructural changes in materials; however, model calibration and uncertainty quantification are very difficult with conventional deterministic and probabilistic techniques because of the high computational cost. Different global optimization approaches can therefore be used to tackle the calibration of these models, such as Bayesian Global Optimization (BGO), Knowledge-Gradient Optimization (KGO), and Efficient Global Optimization (EGO).

The phase field model parameters will be calibrated against two intermetallic layer thicknesses. To avoid struggling with high dimensionality, the most sensitive parameters are first found through a forward model analysis. In this sensitivity analysis, a two-level fractional factorial design is used to generate a fractional combination of model parameters for forward analysis, an efficient technique for taking parameter interactions into account. The analysis of variance (ANOVA) is then applied to rank the importance of the parameters based on their p-values.
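The screening step can be sketched with a two-level factorial design and main-effect estimates. For brevity a full 2^3 design on a hypothetical response is used here, rather than a fractional design with ANOVA p-values; the response function and its sensitivities are made up for illustration:

```python
from itertools import product

def model(a, b, c):
    # Hypothetical response: strongly sensitive to a, weakly to b, not to c
    return 10.0 * a + 1.0 * b + 0.0 * c

# Full 2^3 design: every combination of coded levels -1 / +1
levels = (-1, 1)
runs = [((a, b, c), model(a, b, c)) for a, b, c in product(levels, repeat=3)]

def main_effect(index):
    # Main effect = mean response at the high level minus at the low level
    hi = [y for x, y in runs if x[index] == 1]
    lo = [y for x, y in runs if x[index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: abs(main_effect(i)) for i, name in enumerate("abc")}
ranking = sorted(effects, key=effects.get, reverse=True)
```

A fractional design would simply evaluate a chosen subset of these runs, trading some aliasing of interactions for fewer forward-model evaluations.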

The corresponding multidimensional parameter space will then be exploited and explored using the EGO method in order to minimize the objective error between the experimental data and the model results. The objective error is the cumulative difference between all available experimental time points and the corresponding model results for both thicknesses at each processing temperature; a single scalar value is thus computed for the cumulative error and minimized to find the optimum model parameters, making this a single-objective optimization problem. This requires a surrogate model over the parameter space with associated confidence intervals. For this purpose, a surrogate model based on a Gaussian Process (GP) is built on random points in the parameter space generated by the Latin Hypercube Sampling (LHS) approach.

Finally, the expected improvement is calculated from the surrogate function over the entire parameter space. The parameter values associated with the maximum expected improvement are used as the next point for forward analysis, and this process continues until a stopping criterion is satisfied.
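The EGO loop can be sketched as follows. A crude nearest-neighbour surrogate with distance-based uncertainty stands in for the Gaussian process, plain random points stand in for the LHS design, and the objective, grid, and iteration count are illustrative assumptions:

```python
import math
import random

def objective(x):
    return (x - 0.6) ** 2  # hypothetical error to minimize

def surrogate(x, X, Y):
    # Predicted mean = value at the nearest sampled point; predicted std
    # grows with distance to it (a crude, GP-like stand-in)
    d, y = min((abs(x - xi), yi) for xi, yi in zip(X, Y))
    return y, 0.5 * d + 1e-9

def expected_improvement(x, X, Y):
    mu, sd = surrogate(x, X, Y)
    best = min(Y)
    z = (best - mu) / sd
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # normal cdf
    return (best - mu) * Phi + sd * phi

random.seed(3)
X = [random.random() for _ in range(4)]  # initial design (LHS in practice)
Y = [objective(x) for x in X]
grid = [i / 200 for i in range(201)]
for _ in range(15):  # EGO iterations
    x_next = max(grid, key=lambda x: expected_improvement(x, X, Y))
    X.append(x_next)
    Y.append(objective(x_next))

best_x = X[Y.index(min(Y))]
```

The expected-improvement acquisition is what balances exploiting regions near the current best sample against exploring regions where the surrogate is uncertain.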

 

High Temperature Shape Memory Alloys

Lead-free Solder Alloys

Environmental and health issues with lead
The Environmental Protection Agency (EPA) has listed lead and its compounds among the 17 chemicals that most threaten human life and the environment. When lead accumulates in the body over a long time, it produces adverse health effects: lead binds strongly to proteins and inhibits their normal function in the human body, causing disorders of the nervous and reproductive systems and delaying neurological and physical development. When the level of lead in the blood exceeds 50 mg/dl, lead poisoning is severe enough to cause these adverse effects. Lead levels even below the official threshold are hazardous to human neurological and physical development, especially for children.

Lead-free alloy selection
The issues described above have forced an examination and understanding of the implications of lead-free alternatives to tin-lead eutectic solder. The Department of Trade and Industry (DTI) developed a progressive report on the selection of lead-free solders. DTI first introduced possible lead-free solder alternatives. Once a material was selected, DTI manufactured a sample from it following the same procedure used for lead-based alloys, then tested the sample to verify that its performance was equivalent to that of the lead-based alloys. The criteria for selecting lead-free alternatives to tin-lead eutectic solder are as follows:
1. The lead-free alloy should be at most a ternary alloy if possible; quaternary alloys may be difficult to control.
2. The lead-free alloy should be located near a eutectic point; far from the point, the solder has a large pasty range during cool-down.
3. The lead-free alloy should have a melting point similar to that of the tin-lead alloy, so that existing manufacturing equipment can be used.
4. The lead-free alloy should have equal or better reliability than the tin-lead alloy when used in electronics assembly.
5. The lead-free alloy should cost the same as or less than the tin-lead alloy.
6. If possible, the lead-free solder should be free of existing patents.
7. Information on the lead-free solder should be well known in the industry.
8. The lead-free solder should be free of health and environmental issues.

Transient Liquid Phase Bonding (Cu/Sn/Cu structure)


Candidate alloy compositions
The second task in selecting alternatives is to find a lead-free solder that is free of alloy patents, because more than 30 companies already hold patents on technologies and components related to lead-free alloys. Based on the criteria for selecting new lead-free alloys, the International Electronics Manufacturing Initiative evaluated more than 79 solder alloys and then selected the following:
1. Sn-Bi alloy
2. Sn-Zn (or Sn-Zn-Bi) alloy
3. Sn-Ag (or Sn-Ag-Bi) alloy
4. Sn-Ag-Cu alloy
5. Sn-Ag alloy
6. Sn-Cu alloy

MSGalaxy Platform Workflow Design

MSGalaxy is a branded version of Galaxy. Galaxy was created in 2005 to help mitigate the difficulties of computational science for individuals with limited computational backgrounds. As Bioinformatics has become more

Phase Stability in Al-Si-Sr Alloys

Thermodynamics and Kinetics of Hydrogen Storage in Mg-based Nano-layered Thin Films

© 2016–2025 Computational Materials Science Lab
