A blog for discussing fracture papers

Category: ESIS (Page 2 of 3)

Discussion of fracture paper #28 – Rate effects and dynamic toughness of concrete

The paper “Estimating static/dynamic strength of notched unreinforced concrete under mixed-mode I/II loading” by N. Alanazi and L. Susmel in Engineering Fracture Mechanics 240 (2020) 107329, pp. 1-18, is a very interesting paper that is well worth reading. The extensive fracture mechanical testing of concrete is thoroughly described. The tests are performed for different fracture mode mixities, applied to test specimens with different notch root radii, at various elevated loading rates.

According to the experimental results, the strength of concrete increases as the loading rate increases. The mixed-mode loading conditions refer to the stress distribution around the original notch. In all cases fracture starts at the half-circular notch bottom. Initiation of a mode I crack was anticipated and was clearly observed in all cases. The position of the maximum tensile stress along the notch root, as predicted by assuming isotropic and linear elastic material properties, correlates very nicely with where the cracks initiate. The selected crack initiation criterion is based on the stresses at, or alternatively geometrically weighted within, a region ahead of the crack tip. The linear extent of the region is material dependent. The criterion, used with a loading rate motivated modification, is strongly supported by the results.
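As a sketch of how such a stress-at-a-distance criterion operates, here is a minimal point-method check in the spirit of the theory of critical distances. The material numbers are invented, concrete-like values, not taken from the paper, and the static, sharp-crack stress field is used for simplicity:

```python
import numpy as np

# Assumed, illustrative properties for a plain concrete (not from the paper):
K_Ic = 1.0e6     # fracture toughness, Pa*sqrt(m)
sigma_0 = 4.0e6  # inherent (tensile) strength, Pa

# Material-dependent critical distance (Taylor's theory of critical distances):
L = (1.0 / np.pi) * (K_Ic / sigma_0) ** 2

def fails_point_method(K_applied):
    """Point method: fracture when the LEFM stress K/sqrt(2*pi*r),
    evaluated at r = L/2 ahead of the tip, reaches sigma_0."""
    r = L / 2.0
    sigma_at_r = K_applied / np.sqrt(2.0 * np.pi * r)
    return sigma_at_r >= sigma_0

print(f"critical distance L = {L * 1e3:.1f} mm")  # ~19.9 mm for these numbers
print(fails_point_method(0.99 * K_Ic))            # False: just below toughness
print(fails_point_method(1.01 * K_Ic))            # True:  just above
```

The centimetre-scale critical distance that comes out of these numbers is one reason the approach suits concrete, where the process region is large.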

The result regarding the rate dependence is different from what is observed for ductile metals, where dislocation motion is limited at high strain rates. This reduces the plastic deformation and increases the near tip stress level, and therefore decreases the observed toughness, as opposed to what happens in concrete. Should the stress level or energy release rate exceed a critical value, the crack accelerates until the overshooting energy is balanced by inertia, as described by Freund and Hutchinson (1985). Usually this means a crack speed that is a substantial fraction of the elastic wave speed; for concrete I guess this must mean a couple of km/s. This is outside the scope of the present paper, but a related question arises: What could be the source of the strain rate effects that are observed? Plasticity/nonlinearities are mentioned. I would possibly suggest damage as well. We know that reinforced ceramics are affected by crack bridging and micro-crack clusters appearing along the crack path or beside it. If such elements are present, then both decreased and increased toughnesses may be anticipated, according to studies by Budiansky, Amazigo and Evans (1988) and Gudmundson (1990). Could concrete be influenced by the presence of crack bridging elements or micro-cracks or anything related? If not, what could be a plausible guess?

Does anyone know or have suggestions that could lead forward? Perhaps the authors of the paper, or anyone else, wishes to comment. Please don’t hesitate to ask a question or provide other thoughts regarding the paper, the method, or anything related.

Per Ståhle

https://imechanica.org/node/24762

Discussion of fracture paper #27 – Phase-field modelling of cracks and interfaces

Landau and Ginzburg formulated a theory that includes the free energy of phases, with the purpose of deriving coupled PDEs describing the dynamics of phase transformations. Their model, focused on the phase transition process itself, also found many other applications, not least because many exact solutions can be obtained. During the last few decades, with focus on the bulk material rather than the phase transition, the theory has been used as a convenient tool in numerical analyses to keep track of cracks and other moving boundaries. As a Swede I can’t help noting that both of them received Nobel prizes, Landau in 1962 and Ginzburg in 2003. At least Ginzburg lived long enough to see their model used in connection with the formation and growth of cracks.

The Ginzburg-Landau equation assumes, as virtually all free energy based models do, that the state follows the direction of steepest descent towards a minimum of the free energy. Sooner or later a local minimum is reached. It does not necessarily have to be the global minimum and may depend on the starting point. Often more than one form of energy, such as elastic, thermal, electric or concentration energy, interacts along the path. Should there be only a single form of energy, the result becomes Navier’s, Fourier’s, Ohm’s or Fick’s law. If more than one form of energy is involved, all coupling terms between the different physical phenomena are readily obtained. By including the chemical energy of phases, Ginzburg and Landau were able to explain the physics leading to superfluid and superconducting materials. Later, by mimicking vanished matter as a second phase with virtually no free energy, we end up with a model suitable for studies of growing cracks, corrosion, dissolution of matter, electroplating and similar phenomena. The present paper
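The steepest descent idea can be sketched in a few lines. Below is a 1D Allen-Cahn type relaxation of a double-well free energy, the simplest relative of the phase field equations used for cracks; every parameter value is an arbitrary illustration choice, not taken from the paper:

```python
import numpy as np

# 1D Allen-Cahn relaxation: phi evolves by steepest descent of a
# Ginzburg-Landau free energy with double-well f(phi) = phi^2 (1-phi)^2.
# Grid, time step, gradient coefficient and mobility are all illustrative.
N, dx, dt, kappa, M = 200, 0.1, 0.001, 0.01, 1.0
x = np.arange(N) * dx
phi = np.where(x < 0.5 * N * dx, 1.0, 0.0)  # sharp initial interface

def free_energy(phi):
    grad = (np.roll(phi, -1) - phi) / dx
    f = phi**2 * (1 - phi)**2
    return np.sum(f + 0.5 * kappa * grad**2) * dx

energies = []
for _ in range(2000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    dfdphi = 2 * phi * (1 - phi) * (1 - 2 * phi)   # f'(phi)
    phi += dt * M * (kappa * lap - dfdphi)         # steepest descent step
    energies.append(free_energy(phi))

# Along the gradient flow the free energy decreases monotonically:
print(all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:])))
```

The sharp step relaxes towards a smooth tanh-like interface of finite width, which is exactly the regularized "crack surface" that the phase field method exploits.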

“Phase-field modeling of crack branching and deflection in heterogeneous media” by Arne Claus Hansen-Dörr, Franz Dammaß, René de Borst and Markus Kästner in Engineering Fracture Mechanics, vol. 232, 2020, https://doi.org/10.1016/j.engfracmech.2020.107004, 

describes a usable, benchmarked numerical model for computing crack growth based on a phase field model inspired by Ginzburg and Landau’s pioneering work. The paper gives a nice background to the usage of the phase field model, with many intriguing modelling details thoroughly described. Unlike in Paper #11, here the application is to cracks penetrating interfaces. Both mono- and bi-material interfaces at different angles are covered. This has been seen before, e.g. in the works by He and Hutchinson 1989, but with the phase field model the results are obtained without requiring any specific criterion for growth, branching or path. The cracking becomes the product of a continuous phase transformation.

According to the work by Zak and Williams 1962, the stress singularity of a crack perpendicular to, and with its tip at, a bimaterial interface possesses a singularity r^-s that is weaker than r^-1/2 if the half space containing the crack is stiffer than the unbroken half space. In the absence of any other length scale than the distance, d, between the interface and the tip of an approaching crack, the stress intensity factor has to scale with d^(1/2-s). The consequence is that the energy release rate either becomes unlimited or vanishes. The latter scenario is particularly peculiar, since it means that it becomes impossible to make the crack reach the interface, no matter how large the applied remote load is.
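The two limiting behaviours follow directly from the scaling. A small numerical sketch, where the reference values G_ref, d_ref and the two exponents s are arbitrary illustration numbers:

```python
import numpy as np

# Scaling of the energy release rate G with the tip-to-interface
# distance d: G ~ G_ref * (d/d_ref)**(1 - 2*s), where r^-s is the
# Zak-Williams singularity. All numbers are illustrative.
def G_scaling(d, s, G_ref=1.0, d_ref=1.0):
    return G_ref * (d / d_ref) ** (1.0 - 2.0 * s)

d = np.logspace(0, -6, 7)            # crack tip approaching the interface
G_stiff = G_scaling(d, s=0.40)       # cracked half stiffer: s < 1/2
G_compliant = G_scaling(d, s=0.60)   # cracked half more compliant: s > 1/2

print(G_stiff[-1])      # tends to zero as d -> 0: the crack cannot arrive
print(G_compliant[-1])  # grows without bound as d -> 0
```

The phase field width discussed below cuts off this pure power law once d becomes comparable to the regularization length.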

In the present paper the phase field provides an additional length parameter, the width of the crack surfaces. That changes the scene. Assume that the crack grows towards the interface and that the distance to the interface is large compared with the width of the surface layer. The expected outcome, I think, would be that the energy release rate increases for a crack in a stiffer material and decreases for a crack in a weaker material. When the surface layer width and the distance to the interface are of similar length, the energy release rate no longer changes as rapidly as d^(1-2s). What happens then I am not sure, but it seems reasonable that the tip penetrates the interface at a load that is neither infinite nor vanishing.

I could not find any observation of this mentioned in the paper so this becomes just pure speculation. It could be of more general interest though, since it could provide a hint of the possibilities to determine the critical load that might lead to crack arrest.

Comments, opinions or thoughts regarding the paper, the method, or anything related are encouraged.

Per Ståhle

https://imechanica.org/node/24661

Discussion of fracture paper #26 – Cracks and anisotropic materials

All materials are anisotropic, that’s a fact. Like the fact that all materials have a nonlinear response. This we can’t deny. Still, enormous progress has been made by assuming both isotropy and linear elasticity. The success, as we all know, is due to the fact that many construction materials are very close to being both isotropic and linear. By definition, materials may be claimed to be isotropic and linear, provided that the deviations are held within specified limits. Very often, or almost always, nearly perfect linearity is expected in structural design. In contrast, quite a few construction materials show considerable anisotropy. It may be natural or artificial, created by humans or evolved by biological selection, to obtain preferred mechanical properties or for other reasons. To be able to choose between an isotropic analysis and a more cumbersome anisotropic ditto, we at least once have to perform calculations with both models and define a measure of the degree of anisotropy. This is realised in the excellent paper

“The finite element over-deterministic method to calculate the coefficients of crack tip asymptotic fields in anisotropic planes” by Majid R. Ayatollahi, Morteza Nejati, Saeid Ghouli in Engineering Fracture Mechanics, vol. 231, 15 May 2020, https://doi.org/10.1016/j.engfracmech.2020.106982.

The study provides a thorough review of materials that might require consideration of anisotropic material properties. As a great fan of sorted data, I very much appreciate that the authors list the references in a table with specified goals and utilised analysis methods. There are around 30 different methods listed. The methods are mostly numerical, but a few use the Lekhnitskiy and Stroh formalisms. If I should add something, the only thing I could think of would be Thomas C.T. Ting’s book “Anisotropic Elasticity”. In the book, Ting derives a solution for a large plate containing an elliptic hole, which provides cracks as a special case.

The present paper gives an excellent quick start for those who need exact solutions. Exact solutions are of course needed to legitimise numerical solutions and to understand geometric constraints and numerical circumstances that affect the result. The Lekhnitskiy and Stroh formalisms boil down to the “method of characteristics” for solving partial differential equations. The authors focus on the solution for the vicinity of a crack tip, given as a truncated series in polar coordinates attached to the tip.
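For a reader who wants to experiment, the entry point of the Lekhnitskiy formalism is the quartic characteristic equation for the complex parameters mu that set up the crack tip fields. A sketch, with the isotropic limit as a built-in sanity check; the compliance values fed in are the only input and are chosen by me for illustration:

```python
import numpy as np

# Lekhnitskiy's characteristic equation for plane anisotropic elasticity:
#   a11*mu^4 - 2*a16*mu^3 + (2*a12 + a66)*mu^2 - 2*a26*mu + a22 = 0
# The four complex roots mu_k parameterize the near-tip fields.
def characteristic_roots(a11, a22, a12, a66, a16=0.0, a26=0.0):
    coeffs = [a11, -2.0 * a16, 2.0 * a12 + a66, -2.0 * a26, a22]
    return np.roots(coeffs)

# Sanity check: an isotropic material (E = 1, nu = 0.3) degenerates to
# mu = +/- i, each root with multiplicity two.
E, nu = 1.0, 0.3
mu_iso = characteristic_roots(a11=1 / E, a22=1 / E, a12=-nu / E,
                              a66=2 * (1 + nu) / E)
print(np.allclose(np.abs(mu_iso.imag), 1.0, atol=1e-4)
      and np.allclose(mu_iso.real, 0.0, atol=1e-4))
```

With orthotropic compliances the roots move off the imaginary axis or split along it, and that is where the anisotropy measure discussed above begins to matter.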

As far as I can see it is never mentioned in the paper, but I guess the series diverges at distances equal to or larger than the crack length 2a. Outside the circle r=2a it should be possible to extend the present series for r&lt;2a by analytic continuation. My question is: could it be useful to have the alternative series for the region r&gt;2a, to relate the solution to the remote load?

Does anyone have any thoughts regarding this? Perhaps the authors of the paper, or anyone else, wishes to comment, ask a question or provide other thoughts regarding the paper, the method, or anything related.

Per Ståhle

https://imechanica.org/node/24513

Discussion of fracture paper #25 – The role of the fracture process region

The subject of this blog is a fracture mechanical study of soft polymers. It is well written and technically detailed which makes the reading a good investment. The paper is:

“Experimental and numerical assessment of the work of fracture in injection-moulded low-density polyethylene” by Martin Kroon, Eskil Andreasson, Viktor Petersson, Pär A.T. Olsson in Engineering Fracture Mechanics 192 (2018) 1–11.

As the title says, it is about the fracture mechanical properties of a group of polymers. The basic idea is to identify the energy release rate that is required to initiate crack growth. To distinguish between the energy required for creating crack surface and the energy dissipated in the surrounding continuum, the former is attributed to the unstable material that has passed its largest load carrying capacity, while the remainder is the stable elastic-plastic continuum. The energy required for creating crack surface is supposed to be independent of the scale of yielding.

The authors call it the essential work of fracture, a term I believe was coined by Mai and Cotterell. If not the same, then this is very close to the energy dissipation in the fracture process region, as suggested by Barenblatt and used by many others. Material instability could, of course, also be the result of void or crack nucleation at irregularities of one kind or another outside the process region. How much should be included as essential work could be discussed. I guess it depends on whether it is a necessary requirement for fracture. The fact that it may both support and impede fracture does not make it less complicated. In the paper an FE model is successfully used to calculate the global energy release rate vis-à-vis the local unstable energy release in the fracture process region, modelled as a cohesive zone.

What captured my interest was the proposed two-parameter cohesive zone model and its expected autonomy. With one parameter, whatever happens in the process region is determined by a single quantity, e.g. K, J or G. The single parameter autonomy has its limits, but more parameters can add more details and extend the autonomy and applicability. For the proposed cohesive zone, the most important parameter is the work of fracture. A second parameter is a critical stress that marks the onset of the fracture processes. In the model the critical stress is found at the tip of the cohesive zone. By using the model of the process region, the effect of different extents of plastic deformation is accounted for through the numerical calculation of the surrounding elastic-plastic continuum.

The work of fracture is proportional to the product of the critical stress and the critical separation of the cohesive zone surfaces. The importance of the cohesive zone is that it provides a length scale. Without it, the process region would be represented by a point, the crack tip, with the consequence that the elastic plastic material during crack growth consumes all released energy. Nothing is let through to the crack tip.
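As an illustration of the two parameters and the length scale they imply, here is the simplest concrete realisation, a triangular (linear softening) traction-separation law. The numbers are invented for the sketch, not taken from the paper:

```python
# A minimal triangular (linear-softening) cohesive law: one possible
# realization of a two-parameter model. Values are illustrative only.
sigma_c = 10.0e6   # critical (peak) cohesive stress, Pa
delta_c = 2.0e-4   # critical separation at which the traction vanishes, m

def traction(delta):
    """Traction across the cohesive surfaces at separation delta."""
    if delta < 0.0 or delta >= delta_c:
        return 0.0
    return sigma_c * (1.0 - delta / delta_c)

# Work of fracture = area under the traction-separation curve:
Gamma = 0.5 * sigma_c * delta_c          # = 1000 J/m^2 here
E = 1.0e9                                # Young's modulus, Pa (assumed)
length_scale = E * Gamma / sigma_c**2    # cohesive-zone length estimate
print(Gamma, length_scale)               # 1000 J/m^2 and 0.01 m
```

The last line is the familiar estimate of the cohesive zone extent, E*Gamma/sigma_c^2: the length scale that a point-like crack tip model lacks.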

Stationary cracks are surrounded by a crack tip field that releases energy to fracture process regions, which may be small or even a singular point. If the crack is growing at steady state, very little is let through to a small fracture process region, and to a singular point, nothing. In conventional thinking a large cohesive stress leads to a short cohesive zone, and thereby less available energy. A variation of the critical stress is discussed in the paper. Presently, however, the two-parameter model is more of a one-parameter ditto, where the cohesive stress is merely selected as sufficiently plausible.

What could be done to nail the most suitable critical cohesive stress? For the present range of crack lengths, and for initiation of crack growth, nothing more is needed. The obtained constant energy release rate fits the experimental result perfectly. Further, it is difficult to find any good reason why the excellent result would not hold also for larger cracks. As opposed to that, small, very small or no cracks at all should give crack initiation and growth at a remote stress that is close to the critical cohesive stress. In the limit of a vanishing crack, the two stresses should be identical. I am not sure about the present polymer, but in many metals the growing plastic wake requires a significant increase of the remote load, often by several times rather than by a few percent. So letting the crack grow at least a few times the linear extent of the plastic zone would add requirements that may be used to optimise both cohesive parameters.

I really enjoyed reading this interesting paper. I understand that the paper is about initiation of crack growth which is excellent, but in view of the free critical cohesive stress, I wonder if the model can be extended to include very small cracks or the behaviour from initiation of crack growth to an approximate steady-state. It would be interesting if anyone would like to discuss or provide a comment or a thought, regarding the paper, the method, the autonomy, or anything related. The authors themselves perhaps.

Per Ståhle

https://imechanica.org/node/23886

Discussion of fracture paper #24 – The sound of crack growth

Carbon fibre reinforced polymers combine desired features from different worlds. The fibres are stiff and hard, while the polymers are the opposite: weak, soft and with irrelevant fracture toughness. Irrelevant considering the small in-plane deformation that the fibres can handle before they break. It is not totally surprising that one can make composites that display the best properties of each material. Perhaps less obvious, or even surprising, is that materials and composition can be designed to make the composite properties go far beyond those of the constituent materials. A well-known example is ordinary household aluminium foil laminated with a polymer film of similar thickness. The laminate gets a toughness that is several times that of the aluminium foil, even though the overall strains are so small that the polymer can hardly carry any significant load.

In search of something recent on laminate composites, I came across a very interesting paper on material and fracture mechanical testing of carbon fibre laminates:

“Innovative mechanical characterization of CFRP using acoustic emission technology” by Claudia Barile, published in Engineering Fracture Mechanics, Vol. 210 (2019), pp. 414–421.

What caught my eye first was that the paper received citations already during its in-press period. It became no less interesting when I found that the paper describes how acoustic emissions can detect damage and the initiation of crack growth. The author, Barile, cleverly uses the wavelet transform to analyse the acoustic emission response. In a couple of likewise recent publications she has examined the capabilities of the method. There, Barile et al. simulate the testing for varying material parameters and analyse the simulated acoustic response using the wavelet transform. This allows them to explore the dependencies on the properties of the involved materials.

They convincingly show that it is possible to detect both damage and damage mechanisms. In addition, a feature of the wavelet transform, as opposed to its Fourier counterpart, is its advantage in the analysis of transients. By using the transform they were able to single out the initiation of crack growth. Very useful indeed. I get the feeling that their method may show even more benefits.
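The time localization that makes the wavelet transform attractive for transients can be illustrated with a toy signal: a short high-frequency burst buried in a steady tone. Everything below, the sampling rate, the frequencies and the hand-rolled Morlet wavelet, is my own invented sketch, not the author's processing chain:

```python
import numpy as np

# Toy acoustic emission record: a steady 20 Hz tone plus a short
# 150 Hz burst at t = 0.6 s, mimicking a crack event. All parameters
# are invented for the illustration.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 20 * t)
signal += np.exp(-((t - 0.6) / 0.01) ** 2) * np.sin(2 * np.pi * 150 * t)

def morlet_cwt_row(x, freq, fs, width=6.0):
    """One row (fixed scale) of a continuous wavelet transform with a
    Morlet-type wavelet centred at the given frequency."""
    n = int(width * fs / freq)
    tw = np.arange(-n, n + 1) / fs
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-(tw * freq / 2.0) ** 2)
    return np.abs(np.convolve(x, np.conj(wavelet), mode="same"))

row = morlet_cwt_row(signal, freq=150.0, fs=fs)
t_peak = t[np.argmax(row)]
print(f"burst localized at t = {t_peak:.2f} s")  # close to the true 0.6 s
```

A Fourier spectrum of the same record would show energy at 150 Hz but say nothing about when the burst occurred; the wavelet magnitude pins it in time.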

A detail that is unclear to me, if I should be fussy, is that there are more unstable phenomena than just crack growth that can appear as the load increases. Regions of damage and, in particular, fracture process regions may also grow. When the stress intensity factor K alone is sufficient, there is no need to consider either the size or the growth of the fracture process region. The need arises when K, J, or any other one-parameter description is insufficient, e.g. in situations when the physical size of the process region becomes important. Typical examples are when cracks cross bi-material interfaces or when they are small relative to the size of the process region. When the size seems to be the second most important feature, the primary parameter may be complemented with a finite size model of the process region to get things right. There is a special twist to this in connection with process region size and rapid growth. In the mid 1980s cohesive zones came into use to model fracture process regions in FEM analyses of elastic and elastic-plastic materials. Generally, during increasing load, cohesive zones appear at crack tips and develop until the crack begins to grow. One thing that at first glance was surprising, at least to some of us, was that for small cracks the process region first grows stably and then shifts to fast and uncontrollable growth, while the crack tip remains stationary. Later, of course, the criterion for crack growth becomes fulfilled and crack growth follows.

Is it possible to differentiate between the signals from a suddenly fast growing damage region or fracture process region vis à vis a fast growing crack?

It would be interesting to hear from the authors or anyone else who would like to discuss or provide a comment or a thought, regarding the paper, the method, or anything related.

Per Ståhle

https://imechanica.org/node/23731

Discussion of fracture paper #23 – Paris’ exponent m<2 and behaviour of short cracks

I came across a very interesting paper in Engineering Fracture Mechanics about a year ago. It gives some new results on stochastic aspects of fatigue. The paper is:

”On the distribution and scatter of fatigue lives obtained by integration of crack growth curves: Does initial crack size distribution matter?” by M. Ciavarella, A. Papangelo, Engineering Fracture Mechanics, Vol 191 (2018) pp. 111–124.

The authors remind us of the turning point that the Paris exponent m=2 constitutes. The initial crack length always matters, but if the initial crack is small, it is seemingly very important for the fatigue life if m&gt;2. For exponents less than 2, small initial cracks matter less or not at all. If all initial cracks are sufficiently small, their size plays no role and may be ignored in the calculation of the remaining life of the structure. Not so surprisingly, this also applies to the stochastic approach by the authors.
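The role of the exponent is easy to reproduce by integrating the Paris law in closed form. A sketch with invented values of C and the stress range (none of the numbers are from the paper):

```python
import numpy as np

# Closed-form integration of the Paris law da/dN = C*(dK)^m with
# dK = ds*sqrt(pi*a), from initial size a_i to final size a_f.
# C and ds (the stress range) are illustrative values only; the
# formula below is valid for m != 2.
def fatigue_life(a_i, a_f, m, C=1e-12, ds=100.0e6):
    B = C * (ds * np.sqrt(np.pi)) ** m
    p = 1.0 - m / 2.0
    return (a_f ** p - a_i ** p) / (B * p)

a_f = 0.01                       # final crack size, m
for a_i in (1e-4, 1e-5, 1e-6):   # ever smaller initial cracks
    print(f"a_i = {a_i:.0e}:  m=3 -> {fatigue_life(a_i, a_f, 3):.3g},"
          f"  m=1.5 -> {fatigue_life(a_i, a_f, 1.5):.3g} cycles")
```

For m &gt; 2 the life diverges as a_i shrinks, so the initial size dominates the scatter; for m &lt; 2 it approaches a finite limit and the initial size hardly matters, which is the turning point at m=2 in a nutshell.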

What surprised me is the fuss around small cracks. I am sure there is an obstacle that I have overlooked. I am thinking of using a cohesive zone model, why not a Dugdale or a Barenblatt model, for which the analytical solutions are just an inverse trigonometric and a hyperbolic function, respectively. What is needed to adapt the model to small crack mechanics is the stress intensity factor and a length parameter, such as the crack tip opening displacement or an estimate of the linear extent of the nonlinear crack tip region.
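The Dugdale model mentioned above is indeed compact enough to sketch in a few lines. Assuming plane stress and steel-like numbers of my own choosing (E, sigma_Y, crack size and load are all illustrative), the strip-yield zone length and the crack tip opening displacement follow from the classic secant and log-secant formulas:

```python
import numpy as np

# Classic Dugdale strip-yield solution for a through crack of half-length
# a under remote tension sigma (plane stress). Material values are
# illustrative, not tied to any particular alloy.
E, sigma_Y = 200e9, 300e6   # Young's modulus and yield stress, Pa
a, sigma = 0.01, 100e6      # half crack length (m) and remote stress (Pa)

arg = np.pi * sigma / (2.0 * sigma_Y)

# Plastic (cohesive) zone length R from cos(arg) = a/(a + R):
R = a * (1.0 / np.cos(arg) - 1.0)

# Crack tip opening displacement (log-secant formula):
delta_t = (8.0 * sigma_Y * a / (np.pi * E)) * np.log(1.0 / np.cos(arg))

# Small-scale-yielding limits for comparison:
K = sigma * np.sqrt(np.pi * a)
R_ssy = (np.pi / 8.0) * (K / sigma_Y) ** 2
delta_ssy = K**2 / (sigma_Y * E)
print(R, R_ssy, delta_t, delta_ssy)
```

The comparison with the small-scale-yielding limits shows exactly the two quantities the small crack adaptation needs: K and a length parameter (R or the crack tip opening).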

I really enjoyed reading this interesting paper and being introduced to extreme value distributions. I also liked that the Weibull distribution was used. The man himself, Waloddi Weibull, was born a few km from my house in Scania, Sweden. Having said that, I will take the opportunity to share a story that I got from one of Waloddi’s students, Bertram Broberg. The story tells that the US army was sceptical and didn’t want to use a theory (Waloddi’s) that couldn’t even predict zero probability that an object should break, not even at vanishing load. A year later they called him and told him that they had received a cannon barrel that was broken already when they pulled it out of its casing, and now they fully embraced his theory.

Per Ståhle

https://imechanica.org/node/23169

Discussion of fracture paper #22 – Open access puts scientists in control of their own results

The last ESIS blog, about how surprisingly few scientists are willing or able to share their experimental data, attracted unexpectedly large interest. Directly after its publication another iMechanica blogger took up the same theme, but he put the focus on results produced by numerical analyses that are presented with insufficient information. While reading, my spontaneous guess was that one obstacle to doing right could be the widespread use of commercial, non-open codes. The least that could then be done is to demonstrate the ability of the code by comparing results with an exact solution of a simplified example. My fellow blogger also had an interesting reflection regarding differences between theoreticians and computational scientists, and it suddenly occurs to me that everything is not black or white. Robert Hooke concealed his results and, by writing an anagram, made sure that he could still take the credit. He didn’t stop at that. When he made his result known he added some ten years to how early he had understood the context. And he got away with it.

To some consolation, the EU 8th Framework Programme, also called Horizon 2020, finances the OpenAIRE project and its successor, the OpenAIREplus project, which is developed and managed by CERN. The intention is to increase general access to research results with EU support. As a part of this, the Zenodo server system was launched. As the observant reader might have noted, Zenodo was used by the authors of the survey discussed in the previous ESIS blog

“Long term availability of raw experimental data in experimental fracture mechanics”, by Patrick Diehl, Ilyass Tabiai, Felix W. Baumann, Daniel Therriault and Martin Levesque, in Engineering Fracture Mechanics, 197 (2018) 21–26, with supplementary materials including all bibtex entries of the papers.

The purpose of Zenodo is to make sure that there will be enough storage capacity for open access data for everyone. Its use is mandatory for all Horizon 2020 financed projects, and primarily intended for all EU financed projects.

I learn from the parallel blog that there are DataVerse, openKIM, the Jupyter project and probably much more in support of open access. It seems to me that DataVerse covers the same functionality as Zenodo. In addition, they offer an open-source server with the possibility to set up and run your own server and become integrated into a larger context, which seems very practical. OpenKIM is a systematic collection of atomistic potentials built by users. Jupyter Notebooks is yet another open-source project, supporting computing in any programming language. They have a written code of conduct. It is not as depressing as it first looks; in essence it summarises your rights and obligations.

It could possibly be better with one single repository, or at least one unified system. But why not let a hundred flowers bloom. In the end the solution could be a search engine that covers all, or a user’s choice, of the open-access repositories.

Per Ståhle

https://imechanica.org/node/23157

Discussion of fracture paper #21 – Only 6% of experimentalists want to disclose raw-data

Experimental data availability is a cornerstone for reproducibility in experimental fracture mechanics. This is how the recently published technical note begins,

“Long term availability of raw experimental data in experimental fracture mechanics”, by Patrick Diehl, Ilyass Tabiai, Felix W. Baumann, Daniel Therriault and Martin Levesque, in Engineering Fracture Mechanics, 197 (2018) 21–26.

It is five pages that really deserve to be read and discussed. A theory may be interesting, but it is of little value until it has been proven by experiments. All the proof of a theory is in the experiment. What is the point if there is no raw data for a quality check?

The authors cite another survey which found that 70% of around 1500 researchers failed to reproduce other scientists’ experiments. Surprisingly, the same study finds that scientists in general are confident that peer reviewed published experiments are reproducible.

A few years back, many research councils around the world demanded open access to all publications emanating from research financed by them. Open access is fine, but it is much more important to allow examination of the data that is used. Publishers could make a difference by providing space for data from their authors. Those who do not want to disclose their data should be asked for an explanation.

The pragmatic result of the survey is that only 6% will provide data, and you have to ask for it. That is a really disappointing result. The remainder: 22% had outdated addresses, 58% did not reply, and 14% replied but were not willing to share their data. The result would probably still be deeply depressing, but possibly a bit better, if I as a researcher only had a single experiment and a few authors to track down. It means more work than an email, but on the other hand I don’t have the 187 publications that Diehl et al. had. Through friends and former co-authors, and some work, I think the chances are good. The authors present some clever ideas of what could be better than simple email addresses, which are temporary for many researchers.

The authors of the technical note do not know what hindered the roughly 60% who did receive the request but did not reply. What could be the reason for not replying to a message in which a colleague asks about your willingness to share the raw experimental data of a published paper? If I present myself to a scientist as a colleague who plans to study his data, rather than his behaviour, then the chances that he answers increase. I certainly hope that, and at least not the reverse, but who knows, life never ceases to surprise. It would be interesting to know what happens. If anyone would like to have a go, I am sure that the authors of the paper are willing to share the list of papers that they used.

Again, could there be any good reason for not sharing your raw-data with your fellow creatures? What is your opinion? Anyone, the authors perhaps. 

Per Ståhle


Comments

Re: Disclosing raw data

Permalink Submitted by Ajit R. Jadhav on Thu, 2018-08-23 23:19.

Thanks for highlighting the issue.

The idea that raw-data should be available seems quite fine by me, at least on the face of it, though let me hasten to add that personally, I mostly work only in theory, and for that reason, this is more or less a complete non-issue for me. Further, as a programmer, the closest thing that comes to sharing data in my case is: sharing the raw output of programs—though I would have strong objections if all parts of algorithms themselves also were to be disclosed to be able to publish a paper.

As to the latter, I was thinking of this hypothetical scenario. Suppose I invent a new algorithm for speeding up certain simulations. I want to sell that algorithm to some company. I want to get the best possible value for my effort (which is not necessarily the same as the most possible money in the immediate present). But the market is highly fragmented, and so, I don’t want to go through the hassle of contacting every potential customer. So, a good avenue for me is to publish a paper about it. Clearly, here, I can share some data but not all. Especially if the raw data itself can be enough for someone else to figure out at least the kind of algorithm I was using. Data can be a window into the algorithm, which I don’t want to open just yet. How does the proposal work out in this case?

The parallel of the programmer’s case to that of the “hard” experimental research is obvious.

Thus, in some cases, I do anticipate that there could be some IPR-related issues related to the design of the experimental apparatus itself, or of algorithms. Disclosing even just the raw-data could be, in some cases, tantamount to disclosing some other data or ideas that in themselves have some commercial value (present or future), implications for the confidentiality clauses with the clients, and/or patents.

Overall, private organizations pursuing cutting-edge research may have good reasons to pursue a policy that has both these components: (i) not disclosing the raw data itself, and yet (ii) publishing some of their findings in a summary form, so as to keep the interested public informed about the more distinct stages that their research has reached. The twin policy results because, qua research, it needs to be published (say to gain or retain credibility); qua private data, it anyway cannot be a property “owned” by “the public.”

Further, in any case, what is meant by raw-data also needs to be discussed by the research community and clarified. No one would want a worthless explosion in the amount of data. … One sure way to hide “real” information is to cover it under tons of worthless data. You can at least buy some time that way! (To wit: media reports about the Right to Information act in India.)

With all that said, in general, however, I do find the idea that “grant providing organizations should ensure that experimental data by public funded projects is available to the public” very appealing. [Emphasis added]. … Poetic justice! 🙂

Best,

–Ajit

https://imechanica.org/node/22590

Discussion of fracture paper #20 – Add stronger singularities to improve numerical accuracy

It is common practice to obtain stress intensity factors in elastic materials by using Williams series expansions truncated at the r^(-1/2) stress term. I ask myself: what if the evaluation of both experimental and numerical data could be improved by including lower-order (stronger singularity) terms? The standard truncation is used in a readworthy paper

“Evaluation of stress intensity factors under multiaxial and compressive conditions using low order displacement or stress field fitting”, R. Andersson, F. Larsson and E. Kabo, in Engineering Fracture Mechanics, 189 (2018) 204–220,

where the authors propose a promising methodology for evaluating stress intensity factors from the asymptotic stress or displacement fields surrounding the crack tip. The focus is on cracks appearing beneath the contact between train wheel and rail, and on the difficulties caused by the compression, which allows only mode II and III fracture. The proposed methodology is surely applicable to a much larger collection of cases of fracture under high hydrostatic pressure, such as commonplace crushing or, on a different length scale, continental transform faults driven by tectonic motion. In the paper they obtain excellent results and I cannot complain about the obtained accuracy. The basis of the analysis is XFEM finite element calculations, the results of which are least-squares fitted to a series of power functions r^(n/2). The series is truncated at n=-1 for stresses and at n=0 for displacements. Lower order terms are excluded.

We know that the complete series converges within an annular region between the largest circle that is entirely within the elastic body and the smallest circle that encircles the non-linear region at the crack tip. In this annular ring the complete series is required for convergence with arbitrary accuracy. Outside the ring the series diverges, and on its boundaries anything can happen. A single-term autonomy is established if the stress terms for n<-1 are insignificant on the outer boundary and those for n>-1 are insignificant on the inner boundary. Then only the square-root singular term connects the outer boundary to the inner boundary and the crack tip region. Closer to the inner boundary the terms with n≤-1 give the most important contributions, while at the outer boundary those with n≥-1 dominate.
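For readers who want the series spelled out, a standard form of the expansion in question (my shorthand, not quoted from the paper) is

```latex
\sigma_{ij}(r,\theta) \;=\; \sum_{n=-\infty}^{\infty} A_n \, r^{n/2} f^{(n)}_{ij}(\theta),
\qquad A_{-1} \, r^{-1/2} f^{(-1)}_{ij}(\theta) = \frac{K}{\sqrt{2\pi r}}\, f^{(-1)}_{ij}(\theta),
```

where the n=-1 term carries the stress intensity factor. The usual Williams expansion keeps only n ≥ -1, for bounded energy at the tip; in the annulus described above, however, all integer n contribute in general.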

I admit that in purely elastic cases the non-linear region at the crack tip is practically a point and all terms with n<-1 become insignificant, but here comes my point: in the evaluation of both experiments and numerics the accuracy is often not very good close to the crack tip, which often forces investigators to exclude data that seem less accurate. This was done in the reviewed paper, where the results from the elements closest to the crack tip were excluded. This may be the right thing to do, but what if the n=-2 term, an r^-1 singularity, is included? After all, the numerical inaccuracies at the crack tip, or the inaccurate measurements and non-linear behaviour in experiments, fade away at larger distances from the crack tip. In the series expansion of the stresses in the elastic surroundings they do appear as finite stress terms for n≤-1.
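As a toy illustration of the suggestion (entirely my own sketch, not the authors' method or data), one can generate synthetic crack-plane stresses containing an artificial near-tip disturbance and compare a least-squares fit using the standard truncation against one that also carries the stronger r^-1 term:

```python
# Hypothetical illustration: fit sampled crack-plane stresses to a
# truncated series sigma(r) ~ sum_n A_n r^(n/2), and see whether adding
# the stronger r^-1 term (n = -2) absorbs a near-tip disturbance.
import numpy as np

K = 30.0   # assumed "true" stress intensity factor (arbitrary units)
T = 50.0   # assumed constant stress term
r = np.linspace(0.05, 2.0, 200)   # sampling radii ahead of the tip

# Synthetic "numerical" data: square-root singular term plus constant
# term, plus an artificial 1/r disturbance mimicking near-tip error.
sigma = K / np.sqrt(2.0 * np.pi * r) + T + 0.5 / r

def fit_K(powers):
    """Least-squares fit of sigma to the basis r**(n/2), n in powers.
    Returns the coefficient of the n = -1 (square-root singular) term,
    rescaled to a stress intensity factor."""
    A = np.column_stack([r ** (n / 2.0) for n in powers])
    coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    return coef[powers.index(-1)] * np.sqrt(2.0 * np.pi)

K_std = fit_K([-1, 0])        # standard truncation: n = -1, 0
K_ext = fit_K([-2, -1, 0])    # extended with the stronger r^-1 term

print(f"K without n=-2 term: {K_std:.3f}")
print(f"K with    n=-2 term: {K_ext:.3f}")
```

In this contrived example the extended basis recovers the prescribed stress intensity factor essentially exactly, since the disturbance happens to lie in its span, while the standard truncation is biased. Real near-tip errors are of course messier, but the exercise shows how an extra singular term can absorb them instead of contaminating K.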

It would be interesting to hear if there are any thoughts regarding this. The authors of the paper, or anyone else who wishes to express an opinion, are encouraged to do so.

Per Ståhle

https://imechanica.org/node/22425

Discussion of fracture paper #19 – Fracture mechanical properties of graphene

Extreme thermal and electrical conductivity, impermeable to almost all gases, stiff as diamond and stronger than anything else. The list of extreme properties seems never-ending. The paper

Growth speed of single edge pre-crack in graphene sheet under tension, Jun Hua et al., Engineering Fracture Mechanics 182 (2017) 337–355

deals with the fracture mechanical properties of graphene. A sheet of armchair graphene can be stretched up to 15 per cent, which is much for a crystalline material but not so much when compared with many polymers. The ultimate strength, on the other hand, becomes huge: almost 100 GPa or more. Under the circumstances it is problematic, to say the least, that the fracture toughness is that of a ceramic, only a few MPa m^(1/2). Obviously cracks must be avoided if the high ultimate strength is to be useful. Scratches only a few microns deep will bring the strength down to a few hundred MPa.
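A back-of-the-envelope check of that last claim (my own numbers, not the paper's, with an assumed toughness of the order reported for graphene, K_Ic ≈ 4 MPa m^(1/2)) uses the familiar relation σ_f = K_Ic/√(πa) for a small flaw of depth a:

```python
# Rough, hedged estimate of cracked-sheet strength from
# sigma_f = K_Ic / sqrt(pi * a), with an assumed toughness.
import math

K_Ic = 4.0e6  # assumed fracture toughness, Pa*sqrt(m)

def strength(a):
    """Failure stress (Pa) for a flaw of depth a (m)."""
    return K_Ic / math.sqrt(math.pi * a)

for a_um in (0.1, 1.0, 10.0):
    s = strength(a_um * 1e-6)
    print(f"flaw {a_um:5.1f} um -> strength {s / 1e9:5.2f} GPa")
```

With these assumed values a micron-scale flaw already cuts the ~100 GPa ideal strength down by well over an order of magnitude, and flaws of tens of microns land in the hundreds-of-MPa range.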

The research group, consisting of Jun Hua, Qinlong Liu, Yan Hou, Xiaxia Wu and Yuhui Zhang from the Dept. of Engineering Mechanics, School of Science, Xi’an University of Architecture and Technology, Xi’an, China, has studied fast crack growth in a single atomic layer graphene sheet with a pre-crack. They are able to use molecular dynamics simulations to study the kinetics of a quasi-static process. They pair the results with continuum mechanical relations to find crack growth rates. A result that provides confidence is that the fracture toughness obtained from the simulations agrees well with what is obtained in experiments. The highlighted results are that the crack growth rate increases with increasing loading rate and decreasing crack length. The tendencies are expected and should be obtainable also by continuum mechanical simulations, which would, however, not be first-principle and would require a fracture criterion.

Another major loss would be the possibility to directly observe the details of the fracture process. According to the simulation results the crack runs nicely between two rows of atoms without branching or much disturbance of the ordered lattice. The fracture process itself would not be too exciting were it not for some occasional minor disorder that is trapped along the crack surfaces. The event does not seem to occur periodically, but around one in ten atoms suffers from what the authors call abnormal failure. Remaining at the crack surface are dislocated atoms with increased bond orders. All dislocated atoms are located at the crack surface, and the distorted regions surrounding solitary dislocated carbon atoms are small.

A motivated question would be whether the dissipated energy is of the same order of magnitude as the energy required to break the bonds that connect the upper and lower half-planes before fracture. Could this be made larger by forcing the crack to grow along something other than a symmetry plane, as it does in the present study? Without knowing much about the technical possibilities, I assume that if two graphene sheets were connected to each other, rotated so that their symmetry planes do not coincide, the crack would be forced to select a less comfortable path in at least one of the sheets.

Everyone with comments or questions is cordially invited to raise their voice.

Per Ståhle

https://imechanica.org/node/21985


© 2024 ESIS Blog
