Reform or Revolution: Architectural Theory in West Berlin and Zurich (1967–72)

Abstract

The article explores the evolution of architectural and urban theory in the wake of the 1960s politicisation of the architecture faculties of TU Berlin and ETH Zürich. Focusing on Oswald Mathias Ungers, Jörn Janssen, and their students, it examines a symposium, an exhibition, and a seminar that shaped divergent perspectives on architectural theory. It considers Ungers' attempts to reform architectural education and the profession itself in relation to West German socio-economic transformations, focusing on Ungers' 1967 symposium and Janssen's contributions to it. It then considers student criticism through a "go-in" organised at that same event and the 1968 student-led exhibition Caution Architectural Theory. It finally examines Janssen's 1970 seminar at ETH, which unravelled the socio-economic roots of a Zurich housing development and demonstrated the need for revolutionary change in housing and planning. These episodes, observed through their material, social and political contexts, display alternative understandings of architectural theory and, consequently, of architecture's role in achieving change.

Despite their radical difference, through their reorientations of architectural theory each offered astute readings of the current social and professional developments and suggested viable changes, even if the respective projects ended up unfolding in different places, as occurred for Ungers, who moved to Cornell in 1968, and Janssen, who went to Zurich in 1971. The ideas of Ungers, Janssen, and their students were each rooted in the specificities of this moment. And yet, if opposing architectural and political positions may be perceived as being equally timely, then the question becomes, rather, in the interest of whom did these theories develop their timeliness? What were the results in the sphere of architectural production of the transformations suggested by each party? Which class benefitted from these proposed transformations, and which did not?

This paper analyses a series of cases that gave rise to reformist and revolutionary approaches to architectural theory at the turn of the 1970s, investigating them in light of their historical and material context. The paper thematises these and other questions, stressing the political orientation developed by each definition of architectural theory: it first shows how Ungers' symposium, as well as Janssen's contribution to this event, allowed the possibility of reforming architecture within the boundaries of the status quo; it then illustrates how political students convincingly rejected this possibility but struggled to formulate a positive and alternative agenda for architectural theory; finally, it shows how an alternative was eventually pioneered in Janssen's seminar at the ETH, where architectural theory became an instrument for the development of a revolutionary critique of capitalism and its spatial dynamics.
West German Housing in the Post-War Decades

Throughout the post-war years, West Germany accumulated growing internal contradictions, despite appearing solid and stable on the surface. Even before World War Two had concluded, the Western Allies recognised the strategic importance of a prosperous capitalist Germany in countering the USSR and curbing the spread of socialism in Western Europe. Consequently, the United States and the United Kingdom chose not to punish Germany overly harshly for its Nazi history. Instead, they focused on fostering economic prosperity and political stability in their respective occupation zones.1 This approach led to significant economic growth in the first two decades of the newly established West German state, guided by the principles of the "social market economy."2 This economic theory advocated for the freedom of the capitalist market while assigning the state the responsibility of ensuring social protection for the weakest social groups. On the one hand, this approach resulted in impressive industrial growth, contributing to the rise of West German capital and the expansion of the bourgeoisie. On the other, it also shifted the social costs of this process onto the working class, leading to considerable levels of social inequality and immobility.3

These dynamics also influenced urban planning and housing, which played a key role in both garnering support for the government and discouraging socialist sentiments among the working class.4 The West German government invested significantly in these sectors, implementing two Housing Acts that facilitated a "housing miracle" involving the construction of around five million new flats by 1960.5 Notably, half of this initiative comprised social housing, a collaboration between the public and private sectors in which the state provided private investors with low-interest loans, tax exemptions, fee relief, and additional subsidies to build houses subject to fixed rents and social restrictions for the first twenty-five years. After this period, controls were lifted, and social housing transitioned into fully private property.6 Importantly, while around 70% of the population was eligible for a social housing dwelling, the rent for these dwellings was notably higher than that for impoverished private market flats.7 This situation favoured a small segment of the better-off working class, granting them high-quality state-funded yet privately owned dwellings, while the majority of workers, without access to social housing, found themselves relegated to deteriorating nineteenth-century private dwellings.

In the isolated enclave of West Berlin, the impact of these conditions was more pronounced. Disconnected from West Germany's industrial network and stripped of its status as a capital city, West Berlin faced a sharp decline in productivity and a rise in unemployment.8 Its unique position, however, made it the "showcase of the West": the symbol of capitalist opulence and freedom behind the Iron Curtain, enticing populations from the Soviet bloc to consider opposition and defection. To sustain this image, both the US and West Germany ensured the city received substantial budget subsidies, one-time investments, tax rebates, and refunds.9 The building sector, and particularly housing, was a major beneficiary of this support.10
For instance, in March 1963, West Berlin became the first West German city to adopt an extensive urban renewal plan for residential purposes, aiming to redevelop 56,000 flats, including renovating 10,000 units and demolishing and reconstructing 46,000. This was just the beginning, with a potential plan for the renovation of 180,000 flats and the demolition of 250,000 over the next twenty-five years.11 Reflecting the prevailing disciplinary theories of the time, which favoured a "dispersed" and "car-oriented" urban model, the initial projects proposed tearing down large portions of the existing nineteenth-century city, segregating zones based on function, placing buildings on large-scale parks and high-traffic avenues, and reducing residential density significantly.12 Once again, the redevelopment targeted new central and modern housing for the bourgeoisie, while the working-class tenants of the depleted and slated-for-demolition dwellings would be evicted without compensation and left without an equally affordable alternative.13

While these housing and planning models maintained strong support until the end of the decade, the early 1960s witnessed the first signs of political and socio-economic challenges to this approach. Intellectuals pointed out the authoritarian, hierarchical, and consumerist aspects of West German society, drawing connections to pre-war fascism and calling for the application of democratic principles not only in political institutions but also in everyday cultural and social life.14 Alongside political scrutiny, post-war urban theories started to face critique in the 1960s. The first among them came from economist Edgar Salin, who in 1960 delivered a speech challenging the prevailing enthusiasm for the "car-oriented" and "dispersed" city model and exploring an alternative concept of "urbanity."15 Inspired by the values of classical Athens, Salin praised the "humanistic urbanity" of eighteenth and nineteenth-century Germany and questioned the then-current functional model of urban centre redevelopment. He advocated instead a return to iconic European cities of commerce, culture, and politics.16 Subsequently, a growing number of critics from West Germany and beyond began challenging the established principles of post-war planning.17 Notably, Alexander Mitscherlich's 1965 book, The Inhospitable Nature of our Cities, made a significant impact on architects and planners.18 It vehemently criticised the post-war city, examining it from an individual and psychological standpoint. Mitscherlich argued that West Germany's standardised and rationalised housing approach, evident in both social housing and urban sprawl, created an environment lacking the social and cultural qualities of pre-modern cities.
Architects and planners played a seminal role in this process, accused as they were of concealing the increasing urban impoverishment and the consequent psychological risks for the city's inhabitants. Critics like Mitscherlich approached the critique of West German housing and planning without accounting for any economic foundations. In this respect, they represent the viewpoint of a discontented bourgeoisie no longer satisfied with existing urban transformations. While the "dispersed" and "car-oriented" model may have suited the post-war economic recovery and the years of austerity, it no longer met the needs of larger and wealthier bourgeois segments, which saw monotonous urban redevelopment as a constraint on their cultural and political ambitions. This critique reshaped the architectural discourse, with the new focus on "urbanity" gaining support after the late 1960s.19 This perspective did not challenge the logic of clearing older neighbourhoods, constructing satellite towns in the periphery, or building urban motorways in the centres; even less did it address the social inequalities that this system produced. Instead, it aimed to strike a compromise between different sections of the capitalist class: on one side, the building industry and financial capital, which argued for the large-scale and standardised approach to urban renewal; on the other, small and medium property owners, who had little to gain from such developments and lobbied for small-scale refurbishment.

Oswald Mathias Ungers and the Anti-Authoritarian Student Movement

In this quest for a compromise, Oswald Mathias Ungers was a key figure who at the turn of the 1960s was exploring a hybrid aesthetic, blending his fascination with the alternative trends of functionalism and expressionism.20 This approach quickly propelled him to prominence in the architectural world, and by October 1963 he became a professor at the Architecture Faculty of the TU Berlin.21 Upon arriving in West Berlin, Ungers initially focused on the architectural principles of "composition," evident in both his introductory lectures and early design studios in 1964 and 1965.22 However, West Berlin's urban dimension soon captivated him, despite considering himself an "amateur" who "did not know his way around planning."23 In both his teaching and design practice Ungers increasingly focused on the urban transformations faced by the post-war city.24 Criticising the shortcomings of urban planning's scientific methods, Ungers asserted that architecture remained the most adept discipline for interpreting the city's crucial significance, both in its aesthetic value and as an economic asset. The architectural profession was, in his conception, the only profession capable of synthesising an artistic and scientific approach that would reconcile various economic, technical, social and environmental demands.

Ungers' attempt to balance architectural, cultural, and socio-economic tensions established him as an original professor, capable of engaging with pressing urban issues without forsaking the traditional leadership role of the architectural profession. This approach caught the attention of the architecture students of TU Berlin, who increasingly followed him during a period marked by growing participation and politicisation.25
As Hartman Frank later put it, Ungers' earlier engagement with the simultaneously technical, social and political dimension of architecture "ignited [us] more radically than anywhere else, as he researched the possibilities of architecture in the most extreme and coherent form."26 In Ungers' seminars, according to Ingrid Krau, "at last, aspects of the increased uncertainty with which everybody was dealing individually, were finally debatable in collective meetings. This mobilised us all."27

This mobilisation occurred in an increasingly unstable context marked by two significant events in 1966 and 1967 that disrupted the socio-economic, political, and architectural balance achieved in post-war West Germany.28 In these years the first post-war economic crisis hit, accelerating industrial rationalisation and capital centralisation. This prompted the abandonment of the "social market economy" model and the embrace of Keynesian policies. Based on vast public spending measures, these policies aimed at increasing demand through full employment, higher wages, monetary stability, and trade balance. This new economic orientation led to a further expansion of the construction sector, in the form of even more generous subsidies for the private building market.29 Simultaneously, on June 2, 1967, during protests against an official visit of the Shah of Iran to West Berlin, the police shot and killed student Benno Ohnesorg.30 This event fuelled the momentum of the Socialist Student Union (SDS) in the emerging opposition movement, attracting thousands of new members and standing out as the only organised political group committed to a leftist agenda.31 SDS students formed the backbone of the "anti-authoritarian movement," which gained popularity by focusing on a progressive set of educational, cultural, and political claims mostly enticing liberal bourgeois students. Their demands included the democratisation of the university and its politics, the liberalisation of moral and cultural values, the nationalisation of monopolistic media institutions, and support for national-liberation struggles.

Profiting from personal and political contacts with students from the nearby Freie Universität, architecture students at the TU Berlin were the first of their discipline to become organised in this setting.32 Already in late 1967 they published a brief text outlining their research aims, which included the analysis of architects' socio-economic position and their historical "alliance with the ruling elite," the investigation of the West Berlin building industry and its relationship with the political environment, and analysis of the built environment of the newly constructed West Berlin housing estates.33 While demanding changes in the professional sector, architecture students also advocated for reforms in their academic curriculum, which was rooted in an outdated image of the architectural profession. In this endeavour, Ungers strongly supported the students, proposing a new course structure that introduced career specialisation through technical courses, which also included interdisciplinary, collaborative, and intermediate examinations.34 However, Ungers' support for the students was, at least initially, not limited to academic issues: on June 2, 1967, Ungers was the only architecture professor to stop his studio and join protests against the Shah, and he was also one of the few who publicly criticised the police for having "sought confrontation [ … ] through deliberate provocation" against students.35
During the first months of the academic year of 1967–68, the mutual appreciation between Ungers and the students reached its peak. However, this relationship turned out to be short-lived: the students were dramatically expanding their demands beyond the modernisation of education, increasingly questioning the role of architecture within the political system and the legitimacy of the system itself; and yet Ungers showed no interest in radicalising his reformist stance and remained primarily focused on professional and institutional modernisation. After some semesters, the potential coalition between the liberal professor and his radical students waned.

The 1967 Berlin Architectural Theory Symposium

A notable moment marking a new phase of student politicisation was the Architectural Theory symposium organised by Ungers from December 11 to 15, 1967.36 As Ungers stated in his welcoming remarks, "after a period of extensive construction activity, and on the cusp of development on an even larger scale, it was a good time to investigate architecture's theoretical foundations." In particular, he emphasised the urgency of recognising "which phenomena should serve as the basis for a theoretical framework, or what kind of findings we might expect." The primary question revolved around "whether social phenomena, technical conditions, historical experiences, or immanent formal laws should primarily be recognised as the planes of reference."37

Reflecting similar efforts in other faculties in Europe and North America, Ungers' interest in establishing architectural theory within the TU Berlin academic curriculum illustrates his attempt to reform the discipline in an affirmative and operative sense.38 A new architectural theory would not only be instrumental in conserving the position of architecture at the forefront of the building industry, but also in absorbing the contradictions between monopolist and small-scale capital. From this point of view, Ungers' concept of theory hinged on the possibility of architecture's reform and, more importantly, the reform of the capitalist system. He nonetheless believed in the possibility of resolving architecture's more recent challenges without questioning the socio-economic foundations on which the system was based.

The four areas he identified as potential frames of reference for theory demonstrate this reformist orientation, with a "progressive" emphasis on social and technical conditions countered by a "conservative" focus on formal and historical aspects. The roster of speakers invited by Ungers maintained this balance professionally, geographically, and, most importantly, theoretically: Colin Rowe spoke in favour of the formal dimension of architecture; Reyner Banham emphasised the role of technology; Ulrich Conrads highlighted the prominence of society; and Sigfried Giedion focused on history.39 Furthermore, the event included key figures who were also engaged in the discussion of architectural theory in the German-speaking academic environment, such as Jürgen Joedicke in Stuttgart and Lucius Burckhardt at ETH Zürich.40
Despite this rich and diverse context, Jörn Janssen, a West German architect not widely known in the international architectural scene, struck a markedly different tone from the more practice-oriented perspectives offered by others. Distancing himself from the main approach of the symposium, Janssen observed how pure theory, as well as its opposition to practice, did not actually exist, since theory and practice represented two complementary elements of the same process. He argued that the false dichotomy of theory and practice concealed the real opposition between mental and physical activity, historically reflected in the conflict between rulers and those they rule. To move beyond this false dichotomy, he proposed shifting attention from architecture to "construction planning" (Bauplanung), a discipline aimed at overcoming architecture's obsolescence by applying more modern planning techniques to the building industry. Construction planning, as "the programming science of specific building processes," was not equipped to deal with the traditional focus of architecture on "cathedrals, palaces, [ … ] theatres and prisons," since "problems which do not have a social relevance, allow no modern solution."41 Construction planning was rather tied to such key sectors of the economy as "industrial production, transportation, communication, energy provision, regional development, land and water management," in which the rationalisation of the building industry could no longer be delayed.42

Janssen argued that post-war social and technical developments made planning indispensable, even in the building industry, where a complacent intellectual class avoided the modernisation of architecture in order to align with the interests of the most regressive segments of capitalism. Architects had either hidden the sector's obsolescence behind formal and cultural concepts or confined planning to partial or marginal processes. He expanded this point in opposition to the words of the Bauhaus founder Walter Gropius, who in the middle of the twentieth century could still maintain that:

Good architecture [ … ] implies an intimate knowledge of biological, social, technical and artistic problems. But then-even that is not enough. To make a unity out of all these different branches of human activity, a strong character is required. [ … ] Our century has produced the expert type in millions; let us make way now for the men of vision.43

In contrast to Gropius, Janssen believed architecture should be entrusted to the "minds of the millions of experts" collaborating peer-to-peer and utilising scientific and mathematical methods of construction planning. Architects would in this way be unambiguously deprived of their hypothetical capacity to single-handedly design the built environment. Nevertheless, he assigned architects the "special role" of "selecting experts" and "controlling and coordinating their work" so that, even as technicians among technicians, architects would somehow find themselves at the top of the working pyramid even in construction planning.44 Peter Lammert reported that Janssen's contribution to the symposium was the most well received among the political students.45
It had the merit of being the only paper framed by a clear Marxist perspective, which saw the architectural profession as a dependent part of the production sector, to be analysed primarily in light of its technical developments and social relationships. At the same time, abandoning a more classically Marxist point of view, Janssen seemed to accept that the progressive interests of capitalist rationalisation would allow the building industry to emancipate itself, at least partially, from its most backward features without immediately imposing new oppressive productive conditions. In this sense, he envisioned the possibility of construction planning reforming the system from within, that is, without requiring any preconditional transformation of the political or socio-economic order.

While Ungers may have endorsed some aspects of Janssen's vision, especially to the extent that it worked within the status quo, the students increasingly questioned this approach. For instance, as Frank recalls, they felt that the symposium "entirely missed the public's interest. Many students in the audience had just discovered the social dimension of architecture and considered theoretical or historical questions as superficial or, at least, marginal."46 The relationship between Ungers and his students was then on the brink of a radical change. His architectural and political position had remained largely unchanged since his arrival in Berlin. After 1967, this position was recognised by politically engaged students as insufficient, to the extent that it was confined to marginal or superficial details, and incapable of working towards a real transformation of the status quo. As a result of this widening split, Ungers decided to move to Ithaca in the winter of 1968. His teaching there would be spared the kind of unwelcome political criticism that would soon increase dramatically at the TU Berlin.47

Anti-Authoritarian Architectural Theory

The students' new orientation became evident on the last day of the symposium, which saw the first protest initiated by West Berlin architecture students. During the final collective discussion, a group of students organised a "go-in," entering the theatre hall, distributing SDS leaflets, and unveiling a banner with the message: "All Houses Are Beautiful. Stop Building."48 This succinct statement encapsulated the widening gap between the students' idea of architectural theory and that of Ungers and the other panellists. The promise of a new, abstract theory, addressing architecture's social, technological, historical, and formal dimensions, was for the students simply an excuse to avoid questioning the role of architecture in the real world, and hence its subordination to the ruling class. Against the perpetuation of existing dynamics, the anti-authoritarian architecture students seemed to believe that the only solution was to stop all building activity.
The uncompromising "Stop Building" slogan expressed the students' demands for a transformation of architecture's broader professional, political and social contexts.The call to stop building might have been motivated by the students' understanding of architectural value-the "beauty" of "houses"-not in the aesthetic sense, but rather in terms of the personal and cultural relationships embedded in dwellings.Simultaneously, the post-war surge of modernist buildings did not result in any real improvement of West German cities, as in the students' view the unprecedented scale of social housing construction had achieved nothing but the eviction of working-class tenants and the frustration of the bourgeoisie's cultural and environmental ambitions.For the first time, students were dismissing the possibility of reforming the architectural discipline, as they considered the source of its problems to lie beyond its practical and theoretical reach. However, "Stop Building" was also ambiguous.While dissociating from the reform of architectural theory and practice, it did not indicate a long-term strategy to address this dissociation.On one side, affirming that the issue was building neither better nor more houses, architecture was finally denied its steering role in solving urban problems, and was thus invited to stop concerning itself with them.On the other side, it was unclear whether building activity should be temporarily or permanently stopped, what would substitute it, how and why.The slogan also left the origin of the problem unaddressed.Was the issue professional, with architecture having to be replaced by new disciplines better suited to respond to the changed requirements of the building industry?Was it political, with the West Berlin geopolitical objectives and local interests preventing architecture from serving the interests of its citizens?Or was it socio-economic, given the capitalist system that created both the material and cultural preconditions for an oppressive urban environment?"All Houses Are Beautiful.Stop Building" encapsulated the uncertainties of the anti-authoritarian phase of the student movement, caught between outright rejection of the existing system and indecision about what should replace it. It is possible that at least some of the anonymous initiators of the go-in were also involved in the organisation of Caution Architectural Theory, a section of the Diagnosis exhibition organised by students in September 1968 at the TU Berlin Architecture Faculty. 49In contrast to the go-in at Ungers' symposium, this exhibit stripped architectural theory of any aura and defined it as a readily deployable ideology capable of legitimising whatever building the ruling class desired.Professional architects were portrayed as those who cunningly and disingenuously employ any theoretical principle for an immediate economic and cultural return.The authors of Caution Architectural Theory argued that, in this ideological context, every architectural definition becomes an "alibi aesthetics," an arbitrary theoretical construction meant to justify any design, regardless of its concrete social consequences. 
Without a genuine engagement with the status quo, every theory was deemed equivalent, and there was no sense in discriminating amongst them or supporting the one that sounded more radical or liberal. Instead of being used to change reality, theory became an "alibi" to conserve it, hiding its most undesirable and unjust characteristics. The consequence of this alibi aesthetics was "theory hostility," as architects were "not prevent[ed] from 'thinking,' judging, teaching, and drawing manifestos" but "released from the obligation to check their own theoretical assumptions."51

The statements made in Caution Architectural Theory represent a complete overturning of the approach to theory taken by Ungers' symposium less than a year earlier. What Ungers presented as a pluralist theory of approaches striving for the reform of architecture was condemned in the exhibition as a single ideology aimed at justifying the current professional, political, and social status quo. Students proudly reported Giedion's comments on the go-in, where he maintained that he "was amazed by the disorientation of the listening students. It will take years to put them back on the right track."52 From the students' perspective, however, what Giedion labelled "disorientation" was their opposition, and what he considered "the right track" was nothing more than an old track they were both ready and happy to leave.

Despite the exhibition's more in-depth analysis of the ideological role played by architecture, Caution Architectural Theory possessed ambiguities comparable to the "Stop Building" banner. Both critiques considered the reform of architecture not only to be inadequate, but potentially harmful, since it could merely mask the genuine roots of the issue. However, neither of the two actions clearly demonstrated how students or architects could actively engage to bring about change. Jörn Janssen, following a brief stint at the TU Berlin, where he was hired and fired in 1969, offered a potential response to this question in the following year, when he assumed a teaching position at ETH Zürich and held an architectural theory seminar entitled "Economic Criteria for Planning Decisions."53

Jörn Janssen and the Socialist Phase of the Student Movement

In the early 1970s, the West German student movement underwent a significant transformation. Despite the widespread influence of the SDS among the student population, the movement failed to recruit members from different social groups and achieve major national political victories.54 Against this backdrop, and influenced by the resurgence of strike campaigns in West German industrial cities in September 1969, larger and larger factions of the movement began advocating for more class-based and party-centred politics, abandoning the anti-authoritarian ideology in favour of a socialist organisation.55 A central theoretical reference in this process was the 1969 essay Fetish Revolution, in which the German philosopher Hans G. Helms showed how the anarchist trends within the leadership of the SDS had corrupted the Marxist concept of revolution, turning it into an act of rebellion against modernity and technology.56
From this perspective, the anti-authoritarian movement's politics reduced social revolution to a series of alternative practices and lifestyles, neglecting the Marxist emphasis on production and replacing it with a focus on critical individual consumption. Helms viewed the student "revolution" as a "fetish," an event that seemed, in theory, capable of abolishing existing power relationships but was, in reality, unable to alter the material base of society.

Building on these ideas, between 1969 and 1970 Hans Helms and Jörn Janssen co-edited the book Capitalist Urban Planning, which expanded on the Fetish Revolution's argument from an architectural and urban perspective.57 In his introduction, Helms characterised the city in historical materialist terms, seeing it as a "means of exploitation" and a "product of social division of labour, class dominion, and class struggle."58 He scrutinised the proliferation of cars and motorways, arguing that these developments served capitalists' interests in financing key industrial branches and promoting small property among the working class. Accordingly, he observed:

It would be superfluous and ridiculous to expect from urban planners a transformation of the urban order which was stimulated by the automobile industry. The necessary changes cannot be achieved through urban measures, but only through the political overturn of the conditions of production and transportation.59

According to Helms, only a political upheaval of the conditions of production, a social revolution, could truly transform the nature of urban space. He therefore believed that architects and other technicians could, at best, add a "thin veneer of natural and restorative demands" to the preservation of the existing order.60

In his own contribution to this volume, Jörn Janssen conducted a century-long historical examination of the relationship between German capitalist development and housing policies.61 From this analysis, he concluded that the struggle for better housing would only be meaningful if seen in the context of the broader battle to abolish the entire system of capitalist production and exploitation. A century of capitalist and social-democratic housing policies had, in fact, diluted the immediate capacity of housing campaigns to activate the working-class struggle in a revolutionary perspective, instead aligning them with the preservation of the status quo. To counter this deadlock, Janssen suggested illustrating how all these housing achievements were temporary and illusory by, first, drawing the connection between housing and other sections of social life in which class oppression was more apparent and, secondly, demonstrating how even the most celebrated social-democratic policies actually went against the interests of the vast majority of the working class.

Overturning the approach developed by Ungers and, at least in part, by his own presentation to the TU Berlin symposium, at the turn of the 1970s Janssen dismissed any hope for an operative theory capable of improving architectural efficiency within the status quo. Architectural theory had to locate itself clearly outside the context of the current profession to produce a critique illuminating the non-architectural origin of the specific problems discussed.62
While the authors of the "Stop Building" banner and Caution Architectural Theory had already arrived at a similar approach, their reading of architectural ideology was equivocally suspended between an emphasis on the political, professional and social aspects of architectural thought and practice. In turn, this resulted in an incapacity to find a way out of the deadlock, as the students remained confined to the realm of theory themselves.

In contrast, and returning to a more classical Marxist definition of historical materialism, Janssen unambiguously located architecture in the context of the socio-economic dynamics of capitalism, which, if analysed scientifically, could not help but unveil the opposition between capital and labour that lies at its core.63 By illuminating this foundational contradiction, architectural theory could produce a revolutionary critique that displayed how the entire capitalist system, and not just some of its technical or social aspects, needed complete transformation. This allowed Janssen to provide his students with a clear pedagogical agenda: here, architectural theory was connected with a precise area of investigation (the relation between architecture and capitalist development), a unique time of action (the present, with all its load of inherited material and ideological contradictions), and, above all, an uncompromising direction of transformation: revolutionary change.

When Janssen arrived in Zurich to apply this theory, the educational and professional situation in Switzerland bore more than just a passing similarity to the context of West Berlin.64 Political students were increasingly determined to influence academic policies, while the university leadership was trying to placate them by hiring a few radical figures, among whom figured Janssen himself. Outside academia, the building industry also found itself in an expansive phase, allowing for experimentation with new scales and types of urban intervention. In this context, Janssen organised a four-semester seminar on "Economic Criteria for Planning Decisions." Janssen and his students chose to investigate a new private residential development on the outskirts of Zurich, owned and constructed by the largest Swiss building company, Göhner AG.65 The estate, named Sunnebüel, was situated in the peripheral area of Volketswil and provided a case study deeply intertwined with the formation of monopoly capitalism in the building industry. This shift in focus, from the denunciation of political and professional injustices to the scientific analysis of capitalist companies, marked an evident departure from the anti-authoritarian phase.

In researching Sunnebüel, Janssen applied his method of "Learning in Conflict," in which:

The idea of unbiased science, unpolitical curriculum, neutral information, and objective facticity has finally been demolished. Everyone could experience how different reality presents itself, depending on the point of view from which it is observed. Everyone could see that every insight [ … ] includes partisanship, and that learning is therefore itself a political act, which one can undertake either blindly and servilely or conscientiously. [ … ] Conscious learning is, under this premise, inherently critical learning. This necessitates a thorough questioning of existing notions and concepts within their historical context. Consequently, conflict becomes the essence of the learning process, and only through conflict is the learning journey truly fulfilled. Anything else is merely a form of training.66
Seen from this perspective, Janssen's method could be deemed highly effective, as it not only provoked conflict in the students' approach to urban issues but also within the broader cultural and political context in which they studied.67 The analysis of the socio-economic foundations of contemporary architecture through the lens of Marxist literature, conducted by his students, proved too provocative for ETH. In June 1971, succumbing to sustained pressure from the bourgeois local and national press over the course of a year, ETH decided to terminate Janssen and his team of assistants in the midst of their research. Six months later, Janssen's position would be assigned to Aldo Rossi, who, given the context, was intended as a compromise: someone who would continue developing a Marxist approach to architectural theory, but in a way that would not upset the Swiss bourgeoisie and its capital investments.68 However, the dismissal did not mark the end of Janssen's and his students' work on the estate. Despite this setback, they continued their research, culminating in the publication of the collective book Göhnerswil: Housing Construction in Capitalism, following a second year of investigation.69 This book represented a practical application of the political and educational methods discussed in Janssen's essay in Capitalist Urban Planning and serves as a valuable standpoint for evaluating urban planning during the socialist phase of the student movement.

Akin to the work of the TU Berlin students in Diagnosis, Janssen and his students scrutinised the non-democratic and opaque processes employed by major construction companies to secure land, permissions, and concessions for their building projects. Their focus, however, shifted towards the economic nature of Göhner AG, a corporation which effectively leveraged profits from its construction endeavours to internalise numerous associated trades and diversify investments across various branches of the building industry.70 Conducting a comprehensive analysis of the costs and profits tied to the company's residential properties within the framework of Marxist political economy, the students illustrated how the increasingly socialised mode of production within Göhner AG could have facilitated the effective construction of affordable housing for lower-income classes. Nevertheless, due to the implicit thrust of the capitalist system, companies like Göhner AG could not help but prioritise their own interests and aspire to the highest profit. In this case, this involved over-saturating the market with opulent houses tailored to the needs of the bourgeoisie. Janssen and his students harboured no optimism for a potential reversal of this situation, as they demonstrated that the concentration of capital and the rise of monopolies across all sectors of production were neither recent nor exclusive to the construction industry.71
Instead, they underscored how private companies like Göhner AG could, in theory, feasibly construct affordable housing thanks to their advanced, standardised, and bureaucratically organised production structures, while their capitalist nature compelled them to prioritise profit expansion, thereby intensifying the internal contradictions within the system. Extending Janssen's analysis in Capitalist Urban Planning, a crucial distinction emerged between the notion of a "housing shortage" (Wohnungsnot), a pressing social concern impacting workers who struggle to afford adequate housing, and the "housing problem" (Wohnungsproblem), a market bottleneck wherein the bourgeoisie faced challenges in acquiring or renting residences commensurate with their heightened purchasing power.72 While Göhner concentrated solely on addressing the "problem," the true urgency, that is, the "shortage" of housing for the working class, remained unattended. This perspective revealed additional contradictions within the productive system, as Göhner AG relied on high-income earners to afford the elevated prices of its houses, but simultaneously paid its own workers miserable wages, relegating them to the confines of suburban slums (fig. 1). On one hand, the working class, who were responsible for constructing the houses through their labour, found themselves excluded from dwelling in them. On the other, "the propertied class was expending an increasing share of social wealth on their unproductive pursuits."73

This case made clear that the problem was neither the lack of theoretical foundations in the building process, as suggested by Ungers' symposium, nor the building process itself, as claimed by the anti-authoritarian students. Architecture was neither the problem nor the solution. The issue was entirely political-economic, and thus could only be addressed from this point of view. The students concluded their essay with a famous quote from Engels, asserting that "[i]n order to make an end of this housing shortage there is only one means: to abolish altogether the exploitation and oppression of the working class by the ruling class."74

While the West Berlin anti-authoritarian students limited themselves to denouncing architectural theory as a reformist instrument that preserves the status quo, Janssen and his Zurich students took a different approach. They argued that architecture's shortcomings were not a by-product of political corruption or professional obsolescence, but rather the structural outcome of capitalist production and exploitation. Architectural theory, in their view, played a crucial role in investigating and critiquing this economic system, and as such it need not be stopped but rather coherently developed, as exemplified in the unconventional study of "economic criteria for planning decisions." If engaging with architectural theory was a necessary step to understand reality, however, it was not sufficient to transform it. Real change, they argued, would only come through a working-class revolution, in which architecture was, of course, not the leading figure.

Conclusion

By clarifying the aim and the strategy of revolutionary change, and defining within this framework a small but coherent space for architectural theory, Janssen and his Zurich students developed one of the clearest and most radical contributions to the relationship between architecture and politics, only to be abruptly interrupted by Janssen's second politically motivated layoff, all in the space of three years.
The dismissal of political opponents was not an extraordinary circumstance in the German-speaking world of the period. Throughout the post-war decades, the West German state had unambiguously opted to forsake the democratic values it so convincingly upheld whenever it feared serious political opposition: in 1956 it banned the Communist Party of Germany; in 1968 it passed the Emergency Laws, introducing draconian restrictions on fundamental constitutional rights in cases of natural or political crisis; and in 1972 the social-democratic Chancellor Willy Brandt approved the Anti-Radical Decree, a repressive law excluding any citizen considered radical from public employment, first and foremost teaching.75 On one hand, architecture students' dissatisfaction was progressively channelled into more and more reformist experiences, which celebrated less comprehensive but equally unfair models of urban renewal, as, for instance, in the 1977 "Sanierung für Kreuzberg."76 On the other, pockets of anarchist resistance broke out in West German cities at the turn of the 1980s, managing to squat up to 165 buildings in West Berlin, but failing to produce any meaningful attempt to transform the socio-economic conditions of housing and planning for the majority of the population.77

Although largely overlooked in architectural historiography, the ideas, actions and publications discussed in this article constitute a coherent critique of the relationship between architecture and politics from different perspectives. Among them, Janssen's analysis of Göhnerswil, as the climax of a broader political experience stretching between West Berlin and Zürich at the turn of the 1970s, offers a powerful cautionary tale about the possibility of solving social and material problems by reforming architecture, or, indeed, architectural theory. His teaching, however, makes a strong case for observing these issues in their wider socio-economic setting, and for addressing them only as an opportunity to achieve the total transformation of reality. However ambitious this programme may seem, it is, perhaps, the most critical insight that contemporary architectural conversations on theory can draw from this key episode of Marxist thinking and action.

Notes on contributor

Alessandro Toti is a historian of architecture trained in Rome and London. He holds a PhD from the Bartlett, UCL, in History and Theory of Architecture and Urbanism, with a research focus on West Berlin Marxist architecture groups at the turn of the 1970s. He has taught history of architecture, architectural design, and urban design at various universities, including UCL, Westminster, Greenwich, Syracuse and the Rome Programs of Cornell and Virginia Tech.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Figure 1. The first three pages of the book "Göhnerswil" effectively establish its tone by contrasting the housing arrangements of the three distinct classes involved in the estate's construction. As the original captions report: "The building contractor [lives in the] Villa of Ernst Göhner at Risch am Zugersee; his tenants in the Göhner estate 'Sunnebüel' in Volketswil, near Zurich; his workers in immigrant shacks of the Göhner-owned Igeco AG in Volketswil." Source: Autorenkollektiv an der Architekturabteilung der ETH Zürich, "Göhnerswil": Wohnungsbau im Kapitalismus (Zurich: Verlagsgenossenschaft, 1972), 1, 3, 5.
Compensating for Missing Data from Longitudinal Studies Using WinBUGS

Missing data is a common problem in survey-based research. There are many packages that compensate for missing data but few can easily compensate for missing longitudinal data. WinBUGS compensates for missing data using multiple imputation, and is able to incorporate longitudinal structure using random effects. We demonstrate the superiority of longitudinal imputation over cross-sectional imputation using WinBUGS. We use example data from the Australian Longitudinal Study on Women's Health. We give a SAS macro that uses WinBUGS to analyze longitudinal models with missing covariate data, and demonstrate its use in a longitudinal study of terminal cancer patients and their carers.

Introduction

Missing data is a common problem in survey-based research. Ignoring any missing data by using a complete case analysis can produce biased results. Biases occur when participants with complete data are systematically different from those with missing data. Longitudinal studies are especially susceptible to such bias, as missing data accumulates over time due to wave non-response and participant drop-out. One method of compensating for missing data is imputation. Over the past twenty years the body of literature on imputation theory and methodology has grown considerably and software has evolved accordingly. However, there has been relatively little work on imputation in a longitudinal setting.

There are several theoretical approaches to imputation. Raghunathan (2004) reviews such approaches and identifies three classes: weighted estimating equations, multiple imputation, and likelihood-based formulations. Ibrahim et al. (2005) identify fully Bayesian models as a fourth class. Weighted estimating equations (WEE) weight records with complete data to compensate for similar cases with missing data. Most recently, the literature has focussed on improving estimates of variance (Robins et al. 1994, 1995), as WEE, when unadjusted, underestimate the true variance in the data. Implementation of WEE currently relies on model-specific, user-defined algorithms, rather than standard procedures in mainstream statistical packages. Multiple imputation (MI) uses Bayesian simulation to fill in missing data, drawing together results from repeatedly imputed datasets. See Rubin (1987) for a comprehensive coverage of multiple imputation. Fully Bayesian (FB) models extend MI methodology by jointly simulating the distributions of variables with missing data as well as unknown parameters in a regression equation. In FB the analysis and imputation models are fully and simultaneously specified. Maximum likelihood (ML) techniques also rely on fully specified models, but differ from FB in that parameter estimates are constructed using likelihood-based approximations, rather than Bayesian simulation. Maximum likelihood approaches to imputation are often intractable in mainstream software packages, and implementation relies upon strict assumptions about patterns of missingness that are frequently violated in complex survey data. While MI procedures exist in a range of software packages such as SAS (SAS Institute Inc. 2003), Stata (StataCorp. 2003), S-PLUS (Insightful Corp. 2003), and R (R Development Core Team 2007), they generally rely on the assumption that data are multivariate normal or can be approximated by a multivariate normal distribution (Schafer 1997).
More recent work on chained regression equations has led to a number of add-on packages that can incorporate categorical data: MICE in S-PLUS (van Buuren and Oudshoorn 1999), Ice in Stata (Royston 2005), and IVEware for SAS (Raghunathan et al. 2002). However, the authors have still had difficulty in incorporating longitudinal information into the imputation methodology of these programs. FB techniques are most suited to longitudinal imputation, as they can incorporate hierarchical structure into the modelling process, and, like chained regressions, they have the capability to systematically deal with categorical data. The software packages WinBUGS (Spiegelhalter et al. 2003) and MLwiN (Rasbash et al. 2005) both use a FB framework. Cowles (2004) and Woodworth (2004) both provide a useful overview of WinBUGS, while Carpenter and Kenward (2005) and Congdon (2001) present introductory examples of FB imputation with missing data. Pettitt et al. (2006) and Qiu et al. (2002) present thorough analyses in the context of missing categorical data.

The aim of this paper is to demonstrate WinBUGS's capacity to compensate for missing longitudinal data, with a particular focus on missing covariate data. We do this by looking at a longitudinal analysis of diabetes incidence in Australian women. In Section 2 we introduce the motivating example from the Australian Longitudinal Study on Women's Health. In Section 3 we specify a fully Bayesian model for the incidence of diabetes without and with missing covariate data. In Section 4 we describe its implementation in WinBUGS, and present the results in Section 5. In Section 6 we give a general SAS macro (that calls WinBUGS) for analysing longitudinal models with missing covariate data. We conclude with a discussion and some recommendations in Section 7.

Motivating example

Women who are overweight have an increased risk of developing diabetes. However, the relative impact of longer-term adiposity and short-term weight changes on the incidence of diabetes is of scientific interest (Mishra et al. 2007). The Australian Longitudinal Study on Women's Health (ALSWH) is designed to answer such questions as it tracks over time the health and well-being of a representative sample of Australian women (Lee, Dobson, Brown, Bryson, Byles, Warner-Smith, and Young 2005). The ALSWH study collects self-reported data from mail-out surveys every two to three years. For this analysis we used data from the mid-aged cohort of women who were aged 45 to 50 at the time of the initial survey in 1996 (S1). Subsequent surveys occurred in 1998 (S2), 2001 (S3), and 2004 (S4). At S1 13,716 women agreed to take part in the longitudinal study and by S4 10,905 women remained. Key variables for the analysis of diabetes incidence and weight are outlined below.

At S1 women were asked if they had ever been diagnosed with diabetes. At S2, S3 and S4 women were asked if they had been diagnosed with diabetes since the previous survey. Using these data, women were classified into one of the following groups: existing case at S1, incident case between S1 and S2, incident case between S2 and S3, incident case between S3 and S4, free from diabetes, or unknown. Women were asked to report their height and weight at each survey. Self-reported heights from the first three surveys were used to obtain a single estimated value for each woman by averaging the available data.
Body mass index (BMI) for each woman at S1 was calculated as self-reported weight (kilograms) at S1 divided by the square of estimated height (metres). BMI was categorized (according to the World Health Organization (2000)) as: 'underweight', < 18.5 kg/m²; 'healthy weight', [18.5, 25) kg/m²; 'overweight', [25, 30) kg/m²; 'obese', [30, 35) kg/m²; or 'very obese', ≥ 35 kg/m². Fewer than 2% of women were classified as 'underweight' at S1, so this category was combined with the 'healthy weight' group. At S1 women were asked what they would like to weigh. Responses were categorized into: happy/like to weigh more, like to weigh 0 to 5 kg less, like to weigh 5 to 10 kg less, like to weigh more than 10 kg less.

Model specification

Examining the association between health and weight is often difficult in survey data because weight is a sensitive question and is sometimes not reported. For example, in the ALSWH at S1, 545 women (4.0%) did not report their weight, whereas for the other variables used in this paper the average percentage missing was 1.3%. If women who are overweight are less likely to report their weight, then a complete case analysis could well underestimate the true association between weight and diabetes incidence.

Figure 1: Model of the association between diabetes incidence and long-term BMI and short-term changes in weight.

Figure 1 gives a graphical summary of our model. The model is split into the imputation and diabetes components. In the diabetes component we examined the association between BMI at S1 and annual percentage weight change on the incidence of diabetes, adjusting for age at baseline. BMI at S1 represented longer-term adiposity while annual percentage weight change represented short-term weight change. Because we were interested in weight change before diabetes onset, weight change was measured in the survey period prior to reported incidence (to avoid the risk of 'reverse causation' whereby women who were diagnosed with diabetes subsequently lost weight). Therefore, the study population for the analysis was confined to those women who became an incident case between S2 and S3 or S3 and S4, or those who were free from diabetes. Women who became an incident case between S2 and S3 were excluded from the analysis in the following period as they were no longer in the population at risk.

As shown in Figure 1 there were different amounts of missing covariates, with likely different reasons for why they were not completed. Rubin defines three potential patterns of missingness: missing completely at random (MCAR), in which there is no systematic difference between the characteristics of those with and without missing data; missing at random (MAR), in which there is a systematic difference but this can be explained by other observed data; and missing not at random (MNAR), where the difference cannot be explained by observed data. We relied on the MAR assumption to build an imputation component into the model using the question, "How much would you like to weigh?" There were far fewer missing responses to the 'like to weigh' question compared with actual weight. Also, the 'like to weigh' variable at S1 was highly correlated with self-reported weight at all surveys (Mishra and Dobson 2004). Hence we used 'like to weigh' to impute missing weights at each survey. For each imputed weight we recalculated BMI at S1 and percentage weight change.
Women for whom height was unknown (3.2%) or who did not respond to the 'like to weigh' question (3.2%) were excluded from the analysis. For illustrative purposes we constructed three separate models: (i) a complete case model (7311 women); (ii) a cross-sectional imputation model (9557 women); (iii) a longitudinal imputation model using random effects to incorporate within-subject correlation (9557 women). Models (ii) and (iii) both had the diabetes and imputation components (Figure 1). Model (i) only had the diabetes component. We now describe the three models in more detail.

Complete case model

Let Y_it be a binary variable denoting incidence of diabetes for individual i (i = 1, ..., 7311) at time t (t = 1, 2). The complete case model is then

$$Y_{it} \sim \text{Bernoulli}(p_{it}), \qquad \text{logit}(p_{it}) = \alpha_t + \mathbf{X}_i\boldsymbol{\beta} + Z_{it}\,\delta,$$

where X is a matrix of the time-invariant covariates (BMI at S1, age at S1), Z is a vector containing the single time-varying covariate (percentage weight change in the survey period prior to reported incidence) and α is an intercept that varies according to survey (time).

Cross-sectional imputation model

The diabetes component of this model followed the same structure as the complete case model. In the imputation component of the model, we assumed that the weight of individual i (i = 1, ..., 9557) at survey s (s = 1, ..., 4) was distributed as

$$W_{is} \sim \text{N}(\mu_{is}, \sigma^2), \qquad \mu_{is} = \gamma + \phi\,s + \varphi_{L_i},$$

where L is a vector containing the response of each individual i to the 'like to weigh' question. Thus weight for individual i at survey s was described by a population mean γ plus an increment of ϕ at each survey, and was adjusted according to the response of individual i to 'like to weigh' at S1 (φ). The estimates of γ, ϕ and φ were based on records with partial or complete data. Women with a missing weight (W_is) had their weight imputed from a Normal distribution with mean μ̂_is and variance σ̂². Note that the diabetes component is evaluated over two time periods (t = 1, 2, surveys 3 and 4) whereas the weight component is evaluated over four time periods (s = 1, 2, 3, 4, surveys 1 to 4). This meant that we used the maximum amount of information to impute weight, whilst excluding surveys 1 and 2 from the diabetes component because we were only interested in incident cases.

Longitudinal imputation model

The diabetes component of the model followed the same structure as the complete case model. We introduced a random intercept into the imputation component of the model to incorporate within-subject correlation in weight, and hence take account of the longitudinal study design. The imputation component for weight was

$$W_{is} \sim \text{N}(\mu_{is}, \sigma_w^2), \qquad \mu_{is} = \gamma_i + \phi\,s + \varphi_{L_i}, \qquad \gamma_i \sim \text{N}(\gamma, \sigma_b^2).$$

Instead of a population mean for weight, each subject had her own estimate (γ_i, known as a random intercept). The total variance in weight from the previous model (σ²) has been partitioned into the within-subject variance σ²_w and the between-subject variance σ²_b. The within-subject correlation is given by

$$\rho = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2}.$$

Inference using Gibbs sampling

The models that we have presented above use two parametric distributions (Bernoulli and Normal), with many parameters at several hierarchical levels. An analytical solution to the model is therefore intractable. Fortunately, we can make inference about the parameters using Gibbs sampling (the default method in WinBUGS). In Gibbs sampling each unknown parameter is estimated conditional on all the other observed data and the other estimated parameters (for a detailed description of Gibbs sampling see Gelman et al. (2004)).
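To make the conditional structure concrete, one sweep of the sampler for the imputation model can be sketched as follows (a schematic of the data-augmentation idea, not the exact update order used by WinBUGS):

$$W_{is}^{(k)} \sim \mathrm{N}\!\left(\mu_{is}^{(k-1)},\, \sigma^{2\,(k-1)}\right) \;\text{for each missing } W_{is}; \qquad \theta^{(k)} \sim p\!\left(\theta \mid \text{observed data},\, W_{\mathrm{mis}}^{(k)}\right),$$

where $\theta$ collects the regression and imputation parameters ($\alpha_t$, $\boldsymbol{\beta}$, $\delta$, $\gamma$, $\phi$, $\varphi$, $\sigma^2$). Iterating the two steps yields draws from the joint posterior of the parameters and the missing weights.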
For example, in the hierarchical imputation model, a missing weight would be sampled from a normal distribution with mean μ̂_is and variance σ̂². A complete iteration occurs when all the parameters and missing data have been estimated. The next iteration is then based on these estimates and the data. To start the iterations an initial set of values for each unknown parameter and observation is specified. Many iterations are run (usually greater than 1000) in an attempt to converge to a solution. We discuss some of the practical issues of running such iterations in the next section.

WinBUGS code

To run an analysis in WinBUGS there are four basic requirements: specify a model; load the data; specify initial values; and run the Gibbs sampler. This process is most efficient when the above information is stored in four batch files: an input data file; a file containing the model specification; an initial values file; and a script file that executes WinBUGS commands. We focus here on the model specification file, illustrating the conversion of our three models into a WinBUGS format. Model specification in WinBUGS differs from other standard statistical packages in that the model must be fully and explicitly specified by the user, rather than inserting model specifications into pre-programmed statistical procedures. Information on the construction of the remaining batch files is in Appendix A.

Complete case model

The model specification file opens with the model statement. We looped through records for 7311 individuals at two time points, except where a woman first reported diagnosis of diabetes at S3, in which case she was no longer included in the population at risk at S3 and data from a single time point were used. This condition was achieved through the use of the indicator variable, nsurvey, which took the value of 1 when diabetes incidence occurred between S2 and S3, and took the value of 2 otherwise. In this model t = 1 refers to diabetes incidence between S2 and S3 and weight change between S1 and S2. Similarly, t = 2 refers to diabetes incidence between S3 and S4 and weight change between S2 and S3. BMI and age at S1 did not change over time. We specified the distribution of diabetes incidence (diab) to be Bernoulli, diab[i,t] ~ dbern(diab.prob[i,t]); (ending statements with a semi-colon is optional in WinBUGS). Our interest lay in the parameter diab.prob, which represents the probability of becoming an incident case of diabetes. We modelled the relationship between the probability of diabetes and the other explanatory variables on the logit scale, as in the sketch at the end of this section. We specified non-informative priors for each of the unknown parameters in the bottom tier of the hierarchy. We did this by assigning each unknown parameter a normal distribution with a zero mean and small precision, dnorm(0.0,1.0E-6), where the precision is the inverse of the variance.

Cross-sectional imputation model

The specification of the diabetes component of the model was very similar to the complete case model. As above, we modelled diabetes as a Bernoulli variable: diab[i,t] ~ dbern(diab.prob[i,t]); However, to assist with convergence, we imposed a constraint on the minimum value that diab.prob could take (as very small probabilities led to non-estimable likelihoods). With the introduction of missing data in weight, BMI became a stochastic variable in the model. For this reason, we could not use the equals function to create an indicator function for BMI.
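As an illustrative sketch (not the authors' exact code), the diabetes and imputation components might be written together as follows. The names diab, diab.prob, nsurvey, bmi, wtspc, weight and the dnorm(0.0,1.0E-6) priors come from the text; the coefficient names (b.age, b.wt, ...), the ltw index for the 'like to weigh' category, the w.cut nodes, the category thresholds and the dgamma prior on the precision are illustrative assumptions.

    model {
        for (i in 1:N) {                                  # N = 9557 women
            # Imputation component: weight at all four surveys.
            # Records entered as NA are imputed automatically.
            for (s in 1:4) {
                weight[i, s] ~ dnorm(mu[i, s], tau)
                mu[i, s] <- gamma + phi * s + lw[ltw[i]]  # ltw = 'like to weigh' category
                w.cut[i, s] <- cut(weight[i, s])          # cut() blocks feedback from the diabetes component
            }
            # Recalculate BMI at S1 and its step-function category indicators
            bmi[i] <- w.cut[i, 1] / pow(height[i], 2)
            ovwt[i] <- step(bmi[i] - 24.75) - step(bmi[i] - 29.75)    # 'overweight'
            obese[i] <- step(bmi[i] - 29.75) - step(bmi[i] - 34.75)   # 'obese'
            vobese[i] <- step(bmi[i] - 34.75)                         # 'very obese'
            # Diabetes component: incidence between S2-S3 (t = 1) and S3-S4 (t = 2)
            for (t in 1:nsurvey[i]) {
                wtspc[i, t] <- 100 * (w.cut[i, t + 1] - w.cut[i, t]) / w.cut[i, t]  # annualisation omitted
                diab[i, t] ~ dbern(diab.prob[i, t])
                logit(diab.prob[i, t]) <- alpha[t] + b.age * age[i] + b.ov * ovwt[i]
                    + b.ob * obese[i] + b.vob * vobese[i] + b.wt * wtspc[i, t]
            }
        }
        # Vague priors for the bottom tier of the hierarchy
        for (t in 1:2) { alpha[t] ~ dnorm(0.0, 1.0E-6) }
        b.age ~ dnorm(0.0, 1.0E-6)
        b.ov ~ dnorm(0.0, 1.0E-6)
        b.ob ~ dnorm(0.0, 1.0E-6)
        b.vob ~ dnorm(0.0, 1.0E-6)
        b.wt ~ dnorm(0.0, 1.0E-6)
        gamma ~ dnorm(0.0, 1.0E-6)
        phi ~ dnorm(0.0, 1.0E-6)
        for (k in 1:4) { lw[k] ~ dnorm(0.0, 1.0E-6) }
        tau ~ dgamma(0.001, 0.001)                        # precision = 1/variance (prior assumed)
    }

Dropping the imputation loop (and reading the observed bmi and wtspc directly from the data) gives the complete case model, while replacing gamma by a subject-level random intercept, gamma[i] ~ dnorm(gamma.mu, tau.b), gives the longitudinal variant and partitions the variance as described above.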
This problem was overcome by using the step function, with thresholds set at some value between the integer categories, as in the sketch above. Further explanation of the use of the indicator functions step and equals can be found in the WinBUGS user's manual (Spiegelhalter et al. 2003). The imputation component of the model focussed exclusively on the imputation of weight at each survey, which is used to calculate both BMI and percentage weight change. Where weight for individual i at survey s was unknown, we specified the distribution of weight to be Normal. Our interest lay in the relationship between the mean value of weight and other information contained within the existing data, as in the imputation loop of the sketch above. We used the cut function to prevent 'feedback' from the results for diabetes influencing the imputed weights; in other words, to maintain the flow of information as indicated by the arrows in Figure 1. This code was embedded within the 'individual', or 'i', loop but external to the 't' loop in which the diabetes component was contained. This enabled weight to be modelled using data from all four surveys and, in so doing, ensured that the imputation model incorporated all available information. Weight was not a variable of direct interest, so we used logical functions to recalculate the variables bmi (categorical) and wtspc (continuous), again as in the sketch above. We compare the fit of the three models using the Deviance Information Criterion (DIC) (Spiegelhalter et al. 2002) and using 10-fold cross-validation (Breiman et al. 1984).

Results

Analyses for each of the three models were performed in WinBUGS, Version 1.4.1. Each model was run for 25,000 iterations, with an additional 5000 iterations for burn-in. The time taken to run the longitudinal imputation model (the most complex) in WinBUGS was 53 minutes, using a server running Microsoft Windows Server 2003 Enterprise Edition with dual 3.6 GHz Xeon processors and 6 GB of RAM. The length of time is mostly dependent upon the number of observations in the data and the number of iterations required. To compare the results from WinBUGS with another package, the complete case analysis was also implemented in SAS, Version 9.1.3, using proc genmod with the options type=exch, d=binomial and link=logit.

Odds ratios for incidence of diabetes in each of the three models are shown in Table 1. There was little difference in the odds ratios for diabetes, or their posterior limits, between the various models and the two packages. Nor did the interpretation of the results change, with the main result being that long-term obesity (BMI) was a stronger predictor of diabetes incidence than short-term weight gain (Mishra et al. 2007). The longitudinal model was a better fit to the data than the cross-sectional model, as the longitudinal model had a smaller DIC (Table 2). The longitudinal model used many more parameters, as each woman had her own intercept. This large increase in parameters gave a much improved fit to the imputation component of the model. This improved imputation gave a slightly better fit to the diabetes component (DIC of 2570.1 vs 2573.8). The results of the 10-fold cross-validation were similar to those from the DIC. For the cross-sectional model the average error for an imputed weight was 10.4 kilograms (standard deviation (SD) = 0.23 kg). For the longitudinal model the average error was a much smaller 5.3 kilograms (SD = 0.17 kg). The cross-validation found little difference between the models in terms of their fit to the diabetes component.
For the cross-sectional model the average area under the receiver operating characteristic (ROC) curve for predicted diabetes (yes/no) was 0.548. For the longitudinal model the average area under the ROC curve was 0.543.

A more general SAS macro

In this section we describe a SAS macro for analysing general longitudinal models with missing covariate data. The macro converts a SAS data set to WinBUGS format, writes the WinBUGS code, and calls WinBUGS to run it. This particular macro is restricted to models with a continuous dependent variable and so is more limited than the models used in Section 4. Multiple covariates are allowed and these may be either categorical or continuous. However, the covariate with missing data must be continuous and time-dependent (i.e., change over time). This macro uses a simpler model than that discussed in the previous section for the diabetes data set. The model for diabetes included specific functions (such as calculating body mass index from weight) and used different time periods for the model of interest and the imputation model. Such specific programming can be added to the WinBUGS code generated by our SAS macro.

We demonstrate the SAS macro for a continuous dependent variable with a smaller longitudinal data set from a study of terminal cancer patients and their carers (Correa-Velez et al. 2003). The outcome is the carer's level of anxiety, measured on the hospital anxiety and depression scale (HADS), for which higher scores indicate greater anxiety (Zigmond and Snaith 1983). In this study patients and their carers were regularly interviewed during the final year of life. The study was particularly interested in how patient anxiety impacted on carer anxiety. Both patient anxiety and carer anxiety had some missing values. The other variables are the carer's gender, the time to death (in weeks), and the patient's number of symptoms. The number of symptoms and the anxiety scores are time-varying covariates. The data set contains 514 interviews from 109 carers.

Using the terminal cancer data, our SAS macro longimp is called using a statement of the form sketched at the end of this section (note that text between '/*' and '*/' is comment and can be omitted). The dependent variable in the model of interest is the carer's anxiety (depvar=hadsanx). This is dependent on: their gender (which is a categorical variable, class=gender), their patient's level of anxiety (phadsanx) and the time to the patient's death (deathwks). The patient's level of anxiety is a time-dependent covariate and has some missing values. We impute these missing values by making them the dependent variable of the imputation model (depvari=phadsanx). The patient's level of anxiety is dependent on their number of symptoms (count). There are no missing values for the number of symptoms. The other necessary inputs are the variable that defines time, which in this case is the interview number (time=interno), and an identification number for each carer which links their repeated results (repeated=carerid). There is the option to centre the continuous explanatory variables by subtracting their mean (centre). This is generally advisable in WinBUGS as it often improves the convergence of the MCMC algorithm. The final option is to choose the number of MCMC iterations and burn-in (MCMC).

The macro call produces four pages of output. The first page gives some basic information on the model. The second page gives the MCMC convergence diagnostic of Geweke (1992), which compares the first 10% of the chain to the last 50%.
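For reference, the call described above might look like the following sketch. The parameter names depvar, class, depvari, time, repeated, centre and MCMC come from the text; the data= parameter and the names of the covariate-list parameters (covars, covarsi) are guesses at the macro's interface, and the option-value syntax is assumed:

    %longimp(data=cancer,                      /* input data set; name assumed */
             depvar=hadsanx,                   /* model of interest: carer's anxiety */
             covars=gender phadsanx deathwks,  /* covariates; parameter name assumed */
             class=gender,                     /* categorical covariate */
             depvari=phadsanx,                 /* imputation outcome: patient's anxiety */
             covarsi=count,                    /* imputation covariate; parameter name assumed */
             time=interno,                     /* interview number */
             repeated=carerid,                 /* identifier linking each carer's records */
             centre=yes,                       /* centre continuous covariates */
             mcmc=25000);                      /* MCMC iterations (burn-in set analogously) */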
It is important for valid inference to have MCMC chains that are stable and have converged. A chain that has converged should have a constant mean, and the output compares the means of the first and last sections of the chain using an unpaired t-test. In this case the mean for the gender variable seems to have increased slightly. The chain should be run again (possibly for longer) to give a more stable estimate for gender. The third and fourth pages give the parameter estimates from the model of interest and the imputation model. The results show that as the patient's death approached, the carer's anxiety increased (deathwks). Also, male carers had much lower anxiety (gender), and increased patient anxiety was associated with increased carer anxiety (phadsanx). The average anxiety score was 6.464 (intercept) and the variance was 8.377 (sigma2). The within-subject correlation for carers was a relatively strong 0.597 (rho), indicating a good deal of similarity over time in each carer's anxiety level. For the imputation model, the patient's number of symptoms was positively associated with their level of anxiety (count).

Discussion

Survey data is often partially completed and using a complete case analysis can produce biased results. For the example used here concerning the risk of diabetes incidence this was not the case, as a complete case analysis and analyses after imputation gave similar estimates. However, the complete case analysis used data from 7311 women, whereas the analyses using imputation used data from 9557 women (a 31% increase). Using a larger sample generally gives results that are more representative. In our example concerning diabetes incidence there was no difference between the parameter estimates from a cross-sectional imputation model and a model that exploited the longitudinal structure of the data. However, the longitudinal imputation model gave better estimates of the missing weights, as shown by the cross-validation (mean error of 5.3 kg for the longitudinal model compared to 10.4 kg for the cross-sectional). Hence we still strongly recommend the use of longitudinal imputation when the data structure is longitudinal. Much of the software available for imputation is yet to develop capabilities for longitudinal imputation, and instead uses cross-sectional imputation. WinBUGS, on the other hand, can incorporate longitudinal structure with relative ease. As illustrated in Section 4, there was little difference between the longitudinal and cross-sectional imputation code. WinBUGS is also able to deal easily with missing categorical data, whereas many other packages rely on an assumption of Normality. We have provided a SAS macro for analysing longitudinal studies with missing covariates that uses WinBUGS but is able to be run from SAS.

A. Batch files in WinBUGS

A.1. Input data file

Raw data are values of the variables: diabetes, weight (at each survey), age, height, like to weigh, and the indicator variable nsurvey. Data can be entered into WinBUGS in one of two ways: in the list format of S-PLUS, or as a series of one-dimensional arrays in a tab-delimited file. In both cases, the hierarchical dimensions of the data must be specified by the user. Data are stored as .txt files. Missing data is entered as 'NA'. WinBUGS is not suited to data management, and so for large data sets another package is required. We used SAS, Version 9.1.3, creating our data files as tab-delimited text. It is also possible to use R and the package R2WinBUGS in the management of large data sets (Sturtz et al. 2005).
Using the data array format, data was entered under column headings corresponding to the variables listed above. The last line of a data file of this format must be the word 'END', followed by a carriage return, for WinBUGS to read the file correctly.

A.2. Initial values file

Initial values can be specified in one of two ways: by the user, or as randomly generated values using the gen.inits() command in the WinBUGS script file. It is also possible to specify some initial values and then use the gen.inits() function to create the remainder. User-specified values are stored in a .txt file whose structure follows the same protocol as the data file. It is often necessary for the user to specify ballpark initial values, as gen.inits() can generate unreasonable starting values, which in turn means that WinBUGS cannot begin to iterate the Gibbs sampler. WinBUGS treats each unobserved record in a variable with missing data as a random variable whose distribution must be estimated. To specify the initial values for these unobserved records we must enter a matrix of values that matches the dimensions of the variable. If a record is observed, then the entry in the initial value matrix is 'NA'; otherwise, if the record is unobserved, the entry is the user-specified initial value.

A.3. Script file

The script file is a list of WinBUGS commands which are executed sequentially, activated by the 'Script' option in the Model menu. The following commands must be included in this file, in the sequence shown below, for the model to run.
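As an illustrative sketch (file names, the monitored node and the iteration counts follow the settings described in the Results section; the commands themselves are standard WinBUGS 1.4 script commands):

    display('log')
    check('model.txt')
    data('data.txt')
    compile(1)
    inits(1, 'inits.txt')
    gen.inits()
    update(5000)
    set(alpha)
    update(25000)
    coda(*, 'out')
    save('log.txt')

Here check() verifies the model specification file, data() loads the input data, compile(1) compiles a single chain, inits() and gen.inits() supply the user-specified and randomly generated initial values, the first update() performs the 5000 burn-in iterations, set() requests monitoring of a node (one set() per parameter of interest), the second update() runs the 25,000 main iterations, and coda() and save() write the monitored samples and the session log to file.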
Iceland's external affairs in the Middle Ages: The shelter of Norwegian sea power

Abstract

According to the international relations literature, small countries need to form an alliance with larger neighbours in order to defend themselves and be economically sustainable. This paper applies the assumption that small states need economic and political shelter in order to prosper, economically and politically, to the case of Iceland, in an historical context. It analyses whether or not Iceland, as a small entity/country in the Middle Ages (from the Settlement in the 9th and 10th centuries until the late 14th century), enjoyed political and economic shelter provided by its neighbouring states. Admitting that societies were generally much more self-sufficient in the Middle Ages than in our times, the paper argues that Iceland enjoyed essential economic shelter from Norwegian sea power, particularly as regards its role in securing external market access. On the other hand, the transfer of formal political authority from Iceland to the Norwegian crown was the political price paid for this shelter, though the Icelandic domestic elite, at the time, may have regarded it as a political cover. The country's peripheral location shielded it both from military attacks from outsiders and the king's day-to-day interference in domestic affairs. That said, the island was not at all unexposed to political and social developments in the British Isles and on the European continent, e.g. as regards the conversion to Christianity and the formation of dynastic and larger states. This paper claims that the analysis of the need for shelter needs to take into account the political and economic costs that may be involved in a shield. Also, it needs to address how external actors may solve the problem of internal order. Moreover, an analysis from the point of view of the advantages of political or military shelter needs to address the importance of the extent of engagement of a small community, particularly a remote one, with the outside world. The level of engagement and the identity of the entity with which reciprocal transactions take place may have an important bearing on the community. This was the case in Iceland, i.e. communication with the outside world was of immense importance during the Middle Ages. Hence, the paper suggests that an analysis of the means by which shelter was secured must address the importance of communication according to the centre-periphery relations model.

Introduction

The main aim of this paper is to test the case of Iceland within the framework of small-state theory and answer its key consideration by examining whether Iceland, as a small entity/country, had external shelter or stood on its own during the Middle Ages. The history of Iceland is, almost without exception, told through the geographical location of the executive branch of government and accordingly split into eras of external or domestic rule. The most important element is whether political power was located in the country or in its neighbouring states, i.e. Norway and Denmark (for instance, see Aðils 1915; Jónsson 1989; Karlsson 1985; Karlsson 2000). Normally, the story begins with the Settlement by the independent and brave Viking explorers (Björnsson et al. 2008) and the establishment of the Althingi (the entity's parliament) in around 930, which is interpreted as marking the creation of the Icelandic Commonwealth, the 'nation's' most glorious era.
The Commonwealth then comes to a halt with 'the fateful decision' to include the country in the Norwegian monarchy in 1262. This 'tragic decision' was taken after a period of domestic political violence on an unprecedented scale which coincided with pressure by the Norwegian king on leading chieftains and farmers in Iceland to submit to his rule. The traditional historical narrative states that submission to foreign authority led to catastrophic economic and political decline of the country which lasted for centuries (Aðils 1903; Jónsson 1915-16). An end was not put to the suffering of the Icelandic nation until it reclaimed its independence (Björnsson et al. 2008), according to this narrative. The change of fortune is seen as manifested in the rapid economic development of the country in the first half of the 20th century: a result of greater independence from Denmark, i.e. Home Rule in 1904, Sovereignty in 1918 and the creation of the Republic in 1944 (for a good overview of this narrative, see Hálfdanarson 2001a). Iceland's history was often analysed as 'a specific isolated phenomenon' ripped out of the context of world history (Agnarsdóttir 1995, 69). On the other hand, some historians have categorized Iceland's history according to trade and external relations. They refer to the period from 1262 until 1400 as 'the Norwegian Age', the period from around 1400 to 1520 as 'the English Age', the 16th century as 'the German Age' and identify the period from 1602 to 1787 as being the age of the Danish trade monopoly (Þorsteinsson & Jónsson 1991). Also, Jóhannesson (1965) divided the Medieval Period according to the importance of industries, i.e. the agricultural period (930-1300) and the fisheries period (1300-1500). These categorizations are known, and sometimes mentioned, but not commonly used (with the exception of the period of the Danish monopoly; for instance, see Nordal & Kristinsson 1996). More recently, historians have focussed less on the 'loss' of sovereignty and independence and the importance of the 'independence struggle' during the latter half of the 19th century up until the creation of the Republic and more on other aspects of the country's history.

According to the classic small-state literature, small countries needed to form alliances with larger neighbours in order to be economically and politically sustainable (Keohane 1969; Handel 1981; Archer & Nugent 2002). The main reason was that they did not have resources to guarantee their own defences (Vital 1967). Besides, their small domestic markets, concentrated production (dependence often on only one export product) and greater reliance on exports and imports, and on exports to a single country or a particular market, made them more dependent on international trade than larger states. Hence, their economies would fluctuate more than larger economies and international economic crises would hit them with greater force than other states (Katzenstein 1984 & 1985). These assumptions were based on the dependence of many small states on the two superpowers during the cold war and from observations of small countries and city-states in early times (for instance, see Alesina & Spolaore 2005). The vulnerability of small states and their lack of capabilities (Neumann & Gstöhl 2004) were further highlighted in the de-colonization process of the post-war period. Geographical location was seen as being of great importance, i.e. whether or not a small country was territorially based in a conflict zone and near a more powerful state.
Also, the structure of the international system was of prime importance due to the better ability of small states to prosper during peacetime (Handel 1981) and in a world based on free trade (Katzenstein 1984 & 1985), as compared with times of war and restricted international trade. Lately, in the wake of the current international financial crisis and its devastating consequences for many 'prosperous' small states, the literature has been forced to turn back to its original findings on vulnerability (see, for instance, Schwartz 2011; Þórhallsson 2011) after a period in which many small European countries were described as successful and better capable than larger states of achieving economic growth (see, for instance, Katzenstein 1984 & 1985; Briguglio, Cordina & Kisanga 2006; Cooper & Shaw 2009). Hence, the classic small-state literature, with its focus on the importance of a protecting power or alliance formation for small countries, has re-established itself as the core for understanding the status and role of small states in the international system.

This study places the case of Iceland in small-state theory within the framework of the international relations literature. The intention is to test an analysis of the need for shelter in the case of Iceland during the Middle Ages, from the Settlement around the turn of the 10th century until the end of the 14th century - a time of radical changes in the country's overseas relations. The paper is a part of 'a quintology': five papers which examine the concept of external shelter in the case of Iceland from the Settlement to the present day according to the importance of external relations. In addition to the present one on relations with the Norwegian crown, the others examine the importance of 'the English and German Periods'; the age of the Danish rule; the American Period; and the new European Period. The aim is not to re-write the history of Iceland. The purpose is rather to shed light on some aspects of the country's external affairs, such as trade and communications, which have been somewhat neglected in the historical narrative, and to start a debate on the importance of trade, peripheral location and foreign affairs in Iceland's history. External relations are examined by reviewing the extensive existing academic literature on the history of Iceland during this period.

The importance of economic and political shelter for small states, due to their more limited resources and means to withstand stress (Vital 1967) as compared to larger states, is related to three interrelated features: reduction of risk before an eventual crisis event; assistance in absorbing shocks when risk goes bad; and help in cleaning up after the event. There is a need to distinguish between economic and political shelter. Economic shelter may be in the form of direct economic assistance, a currency union, help from an external financial authority, beneficial loans, favourable market access, a common market, etc., provided by a more powerful country or by an international organization. Political shelter refers to direct and visible diplomatic or military backing in any given time of need by another state or an international organization, and through organizational rules and norms (Þórhallsson 2011). Accordingly, with whom did Iceland have the closest economic ties during our period under study?
Did they result in economic constraints or benefits which could be interpreted as economic shelter? With whom did Icelanders have the closest political ties? Did they result in political constraints or backing which could be interpreted as political shelter?

An understanding of the advantages of shelter addresses the present situation of small countries in the current international system. Therefore, an attempt to apply it to a small medieval entity/country needs to take into account the enormous difference in the nature of relations between states in this period as compared with modern times: we must bear in mind the non-existence of international organizations and the looser definitions of what constituted a state in the former period. Also, we must keep in mind that societies were generally much more self-sufficient in the Middle Ages than in our times. At the same time, for instance, it needs to address the role of 'an international actor', the Roman Catholic Church, and the implications of peripheral location during the Middle Ages. There may be complications in applying a modern theory to the Medieval Period or any other past eras. On the other hand, realism, which exercised a hegemonic position in the study of international relations in the 20th century, assumes that the nature of international relations has changed little, if at all, over the millennia. Realists trace their ideas about states'/entities' power struggle back at least as far as the ancient Greek city-states. Classical realism claims that states' behaviour is dictated by human nature, which is seen as destructive, selfish, competitive and aggressive. However, neorealism argues that the nature of interstate politics constructs states' behaviour. That said, they are unified in the assumption that their theories apply to the ancient world as well as the modern one (Sheehan 2005, 5-24). For instance, Alesina and Spolaore (2005, 178) argue that European city states (from about 1300 to about 1600) were, 'for the most part, politico-economic' entities that had some characteristics of today's open and democratic small states.

According to constructivists (constructivism being a contemporary theory in the field of international relations), national discourse, international norms and practices and notions of shared identity among states help us to understand domestic and external actions taken by a country's decision-makers. Additionally, Galasso (2012) argues that the individual socialisation and culture of decision-makers shapes a state's external relations, as was the case in the Roman Empire during the early Principate (from 27 B.C. to 284 A.D.). The norms and practices that constructed personal relations and relations between states, independent entities or semi-independent entities in the past were different from those which operate nowadays. On the other hand, studies, e.g. of the Greek city-state system and the Roman Empire, demonstrate that ancient key concepts like personal honour can be addressed in the modern international relations literature. For instance, constructivists often explore the linkages between culture, personal identity and state identity in order to explain foreign policy behaviour in historical world systems (see Leira and Neumann 2007; Neumann 1994; Galasso 2012). Hence, cultural environments shape all historical world systems, i.e. cultural environments influence personal identity, and subsequently state identity, regardless of the period (Reus-Smit 1999).
Icelanders cannot be dealt with as a separate nation during our period under study, even though this was the time when the islanders' identity, and a notion that they differed from other Norse communities, started to take shape. Our findings will indicate that Iceland was a peripheral part of the Norwegian domain, though its autonomy, before and after submission to the crown, must not be underestimated. Also, it is problematic to refer to Iceland as a state, according to the Westphalian system (based on the concept of nation-state sovereignty, i.e. territoriality and the absence of external actors in domestic decision-making), during our period under review. However, the islanders made their first international agreement with the Norwegian king around the first quarter of the 11th century and, at a similar time, 'noble' islanders started to represent the islanders as a group when meeting other noble men abroad. Moreover, the islanders enjoyed considerable autonomy after submission to the Norwegian crown. Iceland started as an independent entity and then was voluntarily moved to a dependency relationship with Norway. One could say that submission to the crown was an extreme form of shelter while, at the same time, it was the only form of external shelter available. That all said, our concern is the extent of external engagement of the small community and its domestic consequences and whether the case fits the theory regarding the importance of external shelter for small societies.

Furthermore, a consideration of small states' exposure to threats has to take notice of the different levels and potential responses to threats in the Medieval Period compared to the situation in the present international system. This applies, for example, to violent crime, severe civil unrest, infrastructure, health challenges (including pandemics, natural disasters, environmental damage and food security), transport and the implications of Iceland's peripheral location, as already mentioned (see Bailes and Þórhallsson 2012 for present threats to small states). On the other hand, medieval small states were exposed to the traditional military and economic threats covered by the present international relations literature.

The concept of shelter and the importance of alliances for small states may be linked to Rokkan and Urwin's (1983) historical account of the importance of centre-periphery relations in their attempts to explain state-building in Western Europe. The 'centre and periphery' model in the literature refers both to territories and inhabitants. Peripherality exists in three distinctive types of dominance of the centre(s), i.e. in politics, economy and culture. The authors argue that there are three key characteristics of peripheries: distance (location from the dominant centre), difference (at least some minimum level and sense of separate identity) and dependence (at least in terms of political decision-making, cultural standardization and economy). They prioritize the significance of distance, since it is important in the structuring of the peripheral economy and determines the ease or difficulty associated with the centre's attempt to control the periphery. Also, long distances make cultural communication more difficult and increase the likelihood of a separate identity.
Furthermore, the authors identify three types of transactions which construct the centre-periphery relations, in order to examine how dependent the periphery is upon the centre: economic (import/export of goods, services, labour, credits, investments, subsidies); cultural (transfer of messages, norms, lifestyles, ideologies, myths, ritual systems); and political (conflicts over territorial rights, wars, invasions, blockades, alliances and accommodations of different elites) (Rokkan & Urwin 1983, 4).

Medieval Iceland was a peripheral community in geographical terms: an outpost in northwestern Europe - only Greenland lay beyond it - according to the centre-periphery model. Also, the hegemonic medieval Roman Catholic and Icelandic world view was that Iceland was a marginal and peripheral entity far from the Christian centre (Jakobsson 2009). On the other hand, Karlsson (2000, 100) argues that Iceland lay at the centre of the Norwegian sea empire - not remote at all (though in terms of number of inhabitants it was on the periphery) - due to the closeness and short sailing time, in favourable wind, from Bergen, the king's residence, and Trondheim, the archbishop's seat. This was the case until the king's residence was moved to Oslo and a change of view took place within the Norwegian kingdom, in the early 14th century, that it was a Nordic, and not a North Atlantic, monarchy, with possibilities of expansion to the south and south east.

Our concern has to do with how Iceland's external relations were structured: what economic, political and societal consequences did external affairs have for the islanders? Were they sheltered or not by their neighbours? The paper argues that Iceland enjoyed important economic shelter from Norwegian sea power, particularly regarding its role in securing regular trade to and from the island throughout the period under study. Whether Iceland enjoyed political shelter from the Norwegian kingdom is more of a puzzle. The transfer of formal political power from Iceland to Norway can be regarded as a political cost. However, the key consideration is whether the traditional narrative of the history of Iceland has not neglected to examine whether the alignment with the kingdom, throughout the Middle Ages, provided political shelter for a considerable part of the domestic ruling elite. It, at least, benefited from the protection of the Norwegian crown, e.g. in terms of safer travel within its domains than might otherwise be anticipated. Also, internal problems of order were to a certain extent halted, at least temporarily, as islanders submitted to the king. Accordingly, small-state literature needs to address how external actors may solve the problem of internal order. The submission entailed the creation of an executive branch of government - and thus provided protection for the community at large. Confirming the importance of distance in the centre-periphery relations model, Iceland's peripheral location shielded it from military attacks and constant interference by the crown. Furthermore, an analysis of the need for shelter needs to address a small state's potential political and economic cost associated with alignment with a larger country. Importantly, Iceland - despite its geographical remoteness - was not at all unexposed to political and social developments in Norway and in the British Isles and on the European continent, e.g. as regards the conversion to Christianity and the creation of a dynastic state.
This paper claims that the case of Iceland indicates that the assumption that a small state needs economic and political protection must also address the importance of the extent of engagement of a small community, particularly a remote one, with the outside world. The level of engagement and the entity or entities with which relations take place may have an important bearing on the community at large. Hence, the paper suggests that an analysis of the need for shelter should examine the importance of cultural or social communication according to Rokkan and Urwin's (1983) centre-periphery relations model. The traditional Icelandic historical narrative has neglected the importance of the Norwegian economic and societal shelter and side-lined the protection which at least some chieftains and other rich farmers gained by closer engagement with Norwegian sea power and international developments in Europe concerning dynastic state building during the era.

The paper is divided into three sections in addition to this introduction and a concluding chapter. First, it starts by examining Iceland's external trade relations and considering whether or not Iceland enjoyed economic protection during the Middle Ages. Second, the entity's/country's political engagement with Norway and other international actors is examined in order to identify the political aspect of the shelter concept. Third, the importance of external social communication, i.e. societal shelter, is put under scrutiny. The concluding part summarizes the main findings and proposes suggestions to enhance our understanding of the advantages of shelter.

The Norwegian market link: Economic shelter?

Scholars disagree somewhat about the content and extent of Iceland's external trade, the number of ships sailing to and from the country and the final destination of exports during the Middle Ages. Nevertheless, there is a unanimous consensus among them that Norwegian sea power provided Iceland with an external link with the outside world. During the early Middle Ages, a form of common market existed in northwestern Europe (see Figure 1), though farmers' communities had largely to be self-sufficient due to transport difficulties. The Norsemen were one of the most prominent maritime peoples during the Viking Age (from around 800 to the mid-11th century) and established settlements in the British Isles (in England, Scotland, Ireland, Shetland and the Orkneys) and the Faroe Islands before reaching Iceland. Wealthy farmers and community leaders in these various outposts, and in Denmark and southern Sweden, were connected with each other by marriage, educational and cultural exchanges and trading (Líndal 1974).

Figure 1. Iceland's economic shelter: The common Norse market area acknowledging Norwegian sea power from the 9th to the 14th century.

A shipping fleet capable of sailing long distances in rough waters was essential for all communication and trading with the outside world. Until the 11th century, the settlers in Iceland seem to have owned a considerable number of ships with the capacity to sail to other parts of northwestern Europe, though there are few historical records about foreign trade during the Commonwealth - especially its first half. During this period, Icelanders seem to have been in control of their own trade with the rest of the Norse world and records indicate that 35 ships came to Iceland in 1118 (Karlsson 1975).
Figure 1 sources: Based on numerous accounts of trade, such as those in Líndal 1978, 39 and Karlsson 2000, 104.

Formally, regional chieftains, in the Althing, regulated foreign trade, though it appears to have been relatively free, at least during the first decades of the Commonwealth (Jóhannesson 1974). During the Commonwealth era, chieftains may have tried to control the prices of imports, but with limited effect (Karlsson 1975, 20-21). In the 12th century, Icelanders were still in control of some of their external trade. However, they were not in charge of the shipping fleet sailing to and from the country. At the turn of the 13th century, most trade was in the hands of foreigners - primarily Norwegians but also their distant relatives in the British Isles (mainly the Orkneys) - and few ships, if any, seem to have been owned solely by Icelanders (Líndal 1974). Little wood, inefficient tools, limited skills and several problems relating to the use of driftwood contributed to this situation. Nor is it likely that the profits from trade would cover the costs and huge risks involved for individual farmers in sailing in the turbulent waters around the island (Kristjánsson 1975, 199-201). Nevertheless, Iceland's two episcopal sees, some rich farmers and some of the king's administrative officials on the island owned shares in some ships, i.e. those who took part in the trade to and from the island, including the bishops and the archbishop in Trondheim, diversified their risk by sharing ownership of both ships and cargo (Þorsteinsson & Grímsdóttir 1989, 179-180).

In the first centuries after the Settlement, sailors would land their ships at various places along the coastline but in the 13th century records mention only ten harbours. This may be related to fewer ships and sailings from and to the island and the fact that Norse merchants would attempt to land at 'good' and known 'natural' harbours close to populated areas. Fairs were set up by the harbours for the convenience of farmers and traders. Trading mainly took the form of barter. Icelanders who travelled abroad had to carry goods with them as payment for their daily expenditure and to pay the landing tax in Norway, as will be discussed below. Some records indicate a sense of solidarity between the people of the two entities: for instance, in 1056, during a time of great hardship, the king sent four ships to Iceland carrying grain that was supposed to be sold at a reasonable price, and allowed poor Icelanders to travel abroad (Líndal 1974).

In Norway, villages where merchants and associates of the archbishop and the king took over the trade started to emerge in the 11th century. These provided better facilities for commerce and increased revenues. Bergen, on the west coast of Norway, became the main commercial centre of the Nordic countries (Líndal 1974). For instance, Norwegian merchants became Nordic Greenlanders' link with the outside world (Kjartansson 1996, 64), with contact throughout the Norse world. Part-time trading activities by the rich Icelandic farmers could not compete with this 'newly rich' trading class in Norway.
The small Icelandic domestic market, its almost total reliance on a single export product, vaðmál (homespun woollen cloth), transport difficulties and the high costs associated with the country's peripheral location contributed to the fact that commerce did not become a profession and no villages were created. Moreover, the reserves of silver brought by the settler population are thought to have been exhausted in the 11th century. Icelandic farmers also exported skin, sulphur and falcons (Karlsson 2009) and, in the early 13th century, Icelanders demanded that the Norwegian king ban the export of grain in times of domestic shortage (Þorsteinsson & Jónsson 1991). The main imports consisted of grain, in addition to timber, tar, canvas, wax and other goods. Grain and timber were of the greatest importance: domestic production could never satisfy internal demand and timber was essential for building fishing boats (and was thus probably the most important import) and houses (Karlsson 1975). Also, it has been argued that Iceland was an important transshipment centre for shipments of valuable goods such as ivory (walrus teeth) and skins from Greenland to Western Europe (Guðmundsson 2002, 42-80), while other scholars claim that there is limited evidence for Iceland as a transshipment centre (Vésteinsson 2005).

The importance of trade relations for Iceland is manifested in the much-quoted provision in the agreement between Iceland and the Norwegian king in 1262, later termed the Old Covenant, stating that the king guaranteed that six ships would sail from Norway to Iceland in the following two summers and, thereafter, as many ships would sail to Iceland as the king and leading Icelandic farmers thought appropriate. This meant that there were twelve ships sailing between the two countries, six from Iceland in the summer and six to Iceland, where they remained throughout the winter (since sailing to Iceland was only possible during the summer). It must be remembered that ships often failed to reach their destination due to difficult sailing conditions. For instance, records indicate that in the period 1262 to 1412, ships failed to reach Iceland entirely in five years, and in six years only one ship arrived in Iceland (Þorsteinsson & Grímsdóttir 1989, 168). The sailing clause in the Old Covenant was restated in 1302 and, again, nearly twenty years later, in a letter to the king, a request was made for two ships to be sent to the south of Iceland, two to the north, one to the West Fjords and one to the east coast (Þorsteinsson & Grímsdóttir 1989). Hence, inadequate transport affected the extent of exports and imports. One wonders whether the rich farmers and the bishops had only their own needs in mind when deciding on the number of ships to and from the island (Þorsteinsson 1964; Karlsson 1975, 19) and/or whether their priority was to secure communications and meet the needs of their colleagues abroad - as will be discussed below. Certainly, six ships could carry considerable amounts of Iceland's main medieval exports, vaðmál and stockfish (gutted and wind-dried fish) (Þorláksson 1991). Several other historians have also made an attempt to calculate the amount of goods that ships sailing to and from the island could carry. For instance, Karlsson (2009, 237-39) argues that six ships could carry the island's entire vaðmál production (not for domestic use) and that they could carry about 100 to 140 kilos of imported goods for every single household in the country.
Findings vary considerably but indicate that six ships might have been able to carry a considerable amount of stock that was, at least, 'enough' for the domestic consumption by rich farmers and bishops and even by the population at large (if the ships reached shore; for a good overview, see Karlsson 2009 and Þorláksson 1991).

Iceland's exports underwent changes reflecting developments in trade and fashion in the rest of Europe. Clothes made with felted woollen cloth were an important export in the 11th and 12th centuries but around 1200 they dropped out completely, probably due to changes in fashion. In the 13th century, exports of skin were in decline due to the import of less costly skins from the East by the Hansa merchants (Karlsson 1975). During the early period, Iceland's exports reached England and Germany, though there is no mention of sailings by English and German merchants to Iceland. In the 13th century, profits from trade seem to have been in decline, which led to an economic downturn. This changed with a new main export commodity, a 'new' and larger market and better access to it (Þorsteinsson & Grímsdóttir 1989).

In the early 14th century, Iceland's exports underwent fundamental changes following a radical shift in trade in Western Europe. Stockfish and oil gradually took over from vaðmál as Iceland's main exports (Þorláksson 1991). Marine products became the country's main exports and have remained so ever since (Kristjánsson 1980; Jóhannesson 1965). Iceland became a full participant in fishing and maritime trading in the North Atlantic (Þór 2002). Vaðmál exports had been in decline for some time: textiles from Flanders, England and elsewhere had taken over the European market (Þorsteinsson & Grímsdóttir 1989). The traditional explanation refers to the relative decline in the importance of the Norwegian sea empire coinciding with population increases in Europe and the emergence of important cities on the Baltic Sea from Lübeck to Tallinn. Hansa merchants became prominent in the increased trade between the industrial regions, at the time, of Western Europe and the 'new' cities from the late 13th and early 14th century. They took the lead in the Baltic Sea trade. In the mid-14th century, the Hanseatic League was in control of most of the trade in Northern, northeastern and Western Europe, particularly in trade with fish and grain (Stabel 2001). Accordingly, the centre of trade shifted from the northwest to the Baltic Sea and Western Europe. The new Christian cities now under German control needed fish for the Christian fasts and oil (fish liver oil) to light up their dark streets. Already in the 12th century, German merchants did considerable trade with northern Norway in exchange for grain. Grain production declined significantly in Norway and its domestic yield was not sufficient to feed the increasing population. In 1343, Bergen became the location of one of the four main Hanseatic offices (Kontore), the others being in Bruges, London and Novgorod. By 1400, approximately 3,000 of the town's population of 14,000 were Germans (Gade 1951). Norway now depended on trade with the Hansa merchants and they received more favourable trade terms than merchants from other countries, particularly England. The Norwegian fishing industry and exports flourished and the centre of trade shifted from the settlements of the Northwest Atlantic, where little or no grain could be produced, eastwards to the Baltic Sea.
Norwegian engagement with England declined significantly (Nielssen 2001, 185-190). Gradually, the Hansa merchants took charge of most of the Norwegian trade, thanks to their superior coordination and greater financial resources and also their larger ships, which were essential for the massive volume of trade between Norway and the Hanseatic cities. However, they were not in full control. Norwegian merchants kept their status as the only ones to engage in regular trade with the islands to their west, including Iceland, up until the end of the 14th century (see Figures 1 and 2) (Þorsteinsson 1964).

Importantly, Norwegian merchants in Bergen controlled trade with Iceland and provided access for Icelandic exports to the Hanseatic and English markets (see Figure 2). The common explanation is that Iceland exported stockfish and oil to the Hanseatic League in exchange for grain. Stockfish had been exported from the country for a considerable time but now reached new heights with new market access and higher prices. This led partly to a change in employment, at least seasonally, and may have stimulated residence on the coast in the west and south of the country. New harbours were, at last, closer to rich fishing grounds (Þorsteinsson & Grímsdóttir 1989, 174). Foreign trade seems to have been particularly lively in the fifth decade of the 14th century, as more ships than before came to Iceland. High prices for stockfish probably contributed to the boom (Þorsteinsson & Jónsson 1991, 135-137). Those farmers who profited most from stockfish exports became richer than anyone had been before in the country. The general public probably experienced economic deprivation (due to colder weather conditions, reduction in the amount of cultivated land and increased taxation by big landowners, i.e. the rich farmers, the church and the crown) in the 13th and 14th century (Karlsson 1989, 203-205).

Figure 2 sources: Based on numerous accounts of trade, such as those in Þorláksson 1991 and Grímsdóttir 1989.

This growth most likely came to a halt in an epidemic, the Black Death (1349-50), which had devastating effects in Norway and elsewhere in Europe - though it did not reach Iceland, since ships failed to leave for the island during the plague. One third to half of the Norwegian population is estimated to have died. Bergen's merchants had difficulties in maintaining their former level of trade after the epidemic and, additionally, Norway's war with its Scandinavian neighbours affected its capacity to maintain its former level of trade with its tributaries. Shipping to Iceland went back to a level of activity similar to what it had been before 1340. The Norwegian king increased taxation in Iceland due to a steep fall in tax revenues from other parts of the kingdom. Exported goods from Iceland were now subject to a five per cent tariff and the six ships in regular sailings to and from the island had to provide the crown with a quarter of their space for goods. Moreover, the king started to rent out the tax province, Iceland, with taxation and obligations, for three continuous years at a time for a specific price. Most of the 'rent seneschals' were Icelandic but Bergen became Iceland's formal commercial and administrative centre in order to guarantee tax collection.
Nevertheless, the Norwegian kingdom experienced regular tax collection difficulties due to tax revolts and interruptions in sailings from the country (Þorsteinsson & Jónsson 1991, 141-142). Bergen provided Iceland with access to the outside world and its merchants had the king's backing in their attempt to create a centre for Norwegian trade and administrative functions, as Figure 2 demonstrates. In the period from 1284 to 1348, there were no restrictions on sailing by subjects of the Norwegian crown to Iceland (Þorsteinsson 1964). On the other hand, in the last decade of the 13th century, Germans had been forbidden to sail north of Bergen; early in the 14th century, this rule was extended to all foreigners and covered Iceland and other tax provinces of the Norwegian crown (Þorsteinsson & Jónsson 1991). In 1348, all seamen were forbidden to sail to Norwegian dependencies without special permission from the Norwegian king (Þorsteinsson 1964). Bergen was now the centre of trade with the Hansa towns: all stockfish from the Norwegian kingdom had to be exported through the city. Accordingly, Bergen's merchants had a monopoly on trade with Iceland until the last decade of the 14th century. Records indicate that Icelanders made three attempts to build ships and sail to Norway but all the ships were confiscated due to the monopoly. Nearly all, if not all, communications between Iceland and the outside world were via Bergen (Þorsteinsson & Jónsson 1991).

Þorláksson (1991 & 2001) agrees that Norwegian merchants were the only ones to provide Iceland with access to foreign markets (see Figures 1 and 2) but disputes the importance of stockfish and its export to the Hansa regions in the 14th century. First, he claims that vaðmál was still an important source of Icelandic export earnings and that as late as about 1390, Icelanders were fulfilling considerable demand for high-quality Icelandic vaðmál in Norway. Nor should exports of oil, sulphur and falcons be forgotten. Moreover, he argues that domestic population increases and changes in food consumption, and not external demand for stockfish, were the main reasons for fishing becoming important in the second half of the 13th century (Þorláksson 2003 and 2001). In fact, he claims that the stockfish export boom did not start until about 1330 and later, and came to a halt with the Black Death, and that it was not extensive in the late 14th century.

Second, Þorláksson (2001) argues that most of the stockfish from Iceland went, through Bergen, to England. At the beginning of the 14th century, the Bergen-England trade, including Icelandic stockfish at least from 1307, was conducted by Norwegians but later by both Hansa and English merchants, mainly the latter. He argues that there was considerable demand for Icelandic stockfish in England, and none at all in the Hansa region on the Continent, due to a different process of handling Icelandic stockfish, which was disliked on the Continent compared to stockfish from Norway, and also due to irregular sailings to Iceland. He claims that the increased (though irregular) numbers of Norwegian ships reaching Iceland from 1375 to 1392 were related to stockfish demand. Norwegian sailings to Iceland declined significantly after the failure of ten ships from Bergen to reach Iceland in 1392.
Fishing in Iceland continued to be a secondary activity of some farmers and their servants and was pursued mainly during the winter season, despite the importance of stockfish and the fact that Iceland did not share its waters with others. Norway had rich fishing grounds of its own and other countries' ships did not yet have the capacity to sail as far as Iceland (Þorsteinsson & Grímsdóttir 1990, 15). Fishermen used primitive open rowboats which could go only a few miles and return to land the same day (Gunnarsson 1980). Fishing was mainly inshore and in coastal waters; the richest identifiable fishing grounds were located off the southwestern coast of the island (Thoroddsen 1924). The population continued to be widely spread over the habitable part of the country. The largest inhabited places, the two bishops' seats, were far from the coast and thus did not develop into commercial centres. The fact that manors were spread out across the country and domestic travel was very difficult also did not help the small volume of internal trade to develop (Þorsteinsson & Jónsson 1991, 137).

Eggertsson (1996) argues that Iceland failed to develop a strong specialized fishing industry and relates this to the peripheral status of the island in the late Middle Ages within the Norwegian kingdom and, later, the Danish kingdom. He also links this to the crown's obstruction of cooperation between Icelanders and outsiders (i.e. the failure to adopt free trade) - and to some important distinctive domestic features: the king's unwillingness to invest substantial resources 'in isolated and distant Iceland' may suggest that 'the transaction costs of developing a strong presence there were thought to outweigh the benefits' (Eggertsson 1996, 6) and that the island was of marginal interest to the crown. Eggertsson's external explanations may be accurate: the Norwegian crown's failure to invest in the country's rich fishing grounds, together with the trade and sailing restrictions and taxation, imposed constraints on the development of a strong industrially based fisheries sector. However, he is in danger of overlooking conditions on the economically stagnant island in the 12th century - a remote island which no longer had its own shipping fleet and depended on the outside world for timber, fishing gear and technology. Its silver reserves were gone and it could only trade by barter. Moreover, European dynasties did not regard it as their role to invest in industries, and leading domestic farmers gave priority to agricultural production at the expense of the fishing industry during the period under review. Norwegian merchants, with the king's support, undertook the task (for their own benefit, of course) of providing market access for the country's exports.

Eggertsson touches on a critical feature of this small peripheral community when stating that the small scale of economic activity in Iceland demanded critical external inputs and international marketing services. These were not available domestically, but the Norwegian merchants and the king nevertheless provided a link to the outside world, i.e. input in the form of a shipping fleet for transport and market services for the Norse market and, later, the important German and English markets. Iceland's engagement with the Norwegian kingdom provided foreign contacts and access to export markets and transformed the country's economy from being almost totally based on farming to being partly based on fishing.
Marine products became the country's main exports and have remained so ever since - as has already been stated. To conclude, the Norwegian kingdom taxed Icelandic products when they reached Norwegian soil and, later, inland and, at times, placed various restrictions on trade with Icelanders. It did not build a decisive industrial base and commercial villages or centres in Iceland (as it did in Norway) or develop the country's domestic infrastructure. That all said, an important part of this picture of economic relations between the two entities is missing: the Norwegian link provided essential economic shelter through transportation and trade.

Norse influence in Iceland: Political shelter?

Formal relations with foreign authorities cannot be identified until the 11th century. They were mainly with the Norwegian crown and the church. For instance, Icelandic bishops, and those who were about to be consecrated as bishops, would represent their island/country and receive guidelines for their followers. This was the case with the first Icelandic bishop, who went to see the Pope and the Roman-German Emperor (Líndal 1974, 258). On the other hand, Þorsteinsson (1966, 148-49) argues that Icelanders were in direct political relations with the Norwegian king even from the time of the Settlement, evidence for this being the settlers' requests to him to solve disputes regarding the Settlement. Already in the 11th century, the Icelanders seem to have identified themselves as Icelanders, though this was most likely a term of reference for people from the country rather than for a nation (Jakobsson 2005). A clear distinction seems to have been drawn between those who lived on the island and those from abroad, though the legal status of Icelanders and foreigners was much the same during the Commonwealth era (Líndal 1974).

Iceland's first international agreement was made with the Norwegian king in 1022 and was updated twice, the second time in 1083. This agreement listed the rights of the king and his Norwegian associates in Iceland and the rights of Icelanders in Norway. Icelanders had two obligations under the agreement. First, those who travelled to Norway had to pay tax when they reached shore (some were exempted from it, including all those who were driven off course from Iceland to Norway). This tax was quite high, but varied from one period to another. Second, Icelanders in Norway had to be prepared to serve in the defence of Norway and the crown in the event of an invasion. Many Icelanders are recorded as having fought alongside the king (Líndal 1974, 221-222). On the other hand, Icelanders were assured safe travel within the Norwegian kingdom; this was not the case when they travelled to Ireland, Scotland, Denmark and France, where Icelandic ships are mentioned as having been confiscated, as Figure 3 indicates (Þorsteinsson 1964, 49; Karlsson 2009, 243). Þorsteinsson and Jónsson (1991, 50-59) mention that the agreement may be interpreted as 'a security union with Norway', without taking that assumption much further. The agreement indicates that Icelanders travelled somewhat during this period and passed through Norway on their way to the rest of Europe. They had to rely on Norwegian rules in their relations with the outside world and paid high taxes for the king's protection within his jurisdiction and, thus, accepted the agreement's obligations (see also Þorsteinsson 1964).

(Figure 3 sources: based on numerous accounts of the area under the control of the Norwegian king, such as those in Líndal 1978, 39 and Karlsson 2000, 104.)
Þorsteinsson (1966) argues that in the early 11th century, countries' boundaries in Northern Europe and by the North Sea became clearer. Accordingly, Icelanders had to make an agreement with a king in order to travel peacefully: the only option was to make an arrangement with the Norwegian king (see Figure 3). Icelanders most often had representatives at the Norwegian court, and Icelandic poets were even amongst the king's advisers. The remoteness of the island protected it from outside military attacks, since military ships of the era were not capable of reaching it. Also, the island did not have any domestic resources valuable enough to make a hostile takeover worthwhile. Hence, Icelanders did not have to organize defences as most other countries in Europe had to. This probably had a considerable effect on the governance of the country, which lacked an executive branch of government during the Commonwealth period (Kristjánsson 1975, 219-220). Moreover, the distance from Norway made it difficult for the king to exercise his influence on the island, though he made several successful attempts and was already very influential within the country before the submission to the crown (Karlsson 1975). For instance, towards the end of the 13th century, when the king called on Icelanders on the island to fight on his side in Norway, within their own kingdom, they resisted and got away with it - only a few are thought to have accepted the military call (Þorsteinsson & Líndal 1978).

That said, Norwegian kings are thought to have regarded it as their sole right to govern Norse settlements. Several records demonstrate their intention to take control of Iceland. For instance, the intense pressure by the Norwegian king to convert Iceland to Christianity is regarded as part of the crown's attempt to govern the country and the rest of the Norse world (Kristjánsson 1975, 219-220).

The attempt to guarantee regular shipping to Iceland was not only to do with the importance of trade: it was essential for all communication with the outside world. The king's influence and tax collection in the country were secured by regular contacts with the local ruling elite, the clergy was kept informed about the Roman Catholic Church's line through travel and other exchanges of information, and Icelanders took part in pilgrimages and crusades. Regular communications kept Iceland on the European map and ensured the continuation of Norwegian and other European influence on the small community (see Figures 1 and 2). The creation of a separate archbishopric in Trondheim in Norway in the mid-12th century - after it had been fully or partly located in Lund within the Danish kingdom at the turn of the century - ensured the continuation of Norwegian influence in Iceland. Moreover, Icelandic domestic affairs fell into line with developments on the Continent, i.e. greater independence of the Roman Church from the secular power and more demands by its servants and followers for, and establishment of, monasteries and nunneries (Kjartansson 1996, 67).

According to Jakobsson (2005), the decision to bring Iceland under the Norwegian kingdom may have had a considerable effect on Icelanders' tendency to identify themselves as a specific group distinct from others in the Nordic region.
He argues that it is problematic to speak of a specific Icelandic world view in the Middle Ages (1100-1400). Icelanders had adapted to the Roman Catholic world view, which emphasized the unity of all Christians. Also, Icelandic farmers' outlook centred chiefly on their own community, and personal connections mattered more than loyalty to a specific geographical governmental authority (Jakobsson 2005, 366; Júlíusson 2007, 6). That said, prestige within the Norwegian royal court was important for the Icelandic identity: '…the identity was realized there rather than back in Iceland. Tales of solidarity among Icelanders take place especially in Norway or in dealing with the Norwegian king. And it was not until Iceland became a part of the Norwegian kingdom that the Icelandic identity became assertive at home in connection with all sorts of opposition to changes and innovations by the foreign government' (Jakobsson 2005, 366). This was partly a break with the past, since the Norse world, including Iceland, drew its identity from ideas about a common origin of Scandinavians in Asia. This common origin united Scandinavians with the others outside the region. The Norse communities saw themselves as forming the northern part of Christendom (with its centre in the Mediterranean) - i.e. their importance was subsidiary to being part of a greater entity (Jakobsson 2005, 365).

Þorsteinsson and Jónsson (1991) argue that, according to its wording, the Old Covenant was a unilateral declaration by farmers (landowners) - not chieftains - made with the aim of solving serious disturbances in transport logistics, remedying the lack of an executive branch of government in the Commonwealth period and establishing peace after a period of civil war. They acknowledged the Norwegian king and agreed to pay taxes to the crown but otherwise held on to their powers and prestige. Hence, the crown would establish peace and guarantee regular sailings, and Icelanders would have a say in the making of their own laws (rules). In 1302, the Old Covenant was updated and new clauses were added to it (Þorsteinsson & Jónsson 1991, 119-130). Jakobsson (2007) supports the view that peace was the farmers' main aim in accepting the authority of the crown.

Moreover, Hálfdanarson (2001b) describes the Old Covenant as a Social Contract, according to Locke's definition, providing for the creation of an executive branch of government in Iceland. Together, the establishment of the Althing, which was a domestic decision to create a society (providing for a judiciary and unified internal law), and the Old Covenant, which was an agreement with the Norwegian king (on executive power), form the Icelandic Social Contract. New 'laws' (rules) were now 'given' by the king. They were subject to consent by the Althing, which could initiate legislation by petitioning the king and pass its own resolutions supplementing royal law (rules). The main function of the Althing continued to be, as before, judicial. Iceland depended on trade with Norway and local leaders were eager to accept royal patronage. They managed 'to fend off royal demands for military or financial contributions above the moderate regular tax' (Kjartansson 1996, 70) and were particularly successful in keeping royal and ecclesiastical appointments to themselves.
The peripheral location of the country within the kingdom allowed for a certain level of manipulation of the crown's power by the domestic elite (Júlíusson 2007, 6). Júlíusson (2007) argues that 'the local aristocracy developed strong feudalizing tendencies in fact, if not formally' and that these tendencies can be traced back to the period between the Settlement and the 11th century. Hence, Icelanders continued to exercise considerable autonomy, though 'the Norwegian king acquired property rights to all trade with Iceland, including the right to determine (at least formally) what foreign merchants could enter the trade' (Gunnarsson 1987, 74). In 1383 these rights passed to the Danish kingdom as it took Iceland over from Norway.

In fact, from 1319, Norwegian kings had become less engaged in Icelandic affairs than before. They were often based outside Norway, mainly in Sweden, and had more important issues on their agenda than this distant fief. For instance, trade with Iceland was affected by the ongoing clashes between and within the Nordic kingdoms in 1365-70. The Norwegian king may often have considered himself lucky if he received tax revenues from Iceland (Þorláksson 2003, 243). In 1380, the Norwegian and Danish kingdoms merged, which later led to the creation of the Kalmar Union (1397), including Sweden. The Danes gradually assumed the leading role in this union. Iceland was regarded as a peripheral Norwegian entity and, as such, was not part of any arrangements regarding the Kalmar Union or the earlier changes in royal authority in Scandinavia. Icelanders seem never to have had any concerns about the formal legal status of the country in relation to the three Nordic countries or any other countries (Þorsteinsson & Grímsdóttir 1989, 246).

Leading Icelandic farmers would sometimes write 'bills of rights' in connection with oaths of allegiance to a new king in the country (documents such as the Old Covenant of 1262-64 and 1302 and the Skálholt Agreement of 1375). These indicate that rich farmers were concerned with retaining their own freedom and previously arranged privileges. For instance, they demanded the right to approve all new taxation and official appointments by the king in the country. The same applied to their rights to have a say in law-making and judiciary procedures. They were not concerned with aspects of Icelandic-Norwegian relations such as kings' elections (they always swore allegiance to the elected king, who promised to respect their traditional rights) and the general status of Iceland within the kingdom. For instance, they seem either not to have interfered in changes to the governance of Norway or simply to have approved them (Þorsteinsson & Grímsdóttir 1989, 246-247).

In the Middle Ages, the relationship between Norway and Iceland was characterized by a typical dependence of the smaller entity, i.e. the peripheral island depended on the larger and more centrally located entity, Norway, which accords with the model in the small-state literature. Moreover, relations between the two entities bear the clear hallmark of the development of interactions between other entities (regions and states) in Europe: 'the Europe that was emerging generally favoured the institution of monarchical states as well as conversion to Christianity. Europe was represented by a collection of kings' (Le Goff 2005, 45). In other words, the relationship followed the development of the international system at the time.
In the first centuries, Iceland was formally politically separate but under strong formal and informal influence from the Norwegian kingdom regarding its decision-making; examples are the conversion to Christianity and the agreements on the rights of the king and his officials in Iceland and the rights and duties of Icelanders in Norway, e.g. regarding tax and military service. The kingdom could use its superiority, particularly its important role in keeping the remote island connected to the outside world, to demand certain obligations on the part of Iceland, though it had difficulty in enforcing them due to the peripheral location of the island. They were never imposed by military force.

The reason why the Norwegian kingdom did not interfere to a greater extent in Iceland's domestic and external affairs, and did not attempt greater land expansion, is probably the continuous military conflicts (civil war) in which Norway was involved from around 1130 to the first decades of the 13th century. In a brief period of calm, the kingdom attempted to gain a greater say in Iceland just after 1170. King Hákon Hákonarson (Hákon gamli), who ruled Norway in 1217-63, united the kingdom and attempted to gain control over the whole Norse world which had been populated during the Viking Age. By 1240, the king could turn his attention to foreign affairs, particularly trade with England, and territorial expansion. As part of the attempt to increase trade with England, he attempted to increase his power over the Orkneys, the Hebrides, the Isle of Man and the Faroe Islands and to gain control of Iceland and Greenland. He gained the support of the church and the archbishop in Trondheim. The aim was to create a Norwegian royal and ecclesiastical power (Stefánsson 1975, 139). The policy of the church was to support the king's attempts to create peace within his borders, and to hold that Icelanders, like all other subjects within these borders, should recognise the authority of the king (Jakobsson 2007, 153). In the 13th century, the archbishop had slowly but steadily become more influential within the Icelandic church, e.g. through the appointment of foreign bishops. Later, in the mid-14th century, bishops were chosen by the Pope, sometimes in consultation with the king. In the period under study, most of these bishops were of foreign origin, mainly Norwegians and Danes. The Pope and the church, in general, became more influential in Iceland, as elsewhere in Europe (Þorsteinsson & Grímsdóttir 1990, 33-39), and this was accompanied by greater foreign influence.

Many scholars have attempted to explain why Icelanders decided to become part of the Norwegian kingdom in the 13th century. The aim of this paper is not to answer that question or evaluate these assumptions; rather, it is to evaluate whether or not the small Icelandic community followed a typical trend as defined in the small-state literature regarding alliance formation and, more precisely, seeking shelter with a larger neighbour. In the case of Iceland, the Norwegian kingdom had always been influential in the country. At the same time as Iceland and Greenland became parts of the Norwegian kingdom, the southern part of the Norse world, the Hebrides and the Isle of Man, became parts of Scotland.
Norwegian sea power north of the English Channel was in decline, due to increased competition for influence in the northern parts of the British Isles (Líndal 1974). That said, no one challenged Norwegian sea power in the North Atlantic - the northwest corner of its reach. The kingdom continued to dominate those waters for some time. Iceland's distant geographical location did not prevent the country from following a trend similar to that followed by other European countries at the time as regards the formation of dynastic and larger states.

To what extent the local elite regarded it as a cost to transfer authority to the king is difficult to assess, particularly in the light of its willingness to accept royal appointments and other privileges which were of great social and economic benefit to it. Records indicate that there was considerable resistance to the crown's efforts to influence decision-making in Iceland, beginning at least as early as the conversion to Christianity and continuing throughout the Middle Ages - i.e. both before and after the formal transfer of power to the king. However, this may be related more to attempts by leading farmers and bishops to hold on to their own power, and even increase their say in domestic decision-making, than to resistance to the transfer of authority to an external actor (see, for instance, Líndal 1964). Moreover, an alignment with the kingdom secured Icelanders the right to travel in Norway and a link to the outside world. The king would guarantee 'safe travel' within the kingdom and 'regular access' to the important European markets and spiritual centres (see Figures 1, 2 and 3), while in return, Icelanders accepted certain obligations, such as his taxation and, at times, trade restrictions. An alignment with the king's mission to unite the Norse world under a single ruler was, politically, regarded by some as being in Iceland's interests. It provided political shelter for the peripheral entity and temporarily stopped internal violence and dissension. Interestingly, the model in the small-state literature misses the importance of shelter against domestic forces. Accordingly, the model needs to address how external actors may solve the problem of internal order.

The importance of accommodation: Societal shelter?

The first settlers in Iceland in the 9th and 10th centuries were of a more diverse nature than has sometimes been admitted in the past. Genetic evidence indicates an overall proportion of British Isles ancestry of about 40 per cent, with a great discrepancy between the female (62 per cent) and male (25-30 per cent) components. Hence, only about 60 per cent of the genetic origin of the present Icelandic nation was originally Scandinavian (Helgason et al. 2009). The Norse community was spread across northwestern Europe and mixed with the local population. Norse colonies in the British Isles were suffering serious setbacks at the time of the Icelandic Settlement (Kjartansson 1996, 62). The predominantly Norse settlers brought slaves with them from these earlier-established colonies, and it was no doubt partly due to a degree of cultural assimilation in these regions that Norse culture became established as the dominant norm in Iceland. For instance, the same language, 'Viking Age Norse', was spoken in Scandinavia, the Faroe Islands, Shetland, the Orkneys and in most parts of northern Scotland.
It was also spoken in various parts of England, Normandy and Ireland and in Garðaríki (the 'realm of towns'), a chain of Norse settlements along the Volkhov River in Russia, as shown in Figure 4 (Líndal 1974). The common language, heritage and family relations ensured the continuation of trade and cultural exchange. Mercenary services and trading had been part of the Viking/Norse expeditions (Le Goff 2005, 43) and the Settlement of Iceland has to be viewed in the context of the general Norse expansion of the period (Kjartansson 1996, 62). The Icelandic emigrants moved further on and established a permanent settlement in Greenland and attempted a settlement in Newfoundland around 1000, after Iceland is thought to have been fully populated with 15,000-30,000 inhabitants. There are great uncertainties about the exact size of the population, though it is most often estimated at 30,000 to 60,000 during the Commonwealth era and in the 14th century (see the detailed discussion in Karlsson 2009).

Icelanders, particularly wealthy farmers and their sons, travelled widely, especially to Norway and other parts of the Norse world (and also to the Baltic Sea and France), but they did not represent the island population, or the region/country, as a group in the first half of the period under study, as has already been stated. They more likely met the Norwegian king, as they seem to have done quite often, and the kings of Denmark and Sweden, as individuals acting on their own behalf (Líndal 1974). On the other hand, Icelanders could not rely on regular communications by sea, and the long distance to the island set its mark on the lifestyle and the government of the island (Þorsteinsson & Grímsdóttir 1989, 168).

(Figure 4 sources: based on numerous accounts of communication between Iceland and the outside world, such as those in Þorsteinsson & Líndal 1978, 39; Le Goff 2005, 43-44.)

Þorláksson (1979) and Karlsson (2009) argue that Iceland's trade was mainly conducted in order to increase the status of its upper class and not to ensure a supply of necessary goods for consumption or to gain profit. Rich farmers and chieftains sought extravagant foreign goods in order to distinguish themselves from the general public. Foreign trade was also important in order to serve God, i.e. external goods had to be imported to conduct divine services (one function of which was also to impress; Vésteinsson 2000, 59).

As Figure 5 demonstrates, the conversion from paganism to Christianity in the 10th and 11th centuries ensured a continuation of Norse influence in Iceland and also opened the country up to the broader European Christian influence that was sweeping through the region from the Continent and the British Isles. Immediately after the conversion, 'visiting bishops' sent by the Norwegian king and the archbishop tried to ensure that Iceland became more closely engaged with developments on the Continent. The conversion to Christianity made travel considerably easier, since Icelanders, like other people from the Northern countries, were no longer looked upon as pagans and hostile barbarians: they were now part of the Christian community. Christianity was an important force in establishing common customs, and was a unifying factor in Europe (Le Goff 2005).
The close connection between the regions of northwestern Europe is manifested in the broad ecclesiastical jurisdiction of the archbishop in Trondheim. This included not only Iceland but also Greenland, the Faroe Islands, the Isle of Man, the Orkneys and the Hebrides, and illustrates the cultural similarities of the inhabitants; it was the policy of the church and king that each archbishop should cover a region throughout which there was a single language and a single culture.

Christianity had several important effects in Iceland, as elsewhere in the Nordic region. The kings became more powerful, permanent Christian institutions were created with long-lasting effect and the career of a scholar became, for the first time, an occupation in the region. Also, the Nordic region established permanent and more extensive relations with the outside world. This southern and continental Christian influence was particularly important, since the region had been beyond the sphere of influence of the Roman Empire. Western European civilization had finally reached this northwestern outpost (see Figure 5). The 12th century marked an inclusion in Western Christian literate civilization. Iceland now had a clergy with some Latin education, and part of its elite was educated and kept in touch with European scholars (Kjartansson 1996, 66). The pioneer figures in education in Iceland had themselves all been educated on the European continent - mostly in the same region, i.e. northwestern Germany. This may be related to the location of the archbishopric over Norway and Iceland in Hamburg-Bremen and the important trade links between Iceland and Norway, on the one hand, and the German regions on the other (Líndal 1974, 255). The first Icelandic bishop was consecrated in the mid-11th century in Hamburg-Bremen and records indicate that, at least, the first Icelandic bishoprics ran schools (Hugason 1997). To travel to Saxony, up the river Weser, was easier than to many other parts of Europe. This educational link provided a long-lasting connection between the Icelandic church and the Continental church. Moreover, it separated the Icelandic church from the English church, by which it had been influenced in the initial stages of Christianity in the country and with which it had since maintained some relations. Essentially, commercial exchanges between the North Sea and Baltic regions 'entailed an exchange of ideas, art objects and cultural influences' (Brand & Müller 2007, 7), as shown in Figure 6. This was nowhere more visible than in the ports (Brand & Müller 2007, 7), such as Bergen. Iceland became part of this picture through the city and its trade with the Baltic Sea.

Jakobsson (2009) argues that, during the Middle Ages, Icelanders regarded foreign travel as having three main purposes and benefits: association with noble men, adaptation to their manners and bringing back tokens of this adaptation. Travel had an important educational dimension, i.e. a visit to noble men and learning their manners was regarded as an education in itself (Jakobsson 2009). He argues that descriptions of these travels reflect the dichotomy of Icelanders' home base on the periphery and their spiritual centre in the Mediterranean, after the conversion to Christianity: 'The view that a culture normally regards itself as the world's centre does not hold true for Iceland during the Middle Ages' (Jakobsson 2009, 923).
Icelanders saw themselves as 'belonging to a larger unity with all its benefits and constraints, the most important drawback being that Iceland was seen as peripheral. … The distance of Iceland from the political, cultural and economic centres had to be compensated for' (Jakobsson 2009, 923). During the Commonwealth era, trade was probably mainly conducted to serve social purposes along with the economic aims of the elite (Hjaltalín 2004, 222; Júlíusson 2007, 6).

(Figure sources: based on numerous accounts of communication between Iceland and the outside world, such as those in Þorsteinsson & Grímsdóttir 1989; Þorláksson 1991.)

Furthermore, and importantly, the development of Icelandic literature (the Sagas, the Legendary Sagas and the Sagas of Chivalry) followed the developmental trends in European literature throughout the period under study. This indicates a substantial and lasting link with educational centres in Europe - despite the peripheral location of the country. Without their influences, Icelanders would not have been able to write their notable works. Importantly, the Icelandic scribes had customers abroad and, thus, their work became a part of the islanders' exports. Also, Icelandic schools seem to have been under strong external influence and taught subjects similar to those taught elsewhere in Europe (Kristjánsson 1975, 147-221). From the Settlement up to the 14th century, there existed close literary ties between Iceland and Norway. From the beginning, Icelandic skalds were frequent visitors at Norwegian courts; later, Icelandic writers would export their written works to Norway and, in the 14th century, Icelandic scribes frequently copied sagas for export to Norway: 'Written texts and literary influences thus flowed freely between the countries' (Kjartansson 1996, 75). Moreover, art work on the island, which was mainly Nordic and partly Celtic in the beginning, took shape, with Christianity, under influence from traditions from all over Europe (east, west and south) - not from any single area in particular (Björnsson 1975, 281). As Figures 4, 5 and 6 show, Iceland was not at the centre of Europe but could not escape its highway of cultural transfer.

Records indicate that the local Icelandic elite was very much concerned with keeping in contact with its counterparts and noble men in the Norse world and its surroundings throughout the period under study. It was eager to adapt to their way of life and receive the same status and privileges. Societal engagement through reliable shipping contact was essential in order to accomplish these aims. Securing regular exchanges with the outside world was an essential part of keeping in touch with developments in other parts of the Norse and Christian world. Securing societal shelter may have been an important feature of external affairs during the Middle Ages.

Conclusion

Small states form alliances in order to compensate for their lack of defence capabilities and economic dependence, according to the literature. Geographical location is of importance, since the nearer a small state is to a great power and potential confrontations in the international system, the more it is in need of an ally. The small-state literature and its alliance and shelter concepts do not, in general, take into account the importance of cultural communication and social interaction between states.
Our analysis raises the question of whether a peripheral geographical location may, in fact, put pressure on a small entity to engage in close relations with its closest neighbours in order to maintain the social engagement of its inhabitants with the outside world. Social engagement with 'outsiders' happens automatically in centrally located small entities, such as the medieval European city-states. This is not the case with remote entities. They have to take precautions if they are not to be left isolated. Isolation will, at the very least, have a considerable impact on such entities' elites, i.e. their living standards, socialization, cultural communication and education.
Metabolic engineering of Rhodotorula toruloides for resveratrol production

Background: Resveratrol is a plant-derived phenylpropanoid with diverse biological activities and pharmacological applications. Plant-based extraction could not satisfy ever-increasing market demand, while chemical synthesis is impeded by the existence of toxic impurities. Microbial production of resveratrol offers a promising alternative to plant- and chemical-based processes. The non-conventional oleaginous yeast Rhodotorula toruloides is a potential workhorse for the production of resveratrol, endowed with an efficient and intrinsic bifunctional phenylalanine/tyrosine ammonia-lyase (RtPAL) and a malonyl-CoA pool, which may facilitate resveratrol synthesis when properly rewired.

Results: Resveratrol showed substantial stability and did not affect R. toruloides growth during yeast cultivation in flasks. The heterologous resveratrol biosynthesis pathway was established by introducing the 4-coumaroyl-CoA ligase (At4CL) and the stilbene synthase (VlSTS) from Arabidopsis thaliana and Vitis labrusca, respectively. Next, resveratrol production was increased by 634% through employing the cinnamate-4-hydroxylase from A. thaliana (AtC4H), the fused protein At4CL::VlSTS, the cytochrome P450 reductase 2 from A. thaliana (AtATR2) and the endogenous cytochrome B5 of R. toruloides (RtCYB5). Then, the related endogenous pathways were optimized to effect a further 60% increase. Finally, the engineered strain produced a maximum titer of 125.2 mg/L resveratrol in YPD medium.

Conclusion: The non-conventional oleaginous yeast R. toruloides was engineered for the first time to produce resveratrol. Protein fusion, co-factor channeling, and ARO4 and ARO7 overexpression were efficient for improving resveratrol production. The results demonstrated the potential of R. toruloides for resveratrol and other phenylpropanoid production.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12934-022-02006-w.

Introduction

Resveratrol possesses excellent biological activities and pharmacological properties, and has extensive applications in the chemical, pharmaceutical, food, and cosmetic industries [1,2]. However, plant-based extraction could not satisfy ever-increasing market demand, while chemical synthesis is impeded, for industrial-scale applications, by the toxic impurities generated during multiple-step complex reactions [3,4]. Microbial cell factories offer an alternative approach for resveratrol production, with advantages such as eco-compatibility and high stereo-selectivity [5].

Microbial production of resveratrol can be achieved through the shikimate and aromatic amino acid (AAA) pathway, either by recruiting cinnamate-4-hydroxylase (C4H), 4-coumaroyl-CoA ligase (4CL), and stilbene synthase (STS) with L-phenylalanine (L-Phe) as the direct precursor, or by introducing 4CL and STS with L-tyrosine (L-Tyr) as the starter. To date, microbes like Escherichia coli, Yarrowia lipolytica and Saccharomyces cerevisiae have been intensively explored for resveratrol production [5]. For ex novo production, the best recombinant E. coli produced 2.3 g/L resveratrol from p-coumaric acid [6], while for de novo biosynthesis, 0.8 g/L and 22.5 g/L resveratrol were obtained with S. cerevisiae and Y. lipolytica in bench-scale production, respectively [7,8].
The non-conventional oleaginous yeast R. toruloides is attractive for producing various value-added chemicals, including oleochemicals, terpenoids and sugar alcohols, from low-cost feedstock [9-11]. R. toruloides might also be a potential workhorse for aromatic compounds, as it is endowed with an efficient and intrinsic bifunctional RtPAL. As an oleaginous yeast, it should also provide substantial malonyl-CoA and erythrose-4-phosphate (E4P) for aromatic compound biosynthesis, since these are highly required for fatty acid biosynthesis and NADPH generation during lipid accumulation. Importantly, the RtPAL has been demonstrated efficient in catalyzing L-Phe to trans-cinnamic acid (t-CA), and L-Tyr to p-coumaric acid (p-CA), to support resveratrol production [12]. To date, no attempts have been made to produce phenylpropanoid compounds, such as resveratrol, in R. toruloides.

To tap its potential for aromatic compound production, the oleaginous yeast R. toruloides was engineered to produce resveratrol as an example by introducing At4CL and VlSTS (Fig. 1). Subsequently, the production was significantly increased via critical gene overexpression, protein fusion, and cofactor channeling. Finally, the maximum titer was improved to 125.2 mg/L. The present study demonstrated that R. toruloides could be explored as a platform for phenylpropanoid bioproduction.

Plasmid construction

The heterologous genes were codon-optimized according to the R. toruloides preference and synthesized by Synbio Technologies (Suzhou, P. R. China). All vectors used in this study were derived from the binary vector pZPK [13]. The DNA ligation kit (Takara) and In-Fusion HD cloning kit (Takara) were employed for plasmid construction, following the manufacturers' instructions. PCR-based mutation was used for obtaining protein mutants. All the vectors and primers used in this study are summarized in Additional file 1: Tables S1 and S2, respectively.

Transformation and verification

Agrobacterium-mediated transformation (ATMT) was performed following a modified version of the protocol reported by Lin et al. [13]. Briefly, the correct binary vector was transformed into A. tumefaciens AGL1 cells by electroporation, and transformants were selected on LB agar plates containing 50 μg/mL kanamycin. The A. tumefaciens cells carrying the binary vectors and the R. toruloides cells were cultivated at 28 °C until OD600 reached 2. Both cell types were washed twice and diluted to OD600 = 0.4-0.6 with distilled water. The cell suspensions were mixed at a ratio of 1:1 (v/v). Then, 200 μL of the mixture was spread onto filter paper placed on an IM plate and incubated at 25 °C for 36 h. Subsequently, the filter paper was transferred onto a selection YPD plate (supplemented with cefotaxime and the corresponding antibiotic, nourseothricin or hygromycin B) for screening transformants harboring the Ntc or Hyg resistance markers, and incubated at 30 °C until colonies appeared. Transformants were randomly selected and streaked onto selection plates for five generations to certify their stability.

Cultivation in shake flask

R. toruloides was seeded into 50 mL test tubes containing 5 mL YPD liquid medium supplemented with 50 μg/mL antibiotics if needed, and cultivated at 28 °C, 180 rpm for 48 h. Then, the seed cultures were inoculated into 50 mL medium at an initial OD600 = 0.5 in 250 mL Erlenmeyer flasks and grown at 28 °C, 180 rpm for 96 h. Unless otherwise stated, fermentations in 250 mL Erlenmeyer flasks were loaded with 50 mL YPD medium.
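The seeding step above boils down to a simple dilution calculation. The Python sketch below illustrates the arithmetic; the helper function and the example seed-culture OD600 of 8.0 are hypothetical (the text specifies only the target OD600 of 0.5 and the 50 mL working volume), so this is an illustration, not part of the published protocol.

```python
# Illustrative dilution arithmetic for the inoculation step; the helper and
# the example seed OD are hypothetical, not part of the published protocol.

def inoculum_volume_ml(od_seed: float, od_target: float, v_culture_ml: float) -> float:
    """Seed-culture volume needed for the main culture to start at od_target.

    Assumes OD600 scales linearly with cell density (valid in the linear
    range of the spectrophotometer) and that the seed volume is small
    relative to the final culture volume.
    """
    if od_seed <= od_target:
        raise ValueError("seed culture must be denser than the target OD")
    return od_target * v_culture_ml / od_seed

# Example: a 48 h seed culture at an assumed OD600 of 8.0, inoculated into
# 50 mL YPD to reach the initial OD600 = 0.5 used in the flask cultivations.
v = inoculum_volume_ml(od_seed=8.0, od_target=0.5, v_culture_ml=50.0)
print(f"add {v:.2f} mL of seed culture to 50 mL YPD")  # ~3.13 mL
```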
To test its stability during yeast fermentation, 0.5 mM resveratrol was added to replace the glucose in the YPD medium.

Analytical methods

The cell density was measured with a UV-Vis spectrophotometer EVOLUTION 220 (Thermo Fisher Scientific, USA). D-glucose was quantified by the SBA-40C biosensor (Shandong Province Academy of Sciences, Jinan, China). The resveratrol production capacity of the transformants was analyzed in terms of maximum and average titers, since the ATMT strategy leads to random integration in the genome [14]. The analysis of resveratrol and p-coumaric acid was performed as described by Wang et al. [16]. 3 mL of fermentation sample was mixed with 3 mL of ethyl acetate, vortexed thoroughly, and centrifuged at 12,000 rpm for 5 min at 4 °C. The supernatant was dried with a RapidVap (Labconco, USA) at room temperature, re-dissolved in 300 μL acetonitrile, and filtered through a 0.22 μm membrane before high-performance liquid chromatography (HPLC) analysis. The Shimadzu LC-2030 PLUS HPLC system was equipped with a Waters T-nature C18 column (4.6 × 250 mm, 5 μm); detection was at 306 nm under isocratic elution with 65% aqueous acetic acid (1%) and 35% acetonitrile, with retention times of 4.3 min (p-coumaric acid) and 5.9 min (resveratrol). The column working temperature was kept at 35 °C, and the injection volume was 5 μL at a flow rate of 1.0 mL/min.

Results and discussion

Establishing the resveratrol biosynthesis baseline in R. toruloides

R. toruloides is endowed with a versatile metabolic capability and a wide feedstock spectrum; in particular, it can efficiently assimilate the resveratrol precursor p-CA [10,17]. To investigate the feasibility of recruiting the bifunctional RtPAL for biosynthesizing resveratrol, the stability of resveratrol was first tested during fermentation with R. toruloides. Thus, resveratrol was used as the sole carbon source, replacing the glucose in YPD (Fig. 2). Resveratrol showed no obvious decrease during the 120 h fermentation, demonstrating that R. toruloides does not degrade resveratrol (Fig. 2a). Then, the influence of resveratrol on the growth of R. toruloides was investigated. Cell growth was not affected when 500 mg/L resveratrol was added (Fig. 2b). The above results indicated that it is possible to harness the RtPAL for biosynthesizing resveratrol in R. toruloides.

The At4CL and grape-derived STS have been extensively utilized for the heterologous production of resveratrol [12]. Here, the two essential enzymes At4CL and VlSTS, linked by a P2A peptide, were introduced into R. toruloides under the control of the pXYL promoter. The resulting strains (MY11) harboring At4CL and VlSTS produced resveratrol with an average titer of 8.7 mg/L at 96 h (Fig. 2c). Since resveratrol can also be produced from L-Phe by RtPAL in R. toruloides, a truncated A. thaliana C4H (in which the N-terminal membrane anchor region, amino acid residues 1-22, was removed to generate AttC4H) was subsequently introduced, which had proved beneficial in supporting resveratrol biosynthesis in other microbial hosts [18] (Fig. 2c). By simultaneously introducing AttC4H, At4CL and VlSTS, the resulting average resveratrol titer was increased by 176% in strain group MY21 (24.1 mg/L) (Fig. 2c). The results implied that a synergy between the L-Tyr- and L-Phe-dependent routes might exist, as reported in S. cerevisiae, where the L-Phe and L-Tyr routes were combined for producing aromatic chemicals [19].
The results here also indicated that the L-Phe-based resveratrol biosynthesis route was more efficient than the L-Tyr-based one in R. toruloides.

Enhancing resveratrol production via protein fusion and improved P450 activity

The resveratrol biosynthesis pathway involves two requisite but unstable intermediates, p-CA and p-coumaroyl-CoA. Protein fusion is a common strategy to facilitate substrate trafficking, avoid metabolic flux leakage, and improve enzymatic efficiency [20,21]. The fusion protein 4CL::STS has been reported to improve the efficiency of substrate delivery and thereby support resveratrol production [22,23]. Therefore, AtC4H and the fusion protein At4CL::VlSTS (linked by Gly-Ser-Gly) were introduced into R. toruloides NP11. The resulting strain group MY22 obtained 29.0 mg/L of resveratrol on average, a 20% increase compared with the independent expression of the enzymes in strain group MY21 (Fig. 2c).

Fig. 2 Establishing and optimizing the heterologous resveratrol biosynthesis pathway. a Degradation assays for resveratrol in strain R. toruloides NP11; 0.5 mM (114.13 mg/L) resveratrol was added as the single carbon source. The inoculated group is shown in yellow, and the control group in blue. b Toxicity tolerance test for resveratrol in strain R. toruloides NP11; 100 and 500 mg/L resveratrol were added to the YPD medium. c Resveratrol production in engineered strains after 96 h cultivation in YPD medium. The population performance of each engineered strain was quantified using a violin plot. The resveratrol titer of each transformant is shown as a circle, and the grey outline represents the density. The black line presents the mean value of each transformant group. Statistical significance was analyzed using a two-tailed unpaired t-test (*p < 0.05, **p < 0.01, ***p < 0.001).

The heterologous expression of a plant-originated pathway may function sub-optimally due to unsuitable cofactors, as in the case of microbial production of resveratrol [5]. In particular, the AtC4H employed in the resveratrol synthesis pathway is a membrane-associated plant-derived P450 enzyme, whose heterologous expression may suffer from insufficient supply of the cofactor NADPH [24]. Additionally, as a heme-thiolate protein, the plant-derived cytochrome P450 monooxygenase AtC4H also requires a cytochrome P450 redox partner [24,25]. It has been reported that a decline in the catalytic activity of P450 is caused by inadequate and inefficient cofactors, which limits resveratrol overproduction [7,19]. Accordingly, the low activity of AtC4H may need to be remedied by increasing the electron transfer efficiency. Thus, the P450-mediated redox partner AtATR2 was introduced and the endogenous cytochrome B5 (RtCYB5), a heme protein, was overexpressed in strain MY22-No.29 (one of the most efficient producers in group MY22) to generate strain group MY23. The average resveratrol production of 30 transformants in the resulting strain group MY23 was improved to 64.1 mg/L, a 121% increase compared to MY22 (Fig. 2c). As anticipated, accelerating the catalytic cycle of P450 effectively increased resveratrol production. The result was consistent with a previous report in which resveratrol production by S. cerevisiae also increased by about 150% via enhancement of P450 activity [7].
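The group comparisons above (e.g. the 121% increase from MY22 to MY23) rest on mean titers and two-tailed unpaired t-tests, as noted in the Fig. 2 caption. A minimal sketch of that calculation in Python follows; the replicate titers are invented placeholders chosen only so that their means resemble the reported group means, since the study's per-transformant raw data are not reproduced here.

```python
# Illustrative only: replicate titers below are hypothetical placeholders
# resembling the reported group means (29.0 and 64.1 mg/L); they are not
# the study's measured data.
from statistics import mean
from scipy import stats

my22 = [27.5, 29.8, 29.7]  # hypothetical resveratrol titers, mg/L
my23 = [62.0, 66.3, 64.0]  # hypothetical resveratrol titers, mg/L

t_stat, p_value = stats.ttest_ind(my22, my23)  # two-tailed, unpaired t-test
pct_increase = (mean(my23) - mean(my22)) / mean(my22) * 100

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean increase: {pct_increase:.0f}%")  # ~121%, matching the text
```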
Validating the critical steps in the shikimic acid and AAA pathways

Owing to the multibranch, multistep metabolic pathway, microbial overproduction of plant secondary metabolites is challenging [26]. In this case, critical enzymes like shikimate kinase, chorismate synthase ARO2, prephenate dehydratase PHA2, prephenate dehydrogenase TYR1, and aromatic amino acid aminotransferase I ARO8 in the shikimate acid and AAA pathways have been reported as potential limiting steps for further boosting resveratrol production (Fig. 3a) [18]. A sophisticated and strict metabolic network regulates the biosynthesis pathway of aromatics, especially the feedback inhibition of aromatic amino acids on ARO4 (the first enzyme of the shikimic acid pathway) and ARO7 (the branch-point enzyme of the AAA pathway) [11,27,28].

First, the potential mutation sites of Aro4p and Aro7p in R. toruloides were identified by pairwise sequence alignment with their counterparts in S. cerevisiae and Y. lipolytica. Then, single-point mutations were introduced into the wild-type proteins to obtain the feedback-insensitive mutant enzymes RtARO4 K227L and RtARO7 G153S. Subsequently, plasmids harboring combinations of the wild-type RtARO4 and RtARO7 and the feedback-insensitive mutants RtARO4 K227L and RtARO7 G153S were constructed (Fig. 3b). Next, the above four recombinant plasmids were introduced into strain MY22-No.29, resulting in the engineered strain groups MY31, MY32, MY33, and MY34 (Fig. 3b), which showed sharp increases in the average production of resveratrol of 233% (96.5 mg/L), 137% (68.7 mg/L), 138% (68.9 mg/L) and 78% (51.4 mg/L), respectively, in comparison with the average production of the parental strain group MY22. Interestingly, the production capacity of strain group MY31, which carried the wild-type RtARO4 and RtARO7, was significantly higher (p < 0.05) than that of strain group MY34, which bore the feedback-insensitive mutants RtARO4 K227L and RtARO7 G153S.

The wild-type RtARO4 and RtARO7 were also overexpressed in strain MY23-No.26 (the highest-yielding transformant in group MY23) to form the resulting strain group MY41, which produced an average of 102.6 mg/L resveratrol. Likewise, the mutants RtARO4 K227L and RtARO7 G153S were introduced into strain MY23-No.26 to obtain strain group MY42, whose average resveratrol titer reached 76.1 mg/L (Fig. 3b). The results showed that relieving feedback inhibition could increase resveratrol production, while raising the expression of RtARO4 and RtARO7 showed a more positive effect on resveratrol overproduction (125.2 mg/L). Although it may seem counterintuitive, this inconsistency may be due to the reasons listed below. (1) The low accumulation of 3-deoxy-arabino-heptulosonate-7-phosphate (DAHP) and aromatic amino acids was insufficient to trigger concentration-dependent negative feedback inhibition. (2) The catalytic activity of RtARO4 K227L and RtARO7 G153S could not surpass that of the wild types after introduction of the point mutation at the regulatory site. (3) Due to the limitations of the genetic manipulation technique, the interference caused by background expression of the endogenous RtARO4 and RtARO7 could not be avoided.

Furthermore, previous research indicated that the shikimate kinase AroL and the chorismate synthase ARO2 might be limiting factors in the shikimate pathway [29,30]. Thus, the heterologous EcAroL from E. coli and the endogenous RtARO2 were overexpressed in MY41-No.41, respectively.
However, the resveratrol production in the resulting strain groups MY51 and MY52 was decreased (Fig. 3c). Likewise, the overexpression of the potential critical enzymes, including prephenate dehydratase RtPHA2, prephenate dehydrogenase RtTYR1, and aromatic amino acid aminotransferase I RtARO8, also resulted in decreased production of resveratrol (Fig. 3c). Unexpectedly, overexpression of seven combinations of the above five genes showed a significant adverse effect on resveratrol production (p < 0.05) (Fig. 3d). Clearly, these results were quite beyond anticipation, and the possible explanations are as follows: (1) There might be a still-unclear and strict regulatory system in R. toruloides that counteracts the positive effect of simply increasing expression levels on resveratrol production, for example, regulation of enzyme catalytic activity based on substrate concentration [19,30]. (2) The current ...

The effects of cerulenin on resveratrol production

Generally, malonyl-CoA is considered the rate-limiting factor in resveratrol synthesis, since each molecule of resveratrol consumes three molecules of malonyl-CoA [31,32]. In an oleaginous yeast, there might be even stronger competition for malonyl-CoA between the biosynthesis of resveratrol and that of lipids [33]. Therefore, cerulenin, an efficient FAS inhibitor, was added to determine whether malonyl-CoA was the bottleneck in resveratrol biosynthesis at the current stage [34] (Fig. 4a). The highest resveratrol-producing strain, MY41-No.41, was cultivated with different concentrations of cerulenin (0, 10, 30 and 50 μM) supplemented into the medium after 24 h of incubation (OD600 = 15-20). As shown in Fig. 4b, strain MY41-No.41 produced 125.2 mg/L resveratrol without the addition of cerulenin, which was significantly higher than the titers obtained with the addition of cerulenin (111.6 mg/L with 10 μM (p = 0.0119), 112.4 mg/L with 30 μM (p = 0.0324) and 105.0 mg/L with 50 μM (p = 0.0175)). This decline in resveratrol production might be due to the disturbed cell state arising from perturbed lipid metabolism [31]. Moreover, there was observable growth inhibition when 50 μM cerulenin was added, which may be caused by the fact that lipid metabolism is necessary for cell growth [31]. The results indicated that malonyl-CoA might be adequate in the engineered strain for supporting resveratrol synthesis.

Conclusions

This is the first report on engineering R. toruloides for resveratrol production, which was achieved by recruiting the heterologous AtC4H, At4CL, and VlSTS. The resveratrol production was enhanced via protein fusion, cofactor manipulation, and ARO4 and ARO7 overexpression. The best producer, MY41-No.41, produced 125.2 mg/L in 250 mL flasks with YPD medium. The present work provides a reference for the further exploration of R. toruloides as a platform for phenylpropanoid production.

Fig. 4 The effects of cerulenin on resveratrol production in R. toruloides. a Schematic illustration of the cerulenin effects on resveratrol production. b The OD600 and resveratrol production of strain MY41-No.41 under different concentrations of cerulenin. All data indicate the mean of n = 3 biologically independent samples, and error bars show standard deviation. c Time profile of resveratrol production, pH, glucose and OD600 of strain MY41-No.41 under different conditions. The red lines denote additions of 0.1 mM citrate buffer (pH = 6.0), and the blue lines denote the control group.
v3-fos-license
2024-06-19T15:06:01.840Z
2024-06-01T00:00:00.000
270575075
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/25/12/6647/pdf?version=1718623086", "pdf_hash": "b58604d32c2abe68651a6a3219897d9b61277803", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41447", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "sha1": "f7e412119cdde371531f75f401854942ac9c8c6b", "year": 2024 }
pes2o/s2orc
Subcellular Localization of Thioredoxin/Thioredoxin Reductase System—A Missing Link in Endoplasmic Reticulum Redox Balance The lumen of the endoplasmic reticulum (ER) is usually considered an oxidative environment; however, oxidized thiol-disulfides and reduced pyridine nucleotides occur there in parallel, indicating that the ER lumen lacks components which connect the two systems. Here, we investigated the luminal presence of the thioredoxin (Trx)/thioredoxin reductase (TrxR) proteins, which are capable of linking the protein thiol and pyridine nucleotide pools in different compartments. The specific activity of TrxR in the ER was undetectable, whereas higher activities were measured in the cytoplasm and mitochondria. By Western blot analysis, none of the Trx/TrxR isoforms was detected in the ER. Co-localization studies of various isoforms of Trx and TrxR with the ER marker Grp94 by immunofluorescent analysis further confirmed their absence from the lumen. The probability of luminal localization of each isoform was also predicted to be very low by several in silico analysis tools. ER-targeted transient transfection of HeLa cells with Trx1 and TrxR1 significantly decreased cell viability and induced apoptotic cell death. In conclusion, the absence of this electron transfer chain may explain the uncoupling of the redox systems in the ER lumen, allowing the parallel presence of a reduced pyridine nucleotide pool and a probably oxidized protein pool necessary for cellular viability.

Introduction

The endoplasmic reticulum (ER) is an elaborate membrane network in the cytosol of most eukaryotic cells, and its lumen is considered an individual organelle with a vast spectrum of unique functions. The lumen is enclosed by membranes possessing selective permeability, allowing the ER to maintain its own proteome, metabolome, and specific intraluminal reactions, including those that maintain the redox environment. The principal redox-active components of the ER are similar to the redox machineries of other intracellular compartments; however, their concentrations and redox states may vary significantly between organelles.

The redox systems in the ER lumen constitute a special, two-center network. The first system, responsible for biosynthesis, biotransformation, and antioxidant defense, is organized around pyridine nucleotides. NADPH is produced by hexose-6-phosphate dehydrogenase (H6PDH) within the lumen and contributes, by a yet unknown mechanism, to the antioxidant defense of the lumen. Furthermore, it is a cofactor of the 11β-hydroxysteroid dehydrogenase type 1 (11βHSD1) oxidoreductase of the ER, which can catalyze the luminal reduction of cortisone to cortisol [1]. Luminal NADPH may also be necessary for adrenal CytP450-related steroidogenic pathways [2] and for NAD(P)H cytochrome b5 oxidoreductase (Ncb5or) [3]. Since the ER membrane is impermeable to pyridine nucleotides, these enzymes must use a separate, luminally located NADP(H) pool [4]. Investigation of the redox state showed that pyridine nucleotides are predominantly reduced in the lumen and are thus able to sustain the abovementioned processes.
The second redox system is involved in the post-translational modification of secretory proteins and utilizes glutathione (GSH)/glutathione disulfide (GSSG), ascorbate/dehydroascorbic acid, and vitamin K. The central enzyme of the second system is protein disulfide isomerase (PDI), which participates in oxidative folding, the vitamin K cycle, and dehydroascorbic acid reduction. Thus, electrons derived from the reaction of disulfide bond formation can be used by this mechanism to regenerate active vitamin K and ascorbate [5]. Conventionally, the luminal space has been characterized as more oxidizing than the cytosol due to the presence of oxidative protein folding: proteins synthesized and processed here have remarkably more disulfide bridges and fewer free cysteinyl thiols than cytosolic ones. This phenomenon is reflected in the altered ratio of glutathione to glutathione disulfide (i.e., the luminal [GSH]:[GSSG] ratio is nearly 20 times lower than in the cytosol) [6-8].

Co-localized redox pairs are usually linked by oxidoreductases to form a more sophisticated, complex redox system. However, the absence of linking enzymes, and hence the uncoupled state of these two intraluminal redox systems in the ER, has been proposed previously, mainly because of the coexistence of a reduced pyridine nucleotide pool and an oxidized protein/glutathione pool [5,9,10]. If the linking enzymes are missing in the ER lumen, the redox pairs can coexist independently and have different redox potentials: despite the oxidizing power of the GSSG/GSH system, pyridine nucleotides may remain reduced. The possibility that the uncoupling of the thiol/disulfide and NAD(P)H/NAD(P)+ redox couples results from their subcompartmentation has been excluded: these two main redox systems in the ER, the thiol/disulfide and the pyridine nucleotide systems, are not isolated from each other within the compartment [9] but are distributed in all subfractions of the lumen.
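The quantitative link between a thiol/disulfide ratio and its redox potential is the standard Nernst relation; as a worked illustration (textbook chemistry, not a calculation from this study), the GSSG/2GSH half-cell potential is:

```latex
% Half-cell potential of the GSSG/2GSH couple (Nernst equation). Note the
% squared GSH term: the potential depends on absolute [GSH] as well as on
% the [GSH]:[GSSG] ratio.
\begin{equation*}
  E_{\mathrm{GSSG/2GSH}} = E^{\circ\prime}
    - \frac{RT}{2F}\,\ln\frac{[\mathrm{GSH}]^{2}}{[\mathrm{GSSG}]}
\end{equation*}
% At 25 C, RT/F ~ 25.7 mV. Holding [GSH] fixed, a 20-fold lower
% [GSH]:[GSSG] ratio (as quoted above for the lumen versus the cytosol)
% shifts E by (25.7/2) * ln(20) ~ +38 mV, i.e. toward a more oxidizing
% environment.
```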
Glutathione reductase and thioredoxin reductases (TrxRs) are key factors that catalyze electron transfer and maintain the connection between reduced pyridine nucleotides and glutathione disulfide. Glutathione reductase is well represented in the cytosol and in the mitochondria; however, there are only sporadic reports of its occurrence in the ER. A 10-fold lower glutathione reductase activity was reported in rat liver microsomal vesicles than in the cytosol [11]. Previous results of our laboratory also suggest that the expression and activity of glutathione reductase are practically absent from rat liver microsomal vesicles [9]. Like glutathione reductase, TrxRs can maintain a possible electron flux between the two redox systems via the conversion of thioredoxin (Trx) from an oxidized to a reduced form using NADPH [12]. TrxRs play a crucial role in the antioxidant process, the regulation of intracellular redox potential, and programmed cell death [12-14]. Thioredoxins, with a dithiol/disulfide active site (CGPC), are the main cellular protein disulfide reductases; therefore, they also serve as electron donors for ribonucleotide reductases, thioredoxin peroxidases (peroxiredoxins), and methionine sulfoxide reductases. The thioredoxin reductase enzyme has three distinct isoforms (TrxR1, TrxR2, and TrxR3), which are present in various mammalian tissues and subcellular compartments. While TrxR1 is suggested to be present in the cytoplasm, TrxR2 is considered the mitochondrial isoform [15,16]. The cytosolic Trx1/TrxR1 system is involved in the regulation of transcription factors, protein repair, and apoptosis. Due to its antioxidant activity, Trx1 is able to protect against oxidative stress when upregulated or overexpressed [17]. The mitochondrial Trx2/TrxR2 system is also suggested to be involved in the regulation of transcription factors and has several other roles, e.g., the regulation of mitochondria-driven cell death, mitochondrial integrity, detoxification of aldehydes, protein synthesis and folding, and metabolic processes [18]. Interestingly, overexpression of Trx2 increased the production of mitochondrial reactive oxygen species (ROS) in hypoxia [19]. Trx1 and Trx2 may respond differently to particular changes in the redox state of the cell. Increased ROS production upon the addition of epidermal growth factor caused a selective oxidation of cytoplasmic Trx1, whereas a preferential oxidation of Trx2 was observed upon tumor necrosis factor α (TNFα) treatment [18].
The importance of the presence and localization of the Trx/TrxR system has been underscored by recent findings that the isoforms are associated with a wide variety of human diseases (reviewed in [20]). The Trx/TrxR system plays a crucial role in the physiology of adipose tissue, namely in its carbohydrate metabolism, insulin production and sensitivity, blood pressure regulation, inflammation, chemotactic activity of macrophages, and atherogenesis. Interestingly, current evidence suggests that modulation of the Trx/TrxR system may be a novel target in the management of the metabolic syndrome, insulin resistance, and type 2 diabetes [21], as high levels of the Trx1 protein suppress the progression of the disease. It ameliorates glucose intolerance and enhances and preserves beta-cell functions, especially the insulin-secreting capacity [22]. In cardiovascular disease, Trx1 has an impact on atherosclerosis via influencing the NO system [23] and also plays a role in cardiac hypertrophy [24], heart failure, and myocarditis [25]. In the nervous system, acquired or genetic dysfunction of Trx or TrxR could predispose neurons to degeneration [26], as observed in Alzheimer's or Parkinson's disease. Furthermore, elevated TrxR levels and activity have also been described in numerous cancer cells, making the system a new potential therapeutic target in oncological treatments [27-30].

In order to elucidate the pathophysiological role of this protein family in human diseases and to exploit its members as possible therapeutic targets, it is crucial to understand their intracellular localization and their contribution to the maintenance of subcellular redox homeostasis. Only one study, from almost 30 years ago, suggested that thioredoxins might be absent from the ER [31]; however, an extensive analysis of the subcellular localization of these proteins is still missing. According to our hypothesis, similar to glutathione reductase, thioredoxin reductases might be absent from the lumen of the ER. In this article, we further investigated the localization and the potential absence of Trx/TrxR enzymes as one of the essential reasons for the uncoupled redox systems in the ER.

In Silico Analysis of Trx/TrxR Localization with Different Prediction Tools

To confirm the luminal absence of Trx and TrxR isoforms, in silico analysis was performed with eight subcellular localization prediction programs (Table 1). The presented data show the predicted probability of ER localization of each Trx/TrxR isoform. In summary, a low probability of ER localization was found for all Trx/TrxR isoforms with each prediction program. PSORT II analyzes features of the protein sequence (e.g., sorting signals, motifs) that influence intracellular localization and subsequently estimates the probability of the protein being present at each localization site, while displaying the most likely localization [32].

Predotar is a neural network-based approach that uses the charge and hydrophobicity of amino acid side chains to predict subcellular localization. The output represents the probability that the input sequence contains the given target signal. In the case of the ER, the hydrophobic target signal located 15-30 residues from the N-terminus is the major determinant recognized by Predotar [33].
Cello uses an approach based on a two-level support vector machine (SVM) system, and the location with the largest displayed probability is used as the prediction. CELLO II performs especially well for cytoplasmic localization, but it is not as accurate in detecting other subcellular localizations [34].

MultiLoc also uses an SVM-based approach, which integrates N-terminal targeting sequences, amino acid composition, and protein sequence motifs to predict the intracellular location of the protein [35].

YLoc-HighRes was trained on the Höglund dataset and derives about 30,000 features from the protein sequence using amino acid composition, pseudo composition, and physical properties such as hydrophobicity, charge, and volume. PROSITE motifs and GO terms from close homologs are also included in the predictions [36].

LocTree3 combines the machine learning-based LocTree2 with homology-based inference by a BLAST search of proteins with known subcellular location [37].

DeepLoc uses deep neural networks to predict protein subcellular localization, taking into account the entire protein sequence with an attention mechanism that identifies protein regions important for subcellular localization. The model was trained and tested on a protein dataset extracted from one of the latest UniProt releases, in which experimentally annotated proteins follow more stringent criteria than before. DeepLoc-1.0's networks operate on the entire protein sequence and the corresponding location labels, but the tool does not recognize sorting signals separately [38].

We further examined the likelihood of the occurrence of thioredoxin reductase 3 (TrxR3) in certain subcellular compartments using the same prediction tools (Table 2), since its location is the least studied among the Trx/TrxR isoforms. All of the prediction tools uniformly showed the highest probability of cytoplasmic localization of TrxR3, while its ER localization was unlikely according to every program used.

Activity of TrxR in Subcellular Compartments of Rat Liver

To investigate the presence of TrxRs in subcellular organelles, enzyme activity was measured in various subcellular compartments isolated from rat liver. TrxRs showed high specific activities in the mitochondrial (1.57 ± 0.19 U/mg) and cytoplasmic fractions (1.26 ± 0.11 U/mg). As expected, the specific activity of TrxRs in the ER fraction was undetectable (0.02 ± 0.01 U/mg), and the optical density (OD, reflecting the amount of the reaction product formed by TrxR catalysis) remained unchanged over time, indicating the absence of TrxR activity from the ER.

The highest specific activities of TrxRs appeared in the cytoplasmic and mitochondrial fractions (Figure 1a), indicating the apparent cytosolic and mitochondrial localization of the TrxRs. Cytosolic TrxR activity is presumably due to TrxR1, whereas the mitochondrial activity can mainly be attributed to TrxR2. The higher specific activity of the mitochondrial fraction compared to the cytosol is due to differences in the protein concentrations of the organelles. In the homogenate, lower activity was measured owing to its relatively lower TrxR concentration. Finally, the nearly undetectable specific activity of TrxRs in the endoplasmic reticulum fraction (Figure 1a) and the persistently low absorbance there (Figure 1b) support the assumption that TrxRs are lacking from the endoplasmic reticulum of liver cells.
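For orientation, in DTNB-based TrxR assays such as the kit used here, specific activity is typically derived from the rate of TNB formation at 412 nm; a minimal sketch assuming the standard TNB extinction coefficient (~13.6 mM⁻¹ cm⁻¹) and a unit definition of 1 µmol TNB per minute, with hypothetical slope and protein values:

```python
# Convert an absorbance slope from a DTNB-based TrxR assay into specific
# activity. Assumes TNB detection at 412 nm with epsilon ~13.6 mM^-1 cm^-1
# and 1 U = 1 umol TNB formed per minute; all numbers are hypothetical.
EPSILON_TNB = 13.6    # mM^-1 cm^-1
PATHLENGTH = 0.5      # cm, typical for ~100 uL in a 96-well plate
WELL_VOLUME_ML = 0.1  # reaction volume in mL

def specific_activity(slope_od_per_min: float, protein_mg: float) -> float:
    """Return TrxR specific activity in U per mg protein."""
    tnb_mM_per_min = slope_od_per_min / (EPSILON_TNB * PATHLENGTH)
    umol_per_min = tnb_mM_per_min * WELL_VOLUME_ML  # mM * mL = umol
    return umol_per_min / protein_mg

print(specific_activity(slope_od_per_min=0.85, protein_mg=0.2))  # ~0.06 U/mg
```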
Expression of Trx/TrxR Isoforms in Subcellular Compartments of Rat Liver

We aimed to further confirm the localization of Trxs and TrxRs in rat liver subcellular fractions by Western blot analysis (Figure 2). First, the purity of the fractions was verified by organelle-specific marker proteins (Cyclophilin D for mitochondria, GAPDH for cytosol, and Grp94 for the microsomal fraction). Secondly, the Trx1 (12 kDa), Trx2 (13 kDa), TrxR1 (55 kDa), TrxR2 (56 kDa), and TrxR3 (65 kDa) proteins were decorated with their specific antibodies in each fraction. In accordance with previous data [39], cytosolic localization of Trx1 and TrxR1 was observed. In addition, Trx2 and TrxR2 showed mitochondrial localization, which is consistent with our current knowledge [40,41]. Finally, TrxR3, whose localization is the most uncertain based on previous studies, appeared in the cytosolic fraction. In conclusion, none of the examined Trxs and TrxRs were localized in the endoplasmic reticulum fraction.

None of the Trx/TrxR Isoforms Co-Localizes with ER Marker Protein Grp94

To further investigate the intracellular localization of Trx/TrxR isoforms, immunofluorescence analysis was performed on hTERT-immortalized human fibroblasts and HeLa cells. The ER was labeled with the Grp94 marker protein, and possible co-localization with Trx/TrxR isoforms was studied. Our results show that none of the isoforms displays evident co-localization with Grp94 (Figure 3), indicating the luminal absence of Trx/TrxR.
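Co-localization in two-channel images of this kind can also be quantified rather than judged by eye; a minimal sketch of a Pearson correlation between channels, using synthetic arrays as stand-ins for the Grp94 and Trx/TrxR channels (the study itself reports a visual assessment):

```python
# Quantify two-channel overlap with Pearson's correlation coefficient, a
# common first-pass co-localization metric. The arrays below are synthetic
# stand-ins for the Grp94 (ER) and Trx/TrxR channels.
import numpy as np

rng = np.random.default_rng(0)
grp94 = rng.random((512, 512))  # ER marker channel
trxr = rng.random((512, 512))   # Trx/TrxR channel, uncorrelated here

r = np.corrcoef(grp94.ravel(), trxr.ravel())[0, 1]
print(f"Pearson r = {r:.3f}")  # ~0 for non-overlapping signals; ~1 if co-localized
```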
The localization of the least examined isoform, TrxR3, was addressed in a separate experimental series to extensively analyze its subcellular distribution in fibroblast cells. A clearly distinct, non-overlapping localization of TrxR3 and the ER-specific protein Grp94 was observed (Figure 4). In order to better visualize the cellular shape, cells were immunoreacted with an antibody against a cytoskeleton-specific protein, tubulin (Figure 5). Although tubulin and TrxR3 did not show co-localization, the intracellular distribution of the TrxR3 signal strongly implies its cytosolic distribution.

ER-Targeted Expression of Trx1/TrxR1 in HeLa Cells Severely Compromised Cell Viability and Induced Apoptotic Cell Death

After showing that the ER lacks the Trx/TrxR redox systems, we were interested in the consequences of co-expressing Trx1 and TrxR1 in the lumen of the ER, which would artificially connect reduced pyridine nucleotides with oxidized proteins. For this purpose, HeLa cells were co-transfected with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc plasmids containing an ER signal sequence and a retention signal to ensure ER localization of Trx1 and TrxR1. A pCMV-ER/GFP-myc plasmid, containing an ER-targeted GFP within the exact same cloning site, was used as a transfection control. The transfection was validated by examining the expressed proteins by Western blot analysis (Figure 6).
To verify the expression of the GFP, Trx1, and TrxR1 proteins in the lumen of the ER, we performed immunofluorescence analysis 24 h post-transfection. The ER-targeted expression of GFP was verified by examining the co-localization of GFP and the ER marker Grp94 using immunofluorescence analysis in HeLa cells (Figure 7A). As expected, both transfected Trx1 and TrxR1 showed co-localization with the ER marker Grp94, confirming the luminal localization of the transfected proteins (Figure 7B).

Next, we analyzed the cell viability of HeLa cells co-transfected with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc plasmids. The same plasmid backbone with an ER-targeted GFP (pCMV-ER/GFP-myc) was used as a transfection control. Cell viability was measured at 12, 24, and 36 h post-transfection and compared to the viability of un-transfected HeLa cells (Figure 8). At 12 h post-transfection, the decrease in cell viability of pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc co-transfected cells was statistically significant, while transfection with pCMV-ER/GFP-myc did not affect viability. After 24 h, the viability of ER-targeted Trx1/TrxR1 co-transfected cells dropped below 30%, while there was still no significant change in the viability of pCMV-ER/GFP-myc transfected cells. At 36 h post-transfection, we could hardly detect viable HeLa cells among the ER-targeted Trx1/TrxR1 clones, and the viability of control transfected cells had started to decrease too. Taken together, these data indicate that the expression of Trx1 and TrxR1 in the ER causes a significant reduction in the viability of HeLa cells.
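Resazurin-based readouts such as CellTiter-Blue are normalized to an untreated control; a minimal sketch with hypothetical 560/590 nm fluorescence readings, not the study's raw data:

```python
# Normalize resazurin (CellTiter-Blue) fluorescence to percent viability
# relative to untreated cells. The triplicate readings are hypothetical.
import numpy as np

blank = 1200.0                                    # medium-only background
untreated = np.array([52100.0, 50800.0, 51500.0])
trx1_trxr1_24h = np.array([15900.0, 14700.0, 15300.0])

def percent_viability(sample, control, blank):
    return 100.0 * (sample - blank) / (control.mean() - blank)

v = percent_viability(trx1_trxr1_24h, untreated, blank)
print(f"viability = {v.mean():.1f} +/- {v.std(ddof=1):.1f} %")  # ~28 %, cf. Figure 8
```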
Altered cell viability and morphology were also observed in brightfield images of HeLa cells 12, 24, and 36 h after transfection with ER-targeted Trx1/TrxR1 (Figure 9). Cell viability was visibly decreased, and cell morphology was also altered in the samples 24 h post-transfection.

In order to confirm the apoptosis of HeLa cells 24 h after co-transfection with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc, we examined the cleavage of the 116 kDa Poly(ADP-ribose) Polymerase (PARP) by Western blot analysis (Figure 10). In the co-transfected cells, PARP cleavage to an 85 kDa fragment indicated the presence of apoptosis [42]. These results suggest that the expression of Trx1 and TrxR1 in the ER induces rapid apoptotic cell death in HeLa cells.
Discussion

The negligible representation of glutathione reductase in the ER was described previously [9]. However, only scarce data exist on the localization of the Trx/TrxR system. The localization of the TrxR protein was investigated almost 30 years ago by immunoblot analysis comparing the protein expression of the ER and cytosolic fractions, and it was found that TrxR protein was exclusively present in the cytosolic fraction [31]. That study had indisputable relevance and served as an orientation for the majority of further investigations; here, we sought to carry out a comprehensive investigation of the localization of the Trx/TrxR system including all major organelles, with special emphasis on the ER. In the current work, we investigated the expression and enzyme activity of all known isoforms of thioredoxins and thioredoxin reductases using various in silico and in vitro techniques. To demonstrate the presence or absence of these isoforms in a single cellular compartment, we used methods that are widely accepted and considered the most appropriate for this aim, such as microscopical analysis, database searches, and protein expression studies [43].

Available databases, such as the Human Protein Atlas (HPA, http://www.proteinatlas.org/, [44], accessed on 23 April 2024), can be a useful tool for assessing the intracellular localization of proteins, but they usually have limitations and are not a substitute for more detailed localization studies. The HPA assesses the subcellular distribution of proteins solely by immunocytochemistry/immunofluorescence analysis, and its human data for Trx/TrxR isoforms are derived exclusively from cancer cell lines. Furthermore, not all Trx/TrxR isoform localizations are confirmed by the HPA, and some fall into a lower reliability category. Here, we have also tested protein expression by Western blot analysis and enzyme activity on subcellular fractions, and not only in cancer cell lines but also in healthy human fibroblasts. Importantly, our results were not always consistent with the HPA's data. For Trx1 and Trx2, we observed localizations similar to the HPA; however, the HPA localizes TrxR1 to the nucleoplasm, whereas our findings indicated a predominantly cytosolic localization, in accordance with the majority of the literature [16,45,46]. In the case of TrxR2, we observed a mitochondrial localization in accordance with the HPA; however, the reliability of that localization falls into a lower category. The HPA localizes TrxR3 mainly to the nucleoplasm, in addition to the cytosol, while our experiments mainly show a cytosolic location of the protein.

Besides the HPA, several databases provide information on the subcellular localization of Trx and TrxR isoforms. For instance, the Map of the Cell (http://mapofthecell.biochem.mpg.de/, [47], accessed on 27 May 2024) provides quantitative information on the proteome of HeLa cells, including intracellular distribution; however, the organellar localization of TrxR3 is unassigned there. ProLocate (https://prolocate.cabm.rutgers.edu/index.cgi, [48], accessed on 27 May 2024) provides information on the location of over 6000 components of the rat liver proteome, but contains no data on TrxR3.
The Compartments database (https://compartments.jensenlab.org/Search, [49], accessed on 27 May 2024) integrates evidence for protein subcellular localization from several different sources, including manually curated literature, high-throughput screens, automatic text mining, and sequence-based prediction methods, and assigns a confidence score to each. Interestingly, the ER localization of each isoform was identified in the text mining section of Compartments, although with only low or moderate confidence.

Our in silico analysis with eight different prediction tools unequivocally indicated that the presence of human Trx1, Trx2, TrxR1, TrxR2, and TrxR3 in the ER lumen is highly unlikely. Next, the specific enzyme activity of TrxR was measured in rat liver subcellular fractions; as expected, the highest thioredoxin reductase activity was detected in the cytoplasmic and mitochondrial fractions, the former mainly due to TrxR1 and the latter due to TrxR2 expression [15,16,40,50], while the activity was virtually negligible in the microsomal fraction. Therefore, it can be concluded with confidence that TrxR is absent from the ER lumen, and this fact supports the presumption of an uncoupled state of the redox systems there. In addition, our Western blot experiments also showed the absence of the three isoforms of TrxR and two isoforms of the Trx protein from the ER-derived microsomal fraction. In the case of Trx1 and TrxR2, a faint band could be observed in the ER-derived fraction, which can be attributed to the presence of ER contact sites with other organelles and the inability of the fractionation techniques to separate these. Our results confirmed that the highest TrxR1 expression is found in the cytoplasm, while TrxR2 expression is connected to the mitochondrial fraction. These results are in accordance with the literature, as mammalian TrxR1 expression analyses have mainly shown cytoplasmic localization, while TrxR2 has been shown to be expressed in the mitochondria and possesses a mitochondrial localization signal [15,16,40,50]. The exact localization of TrxR3 is currently uncertain in the databases, but based on our Western blot results it appears to be localized in the cytoplasm. Ultimately, the intracellular location of the TrxRs was also determined by immunofluorescent analysis of HeLa and fibroblast cell lines. None of the microscopic images showed any Trx or TrxR co-localization with the specific ER markers, and the cytosolic location of TrxR3 was confirmed in these experiments. So far, the expression of TrxR3 has been shown in different tissues, such as the testis, cardiovascular tissues, and the colon, and we cannot exclude that the protein may show a different subcellular distribution in different tissue types.
In order to study how disturbing the physiological uncoupling of the reduced pyridine nucleotide system and the oxidized protein pool can alter cellular homeostasis, we generated ER-targeted versions of Trx1 and TrxR1. Unfortunately, overexpressing selenoproteins in mammalian cells poses a significant challenge due to the complex and inefficient selenoprotein synthesis machinery. Attempts to overexpress TrxR proteins typically result in a mixture of mostly UGA-truncated proteins and a small fraction of full-length selenocysteine-containing enzymes. The presence of selenocysteine-deficient TrxR variants potentially leads to cell death due to their prooxidant properties [51]. This may be the reason why the generation of cell lines stably overexpressing TrxR1 in Jurkat, HeLa, and U1285 cells has so far been unsuccessful in the literature [52]. We also failed to generate stably expressing cell lines, because transient transfection of the ER-targeted members of the Trx1/TrxR1 system into HeLa cells resulted in rapid apoptotic cell death. This suggests that coupling the two separated redox systems in the ER lumen can be detrimental to cellular viability. Despite its absence from the lumen, a contribution of the Trx/TrxR system to the luminal reduction of misfolded proteins has recently been suggested: a cytosolically located Trx system was shown to transfer reducing equivalents from NADPH, via a yet unknown membrane protein, towards the lumen to reduce incorrectly folded protein disulfides [53]. These data further support the view that intraluminal pathways connecting the NADPH and protein thiol/disulfide systems are not essential for cell survival. Furthermore, an intraluminal connection of the two systems might have deleterious consequences, as proper disulfide formation and the oxidative capacity of the lumen would be compromised.

To date, the presence of Trx/TrxR systems in the ER remains a little-investigated area. Our study provides the first comprehensive analysis of the localization and activity of Trx/TrxR proteins, with special emphasis on their presence in the ER lumen. The relevance of the results lies in the fact that the in silico predictions were confirmed by various in vitro experiments, in which we demonstrated the absence of the three human isoforms of TrxR (TrxR1, 2, 3) and two isoforms of Trx (Trx1, 2), showing that their activity and expression are absent from the rat liver ER lumen.

Animals

Male Wistar rats (180-230 g, Charles River Europe Laboratories Inc., Toxi-Coop Ltd., Budapest, Hungary) were kept with ad libitum access to food and water until used.

Preparation of Subcellular Fractions from Rat Liver Tissue

Subcellular fractions were prepared from overnight-fasted male Wistar rats by differential centrifugation, as previously reported, with slight modifications [54,55]. Briefly, freshly removed liver was cut into pieces and homogenized in sucrose-HEPES buffer (0.3 M sucrose and 0.02 M HEPES, pH 7.2) with a Potter-Elvehjem homogenizer. Homogenates were diluted to 20% with the same sucrose-HEPES buffer and centrifuged at 50× g for 1 h at 4 °C to remove cell debris. The supernatant was further centrifuged at 1000× g for 10 min at 4 °C.
The mitochondrial fraction was obtained by centrifugation of the post-nuclear supernatant (11,000× g, 20 min at 4 °C). The resulting supernatant was ultracentrifuged (100,000× g, 60 min, 4 °C) to separate the cytosolic and microsomal fractions, the latter representing the endoplasmic reticulum. Finally, the mitochondrial and microsomal fractions were resuspended in MOPS buffer (100 mM KCl, 20 mM NaCl, 1 mM MgCl2, 20 mM MOPS, pH 7.0), and all fractions were stored in liquid nitrogen until use. Protein concentrations of the fractions were determined with the Pierce BCA (bicinchoninic acid) Protein Assay Kit (Thermo Fisher Scientific Inc., Waltham, MA, USA; #23225) using bovine serum albumin as a standard, according to the manufacturer's instructions.

Measurement of Thioredoxin Reductase Activity

To measure the activity of thioredoxin reductase, the Thioredoxin Reductase Assay Kit (Abcam, Cambridge, UK; ab83463) was used according to the manufacturer's instructions. The measurements were performed in a 96-well plate with 200 µg of protein per well derived from the rat liver fractions.

Sodium Dodecyl Sulphate Polyacrylamide Gel Electrophoresis (SDS-PAGE) and Western Blot Analysis

Equal amounts of total protein of the subcellular fractions (25 µg each) were run on 10 or 12% SDS polyacrylamide gels, as previously reported [56]. Proteins were electroblotted onto a PVDF or nitrocellulose membrane, blocked in 0.05% TBS-Tween containing 5% non-fat milk, and incubated with the primary antibody overnight at 4 °C in 0.05% TBS-Tween containing 1% non-fat milk. After washing (3 times in 0.05% TBS-Tween for 5 min), the membrane was incubated with the secondary antibody for 1 h in 0.05% TBS-Tween containing 1% non-fat milk. SuperSignal™ West Pico PLUS Chemiluminescent Substrate reagent (Thermo Fisher) was used for visualization.

Cell Culture and Maintenance

hTERT-immortalized human fibroblast cells were cultured in vitro in Dulbecco's Modified Eagle Medium (Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Life Technologies), 1% minimum essential medium non-essential amino acids (Life Technologies), and 1% penicillin-streptomycin (Life Technologies). HeLa cells were maintained in Eagle's Minimum Essential Medium (Life Technologies) supplemented with 10% fetal bovine serum (Life Technologies) and 1% penicillin-streptomycin (Life Technologies). The cell cultures were incubated in humidified incubators at 37 °C in 95% air and 5% CO2.

Immunofluorescent Analysis of Human Fibroblast and HeLa Cells

For microscopic determination of the localization of Trx/TrxR isoforms, a 12 mm coverslip was placed in each well of a 24-well plate, and 30,000 fibroblast or 50,000 HeLa cells were plated per well. Cells were counted manually with a Bürker chamber. After overnight incubation at 37 °C in 95% air and 5% CO2, cells were washed once with PBS (pH 7.4), fixed on the coverslip with 100% ice-cold methanol for 20 min, and washed with PBS three times for 5 min. Cells were blocked with 0.05% PBS-Tween containing 1% bovine serum albumin (Sigma-Aldrich, St.
Louis, MO, USA; A9647) and 5% goat serum (Thermo Fisher, 5062Z) for 30 min and incubated overnight at 4 °C with the primary antibody. The following day, after washing with 0.05% PBS-Tween, cells were incubated with the secondary antibody in the dark (Invitrogen, anti-rabbit Alexa Fluor 568, #A-11011; anti-rat Alexa Fluor 488, #A-11006; anti-mouse Alexa Fluor 647, #A32728; or anti-rat Alexa Fluor 568, #A-11077 in the case of the transient transfection of HeLa cells; all 1:500) for 1 h at room temperature. After a re-blocking step, an additional primary antibody was applied (Grp94 rat monoclonal, diluted 1:200 in 0.05% PBS-Tween, or tubulin mouse monoclonal, diluted 1:1000 in 0.05% PBS-Tween). Cells were then washed with 0.05% PBS-Tween and incubated with the corresponding fluorophore-conjugated secondary antibody for 1 h at room temperature. After washing three times with PBS, the coverslips were mounted on slides with ProLong Glass Antifade Mountant with DAPI (Invitrogen, #P36935). After drying in the dark for 24 h, the fluorescent signal was recorded with a Nikon Eclipse Ti2 inverted microscope (Nikon Instruments, Melville, NY, USA) equipped with 10×, 20×, 40×, and 60× (oil immersion, Plan Apo lambda, N.A. 1.4) objectives and a cooled sCMOS camera (Zyla 4.2, Andor Technology, Belfast, UK). Images were analyzed with ImageJ Version 1.54.

Transient Co-Transfection of HeLa Cells

HeLa cells were grown and treated in 96-well plates. Cells were co-transfected with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc plasmids using Lipofectamine 3000 (Invitrogen, #L3000008), according to the manufacturer's instructions. For brightfield microscopy, HeLa cells were grown in 24-well plates and transfected with the abovementioned plasmids as described previously.

Cell Viability Assay

The relative number of viable cells was determined by counting in Bürker chambers. Cell viability was detected using the CellTiter-Blue assay (Promega, Madison, WI, USA; #G8080). Cells were grown and treated in 96-well plates and incubated with resazurin for 2 h at 37 °C. Fluorescence was measured at 560/590 nm. Three parallel measurements were carried out.

Conclusions

The different redox potentials of the two redox systems in the lumen (i.e., pyridine nucleotide and protein thiol/disulfide) are ensured by their uncoupling, since both thioredoxin reductase and glutathione reductase [9] are hardly detectable there. Coupling the two systems by ER-targeted expression of Trx1 and TrxR1 resulted in apoptotic cell death shortly after transfection, indicating that separated luminal redox systems are necessary for cellular viability. The exact mechanisms by which coupling of the two systems has deleterious consequences remain the subject of further studies.
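As a side note on the protein determination used throughout the Methods (Pierce BCA with BSA standards), quantification reduces to fitting and inverting a standard curve; a minimal sketch with hypothetical absorbances:

```python
# Fit a BSA standard curve for a BCA assay and interpolate an unknown.
# Absorbance values are hypothetical.
import numpy as np

bsa_mg_ml = np.array([0.0, 0.125, 0.25, 0.5, 1.0, 2.0])  # standards
a562 = np.array([0.05, 0.16, 0.27, 0.50, 0.94, 1.82])    # measured A562

slope, intercept = np.polyfit(bsa_mg_ml, a562, 1)         # linear fit

def protein_conc(a562_sample: float) -> float:
    """Return protein concentration (mg/mL) from a sample absorbance."""
    return (a562_sample - intercept) / slope

print(f"{protein_conc(0.62):.2f} mg/mL")  # ~0.64 mg/mL for this curve
```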
Figure 1. Thioredoxin reductase activity is absent from the endoplasmic reticulum. Thioredoxin reductase activity was measured colorimetrically with the Thioredoxin Reductase Assay Kit in various subcellular fractions isolated from rat liver (homogenate, mitochondria, cytosol, and the endoplasmic reticulum fraction). (a) Specific activity of TrxR in each subcellular fraction; p values: ***: 0.0002, ****: <0.0001, t-test. (b) Time dependence of optical density (OD, corresponding to the amount of the reaction product catalyzed by TrxR).

Figure 2. Thioredoxin and thioredoxin reductase isoforms are not localized in the endoplasmic reticulum fraction of rat liver. The expression of different Trx and TrxR isoforms was investigated by Western blot after SDS-PAGE separation of proteins, on purified subcellular fractions (homogenate, mitochondria, cytosol, and the ER) of rat liver. The purity of the fractions was tested with organelle-specific markers, i.e., Cyclophilin D for mitochondria, GAPDH for cytosol, and Grp94 for the ER. The expression of each Trx and TrxR isoform was analyzed on the subcellular fractions. Equal amounts of protein were loaded in each lane (25 µg). Homog, homogenate; Mito, mitochondria; Cyt, cytosol; ER, endoplasmic reticulum. The arrow on the left indicates the bands corresponding to TrxR2.

Figure 3. Trx/TrxR isoforms do not show co-localization with the endoplasmic reticulum marker Grp94. hTERT-immortalized human fibroblast and HeLa cells were immunoreacted with antibodies to different Trx/TrxR isoforms and the endoplasmic reticulum (ER) marker Grp94, as described in the Materials and Methods section. The images were acquired by fluorescent microscopy, as reported in the Materials and Methods section. Images were acquired at 40× magnification. Scalebar: 50 µm.

Figure 4.
Thioredoxin reductase 3 is localized in the cytosol of hTERT-immortalized human fibroblast cells. Fibroblast cells were immunoreacted with antibodies against TrxR3 and the endoplasmic reticulum marker Grp94, as described in the Materials and Methods section. Images were acquired by fluorescence microscopy, as reported in the Materials and Methods. Images were obtained at 60× magnification. Blue: nucleus, green: Grp94, red: TrxR3. Scalebar: 50 µm.

Figure 6. Expression of myc-tagged GFP, Trx1, and TrxR1 in transfected HeLa cells. Cell lysates from un-transfected HeLa cells, pCMV-ER/GFP-myc plasmid transfected cells, and pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc co-transfected HeLa cells were analyzed by Western blot after SDS-PAGE separation of proteins. The myc-tagged proteins were probed with an anti-myc antibody to confirm the expression of pCMV-ER/GFP-myc in the transfection control, and of pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc in the co-transfected cells. The myc-tag adds approximately 2 kDa to the original molecular weight of the protein; therefore, GFP-myc (29 kDa), TrxR1-myc (57 kDa), and Trx1-myc (14 kDa) were detected according to their molecular weights. Equal amounts of protein were loaded in each lane (25 µg). β-actin was used as a loading control.
Figure 7. Immunofluorescence analysis of HeLa cells transfected with ER-targeted GFP, or Trx1 and TrxR1. (A) Immunofluorescence analysis of HeLa cells transfected with the pCMV-ER/GFP-myc plasmid. GFP shows co-localization with the endoplasmic reticulum marker Grp94 (red). (B) Trx1 and TrxR1 (red) show co-localization with the ER marker Grp94 (green) in pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc co-transfected HeLa cells. Cells were immunoreacted with antibodies against Trx1 or TrxR1 and the ER marker Grp94, as described in the Materials and Methods section. Images were acquired by fluorescence microscopy, as reported in the Materials and Methods section, at 40× magnification. Scalebar: 50 µm.

Figure 9. Brightfield analysis of HeLa cells transfected with the pCMV-ER/GFP-myc plasmid (transfection control) or co-transfected with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc. The cell number was decreased, and the cell shape was altered in the co-transfected cells 24 and 36 h post-transfection. Images were acquired at 20× magnification. Scalebar: 100 µm.
Figure 10. Co-transfection of HeLa cells with pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc resulted in apoptotic cell death. PARP expression and cleavage were investigated by Western blot analysis after SDS-PAGE separation of proteins. Lysates of untreated HeLa cells, pCMV-ER/GFP-myc plasmid transfected cells, and pCMV-ER/Trx1-myc and pCMV-ER/TrxR1-myc co-transfected HeLa cells were analyzed. Equal amounts of protein were loaded in each lane (25 µg). β-actin was used as a loading control.

Table 1. In silico prediction of endoplasmic reticulum localization of Trx/TrxR isoforms. The sequences of Trx and TrxR isoforms were obtained from the UniProt database (http://www.uniprot.org), accessed on 15 November 2021 (bold: canonical form; †: computationally mapped). The amino acid sequences of Trx and TrxR isoforms are available in Table S1.

Table 2. In silico prediction of the intracellular localization of TrxR3. The sequence of TrxR3 was obtained from the UniProt database (http://www.uniprot.org), accessed on 15 November 2021.
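Entries like those in Tables 1 and 2 lend themselves to a simple consensus summary across tools; a minimal sketch with placeholder probabilities (not the values actually reported in the tables):

```python
# Aggregate per-tool P(ER) predictions into a consensus summary. The
# probabilities are illustrative placeholders, not the Table 1 values.
predicted_p_er = {
    "PSORT II": 0.04, "Predotar": 0.01, "CELLO": 0.05, "MultiLoc": 0.02,
    "YLoc-HighRes": 0.03, "LocTree3": 0.01, "DeepLoc": 0.02,
}

mean_p = sum(predicted_p_er.values()) / len(predicted_p_er)
max_p = max(predicted_p_er.values())
n_support = sum(p >= 0.5 for p in predicted_p_er.values())

print(f"mean P(ER) = {mean_p:.2f}, max P(ER) = {max_p:.2f}")
print(f"tools calling ER localization: {n_support}/{len(predicted_p_er)}")
# A near-zero consensus across independent tools is what Tables 1 and 2
# report for every Trx/TrxR isoform.
```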
v3-fos-license
2017-09-07T07:27:52.919Z
2002-01-01T00:00:00.000
11349980
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://silvafennica.fi/pdf/530", "pdf_hash": "f831393dcf4b24b1ca00ff53b2fe6ed56c992099", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41451", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "f831393dcf4b24b1ca00ff53b2fe6ed56c992099", "year": 2002 }
pes2o/s2orc
Optimal Stomatal Control in Relation to Leaf Area and Nitrogen Content

We introduce the simultaneous optimisation of water-use efficiency and nitrogen-use efficiency of canopy photosynthesis. As a vehicle for this idea we consider the optimal leaf area for a plant in which there is no self-shading among leaves. An emergent result is that canopy assimilation over a day is a scaled sum of daily water use and of photosynthetic nitrogen display. The respective scaling factors are the marginal carbon benefits of extra transpiration and of extra such nitrogen. The simple approach successfully predicts that as available water increases, or evaporative demand decreases, the leaf area should increase, with a concomitant reduction in nitrogen per unit leaf area. The changes in stomatal conductance are therefore less than would occur if leaf area were not to change. As irradiance increases, the modelled leaf area decreases, and nitrogen per unit leaf area increases. As total available nitrogen increases, leaf area also increases. In all the examples examined, the sharing of the response between leaf area and properties per unit leaf area means that predicted changes in either are less than if predicted in isolation. We suggest that were plant density to be included, it too would share the response, further diminishing the changes required per unit leaf area.

Introduction

In this paper, we examine some relationships between total plant leaf area and the properties of leaves, and between photosynthetic capacity per unit leaf area (taken as being related to the amount of nitrogen per unit leaf area) and the transpiration rate per unit area. We integrate earlier work on the optimisation of water use in relation to carbon gain (Cowan 1977, Cowan and Farquhar 1977) with that by Field (1983) on the optimisation of nitrogen allocation within a canopy in relation to canopy carbon gain. We consider the problem of identifying the optimal leaf area for a plant that can add leaves indefinitely without causing self-shading. After obtaining some general optimisation results, we apply them to a simple model of photosynthesis and transpiration and seek general ecophysiological implications. We show that the simple model leads to relationships between leaf properties (e.g., nitrogen concentration, intercellular [CO2]) and environmental factors (e.g., rainfall, irradiance, nitrogen availability) that are broadly predictive of those observed in the field. A more rigorously structured treatment of the equations underlying the linked optimisation of canopy nitrogen allocation, water use and carbon dioxide gain is given in the accompanying paper (Buckley et al. 2002).
Consider a set of leaves with a fixed total amount of nitrogen available to be shared among them ($N_t$) (strictly we consider only the nitrogen available for photosynthetic machinery). Consider also that there is a fixed supply of water to be transpired by the set of leaves at a total rate ($E_t$), and total leaf area per unit ground area ($a$). The nitrogen per unit area, $N$, is given from

$$N = N_t/a \qquad (1)$$

and the transpiration rate per unit area, $E$, is given from

$$E = E_t/a. \qquad (2)$$

The idea is to maximise the total assimilation rate, $a \cdot A(N,E)$, where $A$ is the assimilation rate per unit leaf area, and $A(N,E)$ denotes that assimilation rate is a function of both the nitrogen content and the transpiration rate (or, in more familiar terms, the stomatal conductance).

Variation over Time

More realistically, $A$ and $E$ are integrals over time, $t$. We limit our discussion to a single day ($t = 0$ to $T$) to preclude nitrogen movement among leaves. We seek to find the leaf area for which the maximum total amount of carbon is assimilated by the set of leaves over the period of interest, for example a day. The solution needed is the maximum value of $aA$ integrated over the day, and will occur when the derivative of this integral with respect to $a$ is zero, provided that the extremum is a maximum. Therefore we seek the solution of

$$\frac{d}{da}\left(a \int_0^T A\,dt\right) = 0 \qquad (3)$$

and, denoting the daily integrals with a superscripted plus sign ($^+$), $A^+ = \int_0^T A\,dt$ and $E^+ = \int_0^T E\,dt$,

$$\frac{d(aA^+)}{da} = 0. \qquad (4)$$

We rewrite Eq 4 as

$$A^+ + a\left[\left(\frac{\partial A^+}{\partial N}\right)_{E^+}\frac{dN}{da} + \left(\frac{\partial A^+}{\partial E^+}\right)_N\frac{dE^+}{da}\right] = 0. \qquad (7)$$

However, the direct dependence on area, $a$, can be eliminated by noting that

$$\frac{dN}{da} = -\frac{N}{a} \qquad (8)$$

and, similarly, that

$$\frac{dE^+}{da} = -\frac{E^+}{a}, \qquad (9)$$

so that Eq 7 becomes

$$A^+ = N\left(\frac{\partial A^+}{\partial N}\right)_{E^+} + E^+\left(\frac{\partial A^+}{\partial E^+}\right)_N. \qquad (10)$$

Equation 10 says that there is an extremum in $aA^+$ when $A^+$ is homogeneous in $N$ and $E^+$. Note, from Eq 7 above, that the first partial derivative in Eq 10 is evaluated at constant $E^+$, and that the second is evaluated at constant $N$. As an aside we note that this may be rewritten as

$$\frac{N}{A^+}\left(\frac{\partial A^+}{\partial N}\right)_{E^+} + \frac{E^+}{A^+}\left(\frac{\partial A^+}{\partial E^+}\right)_N = 1, \qquad (11)$$

which is a result reminiscent of metabolic control analysis of photosynthetic CO2 fixation (Giersch et al. 1990). It shows that at the optimum, the relative resource limitations sum to unity.

From earlier theoretical work on optimisation, the partial derivatives in Eq 10 should be constant for a given set of values for $E_t$ and $N_t$. That is, in an optimal canopy, the effect of moving a tiny element of daily transpiration, $E^+$, from one place to another, without changing $N$, is zero, and the marginal gain of $A^+$, $(\partial A^+/\partial E^+)_N$, is everywhere the same (Cowan and Farquhar 1977), at $1/\lambda$. Within the optimal canopy the effect of movement of an infinitesimal element of nitrogen from one place to another is also zero, and the sensitivity of $A^+$ to $N$, at constant $E^+$, $(\partial A^+/\partial N)_{E^+}$, is everywhere the same, at $1/\nu$ (in Buckley et al.
2002, it is shown that if $(\partial A^+/\partial E^+)_N$ is not in fact invariant, the criterion for optimal nitrogen use is replaced by invariance of $1/\eta$). The marginal N cost of $A^+$ was discussed by Field (1983) and by Farquhar (1989).

When finite nitrogen and water supplies are optimally used, the following relations hold throughout the plant:

$$\left(\frac{\partial A^+}{\partial E^+}\right)_N = \frac{1}{\lambda} \qquad (12a)$$

and

$$\left(\frac{\partial A^+}{\partial N}\right)_{E^+} = \frac{1}{\nu}. \qquad (12b)$$

Note that the partial derivatives in Eq 12 represent potentially variable physiological properties, and the imposed constants $\nu$ and $\lambda$ represent the optimal values of those properties. Applying Eq 12 to Eq 10 we obtain

$$A^+ = \frac{N}{\nu} + \frac{E^+}{\lambda}. \qquad (12c)$$

This identifies a local property of optimized gas exchange. It is easily summed over the total leaf area, $a$, to relate the resource constraints ($E_t$ and $N_t$) to the total carbon gain of the plant ($A_t^+$):

$$A_t^+ = \frac{N_t}{\nu} + \frac{E_t^+}{\lambda}. \qquad (12d)$$

Equation 12d appears to be linear in $N_t$ and $E_t^+$. However, this expression describes a physiological relationship that holds only at the optimum. As the resource supplies ($N_t$ and $E_t^+$) vary, so will the values of $\nu$ and $\lambda$. This is clarified by noting that $\nu = \nu(N_t, E_t^+)$ and $\lambda = \lambda(N_t, E_t^+)$. We further note that the result could be simply extended to include some other limiting resource, $X$, such as phosphorus, so that

$$A_t^+ = \frac{N_t}{\nu} + \frac{E_t^+}{\lambda} + \frac{X_t}{\chi}, \qquad (13)$$

where $1/\chi = (\partial A^+/\partial X)_{N,E^+}$ is the marginal cost of assimilation (at constant $N$ and $E^+$) in terms of $X$. Eq 13 may appear inconsistent with Eq 12c, but, as before, the invariant marginal costs ($\nu$, $\lambda$, and $\chi$) each depend on all three resource supplies ($\nu = \nu(N_t, E_t^+, X_t)$ and so forth), so the values of $\nu$ and $\lambda$ that apply to Eqs 12c and 13 are not the same. Presumably, when phosphorus is limiting, the marginal costs $\nu$ and $\lambda$ become greater than when phosphorus is plentiful.

Transpiration and Diffusion of Carbon Dioxide

To find how Eqs 12 and 13 translate into specific dependence on leaf area, we need expressions for the diffusional exchange of water vapour and carbon dioxide, considered in this section, and for the biochemistry of photosynthesis, considered in subsequent sections.

The transpiration rate per unit leaf area, $E$, has a rectangular hyperbolic dependence on stomatal conductance to the diffusion of water vapour, $g$, with a maximum rate, $E_p$, the potential transpiration rate per unit leaf area (Eq 14), where $r_b$ is the boundary layer resistance to water vapour and $\epsilon$ is the rate of increase of latent heat of water-vapour-saturated air with increase in sensible heat (Cowan 1977). (Note the accompanying paper, Buckley et al. 2002, identifies $g$ with total conductance to CO2.)

We introduce $\alpha$, which can be regarded as the ratio of the supply of water to the plant roots to the evaporative power of the atmosphere, as

$$\alpha = \frac{E_t}{E_p}. \qquad (16)$$

Note that $\alpha$ has the units of leaf area per unit ground area. It represents the leaf area that would be required to match a total transpiration rate of $E_t$ were the stomatal conductance infinite. Rearranging Eq 14 using Eq 16 gives the stomatal conductance as a function of leaf area (Eq 17).

We describe the rate of diffusion of CO2 from the atmosphere to the intercellular spaces, with $C_a$ and $C_i$ representing the CO2 mole fractions outside and inside the leaf, respectively, by

$$A = \frac{C_a - C_i}{1.6/g + 1.37\,r_b}. \qquad (18)$$

Application to a Simple Model of Photosynthesis

We now combine the above equations of optimisation and diffusion with one of the biochemistry of photosynthesis. For our initial exploration of what the optimisation of canopy carbon accumulation means explicitly in terms of dependence on $a$, we start with the simplest case. We consider steady conditions with no temporal (or spatial) variation in environment (and no self-shading).
We also start with the most simplified description of the biochemistry of the rate of assimilation by a leaf, equivalent to a linear dependence of $A$ on the intercellular CO2 concentration,

$$A = k(C_i - \Gamma), \qquad (19)$$

where $\Gamma$ is the compensation point, and $k$ is the carboxylation efficiency, here taken as proportional to the nitrogen content per unit area ($N$). Solving Eqs 18 and 19, we obtain

$$A = \frac{k(C_a - \Gamma)}{1 + k(1.6/g + 1.37\,r_b)}. \qquad (20)$$

Combining the condition for optimal leaf area (Eq 3) and the expansion of $A$ into its responses to $E$ and $N$ (Eq 10), we obtain the optimality condition for this case (Eq 21). It is simple to substitute Eqs 17, 20 and 21 (for example, into Eq A7 of the Appendix) and find that this result is always negative, i.e. that there is no optimum. More succinctly, one can use Eqs 17, 20 and 21 to find the total assimilation rate per unit ground area (Eq 22). Since the bracketed term in Eq 22 is always positive, $aA$ decreases as $a$ increases. This means, for this simple model, that the maximum total assimilation rate, $aA$, occurs with minimum area, and hence with the maximum conductance and the greatest nitrogen/area. To transpire all the available supply of water with a finite $g$, $a$ must be greater than $\alpha$ (see Eq 17), and so the maximum $aA$ occurs when $a$ is infinitesimally greater than $\alpha$.

In this simplest of cases, the optimisation depends only on subtle differences in the effects of the boundary layer resistance on $A$ and $E$. In the next section we see strong effects when, at non-saturating light intensity, the capacity for photosynthesis is no longer linearly proportional to nitrogen concentration.

Extension Using a Biochemical Model of Photosynthesis

We first extend the treatment to use Rubisco kinetics (Farquhar et al. 1980), with

$$A_V = \frac{V(C_i - \Gamma)}{C_i + K'}, \qquad (23)$$

where $V$ is the maximum velocity, and $K'$ is the effective Michaelis-Menten constant for carboxylation, taking into account oxygen inhibition. When $V$ is made proportional to $N$, we obtain the same result as in the previous section: that is, $aA_V$ increases as $a$ decreases to its lower bound, $\alpha$ (see Eq A8 in the Appendix and Fig. 1).

Of course, at large values of $A$, the system will become electron transport rate ($J$) limited, because of insufficient absorbed irradiance, $I$. We replace Eq 23 by

$$A_J = \frac{(J/4)(C_i - \Gamma)}{C_i + 2\Gamma} \qquad (24)$$

and take the maximum rate of electron transport, $J_m$, as being proportional to $N$, in the expression of Farquhar and Wong (1984)

$$\theta J^2 - (I_2 + J_m)J + I_2 J_m = 0, \qquad (25)$$

where $I_2$ is the irradiance effectively absorbed by photosystem II and $f$ represents losses. Now we obtain the opposite result, and $aA_J$ generally decreases as $a$ decreases (see the final paragraph in the Appendix and Fig. 1). Fundamentally this occurs because $A$ ($= A_J$) is no longer proportional to $N$. Combining Eqs 23 and 24 as

$$A = \min(A_V, A_J), \qquad (27)$$

we find that $aA$ has a maximum on the $aA_J$ locus, but typically near the "breakpoint" where $A_J = A_V$. Fig. 1 plots $aA_V$ and $aA_J$ vs. $a$ for $\alpha = 0.1$, with $I = 1000$ µmol m⁻² s⁻¹, and shows such a result. von Caemmerer and Farquhar (1981) noted that in terms of optimal water use efficiency, stomatal conductance should often adjust so that photosynthesis is working at this transition. At first sight, when the calculations are tested, it appears that the homogeneity condition (Eqs 10 to 12) does not apply, but this is because Eq 27 is not continuously differentiable at the breakpoint. If Eq 27 is smoothed by hyperbolic minimization, say

$$0.99\,A^2 - (A_V + A_J)A + A_V A_J = 0 \qquad (28)$$

(shown as the smooth curve of actual assimilation rate in Fig. 1), then the homogeneity condition holds at the optimum.
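As a rough numerical companion to this section, the sketch below reproduces the qualitative behaviour of Fig. 1: $aA_V$ rises as $a$ falls toward $\alpha$, $aA_J$ behaves oppositely, and their minimum peaks near the breakpoint. The rectangular-hyperbola form assumed for Eq 14 ($E = E_p g/(g + 1/[(1+\epsilon)r_b])$, here inverted for $g$) and all numerical values are illustrative assumptions, not the authors' parameterisation.

```python
# Sketch: whole-plant assimilation aA versus leaf area a, with A = min(A_V, A_J).
# All parameter values are illustrative, not taken from the paper.
import numpy as np

C_a, Gamma, Kp = 360e-6, 40e-6, 700e-6   # ambient CO2, compensation point, K' (mol/mol)
r_b, eps = 0.5, 2.0                      # boundary-layer resistance (m2 s mol-1); epsilon
E_p, alpha = 5e-3, 0.05                  # potential transpiration (mol m-2 s-1); E_t/E_p
I2, theta = 0.425e-3, 0.7                # PSII-absorbed irradiance; colimitation factor
N_scale = 100e-6                         # V = N_scale/a, echoing the text's V = 100/a

def colimited_A(cap, Km, R):
    """Smaller root of R*A^2 - (C_a + Km + cap*R)*A + cap*(C_a - Gamma) = 0,
    i.e. demand A = cap*(C_i - Gamma)/(C_i + Km) meeting supply A = (C_a - C_i)/R (Eq 18)."""
    b = C_a + Km + cap * R
    return (b - np.sqrt(b * b - 4.0 * R * cap * (C_a - Gamma))) / (2.0 * R)

a = np.linspace(1.01 * alpha, 10 * alpha, 500)   # leaf area grid, a > alpha
E = alpha * E_p / a                              # per-leaf-area transpiration, E = E_t/a
g = E / ((E_p - E) * (1.0 + eps) * r_b)          # assumed hyperbola (Eq 14) inverted for g
R = 1.6 / g + 1.37 * r_b                         # total CO2 diffusion resistance (Eq 18)
V = N_scale / a                                  # Rubisco capacity proportional to N = N_t/a
Jm = 2.1 * V                                     # ratio as in the Fig. 2 caption
J = (I2 + Jm - np.sqrt((I2 + Jm) ** 2 - 4 * theta * I2 * Jm)) / (2 * theta)  # Eq 25

A_V = colimited_A(V, Kp, R)                      # Rubisco-limited rate (Eq 23)
A_J = colimited_A(J / 4.0, 2.0 * Gamma, R)       # electron-transport-limited rate (Eq 24)
aA = a * np.minimum(A_V, A_J)                    # Eq 27, per unit ground area

i = np.argmax(aA)
print(f"optimal a ~ {a[i]:.2f} m2 leaf / m2 ground; "
      f"aA ~ {1e6 * aA[i]:.1f} umol m-2 ground s-1; "
      f"A_V/A_J at optimum = {A_V[i] / A_J[i]:.2f} (near the breakpoint)")
```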
How Does the Optimisation Change with Different Water Supply/Aridity?

While the optimisation condition of homogeneity (Eqs 10 to 12) is general enough to include diurnal variation in the environment, we restrict our exploration at this stage to a static environment representing a day's duration. While drought is a stochastic phenomenon, the average period between rainfalls is usually considerably greater than a day in most places of interest, so that λ can be taken as a constant here. We now examine how the simple model, incorporating leaf biochemistry (Eqs 23 to 28), but with no diurnal variation in light intensity or other variables, or effects of evaporative cooling on photosynthesis, predicts response to change in α, the supply of water relative to demand.

Using numerical computation we see that as α increases, so too do a, g, A and C_i, in the example chosen (see Fig. 2). As α changes from 0.05 to 0.3, a factor of 6, equivalent to a 6-fold increase in total transpiration, E_t, the conductance g and area a share the required increase, becoming 3.3 times and 2.9 times their respective initial values. For α > 0.3, modelled stomatal conductance increases more than leaf area. In practice, numbers of plants per unit area also increase with increasing α, and so the sharing will be three-way, diminishing still further the changes required at the individual leaf level.

Our simple analysis suggests that as conditions become more arid, there should be both a smaller stomatal conductance and less leaf area with greater nitrogen per unit area. The associated decline in intercellular CO2 concentration means a reduction in carbon isotope discrimination. The increase in N with decreasing water availability was also observed among diverse Eucalypt species by Mooney et al. (1978), and has also been seen among Eucalypts on the NAT Transect (Schulze et al. 1998; Miller, Williams and Farquhar, unpublished data) and among other perennial species in eastern Australia (Wright et al. 2001).

Fig. 3 shows a subset of the unpublished data of Miller et al. The data are of sun-exposed leaves from the upper parts of Eucalyptus dichromophloia trees, collected near the middle range of the NAT Transect (see Miller et al. 2001 for details), and plotted against mean annual rainfall at the collection site. The nitrogen concentration (expressed per unit leaf area, but not as a mass fraction) decreases with increasing rainfall. There is a hint that the slope flattens at high rainfall, and this is clearly evident when data for all Eucalyptus species in the study are synthesised (Miller et al. unpublished).

The model shows the same curvature (see Fig. 2). In the model it occurs because N_t is constrained, and so N is proportional to 1/a. It turns out that a increases reasonably linearly with α, giving the positive curvature seen in N. The same curvature is seen in data on Hakea and other members of the Proteaceae (B. B. Lamont, P. K. Groom and R. M. Cowling, personal communication). The same shape occurs in the data on leaf mass per unit area in the above references, as N/mass changes less, and is also seen in the plot by Roderick et al.
(2000). The latter authors draw attention to the need to consider soil acidity in assessing rainfall gradients, and obviously the present treatment is blind to those effects.

It is important to note, also, that it is only the nitrogen associated with photosynthesis that is included in our model. Nitrogen that does not increase photosynthesis directly (for example, N in chlorophyll, light-harvesting complexes, and lignin) introduces an inhomogeneity in the relationship between A and N. Inclusion of light-harvesting inhomogeneities would favour higher leaf areas, as the relative nitrogen cost of light-harvesting is lower in thin leaves (Evans 1998), but nitrogen overhead that does not necessarily scale with photosynthetic capacity, such as that required for manufacturing epidermal and vascular tissue, would favour lower leaf areas. It is thus not certain, a priori, what effect inclusion of other nitrogen inhomogeneities would have on the results of this analysis.

The modelling above relates to the ratio, α, of water supply, E_t, to potential evaporation, E_p, and so can be interpreted in terms of humidity, or of thermal radiation, as well as rainfall. So, as humidity decreases, or thermal radiation increases, the result is the same as rainfall decreasing, i.e. a decrease of total leaf area (a) and a concomitant increase in N per unit leaf area (N); we say nothing about the size of individual leaves, but do use a particular value of r_b for computations.

How Does the Optimisation Change with Changing Irradiance?

The model predicts that as irradiance decreases, leaf area, a, increases and nitrogen per unit leaf area, N, decreases concomitantly. For the same parameter values as above, and with α set at 0.3, a is 0.37 at 2000 µmol m⁻² s⁻¹, 0.44 at 1500, 0.59 at 1000, 1.04 at 500, and 2.50 at 200. In the calculations α is constant, but in practice thermal radiation and irradiance are correlated. Introducing such a correlation merely reinforces the pattern above. The prediction is in line with observations of decreases in N and photosynthetic capacity as growth irradiance is reduced (e.g. von Caemmerer and Farquhar 1984, Evans 1998).

In practice, of course, as a becomes larger, self-shading becomes more important. Such effects on canopy gas exchange are dealt with numerically in Buckley et al. (2002). An analytical approach to the optimisation equations required when self-shading occurs will be developed elsewhere.

How Does the Optimisation Change with Changing Nitrogen Availability?

The model predicts that as total nitrogen increases, leaf area, a, increases. The model is parameterised such that nitrogen per unit leaf area, N, is represented by maximum Rubisco activity, V. In the examples above, the latter was set at 100/a µmol m⁻² s⁻¹. For the same parameter values as above (I = 1000, α = 0.3), a is 0.31 (the minimum) at V = 25/a µmol m⁻² s⁻¹, 0.39 at 50/a, 0.59 at 100/a, 0.72 at 150/a, and 0.84 at V = 200/a. This means that the eight-fold increase in total nitrogen, N_t, is accompanied by only a 2.7-fold increase in Rubisco/leaf area, and a 3-fold increase in leaf area, a.
Assimilation rate per unit leaf area, A, increases by only 57%, from 22.8 to 30 µmol m⁻² s⁻¹, because of the constraint on total transpiration, and hence on conductance and intercellular [CO2]. The result is in accord with physiological experience of what happens when nitrogen availability is increased to plants. Masle (1982) and Evans (1983) showed how leaf area in wheat increased with additional available nitrogen, for example. Of course, in the present simulation water use is constrained to a fixed rate, and we are unaware of papers where nitrogen effects on leaf area were so constrained. However, the result highlights the essential linkage between the optimization of nitrogen and water use. In fact, Buckley et al. (2002) show that it is not possible to identify optimal values for leaf nitrogen content for leaves within a canopy unless the response of stomatal conductance to a hypothetical perturbation in N is known. One obvious solution to this dilemma is to constrain stomatal behavior by optimising both water and nitrogen use simultaneously, as we have done here.

The simplifications involved in the modelling are numerous. Only transpiration, photosynthesis, and the nitrogen associated with photosynthesis are considered. There is no self-shading, or consideration of other nitrogen costs, or of leaf longevity. The optimisation has been written in terms of the "benefit" of carbon assimilation. In practice, there are costs in terms of the carbon required to construct a leaf, and these need to be paid back over the lifetime of the leaf (Givnish 1986). See also Reich et al. (1997) for interesting data relating leaf properties to longevity. Even the treatment of photosynthesis is simplified, with no distinction being made between the [CO2] in the intercellular spaces and that at the sites of carboxylation. Such a treatment would more realistically penalise high N concentrations. The N costs of light harvesting are also ignored, and the N within the leaf is assumed to be distributed in such a way that potential electron transport rate, J, is always homogeneous in N and I (Eq 25). In practice, this may be impossible even in theory for leaves that receive solar beam light from different angles over the day, and scattered light in varying quantities and directions over the day. In Buckley et al. (2002) we also explore a counter-example, by including the overhead N cost for light capture and assuming uniform distribution of all other N within a leaf (Badeck 1995). This represents a limiting, non-optimal scenario (Farquhar 1989).

Despite these simplifications, the treatment developed in the present study manages to predict several features relating to aridity, irradiance and nitrogen availability that are in accord with observations. These are summarised below.

Summary

An equation is developed for the simultaneous optimisation of nitrogen and water use by leaves of a non-self-shading plant over a short period of time, such as a day. The result is that total assimilation is a scaled linear sum of total nitrogen and total transpiration (see Eq 12). The result applies when environmental conditions vary diurnally, but, again, with no self-shading.
This somewhat general description of the optimisation of CO2 assimilation with respect to water use and the display of nitrogen is then explored for ecophysiological insight. It is applied to a simple model of the environment, where there is no variation in time or space (and no self-shading), and assessed using a biochemical model of photosynthesis. The analysis suggests that as conditions become more arid, there should be (1) a smaller stomatal conductance, (2) less leaf area per plant, (3) greater nitrogen per unit leaf area, and (4) less carbon isotope discrimination. These predictions are in accord with observations of several authors. Similarly, the simple model also predicts the commonly observed decrease in nitrogen per unit leaf area, and in photosynthetic capacity, as growth irradiance is reduced, and the increase in leaf area as available nitrogen increases.

Appendix. Dependence of canopy assimilation rate on leaf area for temporally and spatially constant environmental conditions, using a biochemical model of photosynthesis.

We seek the condition that the derivative $d(aA)/da$ (Eq A1) is zero, in order to find where $a$ has a value leading to maximum $aA$. We first evaluate the second term on the right-hand side of Eq A1 (Eq A2); the last term in the denominator of Eq A2 is given by Eq A3, and substituting Eq A3 into Eq A2 completes the second term in Eq A1. The last term in Eq A1 (Eq A4) is evaluated by noting that Eq 14 implies Eq A5 and that, ignoring temperature effects as $g$ changes (see Buckley et al. 2002 for what would be required), Eq A6 holds. Equations A6 and A5 together form the last term in Eq A1, so that by substituting Eq A3 into Eq A2 as the second term in Eq A1, we obtain Eq A7.

We need to evaluate $d(aA)/da$ under both Rubisco- and electron-transport-limited conditions. Consider first the Rubisco-limited rate, $A_V$, as given by Eq 23 in the main text. In this case, and because, from Eq 15, $1.6/g > 1.37\,r_b$, the second (large) term in the brackets on the right-hand side of Eq A7 is always greater than 1. Thus $aA_V$ decreases with $a$ (that is, $d(aA_V)/da < 0$), and only becomes zero when $A$ is zero, so that the homogeneity condition is not met. See text after Eq 23.

In the electron-transport-limited condition ($A_J$; see Eq 24) the behaviour is reversed, because it is $J_m$ that is linear in $N$, and not $J$, except at very low capacities (large $a$). This means that $aA_J$ generally increases with $a$ (that is, $d(aA_J)/da$ is generally > 0, becoming zero and reversing slightly at large $a$). See text after Eq 28.

In summary, the homogeneity condition (Eq 12) occurs in the branch $A = A_J$, but usually near the "breakpoint" where $A_J = A_V$.

Fig. 1. The products $aA_V$ (Rubisco-limited whole-plant assimilation rate) and $aA_J$ (electron-transport-limited rate) are plotted versus leaf area, $a$. The actual value of total assimilation rate is the minimum of $aA_V$ and $aA_J$ (solid line). Its maximum value, which therefore defines the optimal leaf area, $a$, occurs when $aA_J$ is limiting, but near to where $aA_V$ and $aA_J$ intersect, corresponding to co-limitation by Rubisco and electron transport. The parameter $\alpha$, which is a measure of rainfall (see Eq 16 in the text), = 0.05. $C_a$ = 360 µmol/mol.

Fig. 2.
Stomatal conductance, g, leaf area, a, assimilation rate per unit leaf area, A, intercellular [CO2], C_i, and leaf nitrogen concentration, N, are shown as they relate to α, which is a measure of rainfall (see Eq 16 in the text). N declines in a saturating fashion with rainfall; leaf area, assimilation rate and C_i increase in a saturating manner; and stomatal conductance increases with rainfall, but with slightly positive curvature. The values of parameters are scaled to their values at α = 0.25, which are g = 0.55 mol m⁻² s⁻¹, a = 0.51 m² leaf/m² ground, A = 19.1 µmol m⁻² s⁻¹, and C_i = 226 µmol/mol. The nitrogen content is that giving a Rubisco capacity of V = 400 µmol m⁻² leaf s⁻¹, and an electron transport capacity of J_m = 2.1 · 400 µmol m⁻² leaf s⁻¹. Also, C_a = 360 µmol/mol.

Fig. 3. Nitrogen per unit leaf area of Eucalyptus dichromophloia leaves collected along the Northern Australia Tropical Transect vs. mean annual rainfall at collection site. These data form part of a larger unpublished study by Miller, Williams and Farquhar. Details of collection are as those described for carbon isotope discrimination by Miller et al. (2001). The dotted line has the form N = N₀ exp(−kr), where r (mm) is rainfall, N₀ = 196 mmol N m⁻², and k = 8 · 10⁻⁴/mm.

Table. List of symbols.
A_J — RuBP regeneration-limited expression for A (mol CO2 m⁻² leaf s⁻¹)
V — maximum RuBP carboxylation rate (mol CO2 m⁻² leaf s⁻¹)
J — potential electron transport rate (mol e⁻ m⁻² leaf s⁻¹)
J_m — maximum potential electron transport rate (mol e⁻ m⁻² leaf s⁻¹)
I₂ — useful irradiance absorbed by PSII (mol photons m⁻² leaf s⁻¹)
f — fraction of leaf-absorbed light unavailable for CO2 assimilation (unitless)
θ — colimitation factor relating J to J_m and I₂ (unitless)
Γ — CO2 compensation point (mol CO2 mol⁻¹ air)
K′ — effective Michaelis-Menten constant for Rubisco (mol mol⁻¹ air)
χ — marginal cost of assimilation in terms of X (mol CO2 mol⁻¹ X s⁻¹)
X — additional limiting resource, e.g. phosphorus (mol X m⁻² leaf)
X_t — total X (mol X plant⁻¹)
v3-fos-license
2019-04-16T13:28:55.718Z
2018-07-31T00:00:00.000
117676727
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://ee.zntu.edu.ua/article/download/143291/140844", "pdf_hash": "a3c0d3d193d69f8f2339f804545094a61ae3c53a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41454", "s2fieldsofstudy": [ "Physics" ], "sha1": "ac972c7cc5891f8bcd066a3e35d86698358891c7", "year": 2018 }
pes2o/s2orc
ANALYSIS OF ELECTROTECHNICAL PROPERTIES OF INNOVATIVE HIGH-TEMPERATURE WIRES FOR OVERHEAD POWER TRANSMISSION LINES Purpose. Determination of the capacity of wires of overhead power transmission lines based on innovative materials without changing the currently used structures, as well as the possibility of increasing the voltage class of overhead transmission lines when using wires based on aluminum-zirconium materials. Methodology. Analytical method for determining the throughput capacity of overhead power transmission lines. Comparative analysis of the electrical characteristics of wires of overhead power transmission lines. Findings. The possibility of increasing the capacity of overhead power transmission lines while maintaining the wire cross-section, using an innovative material based on an aluminum-zirconium alloy, has been proved. A reduction in the weight of wires based on innovative materials, while maintaining the current-carrying capacity, is justified. The advantages and disadvantages of European wire structures for overhead power transmission lines using innovative material based on an aluminum-zirconium alloy are revealed. The optimal design of wires based on the innovative aluminum-zirconium alloy for overhead transmission lines, permissible for use on the territory of Ukraine, has been determined. Originality. The expediency of using the traditional designs of the wires of overhead power transmission lines has been proved in the case of using the innovative material. The possibility of increasing the voltage class of overhead power transmission lines using wires based on aluminum-zirconium materials has been substantiated. Practical value. Results are obtained regarding the resistance of overhead power transmission lines to peak loads, taking into account the low costs of modernization with the use of an innovative material based on an alloy of aluminum and zirconium. The use of the innovative material creates conditions for increasing the voltage class of overhead power transmission lines, which allows the power transmitted to the consumer to be increased. The use of materials based on aluminum-zirconium alloys makes it possible to carry out measures for the reconstruction of electric supply networks without replacement of supports and without additional work on land allocation, both in the case of reconstruction without increasing the voltage class of the line and in the case of increasing the voltage class.

I. INTRODUCTION

High-voltage overhead transmission lines in the coming decades will continue to determine the development of both the world and the domestic electric power industry. The current state of Ukraine's high-voltage transmission lines mirrors the economic situation in the country. In order to overcome the problems in the current economic situation, many energy-saving ideas for power supply systems have been proposed [1], [2]. The degree of physical aging of the existing fleet of high-voltage overhead electric lines depends on the commissioning of new facilities. Simple mathematical calculations show that if new equipment is commissioned at a rate that renews 8% of the fleet per year (enough to increase the power of the transmission systems by half every six years), then about 8% of the network equipment will be older than 30 years. If the renewal of the network park is 4% per year, the share of equipment older than 30 years rises to 29%, as the sketch below illustrates.
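As a quick check of that arithmetic, here is a minimal sketch, assuming only that a constant fraction r of the fleet is replaced each year, drawn uniformly, so that the steady-state share of equipment older than T years is (1 − r)^T:

```python
# Sketch: steady-state share of equipment older than 30 years, assuming a
# constant annual renewal rate r applied uniformly across the fleet.
for r in (0.08, 0.04):
    share_over_30 = (1.0 - r) ** 30
    print(f"renewal {r:.0%}/year -> {share_over_30:.0%} of equipment older than 30 years")
# renewal 8%/year -> 8% of equipment older than 30 years
# renewal 4%/year -> 29% of equipment older than 30 years
```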
Taking into account the requirements of GOST 839-80 [3] on the continuity of the performance of wires based on aluminum wires, the lack of renewal of capacities for 50 years may begin to be critical for the entire electric system of Ukraine. Already, we can say that solving this problem by an extensive path is almost hopeless. To solve this technical problem, it seems necessary to differentiate the task of updating the fleet of electricity transmission capacities. It is technically feasible and affordable to conduct a survey of the residual load-bearing capacity of the masts of overhead power transmission lines (OPTL) in order to decide whether to perform routine maintenance instead of completely replacing them. The basis for this is the emergence of the possibility of using innovative designs of lightweight wires based on aluminum alloys doped with zirconium [4]-[8]. There are two tasks: the renewal of existing overhead lines in connection with their wear and tear, and the increase in capacity through the commissioning of new transmission lines. The use of aluminum alloys allows these two tasks to be solved simultaneously, using the existing masts of transmission lines. As a rule, the cross-section of the power line wires is chosen taking into account peak loads. At the same time, these loads occur for only a few hours a day; the rest of the time the wire cross-section is used inefficiently. The solution to this problem is to increase the thermal stability of the wire, allowing the transmitted power to be increased during peak loads due to a higher permissible operating temperature. One of the directions of modernization of the infrastructure of the electric power networks of overhead lines is the use of new thermally stable materials that must combine high electrical conductivity and sufficient strength, which persists after heating up to 240 °C. Since at these temperatures the crystallographic structure of undoped aluminum is highly disordered, it is not possible to create heat-resistant wires based on aluminum grades of the type A5E and A7E. The solution is the creation of low-alloyed aluminum alloys with the addition of zirconium [9]-[10]. The indicators of quality, reliability and efficient operation of the united energy system of Ukraine directly depend on the functioning and condition of overhead transmission lines. According to data for 2017, in Ukraine the length of 0.4 kV transmission lines amounted to 449,832 km, and that of 6-10 kV lines to 332,568 km; in addition, there is a tendency for an annual increase in line length. There is a need to increase the efficiency of power systems, which is achieved by increasing the nominal network voltage [9]. The task can be solved by choosing a new and effective direction in the development of power transmission systems, based on scientifically grounded technical solutions using modern methods and technologies. The directions and ways of solving this task in the context of reforming property relations in the energy sector are determined by the technical policy of the Ministry of Energy and Coal Industry of Ukraine, approved by the protocol of the scientific and technical council of September 14, 2016, as a set of tools intended to implement the provisions of the Law of Ukraine of 16.10.1997 No. 575/97-ВР "About electric power engineering" [11].
This law provides for the creation by the state of conditions for the development and enhancement of the technical level of the electric power industry. The papers [12]-[13] show the actual and projected volumes of electricity consumption in Ukraine by groups of consumers for the period up to 2025 (Table 1). The process of replacing morally and physically worn-out equipment is at present rather slow. The volume of such equipment in Ukraine is, by various estimates, from 40 to 80%. At present, this process has slowed down even more. As a result, the loss of electricity is growing; besides, given the current situation in Ukraine, the question arises of the prospects for the development of overhead transmission lines. This is due to increased energy consumption. However, the increase in demand for electricity under conditions of difficulties with land allocation determines the need to increase the capacity of existing lines and reduce power losses.

II. ANALYSIS OF RECENT RESEARCH

In their publications, D. Zotov [5] and V. N. Kuryanova [6] compared overhead power transmission wires made from the innovative material in the designs shown in Fig. 1. Electrotechnical aluminum and aluminum alloys doped with zirconium have a significant difference in properties. The resistivity of aluminum-zirconium alloys is somewhat higher than that of aluminum. This is also fixed in the norms for wires [3], [14], [15]. In the works [4], [5], [8], [16], researchers reported the possibility of reducing the resistance of OPTL wires based on Al-Zr alloys to that of wires of the brand AC by using new design solutions. However, none of the considered innovative designs of OPTL wires is based on the existing national regulatory documents of Ukraine. In this regard, their application in Ukraine is extremely difficult. Therefore, comparative studies of the properties of overhead line wires made on the basis of aluminum and aluminum-zirconium alloys, within the framework of the current regulatory documents for the design of OPTL wires, will enable us to compare the properties of the materials under study under the same design conditions, and also to substantiate the technical feasibility of using such OPTL wires under the current legislation. The properties of heat-resistant wires for overhead lines based on aluminum alloys doped with zirconium were studied earlier by a wide range of researchers [4]-[8]. They compared these innovative products with traditional wires based on electrical aluminum. However, in all of these publications, aluminum and high-temperature wires were compared in different versions of the design. As it seems to us, the design has a significant effect on the technical properties of the wires. In [12], wires of new design types based on innovative materials were compared with wires of traditional design based on traditional materials. The data given by these authors do not allow us to evaluate separately the effect of using a new material (AlZr) on the properties of the products. For a more complete evaluation of the new material itself, in this article we consider the properties of the new materials in wire constructions in accordance with GOST 839-80 [3] and JEC 3406: 1995. III.
FORMULATION OF THE WORK PURPOSE

The purpose of this paper is to determine, on the basis of available data on the properties of aluminum and aluminum-zirconium materials, the range of technical applicability of OPTL wires based on innovative materials for overhead transmission lines without changing the currently used designs, as well as to justify the possibility of upgrading the voltage class of overhead power lines using aluminum-zirconium-based wires.

IV. EXPOUNDING THE MAIN MATERIAL AND RESULTS ANALYSIS

The basis for the comparison was the construction of a wire of the AC type in accordance with GOST 839-80 [3]. The results of calculations for wires of high-voltage lines of the same cross-section, obtained on the basis of the initial data and the above-mentioned standards, are shown in Fig. 2. A comparison of these data shows that the throughput of wires based on a heat-resistant alloy of the AT brand is significantly increased (from 1.7 to 2.3 times, depending on the selected wire grade) compared to the AC wire. Innovative heat-resistant wires of various designs thus allow the transmission capacity of power lines to be increased significantly due to higher permissible current loads. In Fig. 2, the ordinate shows the current, A; the wire cross-section is indicated along the abscissa, where the first value corresponds to the aluminum part and the second to the steel part of the wire, mm²; AC denotes the use of conventional aluminum, and AT1, AT2, AT3, AT4 the use of aluminum-zirconium instead of pure aluminum.

When replacing the aluminum wires in wires of the AC type with aluminum-zirconium wires, the capacity of the line also increases, which can lead to a favorable economic effect. Reducing the cross-section of the wire, while maintaining the current load, will lead to a reduction in the amount of material expended. Preserving the wire cross-section with the use of the innovative material makes it possible to increase the load of the overhead line. However, this method of reconstructing OPTL is more problematic, since it is necessary to take into account the distance between the wires when the voltage of the overhead line is increased, which is limited by the structure of the existing supports. Proceeding from this, the calculation and selection of the cross-section of OPTL wires based on AT wires was made; the admissible current for these is compared with the permissible current of AC wires of the nominal cross-section. The obtained data are presented in Table 2.

Analyzing the data of various manufacturers on the significant increase in the cost of heat-resistant wires [5] compared to wires based on electrical aluminum, and the technological features of manufacturing heat-resistant wires [9], [10], it can be noted that the manufacturing technology of heat-resistant wires almost completely coincides with the technology of aluminum wires. According to the research of the above-mentioned authors, the same equipment is used. There are two differences. The first is the introduction of a zirconium ligature into the aluminum melt. The second is a long-term (no less than 10-20 hours) heat treatment (up to 450 °C). Given the small amounts of zirconium introduced (0.35...0.45 mass %), doping with zirconium cannot give a multiple increase in cost. It seems that the most significant contribution to the cost price is made by thermal processing.
According to the data of the only manufacturer of heat-resistant wires on the territory of Ukraine, LLC Krok-GT (Zaporozhye), the duration of heat treatment can be substantially reduced by complex alloying of the aluminum-zirconium melt. The complex doping increases the rate of decomposition of the solid solution of zirconium in aluminum to the Al₃Zr phase, and thus the heat treatment time is shortened and its conditions are facilitated.

We estimate the transmission capacity of the transmission line as a function of the increase in voltage. The power transmitted by the network is determined by the formula

$$S = \sqrt{3}\,U_n J F,$$

where S is the transmission power, U_n is the nominal line voltage, J is the permissible current density, and F is the cross-sectional area of the wire. Thus, the increase in transmission capacity is directly proportional to the increase in voltage in the transmission line.

We estimate the relative voltage drop in the line at a double increase in the voltage by the formula

$$\Delta U = \frac{PR + QX}{U_n^2},$$

where P is the active power of the line, R is the line resistance, Q is the reactive line power, and X is the reactance of the line. With the same load (transmitted power), with a twofold increase in voltage across the line, we will have ΔU(2U_n) = ΔU(U_n)/4. Consequently, a double increase in the voltage on the line leads to a fourfold decrease in the voltage drop in the line.

The loss of power in the line can be represented as

$$\Delta P = \frac{(P^2 + Q^2)R}{U_n^2}.$$

With the same load, doubling the voltage in the line will result in ΔP(2U_n) = ΔP(U_n)/4. These considerations are confirmed by the data presented in Table 3. However, increasing the power transmitted over the line increases the amperage; consequently, the possibility of increasing the voltage and the transmitted power is limited by the value of the permissible current load.

The foregoing allows us to recommend the use of aluminum-zirconium materials for the production of wires for overhead lines, in order to carry out a set of measures to reduce losses in overhead lines and increase transmission capacity. Thus, based on the data obtained, it can be concluded that the use of OPTL wires based on aluminum-zirconium wires is possible in two versions: first, the reconstruction of the transmission line based on the reduced mass of the innovative wires with the transmitted power unchanged; and second, a multiple increase in the transmitted power without changing the supports.

In accordance with the requirements of IEC 62420, heat-resistant wires must be manufactured with a gap between the steel core and the aluminum-zirconium wires (see Fig. 1, b). Constructively, this can be achieved only if profiled wires are used. This causes a rise in the cost of making the wire. Consider how technically justified this is. The creation of a wire with a gap was intended to transfer the entire mechanical load to the steel core. In this case, the wire ceases to work as a composite material made of steel and aluminum wires. Its mechanical properties are determined by the properties of the steel core (coefficient of thermal expansion (CTE), modulus of elasticity, etc.), so the smaller CTE of steel determines a smaller sag due to a smaller elongation with increasing operating temperature. Comparative characteristics of the wire sag, depending on the achieved operating temperature and based on the cited data, are shown in Fig. 3.
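Before turning to the mechanical comparison in Fig. 3, a quick numeric check of the voltage-doubling estimates above; the per-unit relations and all line parameters in this sketch are illustrative assumptions, not values from the paper:

```python
# Sketch: effect of doubling the nominal voltage at constant transmitted power,
# using the per-unit relations above: dU ~ (P*R + Q*X)/U^2, dP ~ (P^2 + Q^2)*R/U^2.
def rel_voltage_drop(P, Q, R, X, U):
    return (P * R + Q * X) / U**2        # relative (per-unit) voltage drop

def power_loss(P, Q, R, U):
    return (P**2 + Q**2) * R / U**2      # active power loss, W

P, Q = 40e6, 15e6         # transmitted active/reactive power (W, var), assumed
R, X = 5.0, 12.0          # line resistance/reactance (ohm), assumed
for U in (110e3, 220e3):  # nominal voltage before and after the upgrade
    dU = rel_voltage_drop(P, Q, R, X, U)
    dP = power_loss(P, Q, R, U)
    print(f"U = {U/1e3:.0f} kV: voltage drop {100*dU:.2f} %, losses {dP/1e6:.2f} MW")
# Doubling U cuts both the relative voltage drop and the losses by a factor of 4.
```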
Fig. 3. Wire sag versus operating temperature: the ACSR wire (Aluminum Conductor Steel Reinforced); the curve in red shows the dependence for wires made from the innovative materials of the GTACSR and GZNFCSR brands.

However, a more complex design of the wire leads to a complication of the technology of its manufacture. At the same time, the use of such a design necessitates the use of more sophisticated technology for the installation of wires and, as a consequence, the need to develop and use special equipment and fittings. This requirement is due to the need to fix the wire to the insulators by the steel core, since traditional fastening negates all the positive advantages of this design. The same causes also make repair of the wire more complicated, and experience with such repair does not exist in Ukraine. All this together creates the need for more highly qualified personnel. It can be assumed that it was these considerations that guided Lumpi-Berndorf in making the decision to produce TACSR (Thermally-resistant Aluminum-alloy Conductor Steel Reinforced) wires on the territory of the European Union based on the standard [14]. Structurally, these wires correspond to wires of the brand AC. The classic aluminum wires of overhead lines are structurally identical to the TACSR/ACS and (Z)TACSR/HACIN wires. There is the possibility of mounting these wires with the help of already known types of armature, the production of which is established and put on stream. As a consequence, there is no longer any need to master a new installation technology, to purchase equipment, or to upgrade the skills of the workforce. Thus, the company Lumpi-Berndorf uses the same methods during the installation and repair of the wire as for installing and repairing the standard AC wire. However, it is worth noting the need to use specially designed fittings for operation at high temperatures.

V. CONCLUSION

The possibility of increasing the capacity of overhead power transmission lines while maintaining the wire cross-section, using an innovative material based on an aluminum-zirconium alloy, is proved. The design of the wires according to IEC 62420 has a number of advantages over the classical design. However, the transition to the use of this structure is difficult. The construction of the wire in accordance with GOST 839-80, with the replacement of aluminum by aluminum dispersion-strengthened with Al₃Zr nanoparticles, is able to withstand the growing loads in the power supply network. Heat-resistant wires based on aluminum-zirconium have already been used and standardized in the United States and in the European Union. However, the construction of wires based on these standards is not the most optimal for use on the territory of Ukraine. Using the new material allows the capacity of the overhead power line to be increased while maintaining the wire cross-section (by more than a factor of two), or the mass of active wire material to be reduced by 40% without reducing the voltage class. The optimal design of wires based on the innovative aluminum-zirconium alloy for electric transmission lines, permissible for use on the territory of Ukraine, is determined. To implement and use heat-resistant wires in Ukraine, it is necessary to set up their production and to solve the problems associated with the issuance and registration of the relevant regulatory materials.
v3-fos-license
2019-03-08T14:19:42.010Z
2013-09-24T00:00:00.000
71689083
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/archive/2013/420361.pdf", "pdf_hash": "48bbfdc3ab4fab11342badaf7b859c77b6806f53", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41455", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2c730ba0bcc3c640fd035389ccdd637c1ecbb596", "year": 2013 }
pes2o/s2orc
Vertical Transmission of HIV in Sub-Saharan Africa: Applying Theoretical Frameworks to Understand Social Barriers to PMTCT In sub-Saharan Africa, over 1,000 newborns are infected with HIV every day, despite available medical interventions. Pediatric HIV is a large contributor to the high rates, the largest in the world, of infant and child mortality in this region. Prevention of mother-to-child transmission of HIV (PMTCT) can dramatically reduce the risk of infection for the infant during pregnancy, childbirth, and breastfeeding. Throughout most urban areas of Africa, free medications are readily available. However, approximately 50% of HIV-positive pregnant women in sub-Saharan Africa are not accessing or adhering to the necessary medications to prevent mother-to-child transmission. In order for this region to eliminate the vertical transmission of HIV and meet the Millennium Development Goals, interventions need to move beyond the individual level and address the structural and social barriers preventing women from utilizing PMTCT services. This paper reviews current literature on PMTCT interventions in sub-Saharan Africa from 2006-2012, specifically examining theoretical underpinnings. Overwhelmingly, the approach has been education and counseling. This paper calls for a paradigm shift to a social ecological approach that addresses barriers at all societal levels, especially gender inequality, enabling a much greater impact on mother-to-child transmission of HIV.

Introduction

Worldwide, 1% of pregnant women are HIV-positive. However, sub-Saharan Africa, where 95% of HIV-positive women live, carries the vast majority of this burden [1]. Without treatment, approximately 25%-50% of HIV-positive mothers will transmit the virus to their newborns during pregnancy, childbirth, or breastfeeding [2]. In 2007, over 2 million children worldwide were living with HIV/AIDS, with the overwhelming majority again in sub-Saharan Africa [3,4]. Approximately 400,000 infants contract HIV from their mother every year, which is about 15% of the total global HIV incidence [5][6][7]. The rate of pediatric HIV infections in sub-Saharan Africa remains unacceptably high, with over 1,000 newborns infected with HIV per day [8].

Pediatric HIV is a large contributor to the excessive infant and child mortality rates in sub-Saharan Africa. The life expectancy of HIV-positive infants is extremely short. One-third of HIV-positive infants are estimated to die before their first birthday and over one-half will die by their second birthday [9,10]. Annually, there are approximately 260,000 pediatric deaths due to AIDS-related illnesses [7]. AIDS remains one of the leading causes of death among children under the age of five years in sub-Saharan Africa [11].

Mother-to-child transmission (i.e., vertical transmission) of HIV is almost completely preventable through a set of interventions referred to as prevention of mother-to-child transmission (PMTCT). PMTCT begins during antenatal care (ANC), when the woman is tested for HIV and receives the result that she is HIV-positive. The recommendation in sub-Saharan Africa is for the woman to then take medication throughout pregnancy, during labor, and in the postnatal period while exclusively breastfeeding. The infant must also undergo periodic HIV testing and take medication to prevent transmission of the virus while he/she is breastfed.
PMTCT can reduce the risk of vertical transmission of HIV to less than 1% [2]. Mother-to-child transmission has almost been eradicated in the United States and Europe, but continues to be a largely uncontrolled problem in African countries [5]. In 2001, the UN General Assembly committed to reduce mother-to-child transmission by 20% by 2005 and by an additional 50% by 2010. The vast majority of countries in sub-Saharan Africa, however, have not been able to meet these goals [12]. Improving access to and utilization of PMTCT in this region is an essential component of addressing the global HIV/AIDS pandemic and of achieving Millennium Development Goals 4, 5, and 6.

PMTCT utilization in sub-Saharan Africa has significantly increased over the past decade but is still far from universal. In 2003, only 3% of HIV-positive pregnant women in this region utilized PMTCT. This percentage dramatically increased to 33% in 2007 and 53% in 2010 [5,12]. Unfortunately, this still leaves about half of all HIV-positive pregnant women not utilizing PMTCT, putting them at high risk for transmitting the virus to their infants. The global health community's efforts to eliminate mother-to-child transmission have been primarily focused on scaling up biomedical services, with little examination of the social barriers that may be preventing women from utilizing and adhering to PMTCT. "Despite technical means and political will, the percentage of pregnant women involved in PMTCT interventions is not increasing as fast as public health authorities, health professionals, and scientists would expect" [5, page 807].

There is a current lack of analysis regarding the social structures in place hindering HIV-positive mothers' PMTCT behavior. In order to eliminate mother-to-child transmission of HIV, the context in which HIV-positive mothers make decisions regarding adherence to PMTCT needs to be better understood and addressed. This analytic paper uses theoretical frameworks from public health and social science to highlight how barriers to PMTCT have typically been understood and addressed. In addition, the paper suggests a more comprehensive theory-based approach to understanding underutilization of and nonadherence to PMTCT.

Methods

From May 10th to June 25th, 2012, the author conducted an online review of PMTCT literature from sub-Saharan Africa using PubMed, Web of Science, and Google Scholar databases. Search terms included: "PMTCT Africa," "Pediatric HIV Africa," and "mother-to-child transmission Africa." The review was restricted to publication dates between 2006 and 2012 to provide the most recent and relevant research. Only articles published in English were included. A review of previous findings was conducted with specific examination of public health or social science theory in the application of PMTCT interventions or study results within sub-Saharan Africa.

Individual-Level Theories and Constructs

The Health Belief Model has been used extensively in the PMTCT literature as a conceptual framework for women's health-seeking behavior and to inform interventions. The Health Belief Model construct of perceived susceptibility has been used to explain a mother's acceptance of HIV testing, receiving the result, and believing that her infant is susceptible to contracting HIV through vertical transmission [13,14].

Perceived benefits relate to mothers' knowledge and belief that PMTCT interventions are beneficial and effective in preventing mother-to-child transmission [13], which is not a universal belief across sub-Saharan Africa.
Perceived barriers are the most widely addressed Health Belief construct in the literature and the most influential for PMTCT utilization. Perceived barriers are defined as a cost-benefit analysis that the individual will make, influencing her decisions [15]. Does the mother believe that the benefit of adhering to PMTCT outweighs the costs/barriers? Established barriers in the literature for PMTCT adherence include fear of knowing one's own HIV status; stigma and discrimination if HIV status is disclosed to partner, family, or the community; and opposition of the male intimate partner [16,17].

Perceived self-efficacy indicates the woman's level of confidence that she is able to complete the steps necessary for PMTCT adherence [18]. A PMTCT intervention in South Africa used this construct as one of its main outcome measures. The authors found that HIV-positive pregnant women who participated in the Mothers2Mothers intervention were "significantly more likely to feel that they could do things to help themselves" and to "feel less overwhelmed by problems" [8]. However, this report did not indicate whether the mothers' beliefs were actually translated into health-seeking behavior regarding PMTCT or adherence to medication.

Interventions that use the Health Belief Model framework typically attempt to increase knowledge through education and counseling as the "cue to action" for mothers [13,18]. For example, a study from 2009 concluded that using the constructs of (1) perceived benefits and (2) cues to action may increase HIV testing during antenatal care (ANC), which is the first step of PMTCT. The author states that a "major information campaign focused on the advantages for pregnant women and their future children of knowing HIV status" is recommended [5, page 810]. However, in many sub-Saharan African countries, widespread PMTCT campaigns are already in place [2,10], yet there is still poor utilization. For example, in Zambia, over 89% of women in 2007 knew that HIV can be transmitted by breastfeeding [19]; however, only approximately 21% in 2009 took ARVs while breastfeeding [20].

The Information Motivation Behavior model was developed specifically to address HIV prevention efforts. The model applies psychosocial concepts and methodologies to create behavior change. The model focuses on increasing individuals' inclination and "ability to practice risk-reduction acts" [21, page 25]. The model affirms that HIV prevention information, motivation, and behavioral skills are the "fundamental determinants of HIV preventative behavior" [21, page 26]. Most of the Information Motivation Behavior-specific interventions have focused on increasing safe sex and adult HIV testing. However, the constructs are also applicable to PMTCT and have been implied in several studies.

Constructs from this model are the basis for PMTCT counseling interventions during ANC that are widespread throughout sub-Saharan Africa [12,22]. Many interventions have attempted to provide mothers with information and increase motivation regarding PMTCT through counseling during ANC visits. The lack of quality counseling has been cited as a reason for poor utilization and adherence. A study in Nyanza, Kenya found that "inadequate counseling services delivered to (pregnant women) were found to affect (PMTCT) service utilization" [14, page 244].
e eory of Planned Behavior is explicitly mentioned in the literature on PMTCT and constructs from Integrated Behavioral Model can be inferred.ese models focus on individual motivating factors as the main determinants of health behavior.e major assumption in these frameworks is that intention is the best predictor of behavior [15]. Constructs of attitude, perceived norms, and personal agency are appropriate to an understanding of PMTCT utilization and have been referenced in many research articles.Several studies in sub-Saharan Africa have used qualitative methods to explore HIV-positive mothers' attitudes (i.e., feelings about the behavior and behavioral beliefs) and perceived norm (i.e., other's expectations, other's behavior) regarding PMTCT [12,16,17,22,23].Frequently, these constructs have been used to analyze pregnant women's acceptance of HIV testing during ANC.Authors have found that intention to get tested has been limited, due to fear of knowing their status [16]; cost of services and con�dentiality [17]; fear of stigma and discrimination [17,22]. Igumbor et al. [13] explicitly use constructs from eory of Planned Behavior to analyze a clinic-based health education intervention in South Africa.eir measures include "salient beliefs" and "behavioral intentions" to use PMTCT services [13, page 396].Behavioral elements that the authors discuss are attitudes, normative beliefs, subjective norms, perceived control, outcome evaluation, motivation, and perceived power.Findings include that women consistently reported low-control beliefs and a weak association between PMTCT salient beliefs and behavioral intention [13, page 394].e authors, unfortunately, did not measure actual behavioral outcomes. Several authors have used Empowerment eory or have advocated women's empowerment based on their �ndings.Igumbor et al. [13] recommend expanding and enhancing interventions that empower women, in order to improve behavioral intention to use PMTCT.Besser [2,8] also concludes that underutilization is related to women's disempowerment.Mothers2Mothers is a PMTCT intervention that began in South Africa and has spread throughout numerous other sub-Saharan African countries.One of its goals is to "empower mothers living with HIV/AIDS, enabling them to �ght stigma in their communities and to live positive and productive lives" [2, page 37].Women's empowerment appears to be an underemphasized, yet crucial, component of increasing PMTCT in sub-Saharan Africa. Interpersonal eories and Constructs. Interpersonallevel factors are especially relevant for the study of PMTCT utilization.Researchers who used individuallevel approaches were generally unsuccessful in increasing uptake of PMTCT among HIV-positive pregnant women (e.g., see [13]).As mentioned previously, social stigma and discrimination are widely discussed as perceived barriers to PMTCT.In addition, fear of partner's reaction or fear of violence/con�ict with the woman's partner may also prevent women from utilizing these services.us, theories regarding social networks and social support are useful in understanding the interpersonal in�uences on HIV-positive pregnant women's decision-making and health-seeking behaviors. 
Adherence to ART in general has been linked to notions of social capital and social responsibility [24]. Social Networks Theory is especially relevant to women's HIV status disclosure, which has been associated with significant improvements in PMTCT utilization [25,26]. Social integration refers to the social ties that affect women's decision making [15]. Awiti Ujiji et al. [23] found that the type of relational ties that exist between the HIV-positive pregnant woman and her network determines disclosure of an HIV diagnosis. Social influence describes how the actions of others affect women's thoughts and actions towards PMTCT [15]. Moth et al. [14] found that pregnant women did not disclose their HIV status to relatives for fear of stigma and discrimination. Lastly, social undermining is the expression of negative affect or criticism from others [15] that may hinder pregnant women's utilization of PMTCT. For example, pregnant women are often reluctant to disclose HIV status for fear of family exclusion [5].

Emotional (empathy, love, trust, and caring), instrumental (tangible aid and services), and appraisal (constructive feedback and affirmation) support from one's partner also appears to affect women's HIV status disclosure to their male partners and subsequent PMTCT utilization [15,23]. Unfortunately, disclosure rates remain extremely low; a multi-site mixed-methods study in Burkina Faso, Kenya, Malawi, and Uganda found that only 37% of HIV-positive pregnant women disclosed their HIV status to their husband [12]. One study found that a major deterrent to returning for HIV results among young women in South Africa was fear of the partner's reaction if the test were positive [22]. Msellati [5] also discusses that women are often reluctant to disclose their HIV diagnosis to their husband out of fear of the consequences, especially intimate partner violence.

Discussion

The major theoretical shortcomings in the current literature on PMTCT are the lack of an ecological approach and of analysis of structural inequality. Health education and counseling, although not entirely ineffective, are the "least effective type of intervention" [27, page 592]. Socioeconomic factors (i.e., social determinants of health), which form "the basic foundation of a society", have the greatest influence over health behaviors [27, page 591] and should be given greater priority in our approach to decreasing pediatric HIV infections.

Very few papers or interventions from sub-Saharan Africa move beyond the individual or interpersonal level to explore the context of women's behavior and decisions regarding PMTCT utilization. For example, the onus has traditionally been placed entirely on the choices and behavior of infected women [28, page 182]. However, this grossly overestimates the personal agency and control of HIV-positive women, especially in populations that are historically patrilineal and have large inequalities in the sexual division of power. Without examining higher levels of the social ecology, we are limited in our understanding and ability to address barriers to PMTCT. The current literature has led to a "desocialization" and "decontextualization" of women's health-seeking behavior, unjustly leaving the sole responsibility to prevent vertical transmission on the infected mother [28, pages 172, 182, 198, 199].
Theoretical and applied literature needs to move beyond the individual and interpersonal levels to explain why women experience social barriers to PMTCT. There is a pressing need to take into account the sexual division of labor, the sexual division of power, and the structure of cathexis (e.g., social norms) in HIV-endemic countries. In addition, there is a lack of investigation into the imbalances in control power women experience in the family. The impact on PMTCT of gendered power imbalances that may be exhibited in the form of physical, emotional, or sexual violence in women's intimate relationships has not been appropriately investigated.

Research and interventions that address multiple levels of influence (structural, societal, institutional, community, interpersonal) will have the greatest likelihood of creating effective behavior change [15]. The Social Ecological Model (i.e., the ecological perspective) was created to examine and address human transactions within their physical and sociocultural environments [15]. The ecological perspective proposes that by adjusting the conditions in which individuals live and interact, we can alter health behaviors and health outcomes [29]. The main hypothesis of this theory is that structural factors, not individual factors, are critical determinants of health.

Intrapersonal factors have been widely addressed throughout the literature on uptake of PMTCT in sub-Saharan Africa, including mothers' attitudes, perceptions, beliefs, and intentions, as seen in the individual-level theories discussed above. However, there is very limited evidence of success among interventions that have only addressed intrapersonal factors. One of the reasons that PMTCT remains underutilized is that the barriers women have discussed in numerous studies cannot be addressed solely through biomedical education and counseling efforts.

Interpersonal processes and primary groups have been recognized in the literature as an influencing factor for HIV-positive pregnant women and new mothers. However, the interventions aimed at increasing PMTCT have again generally relied on targeting the individual with education and training messages. There has been some outreach recently in sub-Saharan Africa to involve men in PMTCT, which is a step in the right direction. A couples' risk-reduction intervention in South Africa found a significant increase in PMTCT uptake and adherence when men are involved [25]. There are no studies to my knowledge on extended-family interventions, which could be promising, as many women discussed fear of disclosure to family members as a major barrier to PMTCT. Addressing stigma in communities, rather than spending money on mass-education campaigns, may be a more effective means of increasing PMTCT, based on HIV-positive women's cited perceptions of fear around disclosure.

Institutional-level barriers to PMTCT, including stock-outs of drugs, lack of health care workers, and poor HIV counseling, have been widely addressed in the medical literature as well. This is a separate topic beyond the scope of this paper, since the author is primarily interested in social barriers to PMTCT rather than logistical constraints.
Lastly, public policy and the political economy are the largest influencing factors for almost all health-related behaviors, including PMTCT utilization and adherence. Public policy surrounding gender inequality is a largely missing piece of the PMTCT literature in sub-Saharan Africa. The cultural and social constraints on women's behavior that may prevent them from accessing PMTCT interventions have not been thoroughly examined. In addition, research in sub-Saharan Africa regarding mother-to-child transmission rates has not been disaggregated by socioeconomic status (SES). Most sub-Saharan African countries suffer from absolute poverty, but there certainly are differing social classes and varying levels of access to health care based on SES.

The Theory of Gender and Power has mostly been applied to women's risk of contracting HIV, but many of its constructs are also useful in our understanding of PMTCT behavior. In many sub-Saharan African populations, women hold very little power in their lives and decision-making. Women's health outcomes and health-seeking behavior are intrinsically related to social structures of gender inequality. There are three interlinked social structures that can be used to understand women's risk (in this case, ability to utilize PMTCT): the sexual division of labor, the sexual division of power, and the structure of cathexis [21]. These societal factors are exhibited on the institutional level (e.g., work, school, family, relationships, church, medical system) and through social mechanisms (e.g., unequal pay and economic opportunity; imbalances in control power; constraints in expectations; disparities in norms) [21].

In addition, a structural violence perspective highlights the economic subordination that is a major constraint in the lives of millions of women in the developing world. The social forces that exert the greatest constraint on human agency are gender and class [28, page 167]. PMTCT research and interventions aimed at the individual incorrectly assume that HIV-positive women in sub-Saharan Africa have agency, when in reality the living conditions and environment of "poverty and gender inequality erode personal agency" [28, page 202].

The barriers that women face to PMTCT stem from broader macro-level economic and social conditions [28]. Socioeconomic barriers include persistent unequal power between men and women; legal discrimination against women; women's low economic status; women's low educational status; and domestic violence [28, pages 165-166]. Using a theoretical framework of structural justice, "the creation of policies and programs which improve women's social status as well as their economic status" [28, page 166] would remove many of the social barriers HIV-positive women experience that prevent them from utilizing PMTCT.
Conclusion

What is needed currently in the research on mother-to-child transmission is a clear understanding of all the factors influencing underutilization and poor adherence to PMTCT. How and why are women's health-seeking behaviors constrained by gender, culture, public policy, and economic factors? Both biomedical and social approaches are needed to address the complex behavior of HIV-positive mothers' adherence to PMTCT. Instead of addressing only individual-level factors through education and counseling about medical interventions, we should also be targeting women's broader living conditions. Only through a combination of individual, community, and structural interventions will we achieve an AIDS-free generation, which requires the elimination of vertical transmission of HIV in sub-Saharan Africa.
Transient intermediates in the thrombin activation of fibrinogen. Evidence for only the desAA species.

The structure of a fibrin gel depends on the nature of the fibrinogen activation products produced by thrombin and the physical conditions under which assembly occurs. Two different structures of the intermediate fibrin protofibril have been proposed, the production of which requires different extents of fibrinopeptide A (FpA) cleavage from fibrinogen. The fibrin activation intermediates must be stable, since time is required for the intermediates to diffuse to growing protofibrils. The classic Hall-Slayter model requires cleavage of both FpAs to form a desAA intermediate. The Hunziker model requires cleavage of only one FpA to form an AdesA intermediate. Electrophoretic quasi elastic light scattering has been used to show the time-dependent production of the relevant fibrinogen activation intermediates, which include desAA but not AdesA.

Since the first description of fibrin structure by Ferry and Morrison (1), controversy has existed over the exact mechanism of fibrin assembly. The issues include the order of release of the fibrinogen activation peptides, fibrinopeptide A (FpA) and fibrinopeptide B (FpB), the importance of the extent of removal of fibrinopeptide A, the manner in which the fibrin monomers are organized into protofibrils, and the manner of bundling of the protofibrils into fibrin fibers. The complexity of the three-step assembly process, including fibrinogen activation, protofibril formation, and fiber bundling, lends itself to the diversity of fibrin structure originally noted by Ferry (2). The final fibrin structure depends on many factors such as the rate of monomer production, fibrin monomer concentration, the number of polymerization sites present on the fibrin monomer, pH, ionic strength, solution viscosity, the presence of other charged molecules, and volume exclusion effects (1, 3-8). FpA is released before FpB, and the release of FpA is sufficient to initiate fibrin assembly (9-11).

Different models for fibrin assembly have been proposed. Support for the classical Hall-Slayter model (12) for fibrin assembly, in which fully activated fibrin monomers with both FpAs cleaved (desAA) are added in an overlapping half-staggered manner to growing protofibrils, has been derived from assembly kinetics, light scattering studies, and electron micrographs (13-15). A recent challenge to the Hall-Slayter model by Hunziker et al. (16) refutes the overlapping monomer sequence in the protofibril. Protofibril assembly in the Hunziker model requires the existence of a fibrin monomer with only one FpA removed (AdesA) in sufficient concentration and with a sufficient lifetime to assemble a non-overlapping protofibril. The fundamental difference between these two models arises from differences in the protofibril structure, which depends on the predominance and stability of AdesA compared with desAA fibrin monomers. Support for the existence of the AdesA monomer is based on gel exclusion chromatography and electron microscopy (16-19). Other investigators using similar methods and peptide sequencing experiments do not find the AdesA fibrin monomer in either sufficient quantity or lifetime to be a significant factor in fibrin assembly (20-22). It is the purpose of this report to apply a methodology that permits direct observation and evaluation in real time of the transient intermediates that form during the activation of fibrinogen.
Electrophoretic quasi elastic light scattering (ELS) can resolve structural differences between fibrin monomers because of differences in the coulombic charge on the different activation intermediates. Quasi elastic light scattering (QLS) without electrophoresis reports on changes in the diffusion coefficients resulting from differences in the mass of the molecule. Thus, ELS should be better suited than QLS to study the products of fibrinogen activation, since fibrinopeptide cleavage changes the charge substantially but does not reduce the mass appreciably.

MATERIALS AND METHODS

Fibrinogen Activation-Highly purified human band I fibrinogen (less than 10% band II) was used in these experiments (23). A working solution of 0.5 mg/ml stock fibrinogen was prepared from a freshly thawed fibrinogen stock solution at 0.3 M NaCl by diluting with 10 mM NaCl, 5 mM Hepes at pH 7.4. Contaminating dust and large fibrinogen aggregates in the working solution were reduced by filtration through a 0.22-µm filter followed by centrifugation at 48,000 × g for 90 min. The final fibrinogen concentration was determined from the solution optical density at 280 nm in a Cary 3E UV/VIS spectrophotometer using an extinction coefficient of 1.6 ml/mg. Further dilution of the working solution to obtain a desired fibrinogen concentration was made with 10 mM NaCl, 5 mM Hepes at pH 7.4. Fibrinogen was activated with 0.005 NIH units/ml human α-thrombin (Sigma). A low concentration of thrombin was used to slow fibrinopeptide cleavage to a rate that could be observed in our experiment. The fibrinogen-thrombin mixture was sampled at various times (2, 4, 6, 8, 10, 15, 20, 25, 30, 40, 60, and 90 min), and thrombin was inhibited by the addition of phenylalanyl prolyl arginine chloromethyl ketone (final concentration of 2.5 µM) at each stage. Fibrinogen quenched at various stages of activation was then examined in the ELS spectrometer for the presence of intermediate activation species. Experiments to examine the release of only fibrinopeptide A were carried out using Atroxin, an enzyme derived from Bothrops atrox (Sigma). In these experiments, Atroxin was added at a final concentration of 1.25 ng/ml, in place of thrombin, which gave equivalent fibrin gelation times. Fibrinopeptide B removal was examined in a similar experiment by adding an enzyme purified from Agkistrodon controtrix venom (Sigma) at 2 µg/ml. Although this enzyme removes predominately fibrinopeptide B, 30% of fibrinopeptide A is also cleaved (9,16). (Abbreviations used: FpA, fibrinopeptide A; FpB, fibrinopeptide B; ELS, electrophoretic quasi elastic light scattering; QLS, quasi elastic light scattering; ACV, A. controtrix venom; desAA, fibrin monomer with both FpAs cleaved; AdesA, fibrin monomer with only one FpA removed.)

Electrophoretic Quasi Elastic Light Scattering (ELS)-ELS measurements were made on a multi-angle quasi elastic light scattering spectrometer (DELSA 440, Coulter Electronics, Inc., Hialeah, FL) mounted on a Newport vibration isolation table.
Simultaneous measurements were made at four different scattering angles. The electrophoretic effect was obtained by superimposing a uniform electric field (usually 150-500 volts/cm) across the sample. The field was pulsed and its polarity alternated to avoid mass accumulation. The scattered intensity (I_s) from a moving particle at a fixed scattering angle (θ_s) is observed as an oscillating intensity described in the heterodyne experiment as a second-order field autocorrelation function (Equation 1) (24-26), where τ is the time increment, I_L is the intensity of the reference beam (local oscillator), and ⟨I_s⟩ is the average intensity of the scattered light. K is the scattering vector defined by Equation 2,

K = (4πn/λ) sin(θ_s/2)

where θ_s is the scattering angle, n is the refractive index, λ is the wavelength of the incident light, v_d is the velocity of the scattering particle, and D is the diffusion coefficient. The important quantity in this expression is K·v_d, the Doppler shift of the signal resulting from the particle motion. The magnitude of the Doppler shift is determined from the power spectrum, which is calculated from the Fourier transform of the autocorrelation spectrum. The Doppler shift can then be related to the electrophoretic mobility by Equation 3, where δν is the Doppler shift, ν_o is the frequency of the incident light, and C is the velocity of light in the medium. The electrophoretic mobility is related to the velocity of the scattering particle by the simple equation v_d = μE, where μ is the electrophoretic mobility and E is the applied electric field (25). Temperature, ionic strength, pH, and conductivity affect the electrophoretic mobility of the scattering particle and were therefore carefully controlled by monitoring the conductivity of each sample. Joule heating was governed by regulation of the pulse duration and the pulse frequency of the electric field. Thermal lensing was avoided by control of the incident laser power. Snell's law corrections were made for all scattering angles.
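Where the argument relies on converting a measured Doppler shift into a mobility, the chain K → v_d → μ is short enough to script. A minimal sketch, assuming a He-Ne laser (632.8 nm), water's refractive index (1.33), and illustrative shift, angle, and field values that are not taken from this paper:

```python
import math

def scattering_vector(theta_deg, n=1.33, wavelength_m=632.8e-9):
    """Equation 2: K = (4*pi*n/lambda) * sin(theta_s/2), in 1/m."""
    return (4 * math.pi * n / wavelength_m) * math.sin(math.radians(theta_deg) / 2)

def mobility_from_doppler(doppler_hz, theta_deg, field_v_per_cm):
    """Recover electrophoretic mobility mu from a measured Doppler shift.

    The Doppler angular frequency of the heterodyne signal is K*v_d, so
    v_d = 2*pi*delta_nu / K; with v_d = mu*E this gives mu = v_d / E.
    Returns mu in the paper's units, (um*cm)/(V*s).
    """
    K = scattering_vector(theta_deg)
    v_d = 2 * math.pi * doppler_hz / K   # particle velocity, m/s
    E = field_v_per_cm * 100.0           # field, V/m
    mu_si = v_d / E                      # m^2/(V*s)
    return mu_si * 1e8                   # 1 m^2/(V*s) = 1e8 (um*cm)/(V*s)

# Example: a 100 Hz shift at a 15-degree scattering angle in a 300 V/cm field.
print(round(mobility_from_doppler(100.0, 15.0, 300.0), 2))  # ~0.61
```

With these invented inputs the sketch returns roughly 0.6 (µm·cm)/(V·s), which happens to be the order of magnitude of the mobilities quoted for peaks B and C below.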
RESULTS

Fig. 1 shows the changes in the electrophoretic mobility spectrum for fibrinogen at a concentration of 0.05 mg/ml (peak A) at intermediate times during a 90-min activation by thrombin. At 10 min, a new peak (peak B) with a mobility of −0.6 (µm·cm)/(V·s) is observed. At 25 min, the electrophoretic spectrum shows the appearance of a third peak (peak C) with a mobility of −1.0 (µm·cm)/(V·s). Both peaks B and C continue to increase in intensity with time, as shown by the spectra at 40 and 60 min. Other contributions to the mobility spectra are seen at higher mobilities. These experiments show activation intermediates at times of 10, 25, 40, and 60 min. Under conditions of very dilute thrombin concentrations, peak B emerges well ahead of peak C.

In Fig. 2, the effect of thrombin, which removes both FpA and FpB, is compared with that of Atroxin, which removes only FpA. Thus, by using Atroxin, a homogeneous desAA can be produced. The experimental conditions are identical to those used in Fig. 1. The 25-min mobility spectrum using thrombin from Fig. 1 has been superimposed and is shown as a dashed line so that peak B can be assigned to removal of fibrinopeptide A, yielding desAA. The same fibrinogen mobility (peak A) is observed in both the thrombin and Atroxin experiments. The difference occurs in the absence of peak C. Since Atroxin, the solid curve in Fig. 2, only releases fibrinopeptide A, and since the morphology of the single new peak is symmetric, the new mobility shown as peak B must represent the desAA fibrin intermediate. Some investigators have postulated the existence of an AdesA fibrin intermediate. The inset shows a plot of the line width at half height, Γ, for peak B versus the scattering vector, K, and confirms that the line broadening is due to diffusion (Γ/2π = K²D) and not sample heterogeneity (Γ/2π = KD) (24). Thus, a contribution to peak B from AdesA is highly unlikely. Furthermore, since the mobility difference between peak A for native fibrinogen and peak B for desAA is so large, if AdesA were present, it should appear as an easily identifiable peak between peak A and peak B. Since no peaks are observed in this region of the mobility spectra, and since we can show the appearance of peak B ahead of peak C (see the spectrum for the 10-min sample time in Fig. 1), we take this as evidence that AdesA does not exist as a significant intermediate. If it exists at all, it must be extremely short lived or at a very low concentration. These observations are consistent with the observation of Janmey et al. (14,15,27) that the second FpA is removed 16 times faster than the first FpA, suggesting that the possibility of a stable AdesA intermediate is low.

To identify the species responsible for the mobility represented by peak C produced by the thrombin activation of fibrinogen shown in Fig. 1, the following experiments were performed. The experiments shown in Fig. 3 are identical to those shown in Figs. 1 and 2 except that fibrinogen is activated by an enzyme from the venom of the Southern copperhead, Agkistrodon controtrix, which cleaves FpB at a much faster rate than FpA. Again, the 25-min spectrum from Fig. 1, shown as a dashed line, has been superimposed so that the identification of FpB removal can be established. As expected, no desAA peak (peak B in Fig. 1) is observed because the FpB is removed first. A new peak, peak C′, with a slightly slower mobility than the peak C from Fig. 1, is produced.

FIG. 1. Detection of fibrinogen intermediates. ELS spectra of fibrinogen and fibrinogen activation intermediates generated by the addition of 0.005 NIH units/ml of human α-thrombin and sampled at 10, 25, 40, and 60 min. Thrombin removes both FpA and FpB from fibrinogen. Peak A is fibrinogen. Peaks B and C represent activation intermediates of fibrinogen. Note that the appearance of peak B precedes that of peak C, which provides further support that FpA is the first fibrinopeptide released and that desAA fibrin monomer is the stable intermediate.
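The diffusion-versus-heterogeneity test applied to the inset data is, in effect, a regression of Γ against K and against K². A sketch of that discrimination, with hypothetical linewidths standing in for the measured values (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical linewidths (Hz) at four scattering angles; K values are of the
# magnitude produced by a multi-angle instrument. Illustrative only.
K = np.array([1.2e6, 2.4e6, 3.4e6, 4.5e6])   # 1/m
gamma = np.array([4.3, 17.5, 35.1, 61.2])    # Hz

def r_squared(x, y):
    """R^2 of a least-squares line through the origin, y = a*x."""
    a = np.sum(x * y) / np.sum(x * x)
    resid = y - a * x
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# A single diffusing species gives Gamma proportional to K^2; a heterogeneous
# (multi-species) peak trends closer to linear in K.
print("fit vs K^2:", round(r_squared(K**2, gamma), 3))  # ~0.99 here
print("fit vs K  :", round(r_squared(K, gamma), 3))     # markedly worse
```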
These results also suggest that desAAdesBB has a faster mobility than desBB. Finally, peak C in Fig. 1 is definitely not AdesA, since it would be highly unlikely that removal of one FpA would produce a faster mobility than removal of both FpAs, and since no peak C is seen in Atroxin activation. When both desBB and desAAdesBB are present, a slower peak C′ is seen that results from contamination by the slower moving desBB.

The experiments shown in Fig. 4 are identical to those shown in Fig. 1, except that the concentration of fibrinogen is higher, 2 mg/ml, since it represents the normal human plasma concentration of fibrinogen and is also identical to the fibrinogen concentration used by Smith (17) and Hunziker et al. (16). Under these experimental conditions, the activation rate is thrombin limited. As shown in Fig. 4, an activation profile is seen similar to that in Fig. 1, with no evidence for AdesA.

FIG. 2 (caption fragment): … Fig. 1 (dashed line). The inset shows a plot of the line width at half height, Γ, versus the scattering vector, K, and confirms that the line broadening is due to diffusion (Γ/2π = K²D) and not sample heterogeneity (Γ/2π = KD) (24). If AdesA were present, it would be seen as a specific mobility between peaks A and B. No mobility is observed in the region between peaks A and B.

FIG. 3. Effect of fibrinopeptide B removal on peak C. ELS spectra of fibrinogen (0.05 mg/ml) incubated with 2.0 µg/ml A. controtrix venom (ACV) (solid line). Although ACV removes both FpA and FpB, FpB is removed at a much greater rate than FpA. Linewidth analysis of peak C from Fig. 1 shows a K² dependence and indicates a single species, desAAdesBB. When superimposed on Fig. 1, peak C generated by both Atroxin and ACV, also desAAdesBB, has an identical mobility to peak C in Fig. 1 (data not shown). In contrast, peak C′ generated with ACV alone (shown in this figure) has a slightly slower mobility.

DISCUSSION

Fibrin assembly is initiated by thrombin cleavage of the N-terminal Aα-chain, fibrinopeptide A, which exposes one of the two polymerization site "A's" on the E-domain (28-30). Structural information on the chemical nature of the "A" polymerization site is limited, but His 16 on the Bβ-chain and contributions from the α-chain are necessary for polymerization to occur (31-34). The polymerization "A" site on the E-domain interacts with the constitutively present "a" site on the D-domain of an adjacent monomer. In contrast to the "A" site, the critical amino acid sequence in the "a" site is better defined and located on the C-terminal γ-chain between amino acid residues 356 and 411 (30, 35-37). Once the fibrin monomer is generated following FpA release, fibrin assembly ensues. The classical mechanism for fibrin assembly as described by the Hall-Slayter model suggests that fibrin monomers polymerize in a half-staggered manner so that the D-domain of one monomer interacts with the centrally located E-domain of the adjacent monomer to form protofibrils 2 monomers thick (12). In this model, polymerization symmetry permits monomer addition to either end of the growing protofibril (bipolar growth), but only if an "A" site faces the "a" site, which implies rotational symmetry about the minor hemi-axis (equivalent ends) but not about the major hemi-axis with respect to polymerization sites (38). An additional critical factor is the structural contribution of the dihedral angle present in fibrinogen that introduces helical structure to the protofibril. However, the exact details of the interaction and packing remain unknown (38).
The inset shows the linewidth of peak CЈ is dependent on K, which indicates more than one species is present in peak CЈ. ACV is known to produce desBB and desAAdesBB. Thus, the slightly slower moving peak CЈ generated with ACV alone results from the presence of desBB in peak CЈ and indicates that desBB has a slower mobility than desAAdesBB. main unknown (38). Recently, the Hall-Slayter model for fibrin assembly has been challenged (16). A fundamental feature of the alternative model described by Hunziker is that only one D-domain from each fibrin monomer is involved in the initial polymerization process. The second D-domain is then left free to form branches. For this model to be possible, the second "A" polymerization site on the E-domain must not be activated, i.e. the AdesA species must predominate and be stable long enough to diffuse to the surface of the assembling fibrin protofibril. If the second FpA is cleaved, protofibril assembly will proceed in an overlapping bipolar manner as postulated in the Hall-Slayter model. Assembly requires close proximity of monomers and growing fibrin oligomers and sufficient time for both rotational and lateral diffusion to occur so that the correct spatial orientation occurs. Thus, the monomer must be stable long enough for diffusive processes to bring monomers and oligomers together. If the AdesA species is the important fundamental monomeric species, it must not encounter a second thrombin molecule that would result in the cleavage of its second FpA before its assembly into the growing fibrin protofibril. These critical interactions between monomer and growing oligomer are highly important in the determination of fiber assembly kinetics and structure (39). Thus, a fundamental issue in the differentiation between the Hall-Slayter and Hunziker models is proof of the existence of the AdesA versus the desAA intermediate as the predominant species during fibrin assembly. The existence of a transient AdesA fibrin monomer is controversial and is dependent on the nature of interaction between thrombin and fibrinogen. For example, the AdesA could be produced through one thrombin bound for each FpA so that removal of each FpA is a temporally independent event. The desAA species could be produced by either two thrombin molecules bound simultaneously to fibrinogen or by the sequential removal of FpAs by one bound thrombin molecule. In favor of the latter model, a 16-fold increased rate of removal for the second fibrinopeptide A has been observed (14,15). The existence of an AdesA intermediate in the early stage of fibrin formation was first proposed by Smith (17), and his analysis, based on N-terminal amino acid analysis of gel chromatography isolated fibrinogen activation intermediates, reported that the interior of fibrin oligomers was composed of desAA monomers but that the oligomer was capped by AdesA monomers. It is not clear from this model how desAA monomers can be added to the growing oligomer if each end is capped by an AdesA. Based on his analysis, Smith postulated that AdesA was the early predominant species. A similar finding also using chromatography to isolate fibrin intermediates was reported by Alkjaersig and Fletcher (18). Dietler et al. (19) used light scattering to arrive at a similar conclusion. It should be emphasized that quasi elastic light scattering gives highly accurate diffusion coefficients but only for monodispersed solutions (40). 
Sample heterogeneity caused by fibrinogen or fibrin monomer aggregates will produce uncertainty in the result, as evidenced by a large second moment in the cumulant analysis (40). More recently, Hunziker et al. (16) have used electron microscopy to examine fibrin oligomers that appear to contain AdesA intermediates. Electron microscopy studies offer the possibility of analyzing individual monomers and oligomers, but it is not clear if the drying process alters the monomers so that their original solution appearance is changed. Monomer aggregation artifacts may also occur during the drying process. Important species present in solution may not be represented in the observed species. Other investigators do not find evidence for the AdesA fibrin monomer. Wilf and Minton (21) used gel permeation chromatography and found only desAA fibrin monomers. Wilf and Minton (21) also examined Smith's original proposed assembly mechanism and found that Smith's predictions did not agree with either their analysis or with Shainoff's sedimentation analysis (41). In addition, only desAA was observed by Janmey et al. (14,27) in their light scattering studies. Henschen (22) was not able to detect AdesA intermediates using a more direct analysis of the FpA and amino acid sequencing of the central, dimeric fragments derived from the N-terminal region of all Aα-chains and α-chains present in the thrombin digest.

In this report, we have used light scattering to examine fibrinogen activation intermediates. However, we have avoided the problem of sample heterogeneity by adding electrophoresis to quasi elastic light scattering as described by Ware and Flygare (42). Differences between the molecular weight, and hence the diffusion coefficient, of fibrinogen and fibrin monomer using standard quasi elastic light scattering may not resolve subtle differences present in fibrinogen activation intermediates. On the other hand, substantial differences may be present in the molecular charge of each activation species, which would be highly sensitive to detection by electrophoresis. We view the results from QLS and ELS as complementary. We have shown that ELS is well suited for the study of fibrinogen activation and protofibril formation. ELS reports the surface charge of a particle, observed as the electrophoretic mobility. The ELS method can measure the mobilities of a mixture of multiple particles with different structures and charges. In the present case, differences in the mobility of fibrin intermediates depend on small changes in the surface charge of the activation intermediates. When the fibrinopeptides are removed and fibrin monomers are produced, only a small change in the molecular weight occurs, but a large difference in the electrophoretic mobility occurs. In fact, the change is so large that the existence of the AdesA species would be easily observed between peaks A and B, which is not seen (Fig. 1). We have also used limiting concentrations of thrombin to enhance detection of the AdesA intermediate, if it is present. In separate experiments, physiologic concentrations of fibrinogen were examined (Fig. 4).

FIG. 4. Fibrinogen activation at physiologic concentrations of fibrinogen. ELS spectra of fibrinogen at the normal physiologic concentration (2 mg/ml) and fibrinogen activation intermediates generated by addition of 0.005 NIH units/ml human α-thrombin sampled at 15, 40, and 60 min are shown. The designations of the peaks are the same as for Fig. 1.
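The mass-insensitivity point above can be made numerical. A back-of-the-envelope sketch, assuming a fibrinogen mass near 340 kDa and roughly 1.5 kDa per fibrinopeptide A (approximate literature values, not figures reported in this paper), and Stokes-Einstein-style scaling D ∝ M^(-1/3) for a compact particle (fibrinogen is elongated, so treat this only as an order-of-magnitude guide):

```python
# Order-of-magnitude comparison of QLS vs ELS sensitivity to fibrinopeptide
# release. Masses are approximate literature values, not data from this paper.
m_fibrinogen_kda = 340.0
m_fpa_kda = 1.5          # per fibrinopeptide A, approximate

m_desaa = m_fibrinogen_kda - 2 * m_fpa_kda

# QLS observes the diffusion coefficient; with D ~ M**(-1/3), the fractional
# change in D from losing both FpAs is tiny.
d_change = (m_desaa / m_fibrinogen_kda) ** (-1.0 / 3.0) - 1.0
print(f"relative change in D: {d_change:.4%}")   # ~0.3%

# ELS observes mobility; in Fig. 1 peaks A and B are separated by a large
# fraction of the full mobility scale, so the charge signal dominates.
```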
Our data strongly support the hypothesis that desAA fibrin monomer is the significant intermediate in fibrin assembly. If AdesA does exist, its lifetime and concentration are insufficient to exert a significant role in fibrin assembly.
Paclitaxel-Induced Pneumonitis in Trinidad: A Case Report

Paclitaxel-induced pneumonitis (PIP) is an immune-mediated disease resulting from a delayed hypersensitivity reaction (type IV) to paclitaxel, an anti-microtubule chemotherapeutic drug commonly used to treat breast cancer in both neoadjuvant and adjuvant settings. PIP is diagnosed by exclusion utilizing laboratory work-up, imaging, biopsy studies, and results of antibiotic therapy because there is no single diagnostic test. Ground-glass opacifications on CT, coupled with minimal restrictive disturbance with decreased diffusion on pulmonary function tests (PFTs), negative bronchoalveolar lavage (BAL), and bronchoscopy cultures, may assist physicians in diagnosing paclitaxel-induced pneumonitis. In this report, we describe a case of PIP in Trinidad, West Indies, which has not been described previously in this region.

Introduction

Paclitaxel-induced pneumonitis (PIP) is an immune-mediated disease resulting from a delayed hypersensitivity reaction (type IV) to paclitaxel, an anti-microtubule chemotherapeutic drug of the taxane class, commonly used to treat breast cancer in both neoadjuvant and adjuvant settings. A positive leukocyte migration inhibition test to paclitaxel in lymphocytes extracted during bronchoalveolar lavage of affected patients further suggests that PIP is a delayed hypersensitivity reaction [1]. Additionally, patients with PIP were noted to have a significant elevation of cyclooxygenase-2 (COX-2) protein [2]. COX-2 is a proinflammatory mediator that may cause lung injury by activating a cascade of inflammatory reactions, which might be the mechanism of paclitaxel-induced lung injuries [2]. PIP is a very rare, poorly characterized, and potentially life-threatening complication of paclitaxel therapy, with an estimated incidence of 0.73-12% [1,3]. Possible risk factors for PIP include pre-existing interstitial lung disease (ILD), a 12-cycle vs. 4-cycle dosing regimen, and tumor type (presuming lung cancer patients may have lower pulmonary reserves) [1,4]. PIP is diagnosed by exclusion of other causes of respiratory distress because there are no set diagnostic criteria. The option of resuming taxane chemotherapy following clinical recovery, hereafter referred to as 'rechallenge', has not been well documented thus far. Therefore, there is a great need for further research in this regard, because the cessation of taxane chemotherapy disrupts curatively intended treatment for breast cancer patients, particularly those at high risk of developing metastatic disease [4]. Upon review of the literature, there were no published case reports on PIP in Trinidad and Tobago. We describe a case of PIP presented at the Apley Medical Clinic, Trinidad, West Indies, and the outcome of subsequent taxane rechallenge.

Case Presentation

A 56-year-old female presented with a persistent cough of four weeks duration that worsened over time. The cough was productive of clear sputum and occurred mostly during the daytime. Following coughing fits, the patient experienced dyspnea on exertion and excessive sweating. She denied chest tightness, palpitations, and lightheadedness. The patient was a non-smoker and did not use recreational drugs. Given her history of asthma, Ventolin was initially used by the patient, which brought no relief.
Additionally, she was being treated with adjuvant chemotherapy for stage 3 breast cancer and had received four of the twelve planned cycles of paclitaxel, which followed four cycles of dose-dense doxorubicin and cyclophosphamide. At the time of initial evaluation, vital signs showed oxygen saturation of 99% on room air, heart rate of 109 bpm, blood pressure of 150/80 mmHg, and a temperature of 36.7°C (98.0°F). Physical examination revealed clear lungs on auscultation with no crepitus or rhonchi. Polymerase chain reaction (PCR) testing for COVID-19 was negative. The differential diagnoses included gastroesophageal reflux disease (GERD), postnasal drip, pulmonary hypertension, pulmonary embolism, and drug-induced pneumonitis. She was started on pantoprazole, montelukast, fluticasone furoate/vilanterol inhaled, mometasone nasal spray, Tuscosed Linctus, and albuterol inhaled. Her oncologist prescribed a three-day course of azithromycin due to concern for infection, but her symptoms did not resolve. Following a week of treatment, her condition deteriorated, with complaints of more frequent coughing fits. Laboratory workup revealed neutrophilia (7.6 × 10⁹/L), negative troponin, negative myoglobin, negative autoimmune screen, and elevated d-dimer (1.42 µg/mL). ECG and echocardiogram (ECHO) findings were unremarkable except for sinus tachycardia. CT pulmonary angiogram (CTPA) was negative for pulmonary embolism but revealed upper lobe mild subpleural reticular changes and mild mid-zone central ground-glass opacifications (Figure 1). Furthermore, pulmonary function tests (PFTs) demonstrated a mild restrictive ventilatory defect and severe gas transfer defect (DLCO (Hb) = 11.74 mL/min/mmHg; 50% predicted) and normal DLCO/VA with no significant reversibility (Table 1). Therefore, based on the combination of clinical presentation and radiological findings, the relationship between exposure to paclitaxel and onset of respiratory distress, as well as the exclusion of other causes of respiratory distress, a diagnosis of drug-induced pneumonitis secondary to paclitaxel was made.

The interventions employed were immediate cessation of paclitaxel treatment and initiation of a high-dose dexamethasone taper (8 mg for three days, 4 mg for three days, 2 mg for three days, 1 mg for three days). Following steroid therapy, the patient reported that the cough was completely resolved. Chest X-ray and PFTs were repeated, which revealed clear lung fields and a persistent mild restrictive ventilatory defect with an improved, now moderate, gas transfer defect (DLCO (Hb) = 12.80 mL/min/mmHg; 60% predicted), respectively (Table 1). She was deemed clinically recovering and thus was scheduled to have a rechallenge with another taxane drug, docetaxel, with careful monitoring. Two weeks after she received the first of the three planned cycles of docetaxel, there was no recurrence of respiratory symptoms; however, PFTs showed a moderate restrictive ventilatory defect with severe gas transfer defect (DLCO (Hb) = 11.84 mL/min/mmHg; 55% predicted) and normal DLCO/VA (Table 1). In the absence of symptoms, she was cleared to continue with the remaining two cycles of docetaxel. She experienced a successful outcome with docetaxel rechallenge following PIP.
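For orientation, the taper is easy to tabulate; the snippet below only restates the schedule reported above and sums the cumulative dexamethasone exposure (an illustration, not dosing guidance):

```python
# Dexamethasone taper reported in the case: 8, 4, 2, then 1 mg,
# each dose level held for three days.
taper = [(8, 3), (4, 3), (2, 3), (1, 3)]  # (mg/day, days)

schedule = [dose for dose, days in taper for _ in range(days)]
print(schedule)                    # [8, 8, 8, 4, 4, 4, 2, 2, 2, 1, 1, 1]
print(sum(schedule), "mg total")   # 45 mg over 12 days
```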
Discussion

The onset of clinical manifestations associated with PIP is variable, with a range of two to 20 days from the first cycle of paclitaxel to the development of symptoms [1]. PIP is characterized by non-specific symptoms such as dry or productive cough, fever, and dyspnea; therefore, it is imperative to rule out other causes of respiratory failure, including pneumonia, cardiogenic pulmonary edema, and diffuse alveolar hemorrhage [5]. PIP is diagnosed by exclusion utilizing laboratory workup, imaging, biopsy studies, and results of antibiotic therapy, because there is no one single diagnostic test [4]. Ground-glass opacifications on CT, coupled with minimal restrictive disturbance with decreased diffusion on PFTs, negative bronchoalveolar lavage (BAL) and bronchoscopy cultures, as well as a histological picture suggestive of drug-induced pneumonitis (diffuse alveolar damage and foamy macrophages in the alveoli in the absence of granulomas or giant cells), strongly favor the diagnosis of paclitaxel-induced pneumonitis [4]. In our case, CTPA revealed upper lobe mild subpleural reticular changes and mild mid-zone central ground-glass opacifications (Figure 1). PFTs demonstrated a mild restrictive ventilatory defect and severe gas transfer defect (Table 1). Additionally, the patient's condition continued to deteriorate despite treatment with azithromycin. These findings gave sufficient evidence to consider the temporal relationship between the patient's first exposure to paclitaxel and the start of her symptoms. Hence, a diagnosis of PIP was made based on clinical presentation, radiographic pattern, exposure history, and exclusion of other causes of diffuse pulmonary infiltrates [6].

The mainstay of management of PIP includes immediate cessation of paclitaxel and initiation of high-dose oral glucocorticoid within 24 hours of clinical diagnosis [1]. However, there is no established regimen, and as such, systemic glucocorticoid therapy is based on the success of immunomodulation reported in previous case reports [4]. Of note, one study reported resolution of pneumonitis symptoms without the use of steroids in two cases [7]. In contrast to previous case reports, our patient was not placed on maintenance steroid therapy over months [1,4,6].

The consensus in the existing medical literature recommends against re-exposure to taxanes following a diagnosis of PIP in breast cancer patients [1,4]. To the best of our knowledge, only two other studies have described a taxane rechallenge. In both case series, the patients had successful outcomes with no lung sequelae. The first case series described three patients in whom paclitaxel was switched to an alternate agent, nanoparticle albumin-bound paclitaxel, known to be the taxane least likely to cause severe pneumonitis [8,9]. However, a more recent case series of 19 patients described 17 patients being rechallenged with the same inciting taxane and two switched to docetaxel, similar to our patient [6]. Notably, in contrast to the latter series, our patient was not put on a one-week course of oral steroids during and following the rechallenge as a safety measure [6]. In our case, we considered the possibility of rechallenging with docetaxel due to our patient's milder clinical course and complete clinical recovery. Chest X-ray revealed clear lungs. Moreover, PFTs demonstrated an improvement in DLCO (Hb) of 1.06 mL/min/mmHg, i.e., from 50% to 60% predicted. A study found that affected patients had recovered clinically despite reduced carbon monoxide diffusion capacity on PFTs within six weeks of presentation [1]. Our patient was cleared to rechallenge with three cycles of docetaxel on the basis of clinical and radiological recovery.
Following the first cycle, she did not experience respiratory deterioration. Subsequent PFTs demonstrated a decrease in DLCO (Hb) of 0.96 mL/min/mmHg, i.e., from 60% to 55% predicted, and an increase in DLCO/VA from 3.91 to 4.04 mL/min/mmHg/L. Despite the reduced diffusion capacity, our patient was allowed to proceed with the remaining two cycles of docetaxel in the absence of pneumonitis symptoms. Following a diagnosis of PIP, the outcome of the docetaxel rechallenge was successful.

Our case report has some limitations due to a lack of certain tests being readily available. Our patient did not undergo BAL to exclude atypical infection. A leukocyte migration test was not performed to determine if paclitaxel caused the reaction. The diagnosis of paclitaxel-induced pneumonitis was not confirmed by lung biopsy, which would have excluded pulmonary infiltration with malignant cells. Additionally, a histologic picture of drug-induced pneumonitis combined with radiographic and PFT findings would have further increased diagnostic accuracy. Subsequently, our patient was treated based on a clinical and radiological diagnosis.

Conclusions

In conclusion, we report a case of PIP in Trinidad, West Indies, which has not previously been reported in this region. PIP should be suspected in any patient who presents with clinical respiratory symptoms while undergoing paclitaxel therapy, as this is a potentially fatal complication if left undiagnosed. Our patient illustrated clinical improvement of paclitaxel-induced pneumonitis following prompt discontinuation of paclitaxel and glucocorticoid therapy. Moreover, the patient had a favorable clinical outcome when rechallenged with docetaxel. We hope that this case report may lead to further research into docetaxel and other drugs being used as potential alternatives for patients diagnosed with PIP.
SARS-CoV-2 Post-Infection and Sepsis by Saccharomyces cerevisiae: A Fatal Case Report—Focus on Fungal Susceptibility and Potential Virulence Attributes

The pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been responsible for approximately 6.8 million deaths worldwide, threatening more than 753 million individuals. People with severe coronavirus disease-2019 (COVID-19) infection often exhibit an immunosuppression condition, resulting in greater chances of developing co-infections with bacteria and fungi, including opportunistic yeasts belonging to the Saccharomyces and Candida genera. In the present work, we have reported the case of a 75-year-old woman admitted at a Brazilian university hospital with an arterial ulcer in the left foot, which was being prepared for surgical amputation. The patient presented other underlying diseases and had tested positive for COVID-19 prior to hospitalization. She received antimicrobial treatment, but her general condition worsened quickly, leading to death by septic shock after 4 days of hospitalization. Blood samples collected on the day she died were positive for yeast-like organisms, which were later identified as Saccharomyces cerevisiae by both biochemical and molecular methods. The fungal strain exhibited low minimal inhibitory concentration values for the antifungal agents tested (amphotericin B, 5-flucytosine, caspofungin, fluconazole and voriconazole), and it was able to produce important virulence factors, such as extracellular bioactive molecules (e.g., aspartic peptidase, phospholipase, esterase, phytase, catalase, hemolysin and siderophore) and biofilm. Despite the activity against planktonic cells, the antifungals were not able to impact the mature biofilm parameters (biomass and viability). Additionally, the S. cerevisiae strain caused the death of Tenebrio molitor larvae, depending on the fungal inoculum, and larvae immunosuppression with corticosteroids increased the larvae mortality rate. In conclusion, the present study highlighted the emergence of S. cerevisiae as an opportunistic fungal pathogen in immunosuppressed patients presenting several severe comorbidities, including COVID-19 infection.

Introduction

Starting 3 years ago, the coronavirus disease-2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), became a serious pandemic, which has re- […]

A 75-year-old woman was admitted on 13 May 2021 to a University Hospital in Rio de Janeiro, Brazil, due to an arterial ulcer in the left foot (Fontaine IV Peripheral Arterial Disease). The disease had started a year before, and clinical treatments were unsuccessful, leading to bone exposure. Therefore, she was being prepared for a surgical amputation. Positive polymerase chain reaction results for SARS-CoV-2 were obtained from two nasal swabs, 30 and 10 days before admission, and she was treated symptomatically at home. Remarkable data in her past pathological history are systemic arterial hypertension, stroke with resulting aphasia, and a past history of smoking. She was started on intravenous antimicrobials (piperacillin and tazobactam) and heart monitoring due to atrial fibrillation. Her general condition worsened quickly, showing drowsiness with disorientation, bradycardia and hypotension, metabolic acidosis, leukocytosis, and elevated C-reactive protein. At that time, a nasal swab antigen test did not detect SARS-CoV-2.
Two blood samples were collected and sent to the laboratory for culture approximately 6 hours before she died due to septic shock, 4 days after hospitalization, on 17 May 2021. Yeast-like organisms were isolated from both blood culture samples (Figure 1A).

Yeast-Like Identification

The clinical yeast strain, designated as HUPE-Sc1, was cultured on a Sabouraud dextrose agar (SDA; Difco, Becton, Dickinson and Company, La Jolla, CA, USA) plate at 37 °C. Yeast identification was carried out by both phenotypic and molecular assays. Carbohydrate assimilation and metabolic enzymatic profiles were evaluated by the VITEK 2® system (bioMérieux, Marcy-l'Étoile, France) using the yeast (YST) card, according to the manufacturer's guidelines. In parallel, amplification and sequencing of the ITS1-5.8S-ITS2 gene were performed; the amplicons were purified, and sequences from both DNA strands were generated and edited with Sequencher™ version 4.9 (Gene Codes Corporation, Ann Arbor, MI, USA), followed by alignment using Mega version 4.0.2 software (https://www.megasoftware.net). Sequences corresponding to the ITS genes from Saccharomyces spp. were obtained from the GenBank database (www.ncbi.nlm.nih.gov/genbank/).

Figure 1C (caption): Phylogenetic neighbor-joining dendrogram generated from a genetic similarity matrix based on comparison of ITS1-5.8S-ITS2 gene sequences from the HUPE-Sc1 strain and type strains belonging to the Saccharomyces genus; sequences were obtained from the GenBank database.

Antifungal Susceptibility Assay

Antifungal susceptibility testing was performed according to the standardized broth microdilution technique described by the Clinical & Laboratory Standards Institute (CLSI) in document M27-A3 [7]. The antifungal drugs tested were amphotericin B, caspofungin, 5-flucytosine, fluconazole and voriconazole (Sigma-Aldrich, St Louis, MO, USA). The minimum inhibitory concentration (MIC) values of the drugs on planktonic yeast cells were determined according to the CLSI M27-S3 protocol [8]. Candida parapsilosis ATCC 22019 and Candida krusei ATCC 6258 were used as quality control strains.
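Broth microdilution reads MICs off a twofold serial dilution series; a minimal sketch of generating such a series (the ranges below are invented examples, not the CLSI-specified panels):

```python
def twofold_series(top, n_wells):
    """Twofold serial dilution series, highest concentration first (ug/mL)."""
    return [top / 2**i for i in range(n_wells)]

# Example ranges, for illustration only; consult CLSI M27 for the real panels.
panels = {
    "fluconazole":    twofold_series(64.0, 10),
    "amphotericin B": twofold_series(16.0, 10),
}
for drug, series in panels.items():
    print(drug, [round(c, 3) for c in series])

# The MIC is read as the lowest concentration in the series that inhibits
# visible growth (or meets the endpoint reduction defined for the drug class).
```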
Detection of Extracellular Molecules

The production of extracellular molecules by fungal cells was assessed in agar plate assays. Briefly, aspartic peptidase activity was determined using 1.17% yeast carbon base (YCB; Sigma-Aldrich, St Louis, MO, USA) medium supplemented with 1% bovine serum albumin (BSA; Sigma-Aldrich, St Louis, MO, USA) [9]. Caseinolytic activity was assessed using SDA containing 0.4% casein (Sigma-Aldrich, St Louis, MO, USA) [10]. Phospholipase activity was assayed using the egg yolk agar plate [11]. Esterase production was assayed using the Tween agar plate [12]. Phytase activity was evaluated using calcium phytate agar [13]. Hemolysin production was evaluated by adding 7 mL of fresh sheep blood to 100 mL of SDA supplemented with 3% glucose [14]. Siderophore production was determined using the blue indicator dye chrome azurol S (CAS; Sigma-Aldrich, St Louis, MO, USA) [15,16]. To determine the production of these extracellular molecules, aliquots (10 µL) of 48-h-old cultured fungal cells (10⁷ yeasts/mL) were spotted on the surface of each agar medium and incubated at 37 °C for up to 7 days. The colony diameter (a) and the diameter of the colony plus the hydrolysis/precipitation zone (b) were measured with a digital caliper, and the production of each molecule was expressed as a Pz value (a/b), as previously described [11]. Candida haemulonii (clinical isolate LIPCh16) was used as a positive control for detecting all the extracellular molecules investigated under the experimental conditions employed herein [16]. Additionally, catalase activity was also evaluated by the addition of 3% hydrogen peroxide (H2O2; Sigma-Aldrich, St Louis, MO, USA) to 10 µL of a fungal cell suspension in PBS containing 10⁸ yeasts/mL on glass slides. Catalase is an enzyme able to hydrolyze H2O2 into water and oxygen, and the release of oxygen results in the formation of bubbles that can be easily visualized.

Culture Supernatant Harvesting

The Saccharomyces cerevisiae HUPE-Sc1 strain was grown in Sabouraud dextrose broth (SDB) (5 mL containing 10⁶ cells/mL) for 48 h at 37 °C, and then this culture was transferred to 50 mL of the same medium and incubated for an additional 48 h to achieve substantial growth. Afterwards, the culture was harvested by centrifugation (4000× g, 5 min, 4 °C), and the supernatant was filtered through a 0.22 µm membrane (Millipore, São Paulo, SP, Brazil). The cell-free supernatant was concentrated approximately 10 times in a 10,000 molecular weight cutoff AMICON micropartition system (AMICON, Beverly, MA, USA), and then the protein concentration was determined by the method described by Lowry and colleagues [17], using BSA as standard. During growth, yeast cells release enzymes that support fungal nutrition and growth, such as peptidases, into the extracellular environment. In this sense, we evaluated the ability of the enzymes present in the culture supernatant of S. cerevisiae to cause hemolysis of fresh erythrocytes and hydrolysis of hemoglobin, as described in the sections below.
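The Pz index defined above is a one-line computation; in the helper below, the interpretation bands are the commonly used cutoffs (an assumption on our part, since the paper does not state its classification thresholds):

```python
def pz_value(colony_mm, colony_plus_halo_mm):
    """Pz = a/b: colony diameter over colony-plus-halo diameter.
    Pz of 1.0 means no detectable activity; lower Pz means more secretion."""
    if colony_plus_halo_mm <= 0 or colony_mm > colony_plus_halo_mm:
        raise ValueError("expect 0 < a <= b")
    return colony_mm / colony_plus_halo_mm

def classify(pz):
    """Commonly used bands (assumed here, not defined in this paper)."""
    if pz == 1.0:
        return "negative"
    return "strong producer" if pz < 0.64 else "producer"

print(classify(pz_value(8.0, 14.0)))  # Pz ~= 0.57 -> strong producer
```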
Afterwards, 100 µL of PBS containing different concentrations of yeasts (10⁶, 10⁷ and 10⁸ cells) or different protein amounts from the fungal culture supernatant (2.5, 5 and 10 µg) were added to 100 µL of erythrocyte suspension (4%) in a 96-well plate at 37 °C for 3 and 24 h. After incubation, the supernatant was collected by centrifugation, and 100 µL was transferred to a new 96-well microtiter plate [18]. The absorbance at 415 nm was measured. A 0.1% Triton X-100 solution was used as positive control (100% lysis), and PBS was used as negative control [18].

Hemoglobin Hydrolysis

To investigate the ability of S. cerevisiae-secreted enzymes to hydrolyze hemoglobin, different protein amounts (2.5, 5 and 10 µg) of the culture supernatant were incubated with hemoglobin (20 µg) in PBS at 37 °C for 1 or 24 h. The control systems were prepared in the same way, containing (i) hemoglobin (without the addition of culture supernatant) and (ii) each amount of culture supernatant protein (without hemoglobin). After incubation, the samples were treated with an equal volume of sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) sample buffer (125 mM Tris, pH 6.8, 4% SDS, 20% glycerol and 0.002% bromophenol blue) containing 10% β-mercaptoethanol, followed by heating at 100 °C for 5 min. Proteins were analyzed on 20% SDS-PAGE by the method described by Laemmli [19]. Electrophoresis was carried out at 120 V and 120 mA for 90 min at room temperature, and the gels were silver stained [20]. A clinical isolate of C. haemulonii (LIPCh2) was used as positive control for this experiment [21].

Fungal Growth in Different Nutrient Sources

To evaluate the growth capability of S. cerevisiae in different nutrient sources, yeasts were grown overnight in SDB at 37 °C, washed twice with sterile PBS and suspended in the same buffer. Then, 10⁴ yeasts/mL were incubated for 24 h at 37 °C in four different conditions: SDB, sheep erythrocytes (100%), diluted sheep erythrocytes (2% in PBS) and fetal bovine serum (FBS). After incubation, the yeasts grown in each medium were washed in PBS and diluted, and 10 µL of the cell suspensions were plated onto SDA to determine the colony-forming units (CFUs). Plates were incubated for 24 h at 37 °C and the CFUs were then counted. In parallel, aliquots (10 µL) of each system were spotted onto SDA before the incubation at 37 °C (time 0 h) and after 24 h of incubation. In both cases, plates were incubated at 37 °C for 24 h to allow fungal growth.

Biofilm Formation

To evaluate biofilm formation, fungal suspensions in SDB (200 µL containing 10⁶ yeasts) were transferred into wells of 96-well polystyrene microtiter plates (Jet Biofil, Guangzhou, China) and incubated without agitation at 37 °C for up to 96 h. Medium-only blanks were set up in parallel. After each time point, the supernatant fluids were carefully removed, and the wells were washed three times with PBS to remove non-adherent cells. Biofilm biomass was quantified in a microplate reader (SpectraMax 190, Molecular Devices, Sunnyvale, CA, USA) at 590 nm after crystal violet incorporation (Sigma-Aldrich, St Louis, MO, USA) in methanol-fixed biofilms [22]. The metabolic activity of the biofilm was determined using a colorimetric assay, which measures at 492 nm the reduction of 2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-5-[(phenylamino)carbonyl]-2H-tetrazolium hydroxide (XTT; Sigma-Aldrich, St Louis, MO, USA) to a water-soluble brown formazan product [22].
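The hemolysis readout above is an absorbance measurement bracketed by a 100%-lysis control (Triton X-100) and a 0%-lysis control (PBS). The paper does not spell out the normalization formula, so the sketch below uses the standard min-max normalization as an assumption; all absorbance values are hypothetical.

```python
# Hypothetical sketch: percent hemolysis from A415 readings, normalized between
# the PBS (negative, 0% lysis) and Triton X-100 (positive, 100% lysis) controls.
# The normalization formula is a standard convention assumed here, not quoted
# from the paper; all numbers are illustrative.

def percent_hemolysis(a_sample: float, a_pbs: float, a_triton: float) -> float:
    if a_triton <= a_pbs:
        raise ValueError("positive control must exceed negative control")
    frac = (a_sample - a_pbs) / (a_triton - a_pbs)
    return 100.0 * max(0.0, min(1.0, frac))     # clamp to [0, 100]

a415_pbs, a415_triton = 0.05, 1.20              # control wells
for cells, a415 in [("1e6 yeasts", 0.18), ("1e7 yeasts", 0.42), ("1e8 yeasts", 0.95)]:
    print(f"{cells}: {percent_hemolysis(a415, a415_pbs, a415_triton):.0f}% lysis")
```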
The extracellular matrix was quantified at 530 nm after safranin impregnation of non-fixed biofilms [23]. The clinical isolate LIPCh4 of C. haemulonii was used as positive control for this experiment [24].

Antifungal Susceptibility of Biofilm-Forming Cells

In this assay, the S. cerevisiae cells were incubated at 37 °C for 48 h to allow biofilm formation, as described above. Then, the supernatant was carefully removed, and the mature biofilm was washed once with sterile PBS. An aliquot of 200 µL of RPMI-1640 buffered with MOPS and supplemented with the antifungals prepared according to the CLSI M27-A3 protocol [7] was added. The plates were incubated at 37 °C for an additional 48 h. Finally, crystal violet staining and the XTT reduction assay were performed to evaluate the biofilm biomass and viability parameters, respectively [22].

In Vivo Infection in Tenebrio molitor Larvae

Tenebrio molitor larvae exhibiting clear and uniform color and weighing between 70 and 100 mg were selected for the survival studies. For these experiments, S. cerevisiae yeasts were grown overnight in SDB at 37 °C, washed twice with sterile PBS and suspended in the same buffer. Survival curves were obtained by injecting different fungal inocula (10⁴, 10⁵, 10⁶ and 10⁷ fungi/larva) to determine the appropriate concentration to be injected in the subsequent experiments. Larvae (10 per assayed group) were inoculated with fungi using an insulin syringe (10 µL/larva) and incubated at 37 °C in Petri dishes containing rearing diet. The inoculation was performed by injecting the cell suspensions into the larval hemocoel in the ventral portion, at the second visible sternite above the legs [25]. Larvae inoculated with sterile PBS were used as control groups. Larvae were assessed daily, for up to 7 days, to check their survival, being scored as dead when they displayed no movement in response to touch. Experiments were performed in triplicate with 10 larvae per group, totaling 30 animals per group, which were used to construct the survival curves [25].

Impact of Immunosuppression on Larvae Survival

To evaluate the effect of corticosteroid immunosuppression on T. molitor survival, each larva was exposed to 100 µg of methylprednisolone acetate (40 mg/mL stock solution in water) and infected with 10⁶ yeasts of S. cerevisiae. Control groups were composed of larvae inoculated only with (i) PBS, (ii) 100 µg of methylprednisolone acetate or (iii) 10⁶ yeasts of S. cerevisiae. Larvae rearing and incubation conditions were the same as described in the section above [26].

Statistics

Experiments were performed in triplicate, in three independent experimental sets, and data were expressed as mean ± standard deviation. The results were evaluated by analysis of variance (one-way ANOVA) and Tukey's or Dunnett's multiple comparison test using GraphPad Prism 8 software. Survival analyses were performed using the log-rank test and Kaplan-Meier curves on GraphPad Prism 8 software. In all analyses, p values of 0.05 or less were considered statistically significant.

Yeast Identification by Biochemical and Molecular Approaches

The yeast-like fungal isolate strain HUPE-Sc1 (Figure 1A) was identified by mycological methodologies. Firstly, the fungal isolate developed a white color after 48 h of incubation on SDA (Figure 1B). The carbohydrate assimilation and metabolic enzymatic profiles evaluated with the VITEK 2® automated system identified it as S. cerevisiae
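As a companion to the statistics just described, the sketch below reproduces the survival comparison (Kaplan-Meier estimate plus log-rank test) in Python with the lifelines package, since GraphPad Prism is point-and-click. The group labels, durations, and event indicators are hypothetical placeholders, not the study's raw data.

```python
# Hypothetical sketch of the survival analysis (Kaplan-Meier + log-rank test),
# mirroring what GraphPad Prism 8 computes. Requires `pip install lifelines`.
# Durations/events below are illustrative, not the study's raw data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days until death; larvae still alive at day 7 are censored (event = 0).
infected_days   = [2, 3, 3, 4, 4, 5, 5, 6, 7, 7]
infected_events = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
control_days    = [7] * 10
control_events  = [0] * 10            # all PBS controls survived

kmf = KaplanMeierFitter()
kmf.fit(infected_days, infected_events, label="1e6 yeasts/larva")
print(kmf.survival_function_)         # step-wise survival estimate

result = logrank_test(infected_days, control_days,
                      event_observed_A=infected_events,
                      event_observed_B=control_events)
print(f"log-rank p-value: {result.p_value:.4f}")
```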
with a probability of identity of 96%; in this regard, three contradictory tests were detected: D-maltose assimilation (dMALa), D-trehalose assimilation (dTREa) and D-lactate assimilation (LATa). In parallel, PCR followed by sequencing of the ITS gene was used as the gold standard for the precise identification of this fungal isolate. The ITS sequencing alignment scores of the HUPE-Sc1 strain exhibited 100% identity with the corresponding ITS sequence from a reference S. cerevisiae strain deposited in GenBank (Figure 1C). The ITS sequences obtained during this study were deposited in GenBank under the accession number OQ030183.

Susceptibility Profile to Antifungal Agents

As no standardized interpretative criterion to determine susceptibility has so far been established for Saccharomyces species, breakpoints adopted for Candida species have previously guided interpretation. In the present study, the assayed antifungals (amphotericin B, 5-flucytosine, caspofungin, fluconazole and voriconazole) showed good activity against the S. cerevisiae HUPE-Sc1 strain, with low MIC values (Table 1).

Production of Biologically Active Extracellular Molecules and Growth in Different Nutrient Sources

The production of biologically active extracellular molecules associated with fungal virulence was evaluated using the classical plate method containing specific substrates. The S. cerevisiae HUPE-Sc1 strain was able to secrete different classes of hydrolytic enzymes, including aspartic peptidase, phospholipase, esterase and phytase (Figure 2A). The Pz values classified the HUPE-Sc1 strain as a good producer of these extracellular molecules, as proposed by Price et al. [11]. In contrast, caseinolytic activity was not detected under the employed experimental conditions (Figure 2A). Additionally, the HUPE-Sc1 strain also produced catalase activity, as can be seen from the formation of bubbles corresponding to the hydrolysis of H₂O₂ into water and molecular oxygen when the fungal cells were exposed to H₂O₂ (Figure 2B). The S. cerevisiae HUPE-Sc1 strain was also able to produce two other important extracellular molecules, hemolysin and siderophores (Figure 3A), exhibiting weak and good activities, respectively, according to the Pz classification.
Once the strain was obtained from blood and demonstrated hemolytic activity, we evaluated its ability to lyse fresh erythrocytes. The in-solution co-incubation of fresh erythrocytes and S. cerevisiae yeast cells induced hemolysis in a typical fungal-concentration-dependent way, but no differences were observed regarding the incubation time, with the hemolytic activity after 3 h and 24 h of incubation being almost the same (Figure 3B). Additionally, we evaluated the ability of the enzymes present in the cell-free culture supernatant of the HUPE-Sc1 strain to lyse fresh erythrocytes and, similar to planktonic cells, we observed that the hemolysis occurred in a dose-dependent but not time-dependent manner (Figure 3C). The enzymes (belonging to the peptidase class) present in the culture supernatant of the HUPE-Sc1 strain were also able to hydrolyze hemoglobin in a dose-dependent manner, as demonstrated by SDS-PAGE (Figure 3D). Nonetheless, the growth ability of the HUPE-Sc1 strain in FBS and blood (non-diluted and diluted) was considerably reduced in comparison with SDB, although the yeast cells were able to survive in all tested sources (Figure 3E,F).

Biofilm Formation and Impact of Antifungals on Mature Biofilms

The capacity to form biofilm was also evaluated, since biofilm is considered a multifunctional structure with both virulence and resistance properties. The S. cerevisiae HUPE-Sc1 strain adhered to the polystyrene surface, forming a classical and viable biofilm structure, as observed by means of the quantification of three parameters: biomass, viability and extracellular matrix production (Figure 4). In order to evaluate the antifungal susceptibility profile of mature biofilm-forming S. cerevisiae cells, mature biofilms were incubated for 48 h with different concentrations of antifungals. Our results revealed that none of the tested antifungals were able to significantly reduce the biofilm biomass (Figure 5A).
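The biofilm susceptibility readouts (crystal violet for biomass, XTT for metabolic activity) are typically expressed relative to an antifungal-free control well; the roughly 40% viability reduction reported for caspofungin in the next paragraph corresponds to that kind of normalization. The sketch below shows the assumed calculation with illustrative, blank-corrected absorbance values.

```python
# Hypothetical sketch: percent reduction of biofilm biomass (A590, crystal
# violet) and viability (A492, XTT) relative to an antifungal-free control.
# The normalization is a standard convention assumed here; numbers are placeholders.

def percent_reduction(a_treated: float, a_control: float, a_blank: float = 0.0) -> float:
    treated, control = a_treated - a_blank, a_control - a_blank
    if control <= 0:
        raise ValueError("control signal must exceed the blank")
    return 100.0 * (1.0 - treated / control)

a_blank = 0.06
a590_control, a590_caspofungin = 1.30, 1.25      # biomass barely changes
a492_control, a492_caspofungin = 0.90, 0.56      # metabolic activity drops

print(f"biomass reduction:   {percent_reduction(a590_caspofungin, a590_control, a_blank):.0f}%")
print(f"viability reduction: {percent_reduction(a492_caspofungin, a492_control, a_blank):.0f}%")
# ~4% and ~40%: consistent with an antifungal that lowers metabolic activity
# without dispersing the biofilm biomass.
```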
Regarding the biofilm viability, only caspofungin reduced the cellular metabolic activity, at concentrations ranging from 0.25 to 2 mg/L (by approximately 40%), while at higher caspofungin concentrations a classical paradoxical effect was observed (Figure 5B).

Mortality of T. molitor Larvae Infected with S. cerevisiae and Impact of Immunosuppression on Larvae

The in vivo virulence of the S. cerevisiae HUPE-Sc1 strain was investigated using the invertebrate model of T. molitor larvae. The larvae mortality rate increased as the fungal inoculum increased: 90% of larvae infected with 10⁴ yeasts survived 7 days post-infection, 70% of larvae infected with 10⁵ yeasts survived after the same incubation time, and only 20% of larvae infected with 10⁶ yeasts survived, while all larvae infected with 10⁷ yeasts died by 96 h post-infection (Figure 6A). As S. cerevisiae is considered an opportunistic fungus, generally causing infection in immunocompromised patients, we treated the larvae with corticosteroids, resulting in an immunosuppressed condition. We observed that the corticosteroid treatment significantly increased (p < 0.0001; log-rank [Mantel-Cox] test) the larvae mortality rate at the fungal cell density used (10⁶ cells/larva) in comparison with non-treated larvae (Figure 6B). Regarding the negative controls, all larvae survived the injection with PBS and 90% survived the treatment with corticosteroids.

Figure 6. Survival curves of T. molitor larvae infected with different inoculum sizes of S. cerevisiae HUPE-Sc1 strain (10⁴, 10⁵, 10⁶ and 10⁷ fungi/larva) (A) and effect of the treatment with corticosteroids (100 µg) on larvae survival after injection of 10⁶ fungi/larva (B). In both cases, groups of 10 larvae were infected with the indicated systems; the experiment was repeated three times and the data pooled in order to build survival curves with 30 animals. Negative controls were composed of T. molitor larvae injected only with PBS or corticosteroids (100 µg).

Discussion

It is well known that viral infections, such as mononucleosis, chikungunya and dengue, for example, can lead to several months of immunosuppression in a previously healthy patient.
Therefore, it is possible that the previous COVID-19 episode led to immunosuppression in a patient with peripheral arterial disease, exposing her to an opportunistic fungal disease. The fast progression to septic shock, with the isolation of S. cerevisiae from the bloodstream, favors the link between a recent COVID-19 infection and the subsequent fungal sepsis. The S. cerevisiae HUPE-Sc1 strain showed low MICs for the antifungals commonly used for serious systemic invasive infections. Unfortunately, the diagnosis of sepsis was made too late, due to the rapid deterioration of the patient's general health. Opportunistic invasive fungal infections should be kept in mind by all emergency or intensive care unit doctors. Indeed, a recent report of bloodstream infection caused by S. cerevisiae in two elderly patients with COVID-19, hospitalized in an ICU in Greece after receiving probiotic supplementation due to diarrhea, highlighted the importance of caution while using probiotic preparations in COVID-19 patients [5]. Although the patients had other complications caused by the COVID-19 infection, treatment first with anidulafungin and then with fluconazole resulted in the resolution of the fungal infection [5]. The in vitro susceptibility tests suggested that the isolates were susceptible to fluconazole, voriconazole, posaconazole, amphotericin B, anidulafungin and 5-flucytosine [5].
Additionally, in Brazil, another COVID-19 patient developed fungemia by S. cerevisiae after supplementation with Saccharomyces due to diarrhea, which could have been facilitated by the antibiotic regimen he was undergoing, in addition to the use of vasoactive amines and the intestinal damage commonly caused by SARS-CoV-2 [6]. Fluconazole treatment was sufficient for the resolution of the fungal infection, but the patient died due to the pulmonary infection and other complications caused by COVID-19 [6]. As mentioned above, the S. cerevisiae HUPE-Sc1 strain exhibited low MICs to amphotericin B, caspofungin, 5-flucytosine, fluconazole and voriconazole, which is in accordance with literature reports demonstrating that S. cerevisiae is usually susceptible to the main antifungal classes used in clinical practice, with the exception of fluconazole, for which variable susceptibility profiles have been described over the years [27][28][29][30]. Similar to our results, Pérez-Cantero and coworkers [27] and Echeverría-Irigoyen and coworkers [29] reported excellent in vitro antifungal activity of 5-flucytosine against S. cerevisiae. Indeed, the current guidelines recommend the use of amphotericin B, alone or in combination with 5-flucytosine, to treat severe cases of infections caused by S. cerevisiae [27]. Unfortunately, 5-flucytosine is not commercially available in Brazil. Echinocandins are also recommended as treatment options for S. cerevisiae infections, but despite in vitro susceptibility, caspofungin treatment of a pediatric surgical ICU patient with respiratory distress did not result in a good clinical response; the treatment was switched to liposomal amphotericin B, resulting in the cure of the S. cerevisiae infection [4]. As reviewed by those authors, amphotericin B or liposomal amphotericin B is frequently effective against S. cerevisiae fungemia in pediatric patients [4]. Extracellular enzymes are known as important virulence factors produced by different Candida species and have also been described in S. cerevisiae strains. Herein, we showed that the S. cerevisiae HUPE-Sc1 strain was able to produce aspartic peptidase, phospholipase, esterase and phytase. Reports of aspartic peptidase and phospholipase production by S. cerevisiae can be found in the literature, for both clinical and industrial strains [31]. In this sense, Llanos and coworkers [31] observed that 81% and 96% of industrial and clinical isolates of S. cerevisiae, respectively, were moderate/high producers of aspartic peptidases, while the remaining isolates presented weak activity. The same authors also demonstrated that clinical isolates of S. cerevisiae were able to produce higher amounts of phospholipase than industrial isolates, with approximately 85% of clinical isolates being moderate/high producers, while almost 48% of industrial strains were low producers and the same percentage moderate producers of phospholipase [31]. These results suggest that phospholipase activity may be related at some level to S. cerevisiae virulence. Corroborating these results, Imre and coworkers [32] also demonstrated that non-mycosis S. cerevisiae isolates produced lower phospholipase activity than clinical isolates of the same species. Phytase is an enzyme responsible for the hydrolysis of phytic acid into inorganic phosphate and inositol [13]. Indeed, phytate is the main reservoir of phosphorus in plants and, consequently, is common in animal and human diets.
In this context, phytase seems to contribute to the survival of microorganisms in the gastrointestinal tract, where nutrients are scarce [13]. Herein, we demonstrated that our clinical isolate of S. cerevisiae was a good producer of phytase, which could play a role in the infectious process. Additionally, it is worth mentioning that phytate is considered an anti-nutrient factor in diets due to its ability to chelate minerals such as calcium, zinc and iron; it also binds to proteins and lipids, reducing the intestinal absorption of these nutrients, which is a particular problem for animal diets [33]. Accordingly, inorganic phosphorus is commonly used to supplement the diets of pigs, poultry and fish, but the phosphorus not used by the organism is excreted into the environment, resulting in environmental problems. Strains of S. cerevisiae have already been used in biotechnological processes to optimize phytase production, which can be used to reduce the phytate content of animal feed and improve the bioavailability of phosphorus in monogastric animals [33]. We also observed that the S. cerevisiae HUPE-Sc1 strain produced catalase, an enzyme known to offer protection against the oxidative damage caused by H₂O₂ [34]. Indeed, the yeast S. cerevisiae is used as a eukaryotic cell model to study oxidative stress responses. Hemolytic activity is considered an important virulence factor described in many Candida species and is not common in environmental strains of S. cerevisiae [35]. In our work, we demonstrated that our clinical isolate was able to produce hemolytic activity by two different methodologies: through supplementation of SDA with blood and through the co-incubation of the yeasts with fresh sheep blood, demonstrating the potential of this strain to lyse erythrocytes and obtain nutrients when it reaches the bloodstream of vulnerable individuals. Imre and coworkers [32] demonstrated that commercial, non-mycosis and mycosis isolates of S. cerevisiae were able to produce both α- and β-hemolytic activity, but the mycosis isolates exhibited higher β-hemolytic activity when compared with the other strains. Additionally, we also demonstrated that enzymes released into the extracellular environment by the S. cerevisiae HUPE-Sc1 strain were able to hydrolyze hemoglobin, which corroborates its potential to acquire nutrients in the bloodstream of susceptible individuals. Biofilm formation by S. cerevisiae is a well-known event, characterized as a thin layer of yeast cells surrounded by a low-density extracellular matrix [36]. Indeed, S. cerevisiae has been used as a yeast model to study the development of biofilm and its regulation using molecular tools [36]. In line with our findings, Bojsen and coworkers [36] demonstrated that voriconazole, caspofungin and 5-flucytosine have no action on mature S. cerevisiae biofilm; on the other hand, they showed that amphotericin B was the only antifungal tested that was able to kill the biofilm-forming cells, whereas in our study amphotericin B impacted neither the cell viability nor the biomass of the S. cerevisiae biofilm. Those authors also demonstrated that antifungal susceptibility depends on the growth phase of both planktonic and biofilm cells, the response to antifungals being better in exponentially growing planktonic cells and in growing biofilms than in non-growing planktonic cells and in mature biofilms [36].
In this sense, the presence of biofilms is an aggravating factor for the successful treatment of hospitalized patients, representing a challenge to clinicians, especially in the case of opportunistic infections, which generally affect individuals with other underlying diseases. The use of in vivo models is undoubtedly necessary for the progress of scientific research, including the study of the basis of microbial pathogenicity and the development of vaccines and new therapies for a wide range of diseases [37]. Historically, this mainly involved vertebrate animal models, which raised ethical concerns due to the distress, pain and sacrifice of the animals used in experimental research, leading to the approval of laws regulating animal use for research purposes [37]. Furthermore, animal use also requires greater economic investment and qualified personnel, making the process more complex and expensive to maintain. To overcome these problems, the use of invertebrate models has increased considerably in the last decade, avoiding ethical issues and high financial investments. In this scenario, insects have been found to be a good host model for the study of microbial infections, since they possess an innate immune system similar to that of vertebrates and a short life cycle, and they allow the conduction of large-scale experiments [37]. In this regard, T. molitor larvae, known as mealworms, have recently been used as a host model for microbial infection studies, including bacteria, yeasts and filamentous fungi. A recent study using the yeasts C. albicans and C. neoformans demonstrated that an increase in the fungal inoculum injected into the larvae resulted in increasing mortality rates [25]. Herein, we observed the same pattern when infecting T. molitor larvae with our clinical strain of S. cerevisiae. Additionally, we demonstrated that larvae treated with corticosteroid and infected with S. cerevisiae exhibited a higher mortality rate when compared with larvae only infected by the yeasts, indicating a possible immunosuppression of the larvae. However, to our knowledge, no previous studies regarding T. molitor larvae immunosuppression have been conducted, and more experiments are necessary to clarify these issues.

Conclusions

In conclusion, the present study highlights the emergence of S. cerevisiae as an opportunistic pathogen with no resistance to the antifungal agents commonly used in clinical practice, but with the ability to produce important virulence factors that could facilitate the establishment of the infectious process in vulnerable individuals, contributing to the worsening of the health status of patients who are often experiencing other infections, as happened with the COVID-19 patient reported in the present work.
Circuit implementation of a four-dimensional topological insulator

Abstract

The classification of topological insulators predicts the existence of high-dimensional topological phases that cannot occur in real materials, as these are limited to three or fewer spatial dimensions. We use electric circuits to experimentally implement a four-dimensional (4D) topological lattice. The lattice dimensionality is established by circuit connections, and not by mapping to a lower-dimensional system. On the lattice's three-dimensional surface, we observe topological surface states that are associated with a nonzero second Chern number but vanishing first Chern numbers. The 4D lattice belongs to symmetry class AI, which refers to time-reversal-invariant and spinless systems with no special spatial symmetry. Class AI is topologically trivial in one to three spatial dimensions, so 4D is the lowest possible dimension for achieving a topological insulator in this class. This work paves the way to the use of electric circuits for exploring high-dimensional topological models.

Higher-dimensional topological phases are predicted but cannot be realised in real materials as they are limited to three or fewer dimensions. Here, Wang et al. realise a four-dimensional topological insulator associated with a nonzero second Chern number using electric circuits.

Topological insulators are materials that are insulating in the bulk but host surface states protected by nontrivial topological features of their bulk bandstructures 1,2. They are classified according to symmetry and dimensionality [3][4][5][6][7], with each class having distinct and interesting properties. The celebrated two-dimensional Quantum Hall (2DQH) phase 8, for instance, has topological edge states that travel unidirectionally on the one-dimensional (1D) edge, whereas three-dimensional (3D) topological insulators based on spin-orbit coupling have surface states that act like massless 2D Dirac particles. The classification of topological insulators contains hypothetical high-dimensional phases 3 that cannot be realised with real materials, since electrons only move in one, two, or three spatial dimensions. These include several types of four-dimensional Quantum Hall (4DQH) phases, which are characterised by a 4D topological invariant called the second Chern number and exhibit a much richer phenomenology than the 2DQH phase [9][10][11][12]. In recent years, topological phases have been implemented in a range of engineered systems including cold atom lattices 13, photonic structures 14, acoustic and mechanical resonators 15,16, and electric circuits [17][18][19][20][21][22][23][24][25][26][27][28]. Some of these platforms can realise lattices that are hard to achieve in real materials, raising the intriguing prospect of using them to create high-dimensional topological insulators. Although there have been demonstrations of topological pumps that map 4D topological lattice states onto lower-dimensional systems [29][30][31][32], there has been no experimental realisation of a 4D topological insulator with protected surface states on a 3D surface. Here, we describe the implementation of a 4DQH phase using electric circuits to access higher dimensions. Since electric circuits are defined in terms of lumped (discrete) elements and their interconnections, lattices with genuine high-dimensional structure can be explicitly constructed by applying the appropriate connections [33][34][35].
In this way, we experimentally implement a 4D lattice hosting the first realisation of a Class AI topological insulator 5,6, which has no counterpart in three or fewer spatial dimensions. In the symmetry-based classification of topological phases 3-7, Class AI includes time-reversal (T) symmetric, spinless systems that are not protected by any special spatial symmetries. Whereas the 2DQH phase is tied to nontrivial values of the first Chern number, which requires T-breaking 36, 4DQH phases rely on the second Chern number, which does not [9][10][11][12]. Even though the Class AI conditions are ubiquitous 13,14, the class is topologically trivial in one to three dimensions [3][4][5][6][7]. Hence, realising a Class AI topological insulator requires going to at least 4D. We focus on a theoretical 4D lattice model recently developed by one of the authors 37, which exhibits a nonzero second Chern number with vanishing first Chern numbers. Hence, we obtain the first observations of topological surface states that are intrinsically tied to 4D band topology, with no connection to lower-dimensional topological invariants.

The present approach, based on circuit connections, is distinct from other recently-investigated methods for accessing higher-dimensional models. One of the alternatives involves manipulating internal degrees of freedom, such as oscillator modes, to act as synthetic dimensions [38][39][40][41][42][43][44][45][46][47][48][49][50][51][52]. Although there have been theoretical proposals for using synthetic dimensions to build 4D topological lattices 40,43, all experiments so far have been limited to 1D and 2D 51. Another approach involves adiabatic topological pumping schemes, which map high-dimensional models onto lower-dimensional setups by replacing spatial degrees of freedom with tunable parameters [29][30][31][32]. As mentioned above, 2D topological pumps based on cold atoms and photonics have recently been used to explore Class A (T-broken) 4DQH systems 30,53,54. However, topological pumps have the drawback of being inherently limited to probing specific quasi-static solutions of a high-dimensional system, without realising a genuinely high-dimensional lattice. Moreover, in those experiments the second Chern number in 4D is not truly independent of the first Chern numbers in 2D, which are nonzero.

Our 4D lattice is implemented using electric circuits with carefully chosen capacitive and inductive connections. The lattice model has two topologically distinct phases: a 4DQH phase and a conventional (i.e. topologically trivial) 4D band insulator, with the choice of phase governed by a parameter m that maps to certain combinations of capacitances and inductances. Using impedance measurements that are equivalent to finding the local density of states (LDOS), we show that the 4DQH phase hosts surface states on the 3D surface, while the conventional insulator phase has only bulk states. By varying the driving frequency, we show that the topological surface states span a frequency range corresponding to a bulk bandgap, as predicted by theory. Our experimental results also agree well with circuit simulations. This work demonstrates that electric circuits are a flexible and practical way to realise higher-dimensional lattices, paving the way for the exploration of other previously-inaccessible topological phases.

Results

4DQH model and circuit realisation. The 4D lattice model is shown schematically in Fig. 1a. The spatial coordinates are denoted x, y, z, and w.
The lattice contains four sublattices labelled A, B, C and D, with sites connected by real nearest neighbour hoppings ±J. The four bands host two pairs of Dirac points in the Brillouin zone; each pair is the time-reversed counterpart of the other. To control the pairs separately, long-range hoppings with amplitudes ±J′ and ±J″ are added within the x-z plane (these hoppings are omitted from Fig. 1a for clarity, but are shown in Fig. 1c). Upon adding mass +m to the A and B sites, and −m to the C and D sites, the Dirac masses for the different Dirac point pairs close at m = J′ − 2J″ and m = J″ − 2J′. These gap closings are topological transitions, such that, for J″ = −J′, the second Chern number of the lower bands is −2 (nontrivial) if |m| < 3|J′|. Since T is unbroken, the first Chern number is always zero, so the model exhibits QH behaviour stemming purely from the second Chern number 37. For further details about the model, see Supplementary Note 1.

We take J = 1 and J′ = −J″ = 2, so that the topological transition of the bulk lattice occurs at m = ±6. We target a finite 4D lattice with six sites along the x and z directions, and two sites along y and w. To mitigate finite-size effects, periodic boundary conditions are applied in y and w using nearest neighbour type connections between opposite ends of the lattice. This corresponds to sampling at k_y = k_w = 0 in momentum space, where the gap closing occurs during the topological transition (see Supplementary Note 1). Regardless of these periodic boundary conditions, the spatial dimensionality established by the connectivity of the lattice sites is 4D 33. The lattice has a total of 144 sites, of which we consider 16 to be bulk sites (defined as being more than two sites away from a surface) and 128 to be surface sites.

The finite 4D lattice is implemented with a set of connected printed circuit boards, shown in Fig. 1b. Each site i of the tight-binding model maps to a node on the circuit, and the mass term maps to a circuit component of conductance −D_ii connecting the node to ground. Each hopping J_ij between sites i and j maps to a circuit element of conductance D_ij connecting the nodes. We add extra grounding components with conductance D′_ii in parallel with −D_ii. If an external AC current I_i flows into each node i at frequency f, and V_i is the complex AC voltage on that node, Kirchhoff's law states that

I_i = Σ_j D_ij (V_i − V_j) + (D′_ii − D_ii) V_i.    (1)

We define D_ij(f) = iαH_ij(f), where α is a positive real constant. Then capacitances (inductances) correspond to positive (negative) real values of H_ij. We require that, at a reference working frequency f_0, H(f_0) matches the target tight-binding lattice Hamiltonian (see "Methods"). We then tune D′ so that, for f = f_0,

D′_ii = iαE − Σ_{j≠i} D_ij,    (2)

for some target energy E; the required value of D′ depends on the m parameter. Equation (1) now becomes

I_i = Σ_j L_ij V_j,    (3)

where L_ij are the components of the circuit Laplacian L. The impedance between node r and ground is

Z_r = (L⁻¹)_rr.    (4)

It can be shown that Re[Z_r(f_0)] is, up to a scale factor, the LDOS of the target lattice at energy E (see "Methods"). For further details about the circuit analysis, see Supplementary Note 2.

Experimental results. Figure 2a shows the band diagram of the infinite bulk tight-binding model as a function of the mass detuning parameter m. For |m| < 6, the system is in a 4DQH phase, with a topologically nontrivial bandgap centred at E = 0, which hosts topological surface states. The band diagram for the 144-site tight-binding model is shown in Fig. 2b.
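To make the mapping of Eqs. (1)-(4) concrete, the sketch below builds a circuit Laplacian from a tight-binding Hamiltonian and extracts Re[Z_r] as an LDOS measure. It is a numerical illustration of the published relations, not the authors' code: the Hamiltonian is a hypothetical small random real-symmetric (Class AI) matrix rather than the 144-site lattice, and the small real grounding conductance g that stands in for resistive losses is our assumption.

```python
# Minimal numerical sketch of Eqs. (1)-(4): given the tight-binding Hamiltonian H
# realised at the working frequency f0, form the circuit Laplacian
# L = -i*alpha*(H - E) and read off the node-to-ground impedance Z_r = (L^-1)_rr.
# A small real (resistive) grounding term g*I is an assumed loss model that
# broadens Re[Z_r] into a Lorentzian-weighted LDOS. H is a hypothetical random
# real-symmetric (Class AI) matrix, not the paper's 144-site lattice.
import numpy as np

rng = np.random.default_rng(0)
n = 24
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                      # real symmetric: T-invariant, spinless

alpha = 2 * np.pi * 113e3 * 1e-9       # alpha = 2*pi*f0*C0 (f0 ~ 113 kHz, C0 = 1 nF)
E = 0.0                                # probe energy (mid-gap in the experiment)
g = 1e-5                               # assumed small resistive admittance (siemens)

L = -1j * alpha * (H - E * np.eye(n)) + g * np.eye(n)
Z = np.linalg.inv(L)                   # impedance matrix; Z_r = Z[r, r]
ldos_measure = np.real(np.diag(Z))     # Re[Z_r]: peaks where eigenstates sit near E

print("node with largest response:", int(np.argmax(ldos_measure)))
```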
The colours of the curves indicate the degree to which each eigenstate is concentrated on the surface, as defined by

s = ⟨|ψ(r)|²⟩_surface / (⟨|ψ(r)|²⟩_surface + ⟨|ψ(r)|²⟩_bulk),    (5)

where ψ(r) denotes the energy eigenfunction, whose magnitudes are averaged over either surface or bulk sites. Due to the finite lattice size, both the bulk and surface spectrum are split into subbands. The closing of the bulk gap is shifted to |m| ≈ 4, and the surface states occur most prominently at small values of E and |m|.

We now fabricate a set of circuits with parameters m ∈ {0, 1, …, 8} and E ∈ {0, 1}. Figure 2c-f shows the measured LDOS (at f = f_0) for four representative samples. From the experimental data, we see that the surface LDOS is high and the bulk LDOS is low when in the topologically nontrivial bandgap (Fig. 2c, d). For E = 0, m = 4, which corresponds roughly to the gap-closing point, there is no significant difference between the surface and bulk LDOS. For E = 0, m = 8, the LDOS on all sites is low, consistent with being in a topologically trivial bandgap. These results also agree well with circuit simulations (see Supplementary Note 3). The robustness of the surface states, a feature imparted by topological protection, can be inferred from the fact that each individual circuit component has up to 10% deviation in its capacitance or inductance (see "Methods"). We emphasise that the surface states cannot be explained by the first Chern numbers, which are necessarily zero since the circuit design is T symmetric.

To confirm that the discrepancy between Fig. 2a and b is just a finite-size effect, Fig. 3 shows calculated band edges (i.e. the pair of eigenvalues closest to E = 0) for a series of lattices with 6, 8, 10, 14, 20, and 50 sites along both x and z (the lattices are kept two sites wide along y and w, with periodic boundary conditions). The colours indicate whether the eigenstate is concentrated on the surface (red) or in the bulk (blue). As the size of the lattice increases in x and z, the eigenvalues at large m (in the conventional insulator regime) approach the predicted bulk band edges, while the eigenvalues in the topological insulator regime spread over a larger range of m corresponding to the topologically nontrivial gap.

To quantify the difference between the 4DQH and conventional insulator phases, we examine the ratio of the mean LDOS on surface sites to the mean LDOS on bulk sites, for different values of the mass detuning parameter m (Fig. 4a). The ratio is derived from experimental measurements performed at f = f_0, corresponding to E = 0; with increasing m, it decreases sharply from around 4.5 in the 4DQH regime to around 1 in the conventional insulator regime. Circuit simulations produce results in agreement with the experimental data (Fig. 4f). The frequency dependence of the circuit impedance is also consistent with the spectral features of a topological insulator at small values of m. Figure 4b-e plots the experimentally-obtained frequency dependence of the LDOS measure Re[Z_r], averaged over surface or bulk sites. To interpret these results, recall that the impedance measurements probe the response at fixed energy (in this case, E = 0) of an effective Hamiltonian H(f) that depends parametrically on the frequency f [Eq. (6)], and matches the target tight-binding model at f = f_0. For m = 0 (Fig. 4b), the circuit exhibits a strong edge response and suppressed bulk response at f = f_0, consistent with the fact that H(f_0) has a topologically nontrivial gap at E = 0.
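The surface-concentration measure is easy to compute once the eigenstates of the finite lattice are known. Below is a sketch under the assumption that Eq. (5) is the simple surface-to-total ratio described in the text; the Hamiltonian and the surface/bulk site partition are hypothetical stand-ins.

```python
# Sketch of the surface-concentration measure of Eq. (5) for each eigenstate:
# s_n = <|psi_n|^2>_surface / (<|psi_n|^2>_surface + <|psi_n|^2>_bulk).
# The Hamiltonian and the site partition are illustrative only
# (the experiment has 128 surface and 16 bulk sites out of 144).
import numpy as np

rng = np.random.default_rng(1)
n_sites = 144
surface = np.zeros(n_sites, dtype=bool)
surface[:128] = True                    # hypothetical partition: first 128 = surface

H = rng.normal(size=(n_sites, n_sites))
H = (H + H.T) / 2
energies, states = np.linalg.eigh(H)    # columns of `states` are eigenvectors

prob = np.abs(states) ** 2
mean_surf = prob[surface, :].mean(axis=0)
mean_bulk = prob[~surface, :].mean(axis=0)
s = mean_surf / (mean_surf + mean_bulk) # 1 = fully surface-localised, 0 = fully bulk

print("most surface-concentrated eigenstate:", int(np.argmax(s)), "s =", s.max())
```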
For f ≠ f_0, the effective Hamiltonian H(f) deviates from the target model (e.g. the positive and negative hoppings become unequal in magnitude, lifting the band degeneracy), but remains in Class AI. So long as the gap remains open, H(f) must possess a topologically nontrivial gap at E = 0 associated with the same second Chern number. The signatures of the topological bandgap persist as m is slightly increased (Fig. 4c); upon further increasing m, the bulk gap closes and thereafter the surface and bulk LDOS measures exhibit no notable frequency-dependent features (Fig. 4d, e). These experimental results are in good agreement with simulations (Fig. 4g-j).

Discussion

We have used electric circuits to implement a 4D lattice hosting a 4D Quantum Hall phase. This is the first experimental demonstration of a topological lattice with a 4D structure, and of a Class AI topological insulator. This is also the first experimental exploration of a 4DQH model with nontrivial second Chern number but trivial first Chern numbers. Using impedance measurements, we have demonstrated that the LDOS on the 3D surface is enhanced in the 4DQH phase, due to the presence of topological surface states, and that the enhanced surface response spans the frequency range of the bulk bandgap. The gap-closing associated with a topological phase transition is clearly observed, despite being shifted by finite-size effects. In future work, it is desirable to find ways to probe the detailed features of the 3D surface states, which are predicted to be two robust isolated Weyl points of the same chirality, a situation that does not occur in lower-dimensional topological models 37. The successful implementation of 4D lattices of very substantial size (144 sites) shows that electric circuits are an excellent platform for exploring exotic band topological effects, and a promising alternative to the synthetic dimensions approach to realising higher-dimensional lattices 51. While this work was being done, we became aware of related theoretical proposals to use circuits to realise high-dimensional TIs 55,56.

Methods

Circuit implementation and experimental procedure. The implementation of the LC circuit, so as to map its impedance response to a target Hamiltonian, follows a design strategy similar to recent works, which targeted different topological models [20][21][22][23][26][27][28]. As explained in the main text, positive and negative hoppings in the tight-binding Hamiltonian are represented by capacitors and inductors, respectively. Defining the complex conductance between sites i and j as D_ij = iαH_ij, we take α = 2πf_0 C_0 to map the positive nearest neighbour hopping J = 1 to capacitance C_0 = 1 nF, and the long-range hopping J′ to capacitance C′ = 2 nF, at f = f_0. Next, setting f_0 = 1/(2π√(L_0 C_0)) ≈ 113 kHz maps the negative nearest neighbour hopping to inductance L_0 = 2 mH, and the negative long-range hopping J″ = −2 to L′ = 1 mH. Each site is connected to ground by additional components to satisfy Eq. (2); see Supplementary Note 2. The required capacitances are obtained by connecting 1 nF capacitors (Murata GCM155R71H102KA37D) in series or parallel, and the inductances are achieved by connecting 1 mH inductors (Taiyo Yuden LB2518T102K). The circuit is divided into several printed circuit boards (PCBs), stacked on top of each other. Each PCB is divided into 6 × 6 = 36 nodes, corresponding to the dimensions of the 4D lattice in the x-z plane (see Fig. 1c of the main text).
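The choice f_0 = 1/(2π√(L_0 C_0)) is what makes a C_0 capacitor and an L_0 inductor implement hoppings of equal magnitude and opposite sign. Below is a quick numerical check of the component mapping quoted in the Methods; the check itself is ours, not part of the paper.

```python
# Check of the component mapping: at f0 = 1/(2*pi*sqrt(L0*C0)), a C0 capacitor
# (admittance +i*w*C0) and an L0 inductor (admittance -i/(w*L0)) have equal and
# opposite admittances, i.e. they realise hoppings +1 and -1 in units of alpha.
import math

C0 = 1e-9                     # 1 nF  -> hopping +1
L0 = 2e-3                     # 2 mH  -> hopping -1
f0 = 1 / (2 * math.pi * math.sqrt(L0 * C0))
w0 = 2 * math.pi * f0
alpha = w0 * C0               # alpha = 2*pi*f0*C0, the overall scale factor

print(f"f0 = {f0 / 1e3:.1f} kHz")                                # ~112.5 kHz (~113 kHz)
print(f"capacitor admittance / i*alpha = {w0 * C0 / alpha:+.3f}")        # +1.000
print(f"inductor  admittance / i*alpha = {-1 / (w0 * L0) / alpha:+.3f}") # -1.000
# Doubling the capacitance (C' = 2 nF) or halving the inductance (L' = 1 mH)
# doubles the admittance magnitude, implementing the |J'| = |J''| = 2 hoppings.
```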
Each xz lattice plane actually consists of several PCBs stacked with vertical electrical interconnects, in order to fit all the necessary circuit components. We measure the impedance between any given node r and the common ground by applying a 1 V sine wave of frequency f_0 on that node, and measuring the voltage V_r and the current I_r. As stated in Eq. (4), the impedance between node r and the ground is the rth diagonal term of the inverse of the circuit Laplacian L. Using Eq. (3), one obtains 20,27

Z_r = (i/α) Σ_n |ψ_n(r)|² / (E_n − E),    (6)

where ψ_n(r) is the n-th energy eigenstate's amplitude on site r, and E_n is the corresponding eigenenergy. Thus, if the impedance measurement is performed at f = f_0, then Re[Z_r] = (1/πα) Σ_n δ(E − E_n) |ψ_n(r)|² is equivalent to the lattice's LDOS at energy E. With resistances present, the eigenenergies in Eq. (6) acquire an imaginary part, which has the effect of smoothing out the impedance curves (see Supplementary Note 3).

Circuit simulations. All circuit simulations are performed with ngspice, a free software circuit simulator. We assign to each 1 nF capacitor a 10 Ω resistance, consistent with the resistance in the manufacturer-supplied SPICE model at our operating frequency. For each 1 mH inductor, we assign a 24 Ω resistance, consistent with the manufacturer-provided data sheet. Each resistance is applied in series with the corresponding capacitive or inductive element. Other sources of resistance, such as the PCB interconnects, are much harder to characterise and were thus not accounted for in the circuit simulations. To model the disorder in the capacitors and inductors, we apply 10% uniformly-distributed disorder to each capacitance and inductance, consistent with the stated tolerances in their data sheets. The simulations are performed like the experiments: i.e. a sine wave voltage source is applied to each node, and the steady-state voltage and current are used to determine the complex impedance.

Data availability

The circuit measurement data that support the findings of this study are available in DR-NTU(data) with the identifier "https://doi.org/10.21979/N9/KXL3TD" 57.

[Figure 4 caption, continued: The results in f, and the solid curves and dashes in g-j, assume no disorder in the circuit components. The red and blue areas in g-j indicate the range of impedances assuming 10% variation in individual capacitances and inductances, over 50 independent disorder realisations.]
Milk Urea Concentration in Dairy Sheep: Accounting for Dietary Energy Concentration

Simple Summary

In this paper, we show that the milk urea concentration (MUC) of dairy ewes is markedly affected not only by dietary protein concentration, as evidenced by previous research, but also by dietary energy concentration. Thus, to avoid misleading interpretations, the utilization of MUC as an indicator of the protein status of ewes should account for the dietary energy concentration. Minimal, optimal, and maximal MUC values for different combinations of dietary energy and protein are proposed. Because frequent bulk tank MUC analysis is easy to perform and cost-effective, the reference values proposed here can be used for optimizing sheep milk and reproductive performances while curbing N release from excreta.

Abstract

In dairy sheep, milk urea concentration (MUC) is highly and positively correlated with dietary crude protein (CP) content and, to a lesser extent, with protein intake. However, the effect of dietary energy and carbohydrate sources on the MUC of lactating ewes is not clear. Thus, the objective of this study was to assess the effects of diets differing in energy concentration and carbohydrate sources on MUC values in lactating dairy ewes. Two experiments were conducted (experiment 1, E1, and experiment 2, E2) on Sarda ewes in mid and late lactation kept in metabolic cages for 23 d. In both experiments, homogeneous groups of five ewes were submitted to four (in E1) or three (in E2) dietary treatments, consisting of pelleted diets ranging from low energy (high-fiber diets: 1.2-1.4 Mcal of net energy for lactation (NEL)) to high energy (high-starch diets: 1.7-1.9 Mcal of NEL) contents, but with a similar CP concentration (18.4% dry matter (DM), on average). Each diet had a different main ingredient, as follows: corn flakes, barley meal, beet pulp, or corn cobs in E1, and corn meal, dehydrated alfalfa, or soybean hulls in E2. Regression analysis using treatment means from both experiments showed that the best predictor of MUC (mg/100 mL) was the dietary NEL (Mcal/kg DM; MUC = 127.6 − 51.2 × NEL, R² = 0.85, root of the mean squared error (rmse) = 4.36, p < 0.001), followed by the ratio CP/NEL (g/Mcal; MUC = −14.9 + 0.5 × CP/NEL, R² = 0.83, rmse = 4.63, p < 0.001). A meta-regression of an extended database on stall-fed dairy ewes, including the E1 and E2 experimental data (n = 44), confirmed the predictive value of the CP/NEL ratio, which proved to be the best single predictor of MUC (MUC = −13.7 + 0.5 × CP/NEL, R² = 0.93, rmse = 3.30, p < 0.001), followed by dietary CP concentration (MUC = −20.7 + 3.7 × CP, R² = 0.82, rmse = 4.89, p < 0.001). This research highlights that dietary energy content plays a pivotal role in modulating the relationship between MUC and dietary CP concentration in dairy sheep.

Introduction

Blood urea concentration (BUC) and milk urea concentration (MUC) are currently used as nutritional indicators in ruminants, because they are closely related to digestive tract activity [1] and endogenous ammonia production [2], the latter being associated with gluconeogenesis. Because urea is the major end product of N metabolism in ruminants, blood and milk urea contents are good predictors of nitrogen excretion [3]. Blood urea concentration cannot be measured routinely, because sampling requires invasive techniques and its concentration can change rapidly after meals. On the contrary, MUC is more stable and easier to sample than BUC.
In dairy cows, several studies have shown that MUC is related to dietary crude protein intake (CPI), to the percentages of rumen-degradable and undegradable protein, and to the protein-to-energy ratio of the diet [4,5]. In dairy sheep fed diets ranging from 14% to 21% of dietary CP (dry matter (DM) basis), MUC was positively and linearly related to dietary CP content and, to a lesser extent, to protein intake [6]. In that experiment, a relatively narrow range of energy concentrations was also tested (1.55-1.65 Mcal of net energy for lactation (NEL)), and energy was not found to be correlated with MUC. This contrasts with previous findings on dairy cattle [7] and goats [8]. However, a later study [9], comparing diets with 1.40 and 1.59 Mcal/kg DM of NEL and 19-20% DM of CP fed to mid-lactation ewes, found a significantly lower MUC in the ewes fed the diet with the higher energy content. There are no studies directly testing the effect of dietary energy concentration on MUC in dairy sheep. Because MUC reference values in lactating ewes are substantially higher than those in lactating goats and cows [10], it is worth exploring the effects of factors other than crude protein content and intake on MUC in lactating ewes, with particular reference to dietary energy level and source (fiber vs. non-fiber carbohydrates). Prediction equations obtained from individual experiments can easily be compared with findings from other studies by using a meta-regression or meta-analysis approach [11]. This can be useful to check the robustness of the observed relationships and to derive a more generic algorithm to be used empirically in broad contexts, at the field level, or for other modeling purposes [12]. Comprehensive analyses of experimental data identified the dietary CP concentration as the main factor influencing MUC in dairy cows [13,14], whereas no similar efforts have been carried out for sheep. Thus, this study was carried out with two main objectives: (i) assessing the relationships between MUC and the dietary content or intake of nutrients in dairy ewes fed diets characterized by a wide range of energy contents and carbohydrate sources; and (ii) comparing the same relationships within a meta-regression based on a broader database inclusive of other studies on stall-fed dairy sheep.

Animals and Diets

The study was conducted at the Bonassai experimental farm of the Agricultural Research Agency of Sardinia (AGRIS Sardegna), located in the northwest of Sardinia (40° N, 32° E, 32 m a.s.l.), Italy. The animal protocol described below was fully in compliance with the European Union (EU) and Italian regulations on animal welfare and experimentation, and it was approved by the veterinarians responsible for the ethics and welfare control of animal experimentation at AGRIS and the University of Sassari. All measurements were taken by personnel previously trained and authorized by the institutional authorities on ethical issues, both from AGRIS and the University of Sassari. The study consisted of two feeding experiments conducted on Sarda dairy sheep during mid (March) and late (June-July) lactation. In experiment 1 (E1), four complete pelleted diets were tested on 20 ewes, and in experiment 2 (E2), another three complete pelleted diets were tested on 15 ewes. Each experiment consisted of a seven-day preliminary period, a fourteen-day adaptation period, and a nine-day experimental period. During the preliminary period, the ewes of each experiment grazed ryegrass-based pastures,
were supplemented with a mixture made of equal proportions of the respective experimental pelleted diets for four days, and were then confined in pens for three days, during which time they received only the mixture of the experimental pelleted diets. After the preliminary period, the ewes were allocated to homogeneous groups and put in individual metabolic cages for the adaptation and experimental periods. In E1, the 20 mid-lactation ewes were allocated to four homogeneous groups of five animals each on the basis of their days in milk (DIM; Table 1; mean ± s.d.), milk yield (MY), body weight (BW), body condition score (BCS), age, and parity. The ewes were fed the pelleted diets ad libitum in two daily meals. The same animals were then re-randomized and used in late lactation. In E2, 15 mid-lactation ewes were assigned to three homogeneous groups of five animals each, on the basis of their DIM (Table 1), MY, BW, BCS, age, and parity. The same animals were then re-randomized and used in late lactation. Throughout the period between the mid- and late-lactation measurements, the ewes were fed at pasture and machine-milked twice a day at 07:00 h and 15:00 h in a milking parlor. During the adaptation and experimental periods, the animals were machine-milked twice a day at 07:00 h and 15:00 h inside the cages. All the animals had ad libitum access to water throughout the study. The ingredients and the chemical composition of the diets used in E1 and E2 are summarized in Table 2. On the basis of their main ingredient, the following diets were tested: CF = corn flakes, BM = barley meal, BP = beet pulp, and CC = corn cobs, in E1; and CM = corn meal, AA = dehydrated alfalfa, and SH = soybean hulls, in E2. All the diets contained dehydrated alfalfa as a common base, and other ingredients (barley meal, corn flakes, corn meal, beet pulp, corn cobs, corn germ, corn gluten meal, soybean hulls, wheat middlings, minerals, and vitamins) were added in order to obtain different fiber (neutral detergent fiber (NDF), acid detergent fiber (ADF), and acid detergent lignin (ADL)) and energy contents, while keeping CP concentrations similar. The energy content of the diets was calculated as net energy for lactation (NEL) on the basis of total digestible nutrients (TDN, % DM), following the equations of [15], with TDN calculated as TDN (% DM) = 100 × (dCPI + dNDFI + dNFCI + 2.25 × dEEI)/DMI, where dCPI = digestible CP intake (g/day), dNDFI = digestible NDF intake (g/day), dNFCI = digestible non-fiber carbohydrate (NFC) intake (g/day), dEEI = digestible ether extract intake (g/day), and DMI = dry matter intake (g/day). The data on intake of digestible nutrients measured in vivo and used in the above equations are reported in [16]. In order to prevent acidosis in late lactation, due to a possible uneven feeding pattern associated with the high diurnal temperatures typical of the late-lactation period (June–July), 10 g/day per head of sodium bicarbonate were added to all the diets. Measurements In both experiments, individual intake was measured by weighing the offered diets and the corresponding orts 24 h after the first daily meal during the experimental period. Samples of feed on offer were collected once a week and stored until analysis. Individual milk yield was measured three times during the experimental period, and individual milk samples were also taken at the morning and afternoon milkings.
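As a concrete illustration of the TDN and NEL calculation described above, the following minimal Python sketch implements the standard digestible-nutrient sum for TDN. Note that the NEL-from-TDN conversion shown is an NRC-style approximation assumed here for illustration, since the exact equation of [15] is not reproduced in this text, and the intake values in the example are invented.

```python
# Minimal sketch (not the authors' code) of the TDN calculation defined above.
# The 2.25 multiplier on digestible ether extract is the conventional
# fat-energy adjustment; the NEL-from-TDN conversion is an assumed NRC-style
# approximation, since the exact equation of [15] is not reproduced here.

def tdn_percent_dm(dCPI, dNDFI, dNFCI, dEEI, DMI):
    """TDN (% DM) from digestible nutrient intakes (g/day) and DMI (g/day)."""
    return 100.0 * (dCPI + dNDFI + dNFCI + 2.25 * dEEI) / DMI

def nel_mcal_per_kg_dm(tdn):
    """Approximate NEL (Mcal/kg DM) from TDN (% DM); coefficients assumed."""
    return 0.0245 * tdn - 0.12

# Example with invented intakes for one ewe:
tdn = tdn_percent_dm(dCPI=280, dNDFI=350, dNFCI=700, dEEI=45, DMI=2000)
print(f"TDN = {tdn:.1f}% DM -> NEL = {nel_mcal_per_kg_dm(tdn):.2f} Mcal/kg DM")
```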
The concentration of protein digestible in the intestine when energy or nitrogen is not limiting rumen microbial growth (PDIN and PDIE) was calculated for each diet using tabular values [17]. The requirements of dairy sheep in terms of protein digestible in the intestine (PDI) were estimated with the equation of [18], and the PDIN balance was then estimated as the difference between intake and requirements (g/day) and as the ratio between PDI balance (g/day) and PDI requirements (g/day), expressed as a percentage. Statistical Analysis To target the first objective of the study (assessment of the effect of dietary energy on milk urea concentration), the results of the experiments were averaged by dietary group and physiological stage, and the treatment means (n = 14) of the two experiments were pooled and used to study the relationships between dietary variables, nutrient intake, and MUC, as detailed below. First, values of MUC were regressed against the dietary concentration and intake of nutrients, as well as the PDIN balance, using a simple linear regression model: MUCi = B0 + C0 × Xi + ei, where B0 = intercept, C0 = regression coefficient, Xi = independent variable, and ei = random error. Second, a stepwise regression analysis was performed to verify whether any multiple regression model could fit better than the simple regression models to predict MUC. All the dietary variables already quoted were tested, using p < 0.15 as entry and stay probability thresholds. Because no variables were kept in the model except for the dietary energy content, no further attempts were made to test multiple regression models. To pursue the second objective of the study (meta-analysis of the available literature on stall-fed dairy sheep), an extended dataset was built by adding to the current experimental results data from other experiments in which dietary CP and energy intake and/or their dietary contents were related to milk urea. The search for relevant papers was done using Scopus with the keywords "sheep and nutrition and milk urea", exploring all the scientific literature available to this engine in the time range 1970–2019. Overall, the search returned 25 papers in the Scopus database, including primary and secondary documents, most of which were not relevant due to their focus on non-nutritional aspects, the lack of accurate information on CP and energy intake (mostly grazing studies), or because urea was measured erratically or only in sheep plasma. Furthermore, all experiments that included diets containing tannins were discarded from the dataset, because tannins are known for their effects in modulating the use of dietary proteins. Low to moderate levels of tannins in the diet may actually reduce protein degradation in the rumen and increase amino acid flow to the small intestine, while high levels can reduce voluntary feed intake and nutrient digestibility. At the end of this screening process, only six studies on stall-fed dairy sheep reported in the literature were selected [6,9,23–26], and their treatment means were merged with those obtained in our study to form the extended dataset. The relationships derived from the extended dataset were calculated using two statistical models: (1) simple linear regression models and (2) meta-analytical mixed models [27], which included the regressors as fixed effects and the "study effect" (Exp) as a random effect [11].
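Before the mixed model is specified below, the simple linear regression step (model 1) can be illustrated with a short sketch; the data points here are invented placeholders, not the study's treatment means.

```python
# Sketch of model (1): simple linear regression of treatment-mean MUC on a
# single dietary variable (here NEL). Data values are illustrative only.
import numpy as np
from scipy import stats

nel = np.array([1.20, 1.35, 1.45, 1.60, 1.75, 1.88])  # Mcal/kg DM (invented)
muc = np.array([66.0, 59.0, 55.0, 46.0, 38.0, 31.0])  # mg/100 mL (invented)

res = stats.linregress(nel, muc)
print(f"MUC = {res.intercept:.1f} + ({res.slope:.1f}) x NEL, "
      f"R2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3g}")
```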
The implemented mixed model (model 2 above) was as follows: MUCij = A0 + Expi + (B0 + bi) × Xij + eij, where A0 = overall intercept, B0 = overall regression coefficient, Xij = independent variable, Expi = random effect of the study on the intercept, bi = random effect of the study on the regression slope, and eij = random error. Composition of the Diets The composition of the diets is summarized in Table 2. Animal Data Mean values of the animal data used in the subsequent regression analysis and their ranges are displayed in Table 3. The dietary CP content tended to be related to MUC (p = 0.09). In contrast, the ratio CP/NEL was strongly related to MUC, performing similarly to dietary NEL (R2 = 0.83, p < 0.001). The sum of the A + B1 protein fractions was more closely related to MUC (R2 = 0.53) than the single fractions B1 (R2 = 0.49) or A (R2 = 0.35). Significant relationships were found when MUC was regressed against the fiber fractions, among which ADL showed the highest coefficient of determination (R2 = 0.57), followed by ADF (R2 = 0.42) and NDF (R2 = 0.27). The relationships between NEL intake (NELI) or CPI and MUC were not significant (data not shown). NDFI was not related to MUC either. On the contrary, NFCI had a weak but significant negative relationship with MUC (R2 = 0.36, p < 0.01), which was also negatively related to starch intake (starchI, R2 = 0.48, p < 0.003). A positive linear relationship was found between the PDIN balance, expressed as g/day, and MUC (R2 = 0.27, p = 0.056). The strength of the relationship markedly increased when MUC was regressed against the PDIN balance expressed as % of the PDIN requirement (R2 = 0.54, p < 0.003). Results of the Meta-Analyses The extended database on stall-fed sheep had a total of 44 dietary treatments, characterized by a wide range of dietary CP and energy contents (CP: from 12.3% to 24.6% DM; NEL: from 1.20 to 1.88 Mcal/kg DM). The diets were based on pelleted concentrates, hay and concentrates, and fresh forages clipped at a height of 5 cm above the soil surface. The trends of MUC with the variation of dietary CP, NEL, or CP/NEL, considering each experiment separately, are depicted in Figures 1–3, respectively. The results of the regression meta-analyses are shown in Table 5. Both fixed and mixed regression models indicated a strong linear relation between dietary CP and MUC (R2 = 0.82 and 0.68 for the mixed and fixed models, respectively, Table 5). The close relationship between NEL and MUC found after the analysis of the pooled data from E1 and E2 (Table 4) was confirmed by the meta-analysis, with similar intercepts and slopes (R2 = 0.73 and 0.46, for the mixed and fixed models, respectively). The ratio CP/NEL was the best single predictor of MUC. The equations estimated by the two models were similar in intercept and slope, whereas R2 was slightly higher in the mixed than in the fixed model (0.93 vs. 0.88). The slopes of these equations were similar to those reported in the E1 and E2 regression analysis (Table 4). A positive relationship was found between NDF and MUC, but only when the mixed model was implemented. As expected, when MUC was regressed against NFC, the slopes were negative and identical between the two models (p < 0.001, Table 5).
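The meta-analytical mixed model specified above can be fitted with standard software. The hedged sketch below uses statsmodels' MixedLM with a random study effect on both intercept and slope, on an invented stand-in data frame; the real analysis used the 44 treatment means of the extended database, and with as few points per study as shown here the fit may warn about convergence.

```python
# Sketch of model (2): meta-regression mixed model with random study effects
# on intercept and slope (St-Pierre-style). Data are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "study":  ["E1"] * 4 + ["E2"] * 3 + ["S6"] * 4,
    "cp_nel": [95, 105, 120, 140, 100, 115, 135, 90, 110, 125, 150],  # g/Mcal
    "muc":    [33, 39, 46, 56, 36, 43, 53, 31, 41, 48, 61],           # mg/100 mL
})

# groups = the random "study effect"; re_formula adds a random slope on cp_nel
model = smf.mixedlm("muc ~ cp_nel", df, groups=df["study"], re_formula="~cp_nel")
fit = model.fit()
print(fit.summary())
```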
As regards the intake of nutrients, only when MUC was regressed against CPI did the relationship become significant, with a higher coefficient of determination and a lower root of the mean squared error (rmse) in the mixed model (R2 = 0.76, p = 0.001, Table 5). Discussion In both experiments, the small particle size of the diets did not seem to impose physical constraints on DMI, which was probably regulated mostly by energy demand. The overall lower DM and CP intake values observed in late-lactation ewes were probably due to the lower requirements of animals with the decreased milk yield typical of this stage; the particularly high MUC values were probably due to an excess of protein concentration compared with the needs of the animals. Results of E1 and E2 Average group data were used in these analyses because MUC is usually sampled for groups of ewes rather than individually, and because of the need to develop relationships comparable with those developed in the meta-analysis, which was based on treatment means from the literature. The non-significant relationship between MUC and dietary CP concentration (Table 4) was very likely due to the small variation in dietary CP across both experiments (from 17.5% to 19.8% of DM; Table 2). Milk urea concentration was more related to the soluble N fractions of the diets than to total dietary CP. In particular, the B1 protein fraction was more strongly associated with MUC than the A protein fraction, despite its lower concentration in the diets. This result could be explained by (i) the wider range of variation of fraction B1 in the diets under study and (ii) the likely variability in its utilization at the rumen level. Indeed, while fraction A is usually completely fermented in the rumen, part of fraction B1 can escape the rumen, depending on the combination of its degradation and passage rates [28], which in turn affects the partitioning of potentially degradable CP into rumen degradable protein (RDP) and rumen undegradable protein (RUP). Indeed, the dietary percentages of RDP and RUP influenced milk urea N in sheep fed almost isoenergetic diets, with higher milk urea N values in ewes fed 14% RDP and 4% RUP (DM basis) than in those fed 12% RDP and 4% RUP (DM basis, [25]). The significant regression of MUC against dietary NEL found in this study on dairy sheep (Table 4) is in agreement, as a general trend, with previous studies on dairy cows [7] and goats [8]. This result suggests that enhancing the energy content of the diet increases the uptake of N by rumen microorganisms and reduces the gluconeogenetic utilization of amino acids, thus reducing the wastage of N. In the present study, the range of dietary NEL was set to be much wider than in the other previously cited experiments conducted on sheep [6,9], in order to cover the range of energy densities frequently experienced by lactating dairy sheep (from 1.20 to 1.95 Mcal of NEL per kg DM [29]). Interestingly, although MUC was highly associated with NEL, it was poorly associated with dietary NFC and NDF, suggesting that overall energy availability was more important as a determinant of MUC than the carbohydrate sources (fibrous or starchy) from which the energy was derived. In this study, the passage rate was high for all the diets due to the small particle size of the pellets, as reported elsewhere [16,30].
This suggests that fermentable energy in the rumen was probably the main limiting factor for N utilization, as also indicated by the negative relationship between NFC content and MUC and the positive relationship between fiber fraction concentrations and MUC found in our study. Another reason for the high levels of MUC in the sheep fed high-fiber diets could be poor synchronization between energy and N supply at the rumen level, as shown in sheep [31]. As expected, MUC was strongly and positively related to the PDIN balance (expressed as %), in agreement with the results obtained in Saanen goats [8]. These authors reported a regression equation with a higher coefficient of determination (R2 = 0.92) than that found in our experiment (R2 = 0.54). Interestingly, the slopes of the regressions for dairy sheep and goats were similar (0.28 and 0.34 mg/dL, respectively), unlike the intercepts (26.9 and 22.9 mg/dL, respectively). The lower value of the intercept found in the goat regression could be ascribed to the more efficient recycling of urea from blood to the rumen in goats than in sheep [32]. According to this author, the greater secretion of saliva and the broader rumen surface for NH3 absorption in goats would explain the differences in MUC values between these animal species. Another possible explanation of the higher values of MUC in sheep compared with goats (and cattle) is their higher consumption of sulfur-containing amino acids for wool production. Therefore, sheep have a lower efficiency of conversion of metabolizable protein to net protein and thus higher ammonia wastage than cattle and goats, as suggested by [33] and [10]. Results of the Meta-Analyses Pooling the data of E1 and E2 with those from other studies allowed us to evaluate to what extent the relationships found in the present research fit a broader database (Table 5). The close relationship (R2 = 0.82, p < 0.001) between dietary CP and MUC observed in the meta-analyses confirms that the weak relationship between these variables found in E1 and E2 was strongly determined by the small range of CP variation between the experimental diets. Indeed, when the experiments considered a wide range of dietary CP concentrations (studies 1, 6, and 7; Figure 1), the relationship between MUC and CP was tight and linear. This did not occur in the experiments with a small range of dietary CP concentrations (studies 2, 3, 4, 5, and 8, Figure 1), where other experimental variables were likely more influential. Similarly, the linear relationship between MUC and NEL was evident only when a wide range of NEL was considered (experiments E1 and E2, which correspond to studies 2 and 3, respectively, in Figure 3). In contrast, all the studies included in the reviewed dataset were characterized by a good relationship between MUC and the CP/NEL ratio (Figure 2). Moreover, the residual plot distribution from the mixed model analysis, which includes the random effect of study, showed that the distribution of residual errors was closer to zero when CP/NEL, rather than CP or NEL, was used as the single predictor of MUC. The adjusted values of MUC, based on the model residuals and the equation obtained from the meta-regression model, plotted against CP/NEL values are shown in Figure 4. This is in agreement, as a general trend, with what was found in a study on dairy cows [4], suggesting that the ratio between dietary protein and energy content is more related to MUC than their individual concentrations or intakes.
The above considerations confirm that MUC can have practical application for assessing the adequacy of protein nutrition in dairy ewes [10] and also point out that the 'modulation effect' of dietary energy has to be taken into account. According to [34], other dietary components, such as tannic phenols, can contribute to this modulation, particularly when ruminants are exposed to plant secondary metabolites, which is a common situation under grazing conditions. Practical Applications Milk urea concentration in dairy sheep farms can be easily and cost-effectively analyzed in bulk tank samples. This suggests that MUC can be used as a diagnostic tool to monitor the nutritional status of groups of lactating ewes, in view of optimizing both milk [6] and reproductive performance [35] while curbing the release of N from excreta. Based on the results of the meta-analysis (Table 5), the following equation relating MUC to the dietary CP/NEL ratio was derived: MUC = −13.7 + 0.5 × CP/NEL (6). This equation was used in Table 6 to predict the level of MUC corresponding to different CP and NEL concentrations in the diet of lactating ewes. Thus, as long as either the CP or the NEL concentration is known, the other variable can be predicted by measuring MUC. The grey area reported in Table 6 represents a risk zone; it reports MUC values close to or higher than 56 mg/100 mL, indicated by [35] as a threshold above which there is a high probability of impairment of reproductive function in sheep, with a decrease in conception rate. The MUC levels reported in Table 6 are higher than the corresponding estimates based on the regression equation developed by [6] using their experimental data and data from the literature, except for diets with high NEL concentrations (equal to or higher than 1.6 Mcal/kg of DM) and CP concentrations (equal to or higher than 15% CP, DM basis), for which the values are similar between the two studies. This discrepancy may be related to the fact that the database used by Cannas et al. [6] to develop their equations included not only milk urea data but also plasma urea data from experiments on non-lactating sheep, which had much lower dietary CP and CPI than the lactating animals. In contrast, our extended database focused on milk urea (by itself usually higher than blood urea under equal nutritional conditions) from lactating ewes fed, in most cases, diets characterized by a positive PDI balance and high dietary CP and CPI. Indeed, not only the dietary CP but also the CPI positively affects MUC (Table 5).
In addition, [6] considered both blood and milk urea in their relationship, thus including in their analysis dietary treatments applied to dry ewes, characterized by lower protein levels than those typical of lactating sheep rations. It is noteworthy that MUC levels as high as 50 mg/dL are not uncommon in sheep flocks grazing immature pasture with a high content of readily fermentable N. Therefore, the incorporation of high-CP dietary treatments in the meta-analysis seems an important way to account for these excess conditions, which are frequently encountered in commercial dairy flocks. Equation (6) indicates that measurements of MUC can provide an accurate prediction of the CP/NEL ratio of the diet. Thus, if either the CP or the NEL concentration is known, MUC can be used to estimate the other, unknown variable, e.g., by interpolation of the data in Table 6. Conclusions The experiments undertaken in this study found a marked negative linear relationship between dietary energy content and milk urea level in dairy ewes in mid and late lactation. The dietary NEL content was the best single predictor of MUC, closely followed by the CP/NEL ratio. In contrast, MUC was not associated with dietary CP content, because of the very small range of CP variation between the tested diets. The relevance of the dietary energy content and of the CP/NEL ratio as predictors of MUC was confirmed by the meta-analyses of the extended database. Further research is needed to improve the value of MUC as a nutritional index in the management of feeding in dairy sheep, especially under the grazing conditions not considered in this research.
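To make the practical application described above concrete, the following sketch inverts Equation (6) to screen bulk-tank MUC values. The 56 mg/100 mL reproductive-risk threshold follows [35]; the unit handling (CP expressed as % DM, multiplied by 10 to obtain g/kg so that CP/NEL comes out in g/Mcal) is an assumption made explicit in the comments, and all numeric inputs are illustrative.

```python
# Sketch of the diagnostic use of Equation (6): MUC = -13.7 + 0.5 * (CP/NEL).
# Threshold from [35]; unit conversion (CP % DM x 10 = g/kg DM) is assumed.

RISK_THRESHOLD = 56.0  # mg/100 mL, reproductive-risk threshold from [35]

def cp_nel_from_muc(muc):
    """Dietary CP/NEL ratio (g/Mcal) implied by a bulk-tank MUC (mg/100 mL)."""
    return (muc + 13.7) / 0.5

def cp_percent_from_muc(muc, nel):
    """Back out dietary CP (% DM) from MUC when NEL (Mcal/kg DM) is known."""
    return cp_nel_from_muc(muc) * nel / 10.0  # g/kg DM -> % DM

muc = 58.0  # illustrative bulk-tank value
print(f"Implied CP/NEL = {cp_nel_from_muc(muc):.0f} g/Mcal")
print(f"Implied CP at NEL = 1.5 Mcal/kg DM: {cp_percent_from_muc(muc, 1.5):.1f}% DM")
if muc >= RISK_THRESHOLD:
    print("Warning: MUC at or above the 56 mg/100 mL reproductive-risk zone")
```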
Analyzing Moment Arm Profiles in a Full-Muscle Rat Hindlimb Model† Understanding the kinematics of a hindlimb model is a fundamental aspect of modeling coordinated locomotion. This work describes the development process of a rat hindlimb model that contains a complete muscular system and incorporates physiological walking data to examine realistic muscle movements during a step cycle. Moment arm profiles for selected muscles are analyzed and presented as the first steps toward calculating torque generation at the hindlimb joints. A technique for calculating muscle moment arms from muscle attachment points in a three-dimensional (3D) space has been established. The model accounts for the configuration of adjacent joints, because the dependence of biarticular muscles' moment arms on the configuration of the adjacent joint is a critical aspect of moment arm analysis that must be considered when calculating joint torque. Moment arm profiles from isolated muscle motions are compared to two existing models. The variability in the moment arm profiles suggests changes in muscle function during a step. Introduction In recent years, models of multimuscled systems have sought to replicate both the kinematics and control regimes of living organisms. These models are now being used to explore the challenges of coordinating many actuators to move a relatively small number of joints. Understanding how muscled organisms rapidly coordinate overactuated systems to accomplish tasks could suggest new design frameworks for the robotics community. The agility and elegance of even the simplest muscled organism is an enviable benchmark for robots, which struggle to emulate the power and adaptability of animals. Modeling the kinematics of a multimuscle system and its associated neurological control signals is a complex task that requires an understanding of both engineering and biological principles. Modeling muscle coordination could provide insight into the impact of, and evolutionary incentive for, utilizing overactuated systems. The overabundance of muscles with respect to the number of degrees of freedom has been a focus of research for many years. Muscle redundancy has long been characterized as a defense mechanism to preserve limb actuation in the case of individual muscle failure. Recently, this assumption has pivoted away from a physiological imperative and toward a task-specific evolutionary development [1,2]. Muscle redundancy may be more of an evolutionary adaptation for carrying out a wide range of activities, as well as a safeguard against injury. This idea is in line with studies that characterize muscles based on task-driven physiological parameters called muscle synergies [3–5]. These synergies have been shown to reduce the computational burden on the nervous system by organizing muscle activity into functional groups. Rather than the nervous system coordinating the activity of each muscle individually, higher-level processing units may direct muscle groups to carry out abstract limb motion to accomplish a goal. Further research is needed to better understand the kinematic and neurological implications of multimuscle coordination. Previous work by Hunt et al. [6] has produced a three-dimensional (3D) model of a rat skeleton with muscle-driven locomotion controlled by modeled motorneuron signals.
This model is implemented in AnimatLab [7], software that simulates both a 3D physics environment and a neural design environment. AnimatLab allows users to simultaneously monitor the biomechanical properties of simulated animals while also providing insight into the neurophysiological connections associated with locomotion. Hunt's model abstracts rat walking through the contraction of monoarticular muscles, which flex and extend the hindlimb joints. The model demonstrates the capability of "deconstructing" an organism's joint kinematics into modeled motorneuron signals, which can then act as the drivers for artificial locomotion. Here, we expand upon the model by incorporating a complete muscular system, including biarticular muscles that span two joints. Accounting for biarticular muscles is mandatory for understanding physiological locomotion in rodents. For example, in the rat, the average cross-sectional area of biarticular muscles is about 33% larger than that of monoarticular muscles [8]. Human leg models have also shown that biarticular muscles play a critical role in the generation of forces that monoarticular muscles lack [9–11]. Therefore, including these muscles, their forces, and their effect on joint torques is critical to understanding nervous system control of locomotion. Joint torque calculations depend on both accurate muscle force modeling and robust moment arm profiles. Calculating muscle moment arms is a complex task, as muscle paths are seldom unidirectional and different analytical methods can yield varied results [12–15]. Moment arm calculations have been shown to affect the error of force prediction models [16]. A technique rooted in biomechanical fundamentals is necessary to understand the varied nature of moment arm profiles during walking. Muscle moment arm profiles have been developed for static rat hindlimb models [17] and for the mouse hindlimb [18]. In these works, moment arms are calculated by fixing all joints except one and moving that joint through its range of motion. In combination with joint angle data from a step, these data can be used to analyze the moment arms of monoarticular muscles in locomotion. However, to consider the joints in isolation is to ignore the dependency of the function of biarticular muscles on the coordinated motion of adjacent joints. The moment arms, and the resultant torque, of the biarticular muscles of one joint depend on the motion of the other joint. Moreover, biarticular muscle length changes and, thus, their dynamics also depend on the coordinated motion of adjacent joints [19]. This methodology is therefore not sufficient for analyzing complete physiological locomotion. This work describes a methodology rooted in fundamental principles to generate moment arm profiles from muscle attachment points for use in analyzing physiological locomotion. Specifically, this method is well suited for analyzing the moment arms of biarticular muscles, muscles that wrap around joints, and muscles that contain multiple via points. In contrast to the single-joint analysis of the comparative models, this work includes the effects of simultaneous multijoint motion during a normalized step cycle, producing physiologically relevant muscle moment arms as the hindlimb moves through stepping. The method is validated against two existing hindlimb models by comparing the moment arm profiles of single-joint motion. Finally, 3D moment arm profiles for multijoint motion are developed and examined.
Hindlimb Bone Segments Four hindlimb segments had previously been scanned from the bones of a brown rat, Rattus norvegicus [6]. The pelvis, femur, tibia, and foot of the hindlimb were articulated using hinge joints, as shown in Figure 1. Hinge joints allow the connected segments to move through a range of flexion and extension, allowing for the application of sagittal plane motion from X-ray video analysis of parasagittal plane locomotion. For this analysis, only sagittal plane motion was considered, resulting in three degrees of freedom. Joint centers were placed using methods similar to those of both Johnson et al. [17] and Charles et al. [18]. The hip joint was placed such that the femoral head rested within the acetabulum. The knee joint was placed between the condyles of the femur such that the tibia and femur do not collide within the joint's range of motion. The ankle joint was placed between the tibial malleoli, proximal to the calcaneus. Joint angle limits were determined from the work of Fischer et al. [20]. Centers of rotation are stationary relative to the reference frame of the distal body and, therefore, do not undergo relative translation during walking. As motion is considered only in the sagittal plane, all joint axes remain parallel during motion. Anatomically Derived Muscle Paths The addition of muscles to the model was guided, in part, by 3D data gathered from anatomical dissections [17] as well as by Greene's primer on rat anatomy [21]. The xyz muscle attachment coordinates from Johnson et al. [17] served as a baseline approximation for each attachment area. Due to the differences in scanned bone sizes and the misalignment of bone-centric coordinate systems, utilizing the xyz data directly was not sufficient for generating a physiologically plausible model. AnimatLab does not restrict muscle pass-through on bone structures, allowing muscles to pass completely through bone during the walking cycle. Moreover, AnimatLab does not have built-in capabilities for muscle wrapping. To make a physiologically relevant model, the muscles were therefore guided from origin to insertion using via points along paths that avoid bone pass-through, according to the descriptions and figures from Greene. Special care was taken to guide muscles around joints and insert them onto structures that were physiologically similar to their real-life counterparts. No effort was made to avoid muscles passing through one another. The colored muscle paths in Figures 2 and 3 represent muscle lines of action. Attachment points are shown as single points representing the centralized attachment area for each muscle. Coloring is included to aid in model design, and muscles are sorted based on their general muscle activity, although this colorization has no impact on muscle parameters or moment arm profiles. Figure 2. Muscle paths were determined based on anatomical descriptions and diagrams from Greene [21]. Enlargements of (A) the knee and (B) the hip detail the complex interconnection of muscle attachment points. Attachment points are stationary within local bone coordinate systems. Attachment points are carefully placed such that there is no bone pass-through during the physiologically representative walking cycle. A Physiologically Representative Walking Profile X-ray video analysis of a walking rat was used to generate sagittal plane motion of the hip, knee, and ankle for a normalized stride period during walking on a flat plane, as reported elsewhere [20,22,23].
Average joint motion profiles from the X-ray data were decomposed into sum-of-sines equations in MATLAB (MATLAB 2017b, The MathWorks, Inc., Natick, MA, USA) and then implemented in AnimatLab. In AnimatLab, a joint angle motor induces the motion according to the acquired equation. Applying joint motion directly in the 3D environment allows for muscle motion analysis during multijoint stepping motions, aiding in the analysis of muscle movement with respect to the hindlimb segments. Calculating Muscle Moment Arms A muscle moment arm describes the distance of the muscle line of action from a joint axis, as shown in Figure 4. This distance is critical to analyzing the muscle's ability to generate torque about the joint axis. For example, the biceps femoris posterior can generate a relatively large amount of torque about the knee with little force because of its large moment arm. The plane of interest and its coordinate system are defined by the joint center and the joint axis representing flexion/extension (blue). Joint axes are defined using the same convention as Johnson et al. [17] and Charles et al. [18]. Orthogonal joint axes represent abduction/adduction and inversion/eversion. (B) The free muscle segment that connects adjacent bone segments (monoarticular muscles) or connects to the bone segment after the next (biarticular muscles) is projected onto the plane of interest. This projected free segment is called the projected muscle segment. In AnimatLab, muscle attachment points remain stationary relative to the bones they are attached to. For this reason, the moment arms of multisegment muscles (muscles that wrap around structures and have more than two attachment points) depend solely on the free muscle segment spanning the joint(s), which undergoes a length change over joint motion. Moment arm measurements are taken at discrete times during motion. To calculate the moment arm of a muscle segment, the attachment points of the free muscle segment, p_att,i, are projected onto the plane of interest defined by the joint axis a (unit vector) and the joint center (taken as the origin): p_proj,i = p_att,i − (p_att,i · a)a (1). Muscle segments are represented by the subtraction of the projected attachment positions of the free segment, creating the projected muscle segment vector p_f: p_f = p_proj,2 − p_proj,1 (2). The projected muscle segment is then crossed with the joint axis to determine the moment arm's direction. The sign of the dot product between a projected muscle attachment path p_proj,i and the moment arm direction determines whether the muscle is inducing positive or negative joint motion. The final moment arm length is represented by the scalar value r, the perpendicular distance from the joint center to the line of the projected segment: r = |p_proj,1 × p_f|/|p_f| (3). Sensitivity Analysis To determine the impact that muscle attachment geometry has on the moment arm profiles, a sensitivity analysis was performed. For the isolated joint motion simulation, each free muscle segment attachment point was moved by 1 mm independently, and the moment arm profile was recalculated. The moment arm profiles for the four attachment movements were then examined. For the biceps femoris anterior, the proximal attachment was moved cranially and caudally along the body of the ischium, and the distal attachment was moved proximally and distally along the lateral condyle of the tibia. The proximal attachment of the pectineus was moved cranially/caudally along the acetabulum, and the distal attachment was moved proximally/distally along the linea aspera of the femur. The proximal attachment of the semimembranosus was moved dorsally/ventrally along the body of the ischium, and the distal attachment was moved proximally/distally along the dorsomedial ridge of the tibia.
The proximal attachment of the vastus intermedius was moved proximally/distally along the line of the femur, and the distal attachment was moved proximally/distally across the surface of the tibial tuberosity. The proximal attachment of the medial gastrocnemius was moved proximally/distally along the tibial line of action, and the distal attachment was moved dorsally/ventrally along the posterior calcaneus of the foot. The proximal attachment of the tibialis anterior was moved proximally/distally along the extensor surface of the tibia, and the distal attachment was moved proximally/distally along the dorsiflexor surface of the foot. Results The moment arms of three biarticular muscles are shown in Figure 5, demonstrating the impact that joint selection and leg configuration have on moment arm calculations. As part of model validation, single-joint articulation was compared to moment arm data from Johnson et al. [17] and Charles et al. [18]. These works focus on moment arm generation in static models of the rat and mouse hindlimb, respectively. The models in these studies were moved through a much larger range of motion than that typically seen in rat walking, and the data have, therefore, been truncated to match the physiological joint ranges studied in this model. The associated models do not include multijoint motion, an important characteristic for determining the muscle moment arms of biarticular muscles. Figure 6 shows the relative moment arm sizes for the range specified, along with the joint of interest and the range of motion. For proper size comparison, the moment arms have been scaled to the femur lengths of the respective animals (mouse = 16.25 mm [18], rat = 35.00 mm [17], AnimatLab model = 35.75 mm). Joints that are not in motion during moment arm analysis are held at zero angle, as specified in [17] and [18]. Notably, these moment arm profiles lack the influence of biarticularity on the selected muscles. Figure 6. y-Axes are moment arm lengths normalized to the model's femur length. Figure 7 shows the sensitivity analysis of the moment arm profiles as the free muscle segment attachment points are moved. In general, the moment arm profile is most sensitive to the movement of the attachment point closest to the joint. Figure 8 shows the cyclical moment arm profiles of biarticular muscles under the action of a physiological stepping motion. The red surfaces represent the range of moment arm lengths reachable within the bounds of physiological walking. These moment arm profiles demonstrate the extreme variability of muscle moment arms for different joint angles. Using the moment arm profile as a simple lookup table for finding the semitendinosus accessory moment arm about the knee at −30 degrees, for example, could yield two different moment arm lengths that differ by about 35%. Not only are the moment arm profiles of individual muscles unique among muscles of similar action, but moment arm profiles also differ about the same rotational axes of different joints. In Figure 8, the three biarticular muscle moment arm profiles are shown with respect to their two joints of impact. When analyzing the moment arm about the knee, the biceps femoris anterior generates moment arms within the range of 1.5–4.0 mm. By contrast, about the hip the same muscle generates moment arms within the range of 3.7–7.7 mm. The range of moment arms of the plantaris about the ankle varies from 6 to 1 mm, while it remains almost constant at about 6 mm at the knee.
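Profiles such as these can be reproduced from attachment coordinates with the projection-and-cross-product procedure described in the Methods. The sketch below is a plausible numerical reconstruction under the stated definitions (joint center at the plane origin, unit joint axis), not the authors' verbatim implementation, and the coordinates in the example are invented.

```python
# Plausible reconstruction of the moment arm procedure (Equations (1)-(3)):
# project the free-segment attachments onto the plane through the joint
# center normal to the joint axis, then take the signed perpendicular
# distance from the joint center to the projected segment line.
import numpy as np

def moment_arm(p_att1, p_att2, joint_center, joint_axis):
    """Signed moment arm of a straight free muscle segment (input units)."""
    a = joint_axis / np.linalg.norm(joint_axis)
    def project(p):                        # Equation (1)
        v = p - joint_center
        return v - np.dot(v, a) * a
    p1, p2 = project(p_att1), project(p_att2)
    p_f = p2 - p1                           # Equation (2): projected segment
    r = np.linalg.norm(np.cross(p1, p_f)) / np.linalg.norm(p_f)  # Equation (3)
    # Sign: cross the segment with the axis, then test which side of the
    # joint center the muscle line passes on.
    sign = np.sign(np.dot(np.cross(p_f, a), p1))
    return (sign if sign != 0 else 1.0) * r

# Example with invented coordinates (mm), hinge axis along x:
r = moment_arm(np.array([0.0, 10.0, 30.0]), np.array([0.0, -8.0, -25.0]),
               joint_center=np.array([0.0, 0.0, 0.0]),
               joint_axis=np.array([1.0, 0.0, 0.0]))
print(f"moment arm = {r:.2f} mm")
```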
Normalized biarticular moment arm profiles are plotted against their normalized muscle lengths in Figure 9. Muscle length coupled with moment arm length can be used to infer torque directions during the step cycle. During the stance phase, the hip-to-knee moment arm ratio changes from 0.6 to 1 for the biceps femoris posterior, compared with a change from 1 to 2 for the biceps femoris anterior. Interestingly, at the same time, the moment arms at the knee remain rather constant for all muscles shown. Both the biceps femoris posterior and anterior show pronounced shortening during the stance phase, while the length of the plantaris remains constant. Discussion This work describes the development process for a model of the rat hindlimb with a complete set of muscles. It expands upon previous models that have linked the kinematics of the hindlimb to the nervous system, with the intent of understanding the interactivity of the nervous system and the musculature. An expanded muscle model provides the fundamental framework for future work on intermuscle coordination for limb motion. In addition to the physical makeup of the leg, a physiological walking process has been adapted from existing work in order to analyze the moment arms of each muscle. The sensitivity analysis results shown in Figure 7 demonstrate the impact that muscle attachments can have on moment arm calculations. Trends in the muscle attachment motions provide insight into the possible differences between the associated models. Moving the proximal attachment point of the pectineus caudally along the pelvis shifts the functional transition angle (the angle at which the muscle changes from a flexor to an extensor) more negative, similar to the transition angle demonstrated by Charles et al. [18]. Moving the distal attachment of the semimembranosus distally along the tibia has a similar effect. While attachment point motion is capable of explaining some differences between the models, there are overall trends that differ. As the knee extends, the AnimatLab moment arm profile increases. A similar but less dramatic trend is seen in the model by Charles et al. [18], but the trend is reversed in the model by Johnson et al. [17]. This could be the result of the distal attachment point of the vastus intermedius being moved further along the length of the femur, or of the inclusion of the distal attachment point in the tibial reference frame. Simplifying muscle geometry to a single path is an effective method for calculating moment arm lengths, but it abstracts away the complex muscular environment of the hindlimb. Broad muscles with long insertion lines (e.g., the biceps femoris muscle) are involved in a complex array of joint motions, making it difficult to identify a single moment arm length to represent the entire muscle. For this reason, larger muscles have been separated into multiple lines of action. This method is not a perfect representation of a muscle's torque-generating capabilities, but it is an adequate method for exploring the impact of neurally controlled muscular force generation in a simulated model environment. In the future, the simulation could be expanded by including more lines of action or by generalizing the muscle insertions in such a way as to accurately simulate the impact that broad muscles have on torque generation. The location of the joint center has a large impact on the calculation of muscle moment arms, since their length is defined relative to the position of the joint.
Hinge joints in the rat are the result of the articulation of nonspherical bone surfaces, which cause the joint center to migrate about the articular surfaces during motion. As such, muscle moment arm lengths can change during locomotion through the motion of the joint center alone. The manual placement of joint centers introduces an inherent simplification into the model, which abstracts away the complicated nature of joint motion. This simplification is the result of software limitations wherein the joint center maintains a constant position relative to the articulating surfaces. In the future, a more detailed version would incorporate a moving joint center to capture its impact on 3D muscle moment arm profiles. Isolated joint motion, while limited in scope, can provide some insight into muscle function. Figure 6 compares isolated joint muscle moment arms against two existing hindlimb models. This information can be used as a litmus test to infer which (if any) limb motion (flexion/extension, abduction/adduction, etc.) the muscle is most likely to impact. For example, the isolated joint moment arm profile for the pectineus shows that the muscle switches from a hip extensor to a hip flexor as the joint extends. This transition in muscle function is unique among the monoarticular muscles in the model. Most monoarticular muscles have a single primary action while walking. The tibialis anterior, a muscle that runs along the length of the tibia and inserts into the foot, acts solely as an ankle dorsiflexor. Since muscles are only capable of generating contractile tension along their lines of action, the muscle moment arm is the definitive factor in the directionality of the torque applied to a joint. Modeling torque generation accurately in a hindlimb model is a fundamental step toward generating the proper control signals for coordinated, muscle-driven locomotion. By contrast, most biarticular muscles serve multiple roles. Rats seldom move a single joint in their hindlimb while walking, which makes it necessary to consider biarticular moment arm length changes induced by both spanned joints. Figure 8 shows 3D surfaces representing the viable moment arm lengths as the spanned joint angles change during physiological walking. For most monoarticular muscles, like the tensor fasciae latae and tibialis posterior, the moment arm profile is almost completely coupled to the joint of interest. Conversely, muscles like the biceps femoris posterior and the plantaris are dramatically affected by both joints that they span. To determine the correct moment arm length, it is necessary to know the configuration of both associated joints. The variability in the moment arm and length change profiles (Figures 8 and 9) indicates different, and perhaps changing, functions of the muscles during the step. Previous studies of biarticular muscle function often relied on the simplifying assumption of constant muscle moment arms [25,26]. The extremely varied moment arm profile of the plantaris about the ankle (Figure 8) shows that, although the muscle always acts as an extensor, ankle extension may rely on the assistance of other muscle contractions during times when the plantaris' moment arm is short. Considering their pronounced shortening throughout the stance phase, the biceps femoris posterior and anterior could be used as motors, while the rather constant length of the plantaris suggests an energy-transferring function ("ligamentous action" [27]) resulting in synchronized joint movements.
These distinctions could provide insight into how the nervous system prioritizes motorneuron activation when inducing motion. Conclusions We developed a method that uses vector analysis to calculate moment arms during physiological walking. Moment arms for a set of muscles were compared to those in the literature, demonstrating moment arm ranges (Figure 6) comparable to existing hindlimb muscle models. In addition to single-joint model validation, this work demonstrates the importance of multijoint moment arm analysis. Moment arm profiles that capture the effects of simultaneous multijoint action demonstrate the complex roles that muscles can assume while walking. Future analysis will expand upon our planar model by incorporating additional degrees of freedom at the hip and ankle. Joint torque calculations coupled with moment arm profiles can be used to determine muscle forces through a distribution scheme. Once this distribution scheme is developed, activation curves for individual muscles can be derived, replicating the torque data observed in walking animals. Understanding how the kinematics of the system manifest in the model is essential to developing a complete neuromechanical model of rat locomotion that utilizes physiologically relevant data to understand neural control and redundant muscle coordination.
Focus on: The Burden of Alcohol Use—Trauma and Emergency Outcomes Hospital emergency departments (EDs) see many patients with alcohol-related injuries and therefore frequently are used to assess the relationship between alcohol consumption and injury risk. These studies typically use either case-control or case-crossover designs. Case-control studies, which compare injured ED patients with either medical ED patients or the general population, found an increased risk of injury after alcohol consumption, but differences between the case and control subjects partly may account for this effect. Case-crossover designs, which avoid this potential confounding factor by using the injured patients as their own control subjects, also found elevated rates of injury risk after alcohol consumption. However, the degree to which risk is increased can vary depending on the study design used. Other factors influencing injury risk include concurrent use of other drugs and drinking patterns. Additional studies have evaluated cross-country variation in injury risk as well as the risk by type (i.e., intentional vs. unintentional) and cause of the injury. Finally, ED studies have helped determine the alcohol-attributable fraction of injuries, the causal attribution of injuries to drinking, and the impact of others' drinking. Although these studies have some limitations, they have provided valuable insight into the association between drinking and injury risk. Alcohol consumption is a leading risk factor for morbidity and mortality related to both intentional (i.e., violence-related) and unintentional injury. In 2000, 16.2 percent of deaths and 13.2 percent of disability-adjusted life years (DALYs) from injuries, worldwide, were estimated to be attributable to alcohol (Rehm et al. 2009). Alcohol affects psychomotor skills, including reaction time, as well as cognitive skills, such as judgment; as a result, people drinking alcohol often place themselves in high-risk situations for injury. Much of the data linking alcohol with nonfatal injuries has come from studies conducted in hospital emergency departments (EDs). As described in this article, in these settings the prevalence of alcohol involvement in the patients' injuries, as measured by a positive blood alcohol concentration (BAC) at the time of arrival in the ED or self-reported drinking prior to the injury event, is substantial. To accurately assess the relationship between alcohol use and injury risk, ED studies generally have used probability sampling designs, in which all times of day and days of the week are represented equally. This approach circumvents biases associated with sampling that might occur, for example, if samples were identified only on weekend evenings, when a higher prevalence of drinking and, possibly, of injury might be expected. Although the high prevalence rates mentioned above suggest that alcohol is an important risk factor for injury, they do not provide the information necessary to evaluate the actual level of risk for injury at which drinking places the individual. Data to establish the drinking-related risk of both intentional and unintentional injury in ED samples generally have come from two types of study design: case-control studies and case-crossover studies. This article summarizes the findings of these studies and explores specific aspects of the relationship between alcohol use and injury risk.
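Both designs summarized in this article ultimately report risk as an odds ratio (OR). As a worked illustration of the underlying arithmetic, the short sketch below computes an OR and its 95% confidence interval from an invented 2×2 table; the counts are chosen purely for illustration (so that the OR lands near the pooled 2.4 discussed below) and are not ERCAAP data.

```python
# Worked illustration of case-control odds ratio arithmetic (invented counts).
import math

a, b = 240, 160  # injured cases: BAC-positive, BAC-negative
c, d = 100, 160  # medical controls: BAC-positive, BAC-negative

or_hat = (a * d) / (b * c)                       # cross-product odds ratio
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR) (Woolf)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```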
Case-Control Studies Two types of case-control studies have been used to estimate the risk of injury from drinking for patients treated in the ED. The most commonly used type of case-control study uses noninjured (i.e., medical) patients attending the same ED during the same period of time as quasi-control subjects. These patients presumably come from the same geographic area as the injured patients and likely share other characteristics (e.g., socioeconomic status). Researchers conducted a meta-analysis of 15 ED studies from 7 countries that participated in the Emergency Room Collaborative Alcohol Analysis Project (ERCAAP) (Cherpitel et al. 2003a), all of which used the same methodology and instrumentation. The studies only included those patients who arrived at the ED within 6 hours of the injury event and excluded those medical patients who primarily were admitted to the ED for alcohol intoxication or withdrawal symptoms. The meta-analysis found a pooled odds ratio (OR) of injury associated with a positive BAC (≥0.01 percent) of 2.4 (95% CI = 1.9-3.0); moreover, the OR was higher (OR = 2.9) for patients with higher BAC levels (≥0.10 percent) (Ye and Cherpitel 2009). A similar likelihood of injury (OR = 2.1) was found for patients who reported drinking within 6 hours prior to the injury event, regardless of time of arrival in the ED. One concern with this approach of using medical patients as control subjects for injured patients is the possibility of underestimating the true risk of injury associated with drinking. Noninjured patients have been found to be heavier drinkers than the people in the general population from which they come who do not seek emergency care (Cherpitel 1993). Thus, these patients may be attending the ED for conditions related to their drinking (in addition to those associated with alcohol intoxication or withdrawal). In the second type of case-control study used to estimate the risk of injury from drinking in ED patient samples, people in the general population of the community from which the ED patients come are used as control subjects. These individuals presumably are free of conditions that may be related to their drinking. Only four such studies have been reported to date, including two from Australia (McLeod et al. 1999; Watt et al. 2004) and one each from the United States (Vinson et al. 2003) and Mexico (Borges et al. 1998). In these studies, the ORs ranged from 6.7 in the Mexican study to 3.1 in the U.S. study and around 2.0 in the Australian studies. Moreover, both the U.S. and the Australian studies demonstrated a dose-response relationship. Case-Crossover Studies The second study design used to estimate the risk of injury from alcohol consumption is the case-crossover study (Maclure 1991). This approach is thought to circumvent at least some of the problems raised with the case-control design, such as demographic and other differences between case and control subjects that may be related to both alcohol consumption and the likelihood of injury. There are two approaches to the case-crossover design, both of which use injured patients as their own control subjects, thereby theoretically reducing confounding of the alcohol-injury relationship by stable risk factors, such as age and gender. • The matched-interval approach. Studies using the matched-interval approach compare drinking within 6 hours prior to the injury event with drinking during a predetermined control period, such as the same 6-hour period during the previous day or previous week.
Such studies have reported ORs ranging from 3.2 (based on any drinking at the same time the previous day) (Vinson et al. 2003) to 5.7 in a 10-country study (based on any drinking at the same time the previous week) (Borges et al. 2006b). Both studies demonstrated a dose-response relationship. Thus, the analysis of Vinson and colleagues (2003) determined ORs ranging from 1.8 with consumption of 1 to 2 drinks prior to injury to 17 with consumption of 7 or more drinks. Likewise, Borges and colleagues (2006b) found ORs ranging from 3.3 with consumption of one to two drinks to 10.1 with consumption of six or more drinks prior to injury. • The usual-frequency approach. This approach compares the patients' drinking in the 6 hours preceding the injury with their expected drinking during that time, based on their usual frequency of drinking. In a study using this approach that included 28 EDs across 16 countries, the estimated ORs ranged from 1.05 (Canada) to 35.0 (South Africa), with a pooled estimate of 5.69 (95% CI = 4.04-8.00) (Borges et al. 2006a). Comparison of Methods to Estimate Risk The results described above indicate that estimates of the risk of injury in samples from the same country can vary depending on the method used. For example, in analyses across eight countries participating in ERCAAP, the case-control method found that the pooled OR of injury for self-reported drinking prior to the event was 2.1, compared with an OR of 5.2 when the usual-frequency method of case-crossover analysis was used (Ye and Cherpitel 2009). Furthermore, the World Health Organization (WHO) Collaborative Study on Alcohol and Injury, which used the case-crossover method across 12 countries, found a pooled OR of injury of 6.8 using the usual-frequency approach, compared with 5.7 using the matched-interval approach (Borges et al. 2006b). Case-control designs may underestimate the risk of injury if noninjured control subjects are presenting to the ED with other conditions related to their drinking, whereas both the matched-interval and usual-frequency approaches to the case-crossover design are subject to recall bias regarding past drinking. Effects of Other Drug Use None of these estimates of the risk of injury related to drinking have taken into consideration other drug use at the time of injury, although multiple substances commonly are used together in ED populations (Buchfuhrer and Radecki 1996). Other drug use might be expected to elevate the risk of injury, either alone or in combination with alcohol; however, this may not be the case. One study found an OR of 3.3 for drinking within 6 hours prior to injury and an OR of 3.0 for drinking in combination with other drug use during the same time; in contrast, drug use alone had no significant effect on risk (Cherpitel et al. 2012b). It is important to consider that in this study the majority of drug users reported using marijuana. However, given their different pharmacological properties, not all drugs would be expected to act in a similar manner, either alone or in combination with alcohol. Consequently, the findings might be different in other populations with different drug use patterns. Effects of Usual Drinking Patterns The risk of injury from drinking prior to the event (i.e., acute consumption) also is influenced by the drinker's usual drinking patterns (i.e., chronic consumption).
Cherpitel and colleagues (2004) found that the risk of injury from drinking prior to the event was lower among frequent heavy drinkers than among infrequent heavy drinkers, suggesting that heavier drinkers may have developed tolerance against some adverse effects of alcohol that lead to injury. Likewise, in an analysis by Gmel and colleagues (2006), the risk of injury was greater among usual light drinkers who occasionally drink heavily (i.e., report episodic heavy drinking) than among people who usually drink heavily but report no episodic heavy drinking, or among people who usually drink heavily and also report episodic heavy drinking.

Risk of Alcohol-Related Injury

Although acute alcohol consumption, modified by drinking pattern, has been found to be associated with risk of injury, drinking pattern also has been found to be associated with risk of an alcohol-related injury (defined as an injury preceded by drinking within 6 hours), with frequency of drinking among non-heavy drinkers (Cherpitel et al. 2003b) and both episodic and frequent heavy drinking predictive of alcohol-related injury (Cherpitel et al. 2012c). An analysis of combined data from ERCAAP and from the WHO Collaborative Study on Alcohol and Injury across 16 countries found that the pooled risk of alcohol-related injury was increased with heavy episodic drinking (OR = 2.7) as well as with chronic high-volume drinking (OR = 3.5); moreover, the risk was highest for people reporting both patterns of drinking (OR = 6.1) (Ye and Cherpitel 2009).

Cross-Country Variation in Risk of Injury

A great deal of variation has been found across countries in risk of injury and risk of alcohol-related injury, and this heterogeneity seems to be associated with a country's level of detrimental drinking pattern (DDP). The DDP score, which is based on aggregate survey data and key informant surveys, is a measure developed for comparative risk assessment in the WHO's Global Burden of Disease study (Rehm et al. 2004). It includes such indicators of drinking patterns as heavy drinking occasions, drinking with meals, and drinking in public places. The DDP has been assessed in a large number of countries around the world as a measure of the "detrimental impact" on health, and other drinking-related harms, at a given level of alcohol consumption (Rehm et al. 2001, 2003). Countries with a higher level of DDP have been found to have a higher risk of injury related to alcohol than those with lower DDP scores (Cherpitel et al. 2005b).

Risk by Type and Cause of Injury

Risk of injury from alcohol also varies by type (i.e., intentional vs. unintentional) and cause of injury. For example, Macdonald and colleagues (2006) found that the risk was highest for violence-related (i.e., intentional) injuries. A case-crossover analysis using the usual-frequency approach that included data from 15 countries in the ERCAAP and WHO projects found that greater variations across countries existed in risk of an intentional injury than in risk of unintentional injury; this difference was at least in part explained by the level of DDP in a country (Cherpitel and Ye 2010). Overall, the pooled OR for intentional injury related to drinking in these countries was 21.5, compared with 3.37 for unintentional injury (Borges et al. 2009). Furthermore, the risk of intentional injury showed a greater dose-response association than the risk of unintentional injury (Borges et al. 2009).
Thus, the ORs for intentional injuries ranged from 11.14 for one to two drinks prior to injury to 35.57 for five or more drinks during this time, whereas the ORs for unintentional injuries ranged from 3.86 to 6.4, respectively. Among the unintentional injuries, the risk also varied depending on the cause of the injury. For example, the OR was 5.24 for traffic-related injuries, compared with 3.39 for injuries related to falls.

Alcohol-Attributable Fraction

Another variable that has been studied in the context of assessing the risk of injuries after drinking is the alcohol-attributable fraction (AAF). This variable represents the proportional reduction in injury that would be expected if the risk factor (i.e., drinking prior to injury) were absent; it reflects the burden of injury in a given society that results from alcohol use (an illustrative worked sketch of this arithmetic appears below). The AAF also varies across countries in ED studies, because it is related to both the risk of injury and the prevalence of alcohol-related injury. In a case-control study of 14 EDs from six countries in ERCAAP, the AAF based on self-reported drinking within 6 hours prior to the injury event varied from 0.5 percent to 18.5 percent for all types of injury, and from 19.1 percent to 83.3 percent for intentional injury (Cherpitel et al. 2005a). The pooled estimate from all EDs for the AAF was 5.8 percent for all types of injury and 42.5 percent for intentional injury. In other words, more than 40 percent of all intentional injuries would not have occurred if the people involved had not been drinking. Moreover, the investigators determined higher AAF estimates for male than for female subjects for both unintentional injuries (5.5 percent vs. 1.7 percent) and intentional injuries (50.0 percent vs. 7.7 percent).

Causal Attribution

The ED studies in the ERCAAP and WHO projects also assessed the patients' causal attribution of their injuries to their drinking; that is, patients were asked whether they believed the injury would have occurred if they had not been drinking. In an evaluation that included 15 countries, one-half of the patients who reported drinking prior to injury also reported a causal attribution. This information was used to establish a subjective AAF, an AAF derived from the patient's own causal attribution of the injury to drinking. This subjective AAF then was compared to the AAF obtained using the standard formula based on the relative risk of injury from alcohol and the prevalence of drinking in the 6-hour period (i.e., the objective AAF) from the six ERCAAP countries, as described above. This comparison found that for unintentional injuries, the subjective AAF generally was somewhat higher than the objective AAF. For intentional injuries, however, the subjective AAF was substantially lower (i.e., 5.9 percent to 46.7 percent) than the objective AAF (i.e., 24.9 percent to 83.3 percent) (Bond and Macdonald 2009).

Others' Drinking

Researchers also increasingly are interested in studying the harm, including injury, resulting from other people's drinking. Evaluating these so-called externalities is important for a fuller understanding of the burden of alcohol-related injury in society. To assess such externalities, investigators for the ED studies in the WHO project also obtained data on whether the patient being treated for a violence-related injury believed the other person had been drinking. Across the 14 countries, from 14 percent to 73 percent of the victims believed that others definitely had been drinking.
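For orientation, the AAF values quoted above follow the standard attributable-fraction arithmetic. A minimal worked sketch using Levin's formula, with illustrative numbers rather than the studies' exact inputs or estimator:

```latex
% Levin's attributable-fraction formula (illustrative sketch only).
% p_e : prevalence of exposure (drinking in the 6-hour window)
% RR  : relative risk, approximated here by the odds ratio (OR)
\[
\mathrm{AAF} \;=\; \frac{p_e\,(\mathrm{RR}-1)}{1 + p_e\,(\mathrm{RR}-1)}
\]
% Assumed inputs p_e = 0.10 and RR = 2.4 give
\[
\mathrm{AAF} \;=\; \frac{0.10 \times 1.4}{1 + 0.10 \times 1.4}
\;=\; \frac{0.14}{1.14} \;\approx\; 0.12,
\]
% i.e., about 12 percent of injuries would be attributable to drinking
% under these assumed values.
```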
Based on these victim reports, the pooled estimate for the AAF was 38.8 percent when both victim and perpetrator were considered, compared with an AAF of 23.9 percent when only the patient was considered (Cherpitel et al. 2012a).

Considerations and Limitations in Estimating Risk of Injury

The data reported here on the risk of injury primarily were derived from patients' self-reports of drinking prior to injury. Although the ED studies all estimated the patient's BAC at the time of ED admission based on breath alcohol levels, self-reports seem to be a better measure of drinking, because in many cases a substantial period of time may have elapsed between the patient's last drink, the injury event, and arrival at the ED. As a result, the BAC may be negative even though the patient reports drinking prior to injury. Indeed, this discrepancy has been found in an analysis of the concordance between self-reported drinking and BAC measurements in the ERCAAP and WHO studies across 16 countries (Cherpitel et al. 2007). The studies reported here all have been conducted in EDs rather than in trauma centers, which generally treat the most serious injury cases and, consequently, are less conducive to the detailed data collection required in studies of alcohol and injury unless the patient is admitted to the hospital. It is unknown how this may affect the resulting conclusions regarding the risk of injury from drinking, because the literature has been mixed regarding alcohol's association with injury severity.

As noted earlier, some limitations also apply to the methods that have been used to estimate the risk of injury related to alcohol consumption. Case-control studies may underestimate this risk because the medical patient controls also may have drinking-related conditions. The matched-interval approach to case-crossover analyses eliminates the heaviest drinkers (i.e., those who report drinking both during the period preceding the injury and during the control period), which may lead to underestimates of the risk of injury for these drinkers. Likewise, the usual-frequency approach may underestimate the risk of injury for heavy drinkers because of the increase in expected drinking occasions for the heaviest drinkers. In addition, when estimating risk of injury using the case-crossover approach, it is important to consider the activity in which the patient was engaged at the time of injury. For example, for a patient injured in a motor vehicle accident who had been drinking, the comparison with the control time interval only would be valid if the patient also had been in a motor vehicle during the control interval. Otherwise, the patient would not have been exposed to the risk of incurring a motor vehicle-related injury, regardless of whether he or she had been drinking. This is an important consideration for future studies that seek to examine risk of injury related to alcohol. Lastly, the risk of injury related to drinking likely is affected by a number of individual-level characteristics, such as age, gender, and risk-taking disposition, as well as by societal-level characteristics, such as detrimental drinking pattern, as discussed above. Estimates of AAFs for injury, which are required for determining the global burden of disease for injury related to alcohol, generally have not taken these variables into consideration, and this is a necessary direction for future research on the burden that alcohol-related injury places on society.
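As a closing illustration of the pooled estimates quoted throughout this section, the following sketch pools study-level ORs by fixed-effect inverse-variance weighting on the log scale. The ORs and confidence intervals are invented for illustration; they are not the ERCAAP or WHO estimates, and the published analyses may have used different weighting schemes.

```python
import math

# Hypothetical study-level ORs with 95% CIs (invented for illustration).
studies = [(2.4, 1.9, 3.0), (3.1, 2.0, 4.8), (2.0, 1.4, 2.9)]

num = den = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    w = 1.0 / se**2                                  # inverse-variance weight
    num += w * log_or
    den += w

log_pooled = num / den
se_pooled = math.sqrt(1.0 / den)
pooled = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),
      math.exp(log_pooled + 1.96 * se_pooled))
print(f"pooled OR = {pooled:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```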
Short and robust silicon mode (de)multiplexers using shortcuts to adiabaticity

Compact silicon mode (de)multiplexers based on asymmetrical directional couplers are designed using shortcuts to adiabaticity. The coupling coefficient and propagation constants mismatch are engineered to optimize the device robustness. Simulations show that the devices are broadband and have large fabrication tolerance. © 2015 Optical Society of America

OCIS codes: (130.3120) Integrated optics devices; (060.1810) Buffers, couplers, routers, switches, and multiplexers; (230.7370) Waveguides.

References and links
1. D. J. Richardson, J. M. Fini, and L. E. Nelson, “Space-division multiplexing in optical fibres,” Nat. Photonics 7, 354–362 (2013).
2. F. Yaman, N. Bai, B. Zhu, T. Wang, and G. Li, “Long distance transmission in few-mode fibers,” Opt. Express 18(12), 13250–13257 (2010).
3. L.-W. Luo, N. Ophir, C. P. Chen, L. H. Gabrielli, C. B. Poitras, K. Bergmen, and M. Lipson, “WDM-compatible mode-division multiplexing on a silicon chip,” Nat. Commun. 5, 3069 (2014).
4. N. Riesen and J. D. Love, “Design of mode-sorting asymmetric Y-junctions,” Appl. Opt. 51(15), 2778–2783 (2012).
5. J. B. Driscoll, R. R. Grote, B. Souhan, J. I. Dadap, M. Lu, and R. M. Osgood, “Asymmetric Y junctions in silicon waveguides for on-chip mode-division multiplexing,” Opt. Lett. 38(11), 1854–1856 (2013).
6. T. Uematsu, Y. Ishizaka, Y. Kawaguchi, K. Saitoh, and M. Koshiba, “Design of a compact two-mode multi/demultiplexer consisting of multimode interference waveguides and a wavelength-insensitive phase shifter for mode-division multiplexing transmission,” J. Lightwave Technol. 30(15), 2421–2426 (2012).
7. Y. Li, C. Li, C. Li, B. Cheng, and C. Xue, “Compact two-mode (de)multiplexer based on symmetric Y-junction and multimode interference waveguides,” Opt. Express 22(5), 5781–5786 (2014).
8. J. Xing, Z. Li, X. Xiao, J. Yu, and Y. Yu, “Two-mode multiplexer and demultiplexer based on adiabatic couplers,” Opt. Lett. 38(17), 3468–3470 (2013).
9. Y. Ding, J. Xu, F. Da Ros, D. Huang, H. Ou, and C. Peucheret, “On-chip two-mode division multiplexing using tapered directional coupler-based mode multiplexer and demultiplexer,” Opt. Express 21(8), 10376–10382 (2013).
10. M. Greenberg and M. Orenstein, “Multimode add-drop multiplexing by adiabatic linearly tapered coupling,” Opt. Express 13(23), 9381–9387 (2005).
11. D. Dai, J. Wang, and Y. Shi, “Silicon mode (de)multiplexer enabling high capacity photonic networks-on-chip with a single-wavelength-carrier light,” Opt. Lett. 38(9), 1422–1424 (2013).
12. D. Dai, J. Wang, and S. He, “Silicon multimode photonic integrated devices for on-chip mode-division-multiplexed optical interconnects,” Prog. Electromagn. Res. 143, 773–819 (2013).
13. E. Torrontegui, S. Ibáñez, S. Martínez-Garaot, M. Modugno, A. del Campo, D. Guéry-Odelin, A. Ruschhaupt, X. Chen, and J. G. Muga, “Shortcuts to adiabaticity,” Adv. At., Mol., Opt. Phys. 62, 117–169 (2013).
14. S. Longhi, “Quantum-optical analogies using photonic structures,” Laser Photonics Rev. 3(3), 243–261 (2009).
15. S.-Y. Tseng, “Counterdiabatic mode-evolution based coupled-waveguide devices,” Opt. Express 21(18), 21224–21235 (2013).
16. S.-Y. Tseng, R.-D. Wen, Y.-F. Chiu, and X. Chen, “Short and robust directional couplers designed by shortcuts to adiabaticity,” Opt. Express 22(16), 18849–18859 (2014).
17. S.-Y. Tseng, “Robust coupled-waveguide devices using shortcuts to adiabaticity,” Opt. Lett. 39(23), 6600–6603 (2014).
18. S. Martínez-Garaot, S.-Y. Tseng, and J. G. Muga, “Compact and high conversion efficiency mode-sorting asymmetric Y junction using shortcuts to adiabaticity,” Opt. Lett. 39(8), 2306–2308 (2014).
19. X. Chen, H.-W. Wang, Y. Ban, and S.-Y. Tseng, “Short-length and robust polarization rotators in periodically poled lithium niobate via shortcuts to adiabaticity,” Opt. Express 22(20), 24169–24178 (2014).
20. R. R. A. Syms and P. G. Peall, “The digital optical switch: analogous directional coupler devices,” Opt. Commun. 69(3,4), 235–238 (1989).
21. K. Bergmann, H. Theuer, and B. W. Shore, “Coherent population transfer among quantum states of atoms and molecules,” Rev. Mod. Phys. 70(3), 1003–1025 (1998).
22. A. Ruschhaupt, X. Chen, D. Alonso, and J. G. Muga, “Optimally robust shortcuts to population inversion in two-level quantum systems,” New J. Phys. 14(9), 093040 (2012).
23. X.-J. Lu, X. Chen, A. Ruschhaupt, D. Alonso, S. Guérin, and J. G. Muga, “Fast and robust population transfer in two-level quantum systems with dephasing noise and/or systematic frequency errors,” Phys. Rev. A 88(3), 033406 (2013).
24. D. Daems, A. Ruschhaupt, D. Sugny, and S. Guérin, “Robust quantum control by a single-shot shaped pulse,” Phys. Rev. Lett. 111(5), 050404 (2013).
25. FIMMWAVE/FIMMPROP, Photon Design Ltd, http://www.photond.com.
26. M. L. Cooper and S. Mookherjea, “Numerically-assisted coupled-mode theory for silicon waveguide couplers and arrayed waveguides,” Opt. Express 17(3), 1583–1599 (2009).
27. A. Ruschhaupt and J. G. Muga, “Shortcut to adiabaticity in two-level systems: control and optimization,” J. Mod. Opt. 61(10), 828–832 (2014).
Introduction

Optical interconnects have emerged as a very promising technology for on-chip data communication. Current photonic integrated circuits operate almost exclusively in the single-mode regime, and wavelength-division multiplexing (WDM) provides a straightforward approach to increase the transmission capacity by scaling up the number of wavelengths in the interconnect link. However, WDM may be too costly for short-reach interconnects due to the requirement of multiple laser sources. For fiber communications, multimode communications utilizing space-division multiplexing (SDM) in multi-core fibers [1] or mode-division multiplexing (MDM) in few-mode fibers [2] have been exploited as an effective approach to increase the capacity of a single wavelength carrier. Recently, on-chip optical communication using MDM has also attracted a great deal of attention. One of the key components needed to realize on-chip MDM is a mode (de)multiplexer with low crosstalk, broad bandwidth, small footprint, and large fabrication tolerance. Several schemes have been proposed to realize mode (de)multiplexers, including microrings [3], asymmetric Y-junctions [4,5], multimode interference (MMI) [6,7], adiabatic couplers (ACs) [8,9], and asymmetrical directional couplers (ADCs) [10-12]. For microring- and Y-junction-based devices, very precise fabrication is usually required to obtain the desired ring size and the ultrasmall gaps in Y-branches. MMI-based devices, on the other hand, are less flexible for incorporating more channels. AC-based devices are usually broadband but associated with long device lengths. ADC-based devices usually require accurate control of the coupling length and waveguide width, unless the adiabatic scheme is employed [10], which inevitably leads to long device length.

Conceptually, the problem of power coupling between a spatial mode in a multimode bus waveguide and a single-mode access waveguide in a mode (de)multiplexer is analogous to the problem of coherent quantum system state control with laser pulses, with the goal of performing precise and robust state transfer in a short time. In this framework, a family of protocols called shortcuts to adiabaticity (STA) [13] has been developed to optimize quantum state transfer, providing design rules to shape the profile and phase of the laser pulses to achieve the desired transfer properties. Using the analogies between quantum mechanics and wave optics in weakly coupled waveguides [14], we have recently proposed a series of coupled-waveguide devices using the STA protocols, including directional couplers [15-17], mode-sorting asymmetric Y-junctions [18], and polarization rotators [19]. These devices are efficient and broadband, and have large fabrication tolerance and short length. While these earlier works focused on weakly guided waveguide platforms with small refractive index contrasts, there is interest in extending the concept of quantum-optical analogy in coupled waveguides and STA to the design of silicon-based high-index-contrast photonics for the purpose of dense integration of optical components on chip. In this paper, we apply the STA theory to the design of ADC-based mode (de)multiplexers on silicon-on-insulator (SOI) technology, which can be easily integrated with CMOS circuitry for on-chip data communication.

Asymmetrical directional coupler (ADC) and shortcuts to adiabaticity (STA)

In an ADC consisting of a single-mode access waveguide and a multimode bus waveguide as shown in Fig. 1, the evolution equation for the changes in the guided-mode amplitudes in the individual waveguides |Ψ⟩ = [A_0, A_m]^T (A_0 denotes the amplitude of the fundamental mode in the access waveguide, and A_m denotes the amplitude of the m-th propagating mode in the bus waveguide) is [20]

\[
i\,\frac{d}{dz}\begin{bmatrix} A_0 \\ A_m \end{bmatrix}
= \begin{bmatrix} -\Delta & \Omega \\ \Omega & \Delta \end{bmatrix}
\begin{bmatrix} A_0 \\ A_m \end{bmatrix},
\tag{1}
\]

where Ω is the coupling coefficient, and Δ = (β_0 − β_m)/2 describes the difference between the propagation constants of the fundamental mode and the m-th mode of the corresponding waveguides. In conventional ADCs with constant Ω and Δ, the coupling efficiency F is described by

\[
F = \frac{\Omega^2}{\Omega^2 + \Delta^2}\,
\sin^2\!\left(\sqrt{\Omega^2 + \Delta^2}\, z\right).
\]

Selective mode coupling is achieved by selecting appropriate access and bus waveguide widths W_1 and W_2 such that the resulting Δ for the corresponding modes is zero (phase-matching). The scheme is equivalent to exact resonant coupling between two quantum states by a Rabi π pulse [21], which provides the fastest transition but is not robust against pulse parameter variations. In other words, conventional ADC mode (de)multiplexers are typically compact but lack other desired properties such as broad bandwidth and large fabrication tolerance. The STA approach [13] provides protocols for the design of Ω(z) and Δ(z), allowing for high coupling efficiency, robustness against variations in fabrication and/or input wavelength, and short device length. The solution of Eq. (1) can be described by the following decoupled system state [17]

\[
|\Psi(z)\rangle =
\begin{bmatrix} \cos(\theta/2)\, e^{i\phi/2} \\ \sin(\theta/2)\, e^{-i\phi/2} \end{bmatrix},
\tag{2}
\]

where the auxiliary angle functions θ(z) and φ(z) satisfy, up to a global phase, θ′ = 2Ω sin φ and φ′ = 2Δ + 2Ω cot θ cos φ. Different from the traditional adiabatic approaches, where the evolution of Ω and Δ is designed to satisfy the adiabatic criterion, the STA approach described here designs the system evolution using Eq. (2). In adiabatic designs, the system evolution follows the eigenstates of the matrix in Eq. (1) (the adiabatic states) approximately, while in STA the system evolution follows Eq. (2) exactly. The STA protocols provide alternative fast processes which reproduce the same final state as the adiabatic approach in a shorter distance, without the need to satisfy the adiabatic criterion. For example, to describe 100% excitation of the m-th mode in the bus waveguide by the access waveguide using Eq. (2) in a (de)multiplexer with length L, the initial and final states of the system are set as θ(0) = 0 and θ(L) = π, which guarantee the desired initial and final states. If in addition θ′(0) = θ′(L) = 0 (with sin φ nonzero at the boundaries), this guarantees Ω(0) = Ω(L) = 0, meaning no coupling at the beginning and the end of the waveguides. So, the waveguides are well separated at the beginning and the end of the coupling region. Because the system evolution follows Eq. (2) exactly, the above conditions thus ensure perfect excitation of the m-th mode in the bus waveguide. There is still much freedom in designing the coupling coefficient and mismatch beyond the boundary conditions. This freedom allows one to engineer stable or robust system evolution against different errors. To find the optimal sets of Ω(z) and Δ(z) which are robust against errors, we can nullify the derivatives of the coupling efficiency F at z = L with respect to the considered errors [22-24].
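Before turning to the concrete silicon designs, the two-mode model of Eq. (1) is easy to check numerically. The sketch below integrates the coupled-mode equations for an illustrative resonant (Δ = 0) profile whose pulse area is π/2; it is not the robustness-optimized Ω(z) and Δ(z) of Eqs. (6) and (7), only a minimal demonstration that the model transfers power as described.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's optimized design).
L = 50.0            # coupling length in micrometers
omega0 = np.pi / L  # peak coupling; the sin^2 profile below then has area pi/2

def omega(z):
    # Smooth coupling profile vanishing at both ends, consistent with
    # the boundary conditions Omega(0) = Omega(L) = 0.
    return omega0 * np.sin(np.pi * z / L) ** 2

def delta(z):
    # Resonant (phase-matched) case; an STA design would instead use
    # a robustness-optimized Delta(z).
    return 0.0

def rhs(z, psi):
    # i d/dz [A0, Am]^T = [[-Delta, Omega], [Omega, Delta]] [A0, Am]^T
    a0, am = psi
    return [-1j * (-delta(z) * a0 + omega(z) * am),
            -1j * (omega(z) * a0 + delta(z) * am)]

# Launch all power in the access waveguide: [A0, Am] = [1, 0].
sol = solve_ivp(rhs, (0.0, L), [1.0 + 0j, 0.0 + 0j], rtol=1e-10, atol=1e-12)
F = abs(sol.y[1, -1]) ** 2  # coupling efficiency into the m-th bus mode
print(f"F = {F:.6f}")       # ~1.0: complete transfer for pulse area pi/2
```

Replacing omega() and delta() with STA-engineered profiles lets the same script check how flat the transfer stays under deliberate detuning or coupling errors.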
Device design and simulation

In this paper, we use an SOI wafer with a 340 nm thick top silicon layer for device design. The design wavelength is set at 1550 nm, and the refractive indices of Si and SiO2 are 3.5 and 1.45. The devices are air-clad with a refractive index of 1. The effective indices of the first four TM modes in the waveguides are calculated for different widths with a full-vectorial finite-element method mode solver and are shown in Fig. 2. The phase-matching condition for the TM_m mode in the bus waveguide can be satisfied by choosing W_1 and W_2 such that n_eff0(W_1) = n_effm(W_2), where n_eff0 and n_effm are the effective indices of the fundamental TM_0 mode of the access waveguide and the TM_m mode of the bus waveguide, respectively. The waveguide spacing D and the widths W_1 and W_2 are then adjusted along the propagation direction to satisfy a set of Ω(z) and Δ(z) functions designed for robustness (Eqs. (6) and (7)) [16]; the resulting profiles are shown in Fig. 3.

We design mode (de)multiplexers for TM-polarization operation. The width of the access waveguide W_1 is fixed at 0.3 μm for single-mode operation. We set the device length L at 50 μm; further reduction of L results in large values of Ω, which would lead to a small gap between the access and bus waveguides that is difficult to fabricate. Using the exponential relation between Ω and D (details can be found in [16]) and the relation between Δ and the width difference calculated from Fig. 2, we obtain the corresponding (de)multiplexers for the TM_m (m = 1, 2, and 3) modes as shown in Fig. 4. The corresponding design parameters (W_1, W_2, and D) for these devices are shown in Fig. 5. Commercial software (FIMMPROP, Photon Design) employing an eigenmode expansion method [25] is used to simulate light propagation in these devices. The calculated light propagation in the designed (de)multiplexers is also shown in Fig. 4. It can be seen that higher-order modes in the bus waveguide are efficiently excited by input light in the narrow access waveguides.

Figure 6 shows the simulated wavelength dependence of the coupling efficiency from the access waveguide to the TM_m (m = 1, 2, and 3) modes in the bus waveguide. It can be seen that over a wide range from 1.45 to 1.60 μm, the coupling efficiency is larger than 90%. We note that the coupling efficiency at 1550 nm is not at the maximum; this can be attributed to the fact that the simple coupled-mode theory in Eq. (1) only approximately describes light propagation in high-index-contrast waveguides [26]. Our result shows that the approximation works very well and that the concept of quantum-optical analogy in coupled waveguides and STA can indeed be applied to high-index-contrast photonics. Figure 7 shows the transmission from the access waveguide into the guided modes of the bus waveguide for the designed (de)multiplexers. Clearly, our numerical simulation has shown that the crosstalk into the unwanted modes is lower than −40 dB from 1.45 to 1.60 μm for all three (de)multiplexers.

Next, the fabrication tolerance is investigated at the operating wavelength of 1550 nm by changing the waveguide widths to W_1 ± Δw and W_2 ± Δw in the simulation, where Δw is the width deviation due to fabrication error. The simulation result is shown in Fig. 8. It can be seen that for width variations as large as ±40 nm, the coupling efficiency is greater than 80%. Our numerical simulation also shows that the crosstalk into the unwanted modes is lower than −30 dB for Δw from −40 nm to +40 nm in all three (de)multiplexers, as shown in Fig. 9. We note that the chosen Ω(z) and Δ(z) in Eqs. (6) and (7) are optimized for broadband operation up to third-order variations in Δ [16,23]. Optimization to higher-order robustness and the inclusion of Ω variation could further improve the fabrication tolerance and bandwidth [24]. While the width variation Δw considered in this work is assumed to be uniform along the device, to account for a uniform deviation in device width, the STA approach in fact allows one to optimize Ω(z) and Δ(z) against various types of fabrication errors, for example Δw varying with z [27]. In other words, STA provides a versatile toolbox for the design of devices depending on the type of error encountered in fabrication. An exhaustive discussion of possible designs is, however, beyond the scope of this work.

Conclusion

In conclusion, we have demonstrated that the STA approach can be applied successfully to the design of high-index-contrast silicon mode (de)multiplexers. By engineering the coupling coefficient and the propagation constants mismatch variation, we use the approach to design mode (de)multiplexers that are compact, broadband, and have large fabrication tolerance. This opens the door to applying STA protocols to the design of compact and robust waveguide devices for dense integration of optical components on chip.

Fig. 4. Designed mode (de)multiplexers using STA and the corresponding light propagation simulations for coupling into the TM_m mode in the bus waveguide: (a) m = 1, (b) m = 2, and (c) m = 3. White lines indicate the waveguide boundaries.
Fig. 6. Simulated wavelength dependence of the coupling efficiency from the access waveguide to the TM_m mode (m = 1, 2, and 3) in the bus waveguide.
Fig. 7. Simulated transmission from the access waveguide into the guided modes of the bus waveguide as a function of wavelength for the mode (de)multiplexers in Fig. 4: (a) m = 1, (b) m = 2, and (c) m = 3.
Narrow-Leafed Lupin (Lupinus angustifolius L.) Seeds Gamma-Conglutin is an Anti-Inflammatory Protein Promoting Insulin Resistance Improvement and Oxidative Stress Amelioration in PANC-1 Pancreatic Cell-Line

(1) Background: Inflammatory molecular cues and the development of insulin resistance are among the main contributors to the development and progression of inflammatory-related diseases; (2) Methods: We isolated and purified γ-conglutin protein from narrow-leafed lupin (NLL, or blue lupin) mature seeds using affinity chromatography to evaluate its anti-inflammatory activities at the molecular level using both a bacterial lipopolysaccharide (LPS)-induced inflammation model and an insulin resistance pancreatic cell model; (3) Results: NLL γ-conglutin achieved a plethora of functional effects, such as a strong reduction of the cell oxidative stress induced by inflammation through decreased protein carbonylation, nitric oxide synthesis, and inducible nitric oxide synthase (iNOS) transcriptional levels, raised glutathione (GSH) levels, and modulation of superoxide dismutase (SOD) and catalase enzyme activities. γ-conglutin up-regulated transcript and protein levels of the insulin signalling pathway components IRS-1, Glut-4, and PI3K, improving glucose uptake, while decreasing pro-inflammatory mediators such as iNOS, TNFα, IL-1β, INFγ, IL-6, IL-12, IL-17, and IL-27; (4) Conclusion: These results suggest a promising use of NLL γ-conglutin protein in functional foods, which could also be implemented in alternative diagnostic and therapeutic molecular tools helping to prevent and treat inflammatory-related diseases.

Introduction

The outcomes of epidemiological studies have revealed that an increasing number of health problems, such as diabetes, insulin resistance, obesity, metabolic syndrome, and cardiovascular diseases, are affecting societies around the globe [1]; these have been associated with both scarce physical activity and the ingestion of high-sugar, high-lipid diets in metropolitan areas [2]. In this regard, there is an increasing demand for plant proteins highly beneficial for human health to be used in foodstuff development and production, which has prompted an increasing body of research covering diverse nutraceutical aspects of a number of crop plants. There is strong interest focused on legumes, which are an economically important source of high-quality proteins compared to other plant foods [3]. Interestingly, lupin seeds, and particularly seeds from the species encompassing the "sweet lupin" group, have been reported to exert beneficial effects on human health [4]. Thus, the dietary consumption of lupin seed proteins might provide preventive and protective effects (also complementing current treatments for metabolic diseases) against different human inflammatory-related diseases such as metabolic syndrome, obesity, high blood pressure (through a lowering capacity), type 2 diabetes mellitus (T2DM) development triggered by uncontrolled glycemia and increasing insulin resistance, familial hypercholesterolemia, and cardiovascular disease [5].
Different factors or stressors promote and stimulate immune-response-mediated inflammation, driving the molecular mechanisms underlying many of these diseases, including defective insulin secretion and responses, and finally insulin resistance, for which the pancreatic tissue is the key target as the disease evolves, progressing with an uncontrolled synthesis of pro-inflammatory mediators. Among them are interleukin 6 (IL-6), interleukin 1 (IL-1), interferon gamma (INFγ), tumor necrosis factor (TNF-α), chemokines (i.e., CCL2, CCL5), reactive oxygen species (ROS) such as H2O2, peroxide, and the superoxide anion, nitric oxide (NO) overproduction, and nitrogen intermediate molecules, as well as the release of adhesion molecules (i.e., ICAM-1, VCAM-1) facilitating the attraction and movement of immune system cells through the tissues, enhancing the inflammatory response [6]. The most frequently associated stressors are oxidative stress, alterations in gut microbiota that increase lipopolysaccharides (LPS) in blood, lipotoxicity, glucotoxicity, and endoplasmic reticulum (ER) stress promoting misfolded proteins that may be deposited in the islet β-cells in the form of amyloids [6]; these amyloid deposits enhance the inflammatory response mediated by immune cells attracted to the pancreatic tissues [7]. Thus, lowering the synthesis and/or functional role of pro-inflammatory molecules has the advantage of potentiating an anti-inflammatory reaction that may also help to ameliorate inflammatory-related diseases.

The search for naturally occurring compounds with anti-inflammatory potential has also increased in parallel with the rising number of inflammation-related diseases. Only a few studies have described the anti-inflammatory properties of seed-derived bioactive protein hydrolysates, and even scarcer are studies concerning legume seed compounds with these potential functional activities. Interestingly, enzymatic hydrolysates of field pea seeds showed anti-inflammatory properties at the molecular level by inhibiting the production of several inflammation mediators, i.e., NO and TNFα [8]. Lunasin, a peptide derived from isolated 2S albumin found in soybean as well as in some cereal grains, displayed great benefits related to cancer amelioration, cardiovascular disease improvement, and cholesterol lowering [9]. In soybean, the anti-inflammatory properties of lunasin have been associated with its ability to suppress the NFκB functional pathway [10]. Seed protein hydrolysates from blue lupin were found to have the potential to inhibit the phospholipase A2 and cyclooxygenase-2 enzymes, which are involved in the inflammatory pathway [11]. A further study described bioactive peptides with high homology to Arabidopsis thaliana 2S albumin and a Glycine max lectin-like protein, which were associated with modulation of the gene expression of inflammatory molecules [12].
In this work, we have studied the anti-inflammatory properties of narrow-leafed lupin (NLL) γ-conglutin protein from mature seeds using the in vitro human PANC-1 pancreatic cell line in both an induced inflammation model using bacterial lipopolysaccharide (LPS) and an induced insulin resistance (IR) cell model, with the aim of assessing the capability of NLL γ-conglutin to improve cellular oxidative stress homeostasis, the induced inflammatory state, and IR at the molecular level by decreasing the gene expression and protein levels of several pro-inflammatory mediators, as well as by up-regulating insulin signaling pathway gene expression.

Isolation and Purification of γ-Conglutin from NLL Mature Seeds

The isolation and purification of γ-conglutin proteins from NLL was accomplished following the method of Czubiński et al. [13]. Briefly, NLL seed proteins were extracted using Tris buffer pH 7.5 [20 mmol L^−1] containing 0.5 mol L^−1 NaCl per gram of defatted seeds. After sample centrifugation at 20,000× g for 30 min at 4 °C, the supernatant was filtered using a 0.45 µm PVDF syringe filter. The sample was then ready to be introduced into a desalting column of Sephadex G-25 medium. The desalted crude protein sample was applied to a HiTrap Q HP column (GE Healthcare) previously equilibrated with Tris buffer pH 7.5 [20 mmol L^−1], where protein separation was achieved using a linear gradient [0 to 1 mol L^−1] of NaCl. Under these conditions, the γ-conglutin proteins were not retained on the column media. The fractions containing γ-conglutin proteins were therefore pooled and introduced onto a HiTrap SP HP column (GE Healthcare) previously equilibrated with Tris buffer pH 7.5 [20 mmol L^−1]. The γ-conglutin proteins retained in this column were eluted with a linear gradient of NaCl [0 to 0.5 mol L^−1]. The γ-conglutin proteins were collected and directly used in the subsequent SDS-PAGE analysis and fingerprinting characterization. The remaining protein was kept frozen at −80 °C.

Analysis of Purified γ-Conglutin Protein by Peptide Mass Fingerprinting

The identity of the purified γ-conglutin protein was confirmed by peptide mass fingerprinting. Briefly, proteins (10 µg) were separated by SDS-PAGE using precast 12% Bis-Tris gels (Invitrogen) under reducing conditions. Electrophoretic bands corresponding to γ-conglutin protein (bands 1 to 4, Supplementary Figure S1) were cut out from the gel and digested in-gel with trypsin. The generated peptide fragments were desalted and concentrated, then loaded onto the MALDI plate and analyzed. MALDI-MS spectra were generated in a 4700 Proteomics Analyzer (Applied Biosystems, Waltham, MA, USA), and these data were used for protein ID validation (www.matrixscience.com).

SDS-PAGE and Immunoblotting

Protein extracts were analyzed by mixing the samples with sample buffer (6× concentrate) and heating for 5 min at 95 °C. Proteins were separated by SDS-PAGE using gradient TGX gels of 4-20% acrylamide (Bio-Rad). To identify the molecular weight (MW) of the separated proteins, we used a MW marker for stained gels, Mark12 Unstained Standard (ThermoFisher Scientific), with a MW range between 2.5 and 200 kDa. The resolved protein bands were visualized in a Gel Doc™ EZ Imager (Bio-Rad, Berkeley, CA, USA).
For immunoblotting, proteins were transferred to PVDF membranes, which were then blocked for 2 h at room temperature (RT) using 5% non-fat dry milk dissolved in PBST (phosphate-buffered saline, 0.05% Tween-20). All incubations were performed by leaving the membranes overnight at 4 °C in constant movement. The next day, membranes were washed five times with PBST, followed by incubation with horseradish peroxidase-conjugated anti-rabbit IgG (Sigma-Aldrich, ref. A9169) at 1:2500 dilution in 2% non-fat dry milk dissolved in PBST for 2 h at RT. The membranes were then washed five times with PBST; signal development was achieved for each antibody by incubation with ECL Plus chemiluminescence reagent following the manufacturer's instructions (Bio-Rad). The reactive bands on the membranes were detected by exposure in a C-DiGit Blot Scanner (LI-COR).

The pancreatic cells were maintained by serial passage in culture flasks and used in the experimental studies when the exponential phase was reached. Cells were grown to confluence, and the monolayer culture was washed two times with phosphate-buffered solution (PBS, Sigma). The cells were then treated with trypsin-EDTA (Lonza) at 0.25% for 10 min. After 5 min centrifugation at 1000× g and two PBS washes, PANC-1 cells were collected. Afterward, cell counting and viability assessment were performed using a Countess II FL Automated Cell Counter (Thermo Fisher) at both the initial and final step of each experiment. Cell viability was higher than 95%. Cell cultures were established at 80% confluence and treated with LPS (1 µg/mL) for 24 h. PANC-1 cells were challenged with purified γ-conglutin protein for 24 h, alone or in combination with LPS. Aliquots of γ-conglutin protein stored at −20 °C in PBS were thawed just before use and dissolved in culture media to the target concentrations before being added to the cultures. After treatment, cells were harvested for further analyses.

MTT Assay for Cell Viability

Cell viability was evaluated using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) following the manufacturer's instructions (Roche). Briefly, 96-well microtitre plates were inoculated at a density of 1 × 10^3 PANC-1 cells per well in 300 µL of growth media. Plates were incubated overnight under 5% CO2 in humidified air to allow the cells to adhere to the wells. After incubation, cells were treated for 24 h with either LPS or γ-conglutin protein and washed three times with PBS in order to prevent any interference from phenolic compounds in the MTT assay. A volume of 200 µL of phenol-red-free DMEM containing 1 mg mL^−1 of MTT was added to the cells, and these were incubated for 3 h. Metabolically active viable cells are able to convert MTT into formazan crystals (purple color); the formazan was solubilized with 200 µL of DMSO, and absorbance was read at 570 nm (test) and 690 nm (reference) using an iMark microplate reader (Bio-Rad, USA).

Insulin Resistance PANC-1 Cell Model and Glucose Uptake

PANC-1 control cells were seeded in DMEM supplemented with 10% (v/v) FBS, using 96-well microtiter plates under standard conditions (5% CO2 and 37 °C in humidified air), at a density of 2 × 10^4 cells per mL in 200 µL. The optimal insulin dose and treatment time were determined as a requisite to establish insulin-resistant IR_PANC-1 (IR-C) cells.
Reduced glucose uptake is one of the main features of insulin resistance, as insulin-resistant cells respond less and less to increasing levels of insulin. Thus, the cell culture was separated into two groups, with six independent replicates per group: (1) cells cultured in 200 µL complete medium (control cells, group C); (2) cells treated with insulin (10^−5 to 10^−9 nmol L^−1) once they became adherent (group IR-C). These PANC-1 cells were then cultured for 24, 48, and 72 h, and the concentration of glucose in the media was measured using the glucose oxidase method (Abcam, UK). The concentration required to establish IR-C PANC-1 cells was 10^−7 nmol L^−1, applied for 24 h. At this IR stage, it was evaluated whether the cells remained sensitive to insulin and whether γ-conglutin protein could improve the insulin-dependent glucose uptake capacity of IR-C PANC-1 cells. For this purpose, the cells were separated into three groups, each with six replicates: the control group (C), the IR-C group, and the IR-C + γ-conglutin group. After 24 h, 2 µL of culture supernatant was collected from each sample, and the glucose concentration was determined as described above. Cultures of IR-C cells were established at 80% confluence and challenged with γ-conglutin protein for 24 h. After the treatments, the cells were harvested for further analyses.

Quantitative Real-Time PCR

GLUT-4, IL-1β, iNOS, IRS-1, PI3K, and TNFα mRNA expression was assayed by means of real-time quantitative PCR for each experimental group. Total RNA was isolated from group C using the RNeasy Tissue RNA isolation kit (Qiagen, Hilden, Germany). First-strand cDNA was synthesized using a High-Capacity cDNA Archive Kit (Applied Biosystems, Waltham, MA, USA). cDNA was prepared, diluted, and subjected to real-time polymerase chain reaction (PCR), amplified using TaqMan technology (LightCycler 480 quantitative PCR System, Roche, Basel, Switzerland) for gene expression assays. Primers and probes were taken from the commercially available TaqMan Gene Expression Assays [IRS-1: Assay ID Hs00178563_m1, GLUT-4: Hs00168966_m1, PI3K: Hs00898511_m1, TNFα: Hs01555410_m1, IL-1β: Hs01075529_m1, iNOS: Hs00174128_m1]. Relative changes in gene expression levels were assessed using the 2^−ΔΔCt method (a brief computational sketch of this calculation appears below). The cycle number at which the transcripts became detectable (CT, the threshold cycle, defined as the fractional cycle number at which the target fluorescence signal passes a fixed threshold above baseline) was normalized to the cycle number of β-actin detection as the housekeeping gene (Assay ID: Hs99999903_m1, Applied Biosystems) and referred to as ΔCT; relative mRNA levels are presented as 2^[CT(β-actin) − CT(gene of interest)]. PCR efficiency was assessed by TaqMan analysis on a standard curve for target and endogenous control amplifications, which were highly similar.

ELISA Assays for INFγ and Cytokine Quantification

Cell cultures were prepared by cell counting and plated in six-well plates at 10^6 cells per well, with duplicate wells per group. After 24 h of incubation, the medium from the treated cultures was removed, and cells were washed with PBS at 4 °C. For protein extraction, the temperature of the plates was kept close to 4 °C by placing them on ice, thus avoiding denaturation of the cytokines.
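Returning briefly to the qPCR analysis above, the following minimal sketch shows the 2^−ΔΔCt fold-change arithmetic. The gene and Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^-DDCt relative-expression calculation
# (illustrative Ct values only, not data from this study).

def delta_delta_ct(ct_target_treated, ct_ref_treated,
                   ct_target_control, ct_ref_control):
    """Return the fold change of a target gene, treated vs. control,
    normalized to a reference (housekeeping) gene such as beta-actin."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: GLUT-4 in treated vs. control cells.
fold = delta_delta_ct(ct_target_treated=24.1, ct_ref_treated=17.0,
                      ct_target_control=26.3, ct_ref_control=17.1)
print(f"GLUT-4 fold change: {fold:.2f}")  # ~4.29-fold under these made-up Cts
```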
For the ELISA protein extraction, one hundred microliters of buffer (150 mM sodium chloride, 1% NP-40, 50 mM Tris pH 8) supplemented with 1 µL of protease inhibitor (Sigma) was added to each well for 15 s. Cells scraped from the bottom of the wells were transferred to microcentrifuge tubes. These tubes were centrifuged at 12,500× g for 15 min at 4 °C. After this step, each supernatant was collected and diluted at a 1:4 ratio for the ELISA quantification of INFγ, IL-6, IL-12p70, IL-17, and IL-27 (Diaclone). Data were statistically analyzed using the t-test.

Antioxidant Enzymatic Activity Assays

The cell cultures were prepared and, after 24 h of incubation of the treated cultures, the growth medium was removed and the cells were washed with PBS at 4 °C. Cells from C and IR-C cultures, challenged or not with γ-conglutin protein, were collected and used for the enzymatic activity assessment of SOD and catalase, as well as for GSH measurement (Canvax, Córdoba, Spain), following the manufacturer's instructions. Data were analyzed using the t-test.

Determination of Intracellular ROS and Nitric Oxide (NO)

C and IR-C cell cultures, challenged or not with γ-conglutin protein, were used for protein extraction following the manufacturer's instructions for control and treatment samples (EMD Millipore, USA). A total of 25 µg of protein was loaded onto 12% polyacrylamide gels for separation by SDS-PAGE. Proteins were then transferred to PVDF membranes for protein oxidation detection using the OxyBlot™ Kit (EMD Millipore, Burlington, MA, USA) according to the manufacturer's instructions; this kit detects the carbonyl groups introduced into proteins by their reaction with ROS. Measurements were taken at 485 nm excitation and 530 nm emission wavelengths.

The total amount of NO, including nitrite/nitrate content, was measured using a commercial assay kit [ab65328, Abcam, Cambridge, UK] in C and IR-C culture cells before and after the γ-conglutin protein challenges. Briefly, samples from every experimental group were deproteinized according to the manufacturer's instructions. Equal amounts of sample (30 µL) and standards were loaded into 96-well microtiter plates. Nitrate reductase, enzyme cofactor, and assay buffer were added, followed by 1 h of incubation at RT with Enhancer, Griess Reagent R1, and Griess Reagent R2. Immediately after incubation, absorbance was measured at 540 nm with an iMark microplate reader (Bio-Rad, USA). The value of the blank control (medium without cells) was subtracted from the sample values. Total nitrite/nitrate concentrations were calculated using a standard curve.

Statistical Analysis

Data obtained from each experiment were expressed as means ± standard deviation (SD). Each experimental assessment was performed at least three times. One-way analysis of variance was implemented using SPSS statistical software (SPSS Inc., Chicago, IL, USA). The statistical significance of differences (p < 0.05) in the analyzed data was evaluated with SPSS by analysis of variance followed by Dunnett's test.

Isolation and Purification of the NLL Anti-Inflammatory γ-Conglutin Protein

γ-conglutin protein extraction, isolation, and purification were accomplished following the methodology of Czubinski et al. [13] using mature NLL seeds as the starting material. A representative SDS-PAGE is shown in Supplementary Figure S1.
The sample from the γ-conglutin purification was separated electrophoretically under reducing conditions; several distinct electrophoretic bands were found for this protein. The most abundant forms were the separated α and β subunits, followed by the unreduced γ-conglutin (α + β subunits) and the uncleaved γ-conglutin precursor [14]. The γ-conglutin monomer is composed of two subunits (α + β) linked by a single disulphide bridge, which is highly resistant to cleavage under reducing conditions due to the structure of the monomeric protein [15]. The expected MW of the γ-conglutin monomer from these sequences is ∼45 kDa. After reduction of the disulphide bridge, two electrophoretic bands of 30 kDa (α-subunit) and 17 kDa (β-subunit) were detected, in addition to a ∼56.0 kDa band corresponding to the uncleaved γ-conglutin precursor (Supplementary Figure S1, Supplementary Table S1). The purity of the isolated protein, assayed by SDS-PAGE under reducing conditions (Supplementary Figure S1), reached 95%. In order to identify the different bands observed in the SDS-PAGE gel corresponding to the isolated and purified γ-conglutin (Supplementary Figure S1), we performed an in-gel tryptic digestion of the excised bands, and these were subjected to peptide separation and MS-based analysis. The generated peptide mass data were searched against the MS protein sequence database, enabling unambiguous identification by peptide mass fingerprinting as γ-conglutin (NLL 7S basic globulin) (Supplementary Table S1).

Cell Viability Assessment of the PANC-1 Cells Treated with γ-Conglutin Protein

In this study, we assessed the viability of PANC-1 cells under treatment with the γ-conglutin protein and the potential cytotoxicity of this protein. In order to evaluate whether the inflammation inducer LPS and γ-conglutin produce cytotoxic effects, the MTT viability assay was performed on PANC-1 cells under separate treatments with LPS plus γ-conglutin at increasing concentrations in DMEM culture medium + FBS + antibiotic for 24 h. LPS plus γ-conglutin had no significant (p > 0.05) effect on cell viability (Supplementary Table S2) when compared with the control (untreated) group. The cell cultures used as a positive control lacked LPS and γ-conglutin protein. To complete the assessment, trypan blue staining was also used to evaluate PANC-1 pancreatic cell viability after treatment with LPS (1 µg/µL) and increasing concentrations (from 10 to 50 µg) of γ-conglutin for 24 h, finding significant differences (p < 0.05) in cell viability after 24 h of incubation only at 50 µg compared to the control (Supplementary Table S2). Furthermore, a parallel study was carried out to assess the cell viability and cytotoxicity of increasing concentrations of insulin, in order to determine whether an insulin resistance model could be established in PANC-1 pancreatic cells and which insulin concentration should be used to establish the model. An MTT assay on PANC-1 cells showed that an important change in the percentage of viability was induced at insulin concentrations higher than 10^−7 nmol L^−1 (Supplementary Table S3). Afterward, IR-C cells were assayed for viability using the MTT kit upon addition of γ-conglutin protein for 24 h. No significant (p > 0.05) effect on cell viability (when treated with 25 µg of γ-conglutin protein) (Supplementary Table S4) was found in comparison with the unchallenged IR-C group.
When insulin was added alone (in the absence of γ-conglutin), these samples were used as a positive control. We also performed a cell viability assessment using trypan blue exclusion in IR-C pancreatic cells treated with increasing concentrations of this protein for a period of 24 h. No cell viability differences were found after 24 h of incubation in the presence of γ-conglutin. These results suggest that γ-conglutin does not affect PANC-1 pancreatic cell integrity in either the induced (LPS treatment) inflammation model or the IR-C cell model.

Effect of γ-Conglutin Protein on the Inflammatory Process

Inflammatory-related illnesses such as metabolic syndrome, T2DM, obesity, and cardiovascular diseases are well known to develop in chronic association with a continuously sustained inflammatory state. Among the different mechanisms underlying inflammatory-based diseases, different molecules, namely stressors, affect the physiology of functional pancreatic tissues, particularly the β-islets, promoting the course of the pathology, which also depends mainly on particular genetic backgrounds and environmental factors [16]. Nowadays, there is an increasing incidence of diabetes associated with obesity, named the "diabesity epidemic", which frequently coincides with a failure of pancreatic islet cells to generate a sufficient amount of insulin and/or a progressively decreasing sensitivity to insulin of the tissues that metabolize glucose. During the establishment of T2DM, sustained high levels of glucose may lead to organ damage, which is mediated by pancreatic β-cell tissue damage and the enhancement of the immune system's inflammatory response owing to the synthesis and release of pro-inflammatory mediators such as cytokines and chemokines (cell chemotactic factors). These processes create feed-forward progressive steps that further increase immune system cell content, promoting a chronic inflammatory state [17]. Thus, increasing levels of multiple factors such as IL-1β, TNFα, and iNOS are important contributors to the development of inflammation, since IL-1β mediates β-cell dysfunction during the development of T2DM and is able to activate the expression of iNOS, with the result of an exacerbated synthesis of NO, promoting the up-regulation of pro-inflammatory genes [18]. In this regard, we evaluated the ability of γ-conglutin protein to modulate the mRNA levels of pro-inflammatory mediator genes as potential anti-inflammatory targets (TNFα, IL-1β, and iNOS mRNA) in PANC-1 cells (Figure 1). The inflammatory state induced by LPS was significantly inhibited (p < 0.05) by γ-conglutin protein at the mRNA expression level in PANC-1 cells [−694-, −2733-, and −4208-fold, respectively, versus LPS-treated culture cells] (Figure 1A). No statistically significant differences were observed in IL-1β cytokine, TNFα, and iNOS mRNA levels (p > 0.05) when challenges were performed with γ-conglutin + LPS as compared to the control group (Figure 1A). These results highlight the potential of γ-conglutin to decrease the pro-inflammatory capacity of PANC-1 cells by decreasing cytokine and iNOS gene expression levels, thus supporting an amelioration of the inflammatory process at the molecular level. In this study, this lowering of the cellular pro-inflammatory capacity could result from the antioxidant capacity of γ-conglutin, since changes in GSH levels and in SOD and catalase activities were shown, helping to keep redox homeostasis in T2DM and other inflammatory-dependent diseases that are also affected by oxidative stress [19].
Along this line, the above results in PANC-1 pancreatic cells are in agreement with previous studies that showed a similar reduction in the expression levels of iNOS and IL-1β mRNA in T2DM blood cultures [20]. Disease progression is further promoted by TNFα- and IL-1β-mediated iNOS synthesis and NO production [21]. We have also demonstrated that NLL γ-conglutin can reverse this state by decreasing TNFα, IL-1β, and iNOS functional protein levels in PANC-1 cells [−158-, −144-, and −164-fold, respectively, versus LPS-treated culture cells] (Figure 1B, Supplementary Table S5), while no statistically significant differences (p > 0.05) were observed in TNFα, IL-1β, and iNOS protein levels when challenges were performed with γ-conglutin (LPS + γ) as compared to the control group (Figure 1B).

γ-Conglutin Protein Inhibits the Production of Different Cytokines and Pro-Inflammatory Mediators

Physiological circulating levels of cytokines have important implications for the functional regulation of pancreatic β-cells, although β-cells themselves produce different cytokines in response to physio-pathological states, which also play important roles in their own function [18]. When insulin resistance is established, an increasing production of dangerous pro-inflammatory circulating mediators is also established. During the progression of the T2DM state, this non-physiological condition is characterized by an imbalanced profile of pro-inflammatory cytokines and mediators, led by β-cell dysfunction and the sustained T2DM situation, which in turn is based on the crosstalk among cytokines in pancreatic β-cells and immune tissues [22]. Thus, restoring the balance toward increased levels of protective plasma-circulating and β-cell cytokines could prevent and help treat this β-cell dysfunctional state and, by extension, T2DM progression.
In this regard, we evaluated by ELISA the potential anti-inflammatory effects of γ-conglutin protein through its capacity to modulate the amounts of the important pro-inflammatory mediator INF-γ and of the cytokines IL-6, IL-12p70, IL-17A, and IL-27 in both the induced inflammation (LPS) model and the IR-C cell model. Currently, few studies have reported anti-inflammatory effects of plant peptides, which are usually mediated by modulation of the regulatory balance of pro-inflammatory interleukins, INFγ, TNFα and NO. Soybean peptides inhibited iNOS mRNA expression levels and TNFα and NO production, while also reducing the pro-inflammatory enzymatic activity of COX-2 in LPS-induced macrophages [8]. Moreover, lunasin was shown to reduce ROS production in LPS-induced macrophages while inhibiting the release of IL-6 and TNFα [11,12]. In this regard, we demonstrated that NLL γ-conglutin protein lowered the levels of all pro-inflammatory mediators assayed. This anti-inflammatory capacity could help manage the developmental disease states that promote the feed-forward establishment of chronic inflammation-derived diseases such as T2DM. Thus, lupin γ-conglutins may help counteract the detrimental effects of several inflammatory molecular processes, as follows:

(i) Lipotoxicity: a sustained high-lipid diet induces the production of IL-1β and IL-6; continued exposure of β-cells to these cytokines induces exacerbated synthesis and release of ROS, while insulin secretion is also inhibited. This combination promotes apoptosis of the pancreatic β-cells [23].

(ii) Apoptosis of islet β-cells prompted by IL-1β and INFγ is stimulated by endoplasmic reticulum stress [24]. β-cell apoptosis is also activated by the joint action of INFγ and TNFα, together with the activation of Ca2+ channels. This situation induces NO synthesis and consequently activation of the endoplasmic reticulum stress pathway [25], leading to caspase activation and mitochondrial dysfunction [26]. In this respect, γ-conglutin may be able to prevent these mechanisms by suppressing TNFα, IL-1β and INFγ mRNA and protein levels (Figures 1-3).

(iii) The synergistic action of IL-1β + INFγ, or even IL-1β + INFγ + TNFα, in pancreatic tissues increases NO production as a consequence of a direct increase in iNOS, resulting in islet β-cell destruction [27]. We have shown that the mRNA expression levels of TNFα and IFNγ (apoptosis-mediating molecules) were lowered after treatment with γ-conglutin (Figures 1-3, Supplementary Figure S2, Supplementary Tables S5 and S6), which may have a positive effect on the survival of islet β-cells [28].
(v) The important inflammatory cytokine IL-17A, involved in T2DM progression, is able to induce ROS production, which also greatly affects insulin resistance. The joint action of IL-17 and INFγ promotes the development of the chronic diabetic state [30]. Overall, IL-17A has pleiotropic functional effects on a diversity of cells, comprising the synthesis of IL-6 and TNFα and of chemokines (chemotactic effects) [31]. In our assays, γ-conglutin lowered these mediators (Supplementary Tables S5 and S6), avoiding islet β-cell apoptosis and the recruitment of immune cells to local tissues that enhances the feed-forward mechanism of inflammation progression in islets [32], as a preventive action against inflammation-based T2DM progression.

γ-Conglutin Reverses the Insulin Resistance through Inflammation Amelioration while Improving the Insulin Signalling Pathway in Pancreatic IR-C Cells

Insulin resistance is another consequence of sustained inflammation and has been observed in several pathophysiological processes, including metabolic disorders such as hyperinsulinemia, hyperglycemia, and hypertriglyceridemia; IR is also an important cause of the establishment of pre-diabetes and of the development of T2DM and obesity [33], affecting different insulin target organs. Thus, amelioration of IR by NLL γ-conglutin may constitute a major approach to prevent and treat these metabolic disorders. In this study, an in vitro insulin-resistant (IR-C) cell model was established using PANC-1 cells to evaluate the effects of insulin on glucose uptake and metabolism in IR-C cells. To evaluate glucose uptake, control cells were incubated with a range of insulin concentrations (between 10^−5 and 10^−9 nmol L^−1) for 24 h (Figure 4). At an insulin concentration of 10^−7 nmol L^−1, we found the most statistically significant reduction in extracellular glucose depletion (p < 0.05) in comparison to control cells (without insulin treatment) (Figure 4A). The addition of 10^−7 nmol L^−1 insulin promoted a time-dependent lowering (p < 0.05) of glucose consumption between 24-48 h when compared to control cells (Figure 4B). These results clearly showed that IR-C cells maintained insulin resistance for a period of 48 h after insulin treatment. After 48 h, the cells returned to a normal condition, as in control cells (C). These results are consistent with the increasing glucose uptake shown in Figure 3B after 72 h, when no statistically significant difference (p > 0.05) in glucose consumption was observed compared to control cells without insulin treatment. Furthermore, the molecular mechanisms leading to glucose homeostasis and/or IR are still uncertain.
However, NLL γ-conglutin may be able to contribute to this process of glucose homeostasis: we have demonstrated in the current study that glucose uptake by IR-C cells is clearly induced by treatment with γ-conglutin protein, reaching the highest glucose uptake levels when IR-C cells were challenged with 25 µg of γ-conglutin protein, with glucose uptake increased by more than 60% in comparison to IR-C cells assayed without a γ-conglutin challenge (p < 0.05) (Figure 4C).

Figure 4. (A) Increasing concentrations of insulin from 10^−9 to 10^−5 nmol/L showed that the cell cultures took up the least glucose at 10^−7 nmol/L in comparison to C cell cultures; this concentration was taken as the level of insulin at which cells acquired the resistant state. (B) C cells were cultured for 24, 48 and 72 h, testing the glucose uptake of cultures including 10^−7 nmol/L insulin (white bars) in comparison to control C cells (black bars). These assays showed that the insulin-resistant state is preserved for 48 h. p* < 0.05 IR-C versus C. (C) Glucose consumption by IR-C cells promoted by γ-conglutin at 0, 10, 25 and 50 µg was assayed after 24 h of culture. Values are shown as the mean ± SD from three independent experiments. p < 0.05 represents statistically significant differences associated with each figure. p* < 0.05 treated cells (µg) versus control.
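As a side note, the percent-change arithmetic behind the Figure 4C comparison can be sketched in a few lines. Uptake is the drop in extracellular glucose over the culture period, and the treatment effect is reported relative to untreated IR-C cells; the readings below (mmol/L) are invented placeholders, not the study's data.

```python
# Sketch of the glucose-uptake percent change versus untreated IR-C cells.

def glucose_consumed(initial_mmol_l: float, final_mmol_l: float) -> float:
    """Glucose removed from the medium by the cells over the culture period."""
    return initial_mmol_l - final_mmol_l

baseline = glucose_consumed(25.0, 21.0)   # untreated IR-C cells (hypothetical)
treated = glucose_consumed(25.0, 18.4)    # IR-C + 25 ug gamma-conglutin (hypothetical)

pct_increase = 100.0 * (treated - baseline) / baseline
print(f"glucose uptake increased by {pct_increase:.0f}% vs untreated IR-C")  # ~65%
```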
The treatment of pancreatic IR-C cells with γ-conglutin was also carried out to determine whether this protein improved insulin resistance by recovering control-like mRNA expression levels of IRS-1, GLUT-4, and PI3K, key upstream and glucose-transport mediators in the insulin signaling pathway [20], which would also reflect a potential improvement in glucose uptake and in the inflammatory state of IR-C cells. The analysis of IRS-1, GLUT-4 and PI3K showed up-regulation of their mRNA expression after γ-conglutin treatment of IR-C cells (Figure 5) [IRS-1: +70; GLUT-4: +97%; and PI3K: +90-fold, respectively], differences that were statistically significant compared to untreated IR-C cells (p < 0.05) (Figure 5A), as well as reduced mRNA expression levels of IRS-1, GLUT-4 and PI3K in IR-C cells [IRS-1: −93; GLUT-4: −84%; and PI3K: −89-fold, respectively] compared to control PANC-1 cells (Figure 5A). We have also demonstrated that NLL γ-conglutin can reverse this state by up-regulating IRS-1, GLUT-4 and PI3K functional protein levels in IR-C cells [IRS-1: +266; GLUT-4: +185; and PI3K: +144-fold, respectively] (Figure 5B), after the decreased protein levels shown when PANC-1 control cells acquired the IR-C state compared to the control group [IRS-1: −302; GLUT-4: −310; and PI3K: −166-fold, respectively] (Figure 5B). These results suggest that γ-conglutin protein may significantly reduce blood glucose levels by promoting glucose uptake by insulin-sensitive tissues while ameliorating hyperglycemia via increases in GLUT-4 glucose transporter protein levels and plasma membrane recruitment [34], and in the insulin signaling pathway upstream mediators IRS-1 and PI3K [20]. Furthermore, we also evaluated the capability of γ-conglutin protein to regulate the mRNA and protein levels of pro-inflammatory molecules as a potential mechanism helping to reverse the IR-C cell state. TNFα, IL-1β and iNOS were analyzed in IR-C cultures (Figure 3). These pro-inflammatory mediators were significantly lowered in γ-conglutin-treated IR-C cells, both at the mRNA expression level [TNFα: −158; IL-1β: −144; and iNOS: −164-fold, respectively, versus untreated IR-C cells] (Figure 3A) and at the protein level [TNFα: −189; IL-1β: −146; and iNOS: −97-fold, respectively, versus untreated IR-C cells] (Figure 3B, Supplementary Table S6).
No statistically significant differences (p > 0.05) were found for TNFα, IL-1β and iNOS levels in IR-C cells treated with γ-conglutin in comparison to the PANC-1 control group (Figure 3). These results highlight the potential of γ-conglutin to improve insulin resistance through amelioration of inflammation at the molecular level in PANC-1 pancreatic cells by decreasing cytokine and iNOS levels [20]. In this study, we have demonstrated for the first time that NLL γ-conglutin protein can help improve the insulin-resistant state of the PANC-1 cell line by targeting two major molecular signaling crossroads, restoring functional levels of insulin activation pathway mediators while decreasing the levels of several pro-inflammatory mediators, which in turn reinforces the first effect on PANC-1 cells. These outcomes are important knowledge for the development of successful anti-inflammatory, insulin-sensitizing alternative therapies from natural plant sources.

Oxidative Stress Modulation by γ-Conglutin Protein as an Anti-Inflammatory and Insulin Resistance Improvement Mechanism

Oxidative stress, understood as the cellular state of excess reactive oxygen species (ROS) production, is a main factor in the development of T2DM [35], through promoting the development of IR. Furthermore, high blood glucose sustained over long periods damages the enzymes superoxide dismutase (Cu/Zn-SOD) and catalase (CAT) and the glutathione molecule, the most important elements of the cellular antioxidant defense system [36]. Thus, excessive ROS production contributes to oxidative stress, a pro-inflammatory state, and mitochondrial dysfunction, which in turn exacerbate IR [37]. A comprehensive understanding of the relationship between oxidative stress and T2DM risk factors (inflammation and IR) is needed in order to improve diabetes prevention and limit its associated complications. In this regard, signaling molecules such as nitric oxide (NO) play a critical role in the pathogenesis of inflammation, acting as pro-inflammatory molecules together with cytokines and chemokines (e.g., TNFα, IL-6, IL-12) under oxidative stress situations, i.e., IR, because of excessive NO and ROS production [38], promoting islet β-cell apoptosis [39] and the progression of diseases concomitant with inflammation [40]. In the present study, we evaluated oxidative homeostasis in inflammatory LPS-induced PANC-1 cells, as well as in the IR-C cell model, after treatment with γ-conglutin protein. In both cases, we assessed ROS production by measuring the levels of protein carbonylation, the covalent modification of proteins induced by ROS (i.e., H2O2 or other molecules derived from the oxidative stress process), using an OxyBlot protein oxidation detection immunoassay [41], and comparing them with control cells, LPS-treated cells and IR-C cells, respectively, without any γ-conglutin challenge. Very low levels of protein oxidation, generated through normal metabolic activity, were observed in control cells not treated with LPS (Supplementary Figure S3A), as well as in control PANC-1 cells before induction of the IR-C state (Figure 6A). However, ROS production was significantly increased (p < 0.05) after LPS treatment (+677-fold, Supplementary Figure S3A) and in IR-C cells (+445-fold, Figure 6A), as significantly (p < 0.05) increased levels of protein carbonylation were detected.
Treatment of these cells with γ-conglutin protein restored the oxidative balance in both situations (LPS-induced cells: −423-fold, and IR-C cells: −445-fold, respectively; Supplementary Figure S3A, Figure 6A) in comparison to their respective inflammation-induced states. These results suggest that γ-conglutin protein efficiently limits ROS production (oxidative stress) in PANC-1 cells after induction of the inflammatory state, and that γ-conglutin exhibited a strong antioxidant effect, since the protein ameliorated the oxidative stress induced by LPS and in the IR-C cell model. Interestingly, the present and future related studies would benefit from further comparative analyses using other types of cell cultures, such as primary islets and/or pancreatic β-cells and/or adipocytes, to determine actions related to insulin secretion and islet inflammation.

Figure 6. GSH and NO production, as well as SOD and catalase activities, were measured. Data represent mean ± SD from three independent experiments. p < 0.05 represents statistically significant differences associated with each figure. p* < 0.05 IR-C versus control PANC-1 cells; p** < 0.05 IR-C + γ-conglutin versus IR-C. Challenges were made with 25 µg of γ-conglutin.

Removal of free radicals is strongly dependent on enzymatic activities such as superoxide dismutase (Cu/Zn-SOD) and catalase (CAT) and on glutathione (GSH) levels, which represent crucial indicators of the cellular antioxidant capacity and of the oxidative stress state of the cell [35]. In the current study, we assessed the modulation of these antioxidant factors by γ-conglutin in inflammatory LPS-induced PANC-1 cells, as well as in the IR-C cell model, by measuring SOD and catalase activities, GSH levels and NO production before and after treatment with γ-conglutin (Supplementary Figure S3B, Figure 6B). We found statistically significantly (p < 0.05) decreased levels of GSH (LPS-induced inflammation cells: −660-fold; IR-C cells: −949-fold, respectively) (Supplementary Figure S3B, Figure 6B). Furthermore, the levels of SOD and catalase activity were strongly reduced after the same treatments with γ-conglutin protein in the LPS-induced inflammatory state (SOD: −677-fold; catalase: −142-fold, respectively) (Supplementary Figure S3B) and in IR-C cells (SOD: −183-fold; catalase: −33-fold, respectively) (Figure 6B). These data show that high GSH levels and low SOD and catalase activities may be regulated by γ-conglutin protein through direct or indirect effects that prevent oxidative modification of lipids and proteins, which is also supported by the concomitant large reduction in oxidative carbonylation (Supplementary Figure S3B, Figure 6B), and by an overall improvement in the oxidative stress balance, translating into amelioration of the molecular inflammatory state of the cell by γ-conglutin as an antioxidant protein.
Furthermore, we analyzed NO production in both induced-inflammation cell models treated with γ-conglutin protein for 24 h. Statistically significantly decreased levels of NO were found (p < 0.05) in LPS-induced cells (−351-fold, Supplementary Figure S3B) and IR-C cells (−91-fold, Figure 6B) in comparison to inflammation-induced cells without γ-conglutin treatment, showing again how γ-conglutin is able to ameliorate the inflammatory state of cells by lowering NO [42] and iNOS expression levels, with potential uses in the improvement of T2DM and other inflammation-based diseases. These novel results clearly indicate that oxidative stress is a major target of NLL γ-conglutin protein, which improves the stress balance through reduced ROS-related pro-inflammatory mediators and increased antioxidative molecules. Indeed, such data can be helpful for the development of future antioxidant and anti-inflammatory therapeutics that avoid the oxidative-stress activation of inflammatory mediators involved in several chronic diseases, with the advantage of being a natural product from lupin seeds that can be implemented as a functional food.

Conclusions

In this study, treatment of LPS-induced inflammation and IR-C models in the PANC-1 pancreatic cell line with NLL γ-conglutin protein promoted: (i) lower mRNA and protein expression levels of key pro-inflammatory mediators such as TNFα, IL-1β, and iNOS; (ii) up-regulated mRNA expression and increased protein levels of IRS-1, p85-PI3K, and the GLUT-4 transporter, which are crucial biomarkers of insulin signaling pathway activation; this up-regulation makes possible the recovery of the physiological, control-like condition of the cells from an induced inflammatory state; (iii) glucose uptake in IR-C cells; (iv) a significant decrease (p < 0.05) in the protein levels of the pro-inflammatory mediators INFγ, IL-6, IL-12, IL-17 and IL-27; (v) a significant drop in oxidative stress in LPS-induced inflammation and IR-C pancreatic cells, as indicated by reduced levels of protein carbonylation, improved glutathione (GSH) levels and lower SOD and catalase antioxidant enzymatic activities; and (vi) reduced NO production and down-regulation of iNOS in both LPS-induced inflammation and IR-C pancreatic cells. This study is the first to describe the anti-inflammatory effects at the molecular level of the legume 7S basic globulin protein family, or γ-conglutin, providing strong evidence that NLL γ-conglutins can play a crucial role in the development of novel functional foods and therapeutic options for the prevention and treatment of inflammation-related diseases.
MicroRNA-150 Is a Potential Biomarker of HIV/AIDS Disease Progression and Therapy

Background: The surrogate markers of HIV/AIDS progression include CD4 T cell count and plasma viral load, but their reliability has been questioned in patients on anti-retroviral therapy (ART). Five microRNAs (miRNAs) - miR-16, miR-146b-5p, miR-150, miR-191 and miR-223 - in peripheral blood mononuclear cells (PBMCs) were earlier found to assign HIV/AIDS patients into groups with varying CD4 T cell counts and viral loads. In this pilot study, we profiled the expression of these five miRNAs in PBMCs, and two of these miRNAs (miR-146b-5p and miR-150) in the plasma, of HIV/AIDS patients, including those on ART and those who developed ART resistance, to evaluate if these are biomarkers of disease progression and therapy.

Results: We quantified miRNA levels by quantitative reverse transcription polymerase chain reaction (qRT-PCR) using RNA isolated from PBMCs and plasma of healthy persons or HIV-infected patients who were (1) asymptomatic; (2) symptomatic and ART naïve; (3) on ART; and (4) failing ART. Our results show miR-150 (p<0.01) and, to a lesser extent, miR-146b-5p (p<0.05) levels in PBMCs to reliably distinguish between ART-naïve AIDS patients, those on ART, and those developing drug resistance and failing ART. The plasma levels of these two miRNAs also varied significantly between patients in these groups and between patients and healthy controls (p values <0.05).

Conclusions: We report for the first time that PBMC and plasma levels of miR-150 and miR-146b-5p are predictive of HIV/AIDS disease progression and therapy.

Introduction

The pathogenesis of HIV/AIDS involves dynamic host-virus interactions, leading to a state of immune activation [1]. The median interval between HIV infection and the development of AIDS is 5-10 years in adults. Though not fully understood, variations in viral strains, host immune responses, microbial contact and environmental cofactors may contribute to this broad window. To assess disease progression and the efficacy of ART in patients with HIV-1 infection, three classes of surrogate markers have been used. These include HIV viral load, CD4+ T-cell numbers and plasma concentrations of soluble markers of immune activation, including neopterin, tumor necrosis factor alpha (TNFα), interleukins, beta-2 microglobulin, soluble CD8, etc. [2]. The best predictor for the onset of AIDS is the percentage or absolute number of circulating CD4+ T cells in peripheral blood [3]. But, while T cell counts and viral loads are important predictors of disease progression, this has been questioned in patients on highly active anti-retroviral therapy (HAART) [4]. The CD4+ T-cell counts of HIV patients on HAART do not reliably identify individuals with virological failure [5]. A recent review of all the current biomarkers for HIV disease progression concluded that their clinical utility remained debatable [3]. Therefore, it is important to discover newer classes of biomarkers for early detection, disease progression and therapy. Micro ribonucleic acids (miRNAs) are an abundant class of small RNAs of 18-25 nucleotides that post-transcriptionally regulate over 30% of the protein-coding genes in humans [6]. In the most recent release, 2578 mature human miRNA sequences have been identified (Sanger miRBase release 20; http://www.mirbase.org/). During virus infection and replication, host and viral RNAs and miRNAs interact in various ways, mutually regulating their levels and translational competence.
Several reports on the differential expression of host and viral miRNAs and their roles in HIV infection were published recently [7][8][9][10][11][12]. While several small RNAs complementary to the HIV env, nef and/or LTR sequences directly inhibit viral replication in vitro [12], other miRNAs target critical host factors. For example, miR-17-5p and miR-20 inhibit HIV by reducing expression of the PCAF histone acetyltransferase [13]. Increased expression of miR-28, miR-150, miR-223 and miR-382 was recently credited with the inhibition of HIV-1 replication in monocytes [9]. A profiling study identified 62 differentially regulated miRNAs in the peripheral blood mononuclear cells (PBMCs) of HIV/AIDS patients with different CD4 counts and viral loads [10]. Of these, 59 miRNAs were down-regulated and 3 were up-regulated. Among the down-regulated miRNAs, miR-16, miR-146b-5p, miR-150, miR-191 and miR-223 are abundantly expressed in B and T lymphocytes, and their levels correlated broadly with disease status [10]. Since imperfect complementarity allows a single miRNA to potentially target multiple mRNAs [14], and cellular mRNAs involved in the differentiation of hematopoietic cells and the regulation of immune cell function are major targets of miRNA-mediated regulation [15], we sought to assess the status of these five miRNAs during the progression of HIV infection and ART. MicroRNAs are far more stable than mRNAs and have more plasticity in their cellular effects [14]. They have also been identified in the plasma and sera of healthy individuals and those with pathologic conditions, opening up the possibility of exploring miRNAs as disease biomarkers [16]. The evaluation of tissue-specific miRNAs in plasma has shown good promise for biomarkers in leukemia [17,18], liver injury [19], viral hepatitis [20,21] and cardiac disease [22]. With information from earlier studies on miRNA expression in HIV/AIDS [7][8][9][10][11] and other diseases [18][19][20][21][22], we hypothesized that differential regulation of miR-16, miR-146b-5p, miR-150, miR-191 and miR-223 [10] in PBMCs and plasma may be predictive of the status of HIV disease progression and response to therapy. In this pilot study, we have quantified the levels of these five miRNAs in PBMCs, and those of miR-146b-5p and miR-150 also in the plasma, of patients at different stages of HIV infection, on ART and those who showed ART resistance. To our knowledge, this is the first report of simultaneous miRNA measurements in PBMCs and plasma from HIV/AIDS patients on ART and those displaying resistance to ART. Our results show PBMC and circulating plasma miR-150, and to a lesser extent miR-146b-5p, to be novel candidate biomarkers of HIV infection and disease.

Study subjects

A total of 37 HIV-infected subjects (28 male, 9 female; average age 33 yr) at different stages of disease and treatment were included in this study. These were divided into Groups 1 to 4, as detailed in Methods. The distribution across groups and their median CD4 counts (number/µl) and viral loads (copies/ml) was as follows:

Differential miRNA expression in PBMCs

Five miRNAs (miR-16, miR-146b-5p, miR-150, miR-191 and miR-223) were quantified in the PBMCs of HIV/AIDS patients at different stages of disease using TaqMan miRNA assays and compared to healthy controls. The miRNA expression levels were normalized using the stably expressed small nucleolar RNA 44 (RNU44), and are shown as fold change ± standard error of the mean (±SEM) (Fig. 1).
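To make the normalization step concrete, here is a minimal sketch of the comparative ΔΔCt calculation behind the fold changes in Fig. 1, assuming duplicate-averaged Ct values per sample: each miRNA Ct is normalized to the endogenous control (RNU44), referenced to the mean ΔCt of the healthy-control group, and expressed as 2^−ΔΔCt. All Ct values below are invented for illustration.

```python
import numpy as np

def fold_changes(ct_mirna, ct_control, reference_dct):
    """Relative expression via the comparative ddCt method."""
    dct = np.asarray(ct_mirna) - np.asarray(ct_control)  # dCt per sample
    ddct = dct - reference_dct                            # ddCt vs healthy controls
    return 2.0 ** (-ddct)                                 # fold change

# Healthy-control reference dCt (hypothetical Cts, e.g., for miR-150 vs RNU44):
healthy_dct = np.mean(np.array([24.1, 23.8, 24.4]) - np.array([25.0, 24.7, 25.2]))

# Patient samples (hypothetical Cts):
fc = fold_changes([25.9, 26.3, 25.7], [25.1, 25.0, 24.9], healthy_dct)
print(f"mean fold change = {fc.mean():.2f} +/- {fc.std(ddof=1)/np.sqrt(fc.size):.2f} (SEM)")
```

A mean fold change below 1 in this sketch would correspond to the kind of down-regulation reported for miR-150 in ART-naïve AIDS patients.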
Though the five miRNAs were down-regulated to various extents in HIV-infected persons when compared to healthy controls, only three of these reached statistical significance (Fig. 1). Though no significant change was observed in asymptomatic persons (Group 1) compared to healthy controls, miR-146b-5p and miR-150 were significantly down-regulated to 0.51±0.08-fold (p<0.05) and 0.48±0.10-fold (p<0.01), respectively, in ART-naïve AIDS patients (Group 2). Following at least 6 months on ART (Group 3), the expression levels of both these miRNAs attained levels similar to healthy controls (Fig. 1). However, in patients who became resistant to first-line ART (Group 4), the expression levels of miR-146b-5p and miR-150 again fell to 0.62±0.15-fold (p>0.05) and 0.50±0.06-fold (p<0.01), respectively, compared to healthy controls. There were no significant differences in the mean cycle threshold (Ct) values of the endogenous control RNU44 (ANOVA; p-value 0.52).

Relative expression of plasma miR-150 and miR-146b-5p

Since miR-150 and miR-146b-5p are also reported as circulating miRNAs in plasma, we evaluated their plasma levels in the four groups of patients and in healthy controls. The levels of these miRNAs were normalized using the expression levels of miR-16, a widely used endogenous reference for the measurement of plasma miRNAs; the samples were also spiked with synthetic cel-miR-39 as an external control. Though the magnitudes of the expression level changes were different, the same trend was observed regardless of whether miR-16 or cel-miR-39 was used as the normalizer (Fig. 2). The plasma levels of miR-150 (Fig. 2a,b) and miR-146b-5p (Fig. 2c,d) were significantly up-regulated in the asymptomatic and symptomatic groups compared to healthy controls; in patients on ART these were similar to the levels in healthy controls. While miR-146b-5p levels were significantly increased in the ART resistance group compared to patients on ART, we observed a further reduction in miR-150 levels in the ART resistance group.

Absolute quantitation of miR-150 and miR-146b-5p in plasma

We then quantified the absolute amounts of these two miRNAs in the plasma from the various groups of HIV/AIDS patients and healthy controls [27,28]. The absolute amount of each miRNA was calculated with respect to standard curves based on serial dilution (10^9 to 10^3 copies) of spiked synthetic miR-150 or miR-146b-5p and analyzed using the same TaqMan microRNA assay. The Ct values for each sample reaction were converted to absolute copy numbers based on these standard curves. All reactions were carried out in duplicate and the absolute quantities are presented as mean ± SEM copies/ng total RNA (Fig. 3). Compared to healthy controls with 5307±1846 copies, the plasma levels of miR-150 were higher in the asymptomatic (20138±12418), symptomatic (31310±11696) and ART (10705±3300) groups of patients; however, this reached significance only in the symptomatic group (p<0.01). Significantly lower levels of miR-150 were observed in the ART resistance group (633±192) compared to either healthy controls or patients on ART (p<0.05). The miR-150 levels were also significantly lower in the ART group when compared with the symptomatic group (p<0.05) (Fig. 3a). Compared to healthy controls with 10297±5069 copies, the plasma levels of miR-146b-5p were elevated in asymptomatic (18649±8982) and symptomatic (12106±3776) patients, but the changes were not significant.
Patients on ART showed significantly lower plasma levels of miR-146b-5p (925±382) when compared to healthy controls and symptomatic patients. The patients with ART resistance had

Correlation of PBMC and plasma miRNAs with CD4 cell counts and viral loads

To determine whether any of the miRNA measurements could be utilized for monitoring HIV/AIDS patients with or without ART, we determined the correlation between miRNA levels and the existing surrogate markers - CD4+ T-cell counts or viral loads. Only the 2^−ΔΔCt value (or fold change relative to healthy controls) for miR-150 in PBMCs positively correlated with CD4 cell counts across the 37 samples belonging to different disease stages, with a Pearson correlation of 0.64 and p<0.01 (Fig. 4a). Thus, the expression levels of miR-150 increase with increasing CD4 counts. In the same manner, we found the relative expression of miR-146b-5p (Pearson correlation = −0.36; p<0.05) (not shown) and miR-16 (Pearson correlation = −0.34, p<0.05) (Fig. 4b) to correlate negatively with viral loads. The plasma levels (relative or absolute) of miR-150 and miR-146b-5p showed no significant correlation with CD4+ T-cell counts or viral loads. The miRNA correlations that were observed in PBMCs with CD4+ T-cell counts and viral loads were lost in plasma due to a further reduction in miRNA expression in the ART resistance group.

Diagnostic accuracy of miR-150 and miR-146b-5p in PBMC and plasma as biomarkers

We were interested in knowing whether the measured miRNA levels, especially those of miR-150 and miR-146b-5p in PBMCs and plasma, can differentiate between different stages of HIV disease. The expression value of each miRNA in each sample was calculated by dividing the Ct value of the target miRNA by the Ct value of the endogenous controls - RNU44 for PBMCs and miR-16 for plasma [29]. The study subjects were initially segregated into two sets. Set A included healthy controls, HIV-infected asymptomatic subjects (Group 1) and HIV-infected persons on ART (Group 3); Set B included HIV-infected symptomatic ART-naïve subjects (Group 2) and those failing first-line ART (Group 4). The diagnostic accuracy for predicting subjects belonging to Set A or Set B was evaluated by Receiver Operating Characteristic (ROC) curve analysis. The results showed the area under the curve (AUC) value for PBMC miR-150 to be 0.82 (SE = 0.06, p<0.001) (Fig. 5a). In a similar analysis, PBMC miR-146b-5p showed an AUC value of 0.50 (SE = 0.10, p>0.05) (Fig. 5b). Since PBMC miR-16 levels showed significant correlation with viral loads, we also carried out a ROC analysis of its ability to differentiate between Sets A and B. This gave an AUC value of 0.60 (SE = 0.08, p<0.05) (Fig. 5c). The diagnostic accuracies of the other individual miRNAs in PBMCs, as well as of the combination of all five miRNAs, were found to be poor (data not shown). The ROC analysis for copy numbers of plasma miR-150 and miR-146b-5p in these two sets showed AUC values of 0.50 and 0.60, respectively (p>0.05). To further test whether these miRNAs could differentiate HIV-infected asymptomatic and symptomatic ART-naïve subjects (Set C) from patients on ART (Set D), another ROC analysis was carried out. This showed the AUC values for PBMC and plasma miR-150 to be 0.94 (SE = 0.04, p<0.001) and 0.82 (SE = 0.09, p<0.001), respectively (Fig. 6a,b; left panels).
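The ROC workflow just described can be sketched in a few lines. This is not the authors' "ROC Analysis" software; it is a generic illustration using scikit-learn, with invented normalized expression values (target Ct divided by control Ct) and binary set labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# 0 = Set A (healthy / asymptomatic / on ART), 1 = Set B (symptomatic / failing ART).
# Scores and labels below are hypothetical placeholders, not the study's data.
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
scores = np.array([0.93, 0.95, 0.91, 1.00, 0.94, 1.02, 1.05, 0.99, 1.04, 0.96])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc:.2f}")

# Youden's J statistic picks the cut-off with the best sensitivity/specificity trade-off.
best = thresholds[np.argmax(tpr - fpr)]
print(f"optimal cut-off = {best:.2f}")
```

An AUC near 0.5 means the marker cannot separate the two sets (as for PBMC miR-146b-5p between Sets A and B), while values approaching 1.0 indicate high diagnostic accuracy.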
When we compared the patients on ART (Set D) with those failing first-line ART (Set E), the AUC values of PBMC and plasma miR-150 were 0.98 (SE = 0.02, p<0.001) and 0.90 (SE = 0.08, p<0.001) (Fig. 6a,b; right panels). The PBMC and plasma levels of miR-146b-5p showed high enough accuracy to differentiate between Set C and Set D, with AUC values of 0.75 (SE = 0.11, p<0.05) and 0.98 (SE = 0.02, p<0.001), respectively; however, these did not accurately differentiate between patients in Set D and Set E (Fig. 6c,d). Together these results show PBMC and plasma miR-150 levels, and to a lesser degree miR-146b-5p levels, to be good predictors of HIV infection and disease in pre- and post-ART patients.

Discussion

We assayed five miRNAs in HIV/AIDS patients, which are expressed mainly in B and T lymphocytes, the major constituents of PBMCs, and were previously shown to correlate with CD4+ T-cell counts and viral loads in HIV-infected persons [9,10,12]. Later, we also checked the plasma levels of two of these miRNAs (miR-150 and miR-146b-5p) that were differentially expressed in patients' PBMCs. The analysis of only selected miRNAs was a proof-of-principle study to assess their utility as alternative biomarkers, and to develop correlations between miRNA levels and disease status. The results presented in this report show miR-150 to potentially be a new biomarker for disease progression, therapy and resistance to therapy. We found that miR-150 levels decreased in the PBMCs of HIV/AIDS patients; these were restored with ART but were further reduced in patients who developed drug resistance. On the other hand, miR-150 levels increased in patients' plasma and were reduced following ART and drug resistance. We also show that HIV-positive individuals can be classified on the basis of the absolute quantities of miR-150 and miR-146b-5p, either in PBMCs or in plasma. Further, the ART status of patients can be determined from the levels of these miRNAs, especially that of miR-150 in PBMCs and plasma. The diagnostic accuracy was determined from a ROC analysis. It is generally accepted that AUC values of 0.70-0.90 represent medium accuracy and 0.90-1.00 signify high accuracy [30]. With AUC values of 0.94 and 0.82, respectively, miR-150 levels in PBMC and plasma appear to determine HIV disease progression with good precision. The PBMC and plasma levels of miR-146b-5p also provide decent correlation, except between patients on ART and those failing therapy. Down-regulation of miR-150 during HIV infection was reported earlier. Among elite controllers and viremic HIV patients, miR-150 levels in PBMCs showed positive correlation with CD4+ T-cell counts [7]. Swaminathan et al. reported lower levels of miR-150 in CD4+ T cells of chronic HIV patients compared to healthy controls [8]. It has been suggested that suppression of miR-150, which is usually expressed at high levels in monocytes, might facilitate HIV-1 infection [9]. It is a key regulator of immune cell differentiation and activation and is expressed in mature, resting B and T cells, but not in their progenitors. Ectopic expression of miR-150 in hematopoietic stem cell progenitors decreased the numbers of mature B cells by blocking the pro-B to pre-B cell transition [31]. MiR-150 controls c-Myb expression and affects lymphocyte development, with knockout mice showing increased naïve B cell expansion and antibody production [32].
Interestingly, during HIV infection, hyperactivated naïve B cells are a major source of abnormal IgG production, leading to hypergammaglobulinemia [33]. A recent report provides important insights into miRNA and mRNA deregulation during HIV infection, and showed the cellular transcriptome to be significantly modulated by HIV-1 through miRNAs [34]. However, unlike our and other studies, these authors did not observe changes in miR-150 or miR-146b-5p between uninfected healthy controls, infected persons with low viral load (<40 copies/ml) or high viral load (>50,000 copies/ml). These differences might be due to the classification of patients based on the extremes of viral load [34] and not on CD4 counts, as in other studies. Since our aim was to discover new miRNA biomarkers for HIV infection, we preferred to study this in PBMCs, which are an accessible cellular compartment, relate to immune responses and include cells that are major targets of HIV [7]. Although the frequencies of different cell populations in PBMCs vary across individuals, they generally include CD4+ T lymphocytes (25-60%), CD8+ T lymphocytes (5-30%), monocytes (10-30%), B cells (5-20%), NK cells (5-20%) and dendritic cells (1-2%). As miR-150 is present in CD4+ T lymphocytes [35], CD8+ T lymphocytes [36], monocytes [9] and B cells [37], we assume that the miR-150 that we measured mostly came from these cells, which constitute around 90% of PBMCs. Although the proportion of cells in PBMCs infected with HIV-1 varies from 0.1 to 13.5% [38], the pronounced down-regulation of miR-150 in this compartment reinforces the earlier speculation of bystander effects resulting from systemic changes in cellular activation, cytokine levels, etc. following HIV infection [11]. Elevated circulating lipopolysaccharide (LPS), which is correlated with the depletion of CD4 cells during chronic HIV infection and AIDS [39], may also reduce miR-150 levels in leukocytes [40]. It was shown earlier that miRNAs are stably expressed in animal serum/plasma and that their unique expression patterns may serve as "fingerprints" for a number of diseases, especially various cancers [41]. Therefore, we examined whether the levels of miR-150 and miR-146b-5p showed similar trends in plasma as in PBMCs. Contrary to PBMCs, we found increased levels of these two miRNAs in the plasma of symptomatic patients, with the levels reducing on ART and ART resistance. Though most studies found the same trend of alteration between circulating miRNAs and tissue miRNAs [42,43], the opposite has also been reported [44]. With the initiation of ART, miR-150 levels in PBMCs returned to normal, and on ART resistance these were again significantly reduced. There could be multiple reasons for this, including epigenetic changes during prolonged ART administration. We have also observed gene expression patterns and cytokine levels in ART-resistant patients to sometimes be the opposite of what is expected (S. Munshi, et al.; unpublished data). With reduced levels of these miRNAs in the PBMCs of HIV/AIDS patients, we expected to see the same in plasma. But since circulating miRNAs are derived not only from circulating peripheral blood cells but also from other tissues affected by the infection/disease [41], we assume that the increase in circulating miR-150 is due to cellular sources other than PBMCs.
Circulating miRNAs are either released due to cytolysis or tissue injury, in apoptotic bodies, or actively secreted from cells in small membranous shedding vesicles, called exosomes and microvesicles, or as RNA-protein complexes [45]. While miR-150 and other miRNAs are synthesized in different cells and tissues, it would be interesting to define the sources of these circulating miRNAs in HIV/AIDS. The high levels of ceramide in HIV-infected cells [46], which promote exosome secretion [47], or ongoing apoptosis in bystander cells could be possible sources of the increased circulating miR-150 and miR-146b-5p in HIV patients. Ceramide is a bioactive sphingolipid that is increased when cultured neurons are exposed to HIV gp120 and Tat proteins, as well as in the brain tissue and cerebrospinal fluid of patients with HIV-associated dementia [48]. Proinflammatory cytokines like tumor necrosis factor-α (TNF-α) and interleukin-1β, which are produced in high quantities during HIV infection, can also induce the generation of ceramide and apoptosis in brain cells [49], and may play a role in the increased secretion of miRNA-containing exosomes or apoptotic bodies.

Study subjects

Whole blood was collected from HIV/AIDS patients recruited from the National AIDS Control Organization ART Clinics at Dr. Ram Manohar Lohia Hospital and Maulana Azad Medical College Hospital in New Delhi, India. Ethics committees at the participating institutions and the National AIDS Control Organization (NACO), New Delhi, India, approved the study. Written informed consent was obtained from each participant before obtaining the samples. All subjects were HIV seropositive; those on anti-tubercular therapy were excluded from the study. They were divided into four groups based on their CD4 counts and ART status, as follows: Group 1, CD4 >350/µl and ART naïve (asymptomatic); Group 2, CD4 <200/µl and ART naïve (symptomatic); Group 3, CD4 >250/µl and receiving ART for at least 6 months (on ART); and Group 4, CD4 <200/µl and resistant to first-line ART (ART resistance). Patients in the different groups were selected to ensure no overlap in CD4 counts, to keep the groups distinct. The clinical, immunological and ART data on subjects were collected from the participating hospitals.

RNA preparation from PBMCs

Peripheral blood mononuclear cells were isolated using Ficoll-Hypaque from 5 ml of blood, which was collected in K+EDTA-coated vacutainers (Becton Dickinson, Franklin Lakes, NJ, USA). The isolated PBMCs were stored at −80°C till further use. Total RNA was extracted from PBMCs with the RNeasy Mini Kit (Qiagen, Germany) according to the manufacturer's instructions. The RNA concentration was estimated on a NanoDrop 1000 (Thermo Scientific, Wilmington, DE, USA); representative samples were also randomly checked for RNA integrity and concentration by capillary electrophoresis on an Agilent 2100 Bioanalyzer (Agilent Technologies, Inc., Santa Clara, CA).

RNA preparation from plasma

Total RNA was isolated from plasma using the miRNeasy kit (Qiagen, Germany) with a few minor modifications. In brief, 600 µl of QIAzol reagent was added to 100 µl of plasma sample. The sample in the tube was mixed, followed by the addition of 3.5 µl (1.6×10^8 copies/µl) of C. elegans miR-39 (Qiagen, Germany) and 140 µl of chloroform. After vigorous mixing for 15 seconds, the plasma sample was centrifuged at 12,000×g for 15 min and the upper aqueous phase was transferred to a new tube. To this, 1.5 volumes of ethanol were added and the sample was applied directly to the column.
The immobilized RNA was eluted in 50 µl of RNase-free water and was quantified using a NanoDrop 1000 spectrometer (Thermo Scientific, Wilmington, DE, USA). The efficiency of small RNA isolation was monitored by the amount of spiked-in C. elegans miR-39 recovered, which was used as an internal control for normalizing the expression of miR-146b-5p and miR-150.

miRNA assay

The miRNAs were assayed individually in each sample using TaqMan MicroRNA Assays (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's protocol. For synthesis of each miRNA-specific cDNA, 10 ng of total RNA was reverse transcribed using the TaqMan miRNA reverse transcription kit (Applied Biosystems, Foster City, CA, USA) in a 15 µl reaction volume containing 1X RT buffer, 0.15 µl of 100 mM dNTPs (with dTTP), 0.19 µl of RNase inhibitor (20 units/µl), 1 µl of MultiScribe Reverse Transcriptase (50 units/µl) and 3 µl of each of the miRNA-specific stem-loop primers. The primers used were: hsa-miR-16, 000391; hsa-miR-146b-5p, 001097; hsa-miR-150, 000473; hsa-miR-191, 2299; hsa-miR-223, 000526; and cel-miR-39, 000200 (Applied Biosystems, Foster City, CA, USA). The mixture was incubated at 16°C for 30 min, 42°C for 30 min and 85°C for 5 min. Quantitative real-time PCR was then carried out on the StepOne Plus cycler (Applied Biosystems). Briefly, each 20 µl reaction consisted of 2.5 µl of the reverse transcription product, 10 µl TaqMan 2X Universal PCR Master Mix No AmpErase UNG, and 1 µl TaqMan MicroRNA Assay (20X) containing the TaqMan primer-probe mixture. Reactions were initiated with a 10 min incubation at 95°C, followed by 40 cycles of 95°C for 15 sec and 60°C for 60 sec. Small nucleolar RNA 44 (RNU44) was used as an endogenous control to normalize miRNA expression in PBMCs, as used by others [17,23]. In the plasma cDNA we also measured the levels of endogenous miR-16 and spiked-in synthetic cel-miR-39, and these were used as internal controls to normalize the expression of miR-150 and miR-146b-5p [18]. To determine the absolute copy numbers of miR-150 and miR-146b-5p, standard curves were prepared for each miRNA by serial dilution of synthetic miR-150 or miR-146b-5p (Sigma Aldrich, Bangalore, India), taking 10^3, 10^5, 10^7 and 10^9 copies for each miRNA-specific reverse transcription reaction in separate tubes, followed by real-time PCR using the respective TaqMan MicroRNA assays. All experiments were performed in duplicate for each sample, and reverse transcriptase-negative controls were included in each batch of reactions.

Viral load measurement

HIV-1 plasma viral loads were quantitated by an in-house reverse-transcriptase TaqMan real-time PCR assay using specific primers and probes from a conserved region of the gag gene [24]. Briefly, viral RNA was isolated from 100 µl of plasma with the QIAamp Viral RNA Mini Kit (Qiagen, Germany) and subjected to reverse transcription using SuperScript III reverse transcriptase as per the manufacturer's protocol. The real-time PCR was performed on a StepOne Plus cycler using HIV-1 primers and an internal TaqMan probe (Applied Biosystems). A plasma sample that contained 150,000 copies/ml (NIH AIDS Reagent Bank) was similarly treated and used to obtain the standard curve.

Statistical analysis

The relative expression levels of miRNAs were calculated using the comparative ΔΔCt method, as described previously. The fold changes in miRNAs were calculated by the equation 2^−ΔΔCt [25,26]. Expression data are presented as mean ± standard error of the mean (SEM) with 2-tailed p values.
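The absolute quantification step described above (converting sample Cts to copy numbers via the serial-dilution standard curve) amounts to a linear fit in log10 space. Here is a minimal sketch under that assumption; the dilution Cts are invented placeholders (a slope near −3.32 corresponds to ~100% PCR efficiency).

```python
import numpy as np

# Hypothetical duplicate-mean Cts for the 10^3 ... 10^9 copy dilutions.
std_copies = np.array([1e3, 1e5, 1e7, 1e9])
std_ct = np.array([33.1, 26.5, 19.8, 13.2])

# Fit: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def ct_to_copies(ct: float) -> float:
    """Invert the standard curve to get the absolute copy number."""
    return 10 ** ((ct - intercept) / slope)

for ct in (24.7, 29.3):  # hypothetical sample Cts
    print(f"Ct {ct} -> {ct_to_copies(ct):,.0f} copies")
```

Dividing the resulting copy number by the nanograms of total RNA in the reaction gives the copies/ng values reported in Fig. 3.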
Correlation analysis was performed using the two-tailed Pearson correlation test. Sensitivity, specificity, and the area under the curve (AUC) for specific miRNAs were estimated by Receiver Operating Characteristic (ROC) analysis using the "ROC Analysis" software (Watkins, M. W. 2002, State College, PA: Ed & Psych Associates). Data were analyzed using Student's t-test or ANOVA, and p-values of 0.05 or lower were considered significant.

Conclusions

With the aim of identifying new biomarkers of HIV infection and disease, we evaluated the levels of select miRNAs in PBMCs and plasma of HIV/AIDS patients. An ideal biomarker should not only indicate disease progression, response to therapy and failure of therapy, but should also have a role in the natural history of infection and disease. From that standpoint, this pilot study showed that the expression levels of miR-150 in PBMCs and plasma could be a good indicator of the status of HIV disease.
The Role of Cold-Sensitive Ion Channels in Peripheral Thermosensation

The detection of ambient cold is critical for mammals, who use this information to avoid tissue damage by cold and to maintain stable body temperature. The transduction of information about environmental cold is mediated by cold-sensitive ion channels expressed in peripheral sensory nerve endings in the skin. Most transduction mechanisms for detecting temperature changes identified to date depend on transient receptor potential (TRP) ion channels. Mild cooling is detected by the menthol-sensitive TRPM8 ion channel, but how painful cold is detected remains unclear. The TRPA1 ion channel, which is activated by cold in expression systems, seemed to provide an answer to this question, but whether TRPA1 is activated by cold in neurons and contributes to the sensation of cold pain continues to be a matter of debate. Recent advances have been made in this area of investigation with the identification of several potential cold-sensitive ion channels in thermosensory neurons, including two-pore domain potassium channels (K2P), GluK2 glutamate receptors, and CNGA3 cyclic nucleotide-gated ion channels. This mini-review gives a brief overview of the way by which ion channels contribute to cold sensation, discusses the controversy around the cold-sensitivity of TRPA1, and provides an assessment of some recently-proposed novel cold-transduction mechanisms. Evidence for another unidentified cold-transduction mechanism is also presented.

INTRODUCTION

All biological processes are affected by temperature, so to maintain optimal function in the face of external thermal challenges it is crucial for animals to detect the temperature both of their bodies and the environment, and to react appropriately. The somatosensory neurons that sense external temperature are pseudo-unipolar cells whose cell bodies are located in the dorsal root ganglia (DRG), located alongside the spinal cord.
Activation of thermal transduction mechanisms in the sensory nerve endings of these neurons leads to depolarization and consequently the firing of action potentials along the axons that carry information about the intensity and duration of the stimulus to the spinal cord. This information is then relayed to the brain via the spinothalamic tract (Palkar et al., 2015). Cold activation of peripheral nerves can produce one of two sensations: moderate innocuous cold produces a sensation of pleasant coolness, while noxious cold produces a sensation of pain that triggers reflexes allowing animals to avoid tissue damage. Information about the cold is conducted by thinly myelinated Aδ-fibers and unmyelinated C-fibers. There are two types of cold-sensitive C-fiber: low-threshold fibers that are activated around 28°C and fire action potentials at a high rate, producing a sensation of coolness; and high-threshold fibers, also typically heat- and mechano-sensitive, that are activated around 5°C and fire action potentials at a slow rate, producing a sensation of pain (Grossmann et al., 2009). Several recent reviews have covered aspects of the molecular basis of cold sensation in mammals (Himmel and Cox, 2017; Lamas et al., 2019; MacDonald et al., 2020). In this review, we focus on the detection of external cold temperature mediated by peripheral somatosensory nerve fibers. Figure 1 outlines the main mechanisms that have been proposed to date.

COLD-SENSITIVE TRP CHANNELS

Most sensory transduction is mediated by ion channels. For example, mechanosensation is mediated by Piezo ion channels (Coste et al., 2010) and sour taste is sensed by acid-sensing ion channels (ASICs; Lingueglia, 2007). The main sensors of temperature in the nervous system are ion channels of the transient receptor potential (TRP) family. TRP channels have six transmembrane domains and are permeable to cations, such as Na+, K+, and Ca2+ (Julius, 2013). When thermosensitive TRP channels are activated by temperature, they open to allow cations into the cell, which depolarizes the membrane, leads to the generation of action potentials that signal the sensation to the CNS, and also causes a rise in the intracellular calcium concentration, either by direct Ca2+ influx through the calcium-permeable TRP channels themselves, or by triggering the activation of voltage-gated Ca2+ channels (Palkar et al., 2015). The temperature-sensitivity of ion channels can be quantified by their Q10 value, defined as the fold change in the current passing through the ion channel resulting from a 10°C change in temperature. Most ion channels have a Q10 value between 1 and 3, while thermosensitive ion channels are defined as those having a Q10 value greater than 3. According to this definition, the cold-sensitive TRP channels are TRPM8, TRPA1, and TRPC5 (Wang and Siemens, 2015).

Transient Receptor Potential Melastatin 8 (TRPM8)

TRPM8 is a non-selective cationic channel that can be activated by cold between 25°C and 18°C, with a Q10 value of 24 (McKemy et al., 2002; Peier et al., 2002; Brauchi et al., 2004). TRPM8 may be gated directly by cold, but it is also weakly voltage-gated, and cold could activate the channel by shifting its voltage-dependent activation curve in a positive direction relative to the level of the resting membrane potential (McKemy et al., 2002; Brauchi et al., 2004; Voets et al., 2004; Karashima et al., 2007; Zakharian et al., 2010).
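Since the Q10 coefficient defined above does the classificatory work in this section, a minimal sketch of the arithmetic may help. The function below is one common convention (conventions vary, and reported Q10s are magnitudes; for cold-activated channels the fold increase is taken over the cooling direction); the current and temperature values are invented, not measurements from any cited study.

```python
def q10(i_ref: float, t_ref: float, i_test: float, t_test: float) -> float:
    """Q10 = (I_test / I_ref) ** (10 / (T_test - T_ref)), temperatures in degC."""
    if t_test == t_ref:
        raise ValueError("temperatures must differ")
    return (i_test / i_ref) ** (10.0 / (t_test - t_ref))

# A hypothetical channel whose current doubles over a 5 degC step:
print(q10(1.0, 25.0, 2.0, 30.0))  # 4.0 -> thermosensitive by the Q10 > 3 criterion
```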
Direct activation of TRPM8 by cold in neurons was confirmed via overexpression in mouse hippocampal neurons, which renders them cold-sensitive (de la Peña et al., 2005). The TRPM family of ion channels also contains the warmth-sensitive TRPM2 ion channel, which is closely related to TRPM8, but other members of this family are thermally insensitive (Togashi et al., 2006; Tan and McNaughton, 2016). Consistent with its range of thermal activation in vitro, TRPM8 also plays an important role in the sensation of non-noxious cool temperatures in vivo. For example, TRPM8-deficient mice have attenuated responses in the evaporative cooling assay. However, these mice can still sense noxious cold (Bautista et al., 2007; Dhaka et al., 2007). Even TRPM8/TRPA1 double KO mice display no deficits in noxious cold sensation (Brenner et al., 2014). Other studies, on the other hand, have reported that TRPM8 is required for both neural and behavioral responses to noxious cold in mice (Knowlton et al., 2010, 2011). On balance, TRPM8 plays a clear role in the sensation of non-noxious cool temperatures but is not the only cold transduction mechanism. Which other candidates have been proposed?

Transient Receptor Potential Canonical 5 (TRPC5)

TRPC5 is a non-selective cation channel that is activated in expression systems by a fall in temperature in the range between 25 and 37 °C, with a Q10 value of ~10 (Okada et al., 1998; Zimmermann et al., 2011). The channel was found to contribute to cold responses of DRG neurons in vitro, but TRPC5 KO mice displayed no difference in temperature preference compared with WT mice (Zimmermann et al., 2011), suggesting that the ion channel is not involved to any significant extent in physiological cold-sensation.

Transient Receptor Potential Ankyrin 1 (TRPA1)

TRPA1 is a non-selective cation channel that can be activated by cold below 17 °C in some expression systems, with a Q10 value of ~10 (Story et al., 2003; Sawada et al., 2007; Karashima et al., 2009). TRPA1 can be activated by various other noxious stimuli, including toxic bacterial products and environmental irritants (Viana, 2016). hTRPA1 inserted in lipid bilayers is activated by both cold and heat, conferring a U-shaped thermal sensitivity (Moparthi et al., 2014, 2016). However, recombinant rat and human TRPA1 overexpressed in human embryonic kidney (HEK293) cells failed to respond to a 5 °C cold stimulus, suggesting that the cold-sensitivity of TRPA1 is not intrinsic but results from a variable factor present in some cells but not others (Jordt et al., 2004; Nagata et al., 2005). hTRPA1 can be directly activated by [Ca2+]i (Doerner et al., 2007; Zurborg et al., 2007; Moparthi et al., 2020), so some of these contradictory findings could be explained by indirect activation of TRPA1 via a background Ca2+ influx caused by cooling. Such a background cold-activated [Ca2+]i increase has been observed in most cell lines (Caspani and Heppenstall, 2009). Experiments addressing the cold-sensitivity of TRPA1 in neuronal cell cultures also remain inconclusive. One study found that TRPA1 was expressed by 46.5% of rat DRG neurons, but only 3.6% of DRG neurons responded to a noxious cold stimulus (Jordt et al., 2004). Similarly, 96% of TRPA1-expressing rat trigeminal ganglion (TG) neurons did not respond to a 5 °C cold stimulus, as measured by in vitro Ca2+ imaging (Jordt et al., 2004). These experiments indicate that the expression of TRPA1 alone is not sufficient to generate cold responses.
Additionally, neurons of TRPM8 KO mice that remained cold-sensitive in the absence of TRPM8 did not respond to the TRPA1 agonist mustard oil, showing that TRPA1 does not underlie their cold responses (Bautista et al., 2007). Only a small percentage of cold-sensitive mouse trigeminal neurons were activated by TRPA1 agonists (Madrid et al., 2009). In another study, only 8% of TRPA1-expressing DRG neurons responded to a 4 °C cold stimulus in mice, showing that TRPA1 expression alone does not render neurons cold-sensitive; however, the amplitude of the cold-activated Ca2+ influx in these neurons was reduced after application of the selective TRPA1 antagonist HC030031, suggesting that TRPA1 does contribute to their cold responses (Memon et al., 2017). Unfortunately, the authors did not provide data regarding the size of this decrease, which makes it difficult to conclude that TRPA1 was solely responsible for the observed Ca2+ influx. Therefore, these results do not prove that TRPA1 is directly cold-sensitive, and they could still be explained by indirect activation of TRPA1 via a background cold-activated Ca2+ influx, as suggested previously (Caspani and Heppenstall, 2009). Behavioral studies using TRPA1 KO mice also report conflicting results. TRPA1 KO mice were indistinguishable from WT littermates in the two-plate thermal choice test (Bautista et al., 2006, 2007). Others have reported that TRPA1 did contribute to cold-sensation, but only in female mice (Kwan et al., 2006), or only in male mice (Winter et al., 2017). Another study found no significant difference between the responses of both male and female TRPA1/TRPM8 double KO mice when compared with their TRPM8 KO littermates (Brenner et al., 2014). Others report that TRPA1 KO mice did exhibit a partial deficit in cold thermo-sensation, but only during some cold stimulus protocols (Karashima et al., 2009). TRPA1 may contribute not to cold-sensation but to cold hypersensitivity (del Camino et al., 2010; Tsagareli et al., 2019), and it may not be required for any neural or behavioral responses to cold (Knowlton et al., 2010). An in vivo Ca2+ imaging study of the DRG showed no difference in responses to mild or intense cooling between TRPA1 KO and WT mice (Ran et al., 2016). In rats, the TRPA1 agonist cinnamaldehyde applied to the skin did not sensitize noxious cold-evoked hind limb withdrawal but did sensitize noxious heat withdrawal mediated by C-fibers (Dunham et al., 2010). In agreement, TRPA1 in mammalian somatosensory neurons has recently been found to be heat-sensitive in at least some neurons (Vandewauw et al., 2018), echoing its heat-activation in the exquisitely heat-sensitive thermal sensors of pit vipers (Gracheva et al., 2010; Chen et al., 2013). Furthermore, TRPA1 mediates itch-related heat hyperalgesia in mice (Tsagareli et al., 2019). Taken together, these studies indicate that TRPA1 is not a key contributor to physiological cold-sensation.

EVIDENCE FOR ALTERNATIVE COLD TRANSDUCTION MECHANISMS

There are two populations of cold-sensitive neurons in the rat DRG responding to decreases in temperature between 32 and 12 °C. One population (>70% of cold-sensitive neurons) is sensitive to the TRPM8 agonist menthol and therefore presumably expresses TRPM8, and a second population is not sensitive to menthol (Babes et al., 2004). Furthermore, knockout of TRPM8 in mice caused only a partial reduction in the number of DRG neurons that responded to cooling (from 14.9% to 7.6%; Dhaka et al., 2007).
Therefore, another cold-sensitive mechanism must be present in mouse somatosensory neurons. These observations cannot be explained by the presence of TRPA1, because a third of cold-sensitive mouse DRG neurons lack both TRPM8 and TRPA1 channels (Munns et al., 2007). The sympathetic nervous system is also intrinsically cold-sensitive. Postganglionic sympathetic neurons of the mouse superior cervical ganglion (SCG) can be directly activated by cold <16 °C (Smith et al., 2004; Munns et al., 2007). A physiological function for this sensory mechanism has not yet been identified. Sympathetic nerves trigger cold defense mechanisms, such as cutaneous vasoconstriction, shivering thermogenesis, and brown adipose tissue thermogenesis (Morrison, 2016). Perhaps the thermo-sensitive properties of sympathetic neurons serve to enhance these functions. Around 60% of mouse SCG neurons express a cold-sensitive ion channel whose activation results in an influx of Ca2+. This Ca2+ influx is not triggered by the TRP channel agonists menthol or mustard oil and so cannot be mediated by either TRPM8 or TRPA1 (Munns et al., 2007). Similarly, the cold-sensitivity of TG neurons that innervate the dental pulp is not mediated by either TRPM8 or TRPA1 (Michot et al., 2018). These studies suggest that there are at least two mechanisms of cold transduction: TRPM8, and an unknown Ca2+ influx mechanism that is not TRPA1. Apart from TRP channels, a few other types of ion channel have been proposed to function as cold sensors, including two-pore domain potassium channels (K2P; Maingret et al., 2000; Kang et al., 2005), epithelial sodium channels (ENaC; Askwith et al., 2001), GluK2 glutamate receptors (Gong et al., 2019), and CNGA3 cyclic nucleotide-gated ion channels (Feketa et al., 2020). The possible contribution of these ion channels to cold sensation will now be discussed. A comparison of the thermosensitive properties of these ion channels is provided in Supplementary Table S1.

Epithelial Sodium Channels (ENaC)

The constitutively active Na+ current of human cationic epithelial sodium channels (ENaC) can be potentiated by cold below 23-25 °C, with a Q10 value of 4.4, in heterologous cells and mouse DRG neurons, but only in the presence of protons (Askwith et al., 2001). Therefore, these channels could potentially play a role in cold transduction by depolarizing the membrane and triggering action potential firing. However, this hypothesis is challenged by the finding that the ENaC antagonist amiloride did not affect cold-activated currents in rat DRG neurons (Reid and Flonta, 2001).

Two-Pore Domain Potassium Channels (K2P)

The K2P family consists of 15 genes that together make an important contribution to the native background K+ leak currents observed in many neurons. The main function of K2P channels is to control membrane excitability by setting the resting membrane potential (Enyedi and Czirják, 2010). These channels may form a cold transduction mechanism as follows: K2P channels are normally open at physiological temperatures, but some close when exposed to cold, thus causing a net depolarization of the cell; the membrane potential may thus reach threshold, opening NaV and CaV channels and generating action potentials. KO mice have been generated to establish the function of these channels in vivo.
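Before turning to the knockout data, the proposed K2P mechanism can be illustrated with a back-of-the-envelope chord-conductance model of the membrane. The sketch below is a minimal illustration under assumed parameters; the conductances and reversal potentials are hypothetical round numbers, not measured K2P values.

```python
# Illustrative two-conductance (chord-conductance) membrane model showing how
# cold-induced closure of a background K+ leak (e.g., a K2P channel) would
# depolarize the membrane. All parameter values are hypothetical.

E_K, E_NA = -90.0, 60.0  # reversal potentials, mV

def resting_potential(g_k: float, g_na: float) -> float:
    """Steady-state membrane potential (mV) for two ohmic conductances."""
    return (g_k * E_K + g_na * E_NA) / (g_k + g_na)

warm_vm = resting_potential(g_k=10.0, g_na=1.0)  # full K+ leak: ~ -76 mV
cold_vm = resting_potential(g_k=3.0,  g_na=1.0)  # leak partly closed: ~ -53 mV

print(f"warm: {warm_vm:.1f} mV, cold: {cold_vm:.1f} mV")
# The ~20 mV depolarization could carry the nerve ending past the threshold
# for NaV/CaV activation and action potential firing, as described above.
```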
TREK1 and TRAAK KO mice have no obvious phenotype (Heurteaux et al., 2004; Alloui et al., 2006; Noël et al., 2009), but TREK1/TRAAK double KO mice are more sensitive to cold in the cold plate assay and temperature preference test (Noël et al., 2009). Similarly, both TREK2 and TRESK KO mice have a somewhat enhanced sensitivity to moderately cool temperatures (Pereira et al., 2014; Guo et al., 2019; Castellanos et al., 2020). The studies summarized above suggest a role for K2P channels in thermosensation. Whether the inhibition of the background K+ current mediated by these channels merely modulates neuronal excitability or constitutes a cold-transduction mechanism in its own right needs further study.

Glutamate Ionotropic Receptor Kainate Type Subunit 2 (GluK2)

A recent genetic screen has put forward another candidate cold sensor by showing that the glutamate receptor GLR-3 is necessary for cold-avoidance behavior in Caenorhabditis elegans (Gong et al., 2019). GLR-3 is both an ionotropic and a metabotropic receptor, coupled to the inhibitory G protein Gi/o. The mammalian homolog GluK2 also responds to cold <18 °C when overexpressed in Chinese hamster ovary (CHO) cells, as measured by Ca2+ imaging, and this Ca2+ influx is reduced in mouse DRG neurons treated with mGluK2 siRNA in vitro. Curiously, the observed cold-activated Ca2+ influx was shown to be independent of the ionotropic function of GluK2 itself. Furthermore, it was abolished in the absence of extracellular Ca2+, indicating that this Ca2+ increase is not mediated by Ca2+ release from intracellular stores. Therefore, it must be mediated by another unidentified membrane channel present in all cell types tested, including CHO, COS-7, HeLa, and DRG neurons (Gong et al., 2019). The molecular pathway by which the activation of an inhibitory Gi/o-coupled receptor such as GluK2 can lead to activation of a Ca2+ channel needs further study.

Cyclic Nucleotide-Gated Channel Alpha 3 (CNGA3)

Another novel candidate for the unidentified cold sensor is the cation channel CNGA3. CNGA3 is not directly gated by cold, but its activation by cGMP can be potentiated by cold below 22 °C, with a Q10 value of 6.5 (Feketa et al., 2020). The cation channel CNGA3 was first discovered to be cold-sensitive in Grueneberg ganglion neurons, located in the vestibule of the murine nose (Stebe et al., 2014). These neurons transduce coolness via a cGMP cascade (Mamasuew et al., 2010). Additionally, CNGA3 is responsible for the cold responses of a subpopulation of neurons in the thermoregulatory center of the hypothalamus in mice (Feketa et al., 2020). Interestingly, CNGA3 is also enriched in a subpopulation of cold-sensitive DRG neurons (Luiz et al., 2019). Further research is needed to determine whether CNGA3 contributes to cold responses in these neurons.

Modulation of Cold Responses by Background Potassium Currents

The temperature threshold of cold-sensitive neurons is determined not only by the expression of cold transduction channels but can also be modulated by a cold-insensitive "excitability brake current," IKD, carried by K+ ions, which controls neuronal excitability (Viana et al., 2002). IKD has been reported to play a role in cold-sensitive mouse neurons expressing either TRPM8 or TRPA1 (Madrid et al., 2009; Memon et al., 2017). It is thought to be mediated by KV1 channels, as it can be blocked by dendrotoxins (Madrid et al., 2009; Teichert et al., 2014).
When activated, IKD causes hyperpolarization of the sensory nerve, making it less sensitive to depolarization by cold-sensitive ion channels.

Mechanisms Transmitting Cold Responses

Transmission of the sensation of extreme cold to the CNS has been the subject of two interesting studies. Zimmermann et al. (2007) found that while many mechanisms mediating axonal excitability were inhibited by strong cold, activation of the sodium channel NaV1.8 was maintained, with the result that conduction of action potentials, and hence the sensation of noxious cold, was preserved in normal mice but was lost in mice in which NaV1.8 had been deleted (Zimmermann et al., 2007; Abrahamsen et al., 2008). A more recent article found, in contrast, that NaV1.8 was in general not expressed in the same neurons as the cold sensor TRPM8, nor in neurons responding to cold down to 5 °C. Moreover, the deletion of NaV1.8 had little effect on cold responses until the temperature had reached strongly noxious levels (<5 °C; Luiz et al., 2019). Both articles agree, though, that responses to extreme cold (<5 °C) are ablated by the deletion of NaV1.8.

CONCLUDING REMARKS

Cold is sensed by specialized sensory nerve endings in the periphery. In these nerve endings, a combination of ion channels is responsible for transducing the sensation of cold. The role of TRPM8 in innocuous cold sensation has been well established, but which combination of cold transduction molecules is responsible for the sensation of noxious cold remains unclear. The vexed question of whether TRPA1 accounts for any fraction of noxious cold sensation has entertained the field for the last decade or more. It is clear at least that TRPA1 is not the only noxious cold sensory mechanism, because: (a) cold-sensitive neurons remaining after deletion of TRPM8 do not in general express TRPA1; (b) most TRPA1-expressing neurons are not cold-sensitive; and (c) mice still exhibit strong cold-aversive responses after deletion of both TRPM8 and TRPA1. On balance, it seems clear that any contribution of TRPA1 to noxious cold sensation is small, and other mechanisms must exist. Several studies provide evidence for the presence of an unidentified Ca2+ influx mechanism activated by noxious cold in vitro (Smith et al., 2004; Munns et al., 2007; Gong et al., 2019). Novel candidates for this mechanism have been proposed recently, none of which belong to the TRP ion channel family. Each of these discoveries raises further questions: (a) K2P channels make a major contribution to the neuronal resting potential, and action potentials can be initiated when the activity of these channels is suppressed by cold. Do K2P channels form an independent cold transduction mechanism, or do these channels function exclusively to modulate cold responses? (b) GluK2, which appears to act both as an ionotropic glutamate receptor and as a metabotropic cold sensor, has recently been suggested as a novel cold-sensitive mechanism. But how can activation of an inhibitory G protein-coupled receptor such as GluK2 lead to action potential firing? And (c) the cGMP-gated channel CNGA3 has also recently emerged as a candidate cold sensor in the Grueneberg ganglion of rodents. By which pathway might CNGA3 mediate cold transduction in somatosensory nerves? Answering these questions is crucial for completing our understanding of the mechanisms that underlie cold sensation.
Future research may provide a much-needed target for pharmaceutical intervention in patients who suffer from hypersensitivity to cold, such as those with fibromyalgia, chemotherapy-induced cold hypersensitivity, and dental cold hypersensitivity.

AUTHOR CONTRIBUTIONS

TB and PM wrote the manuscript.

FUNDING

This work was supported by a grant from the Wellcome Trust to PM (205006/Z/16/Z).

ACKNOWLEDGMENTS

We thank Bruno Vilar and Larissa Garcia Pinto for critically reading the manuscript.
v3-fos-license
2018-11-10T00:08:39.617Z
2016-07-20T00:00:00.000
138301910
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/51271", "pdf_hash": "621ff3a521e5ff4922062116e6f6ae29153e3654", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41476", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "27dc6ddc03335604ed394d38f484ec3109af6119", "year": 2016 }
pes2o/s2orc
Carbon Nanotube–Polymer Composites: Device Properties and Photovoltaic Applications

This chapter provides in-depth coverage of recent advances in the development and characterization of electro-optically active, device-grade carbon nanotube (CNT)–polymer blends. These new organic–inorganic multifunctional nanocomposites share many advanced characteristics which make them ideally suited for industrial-scale, high-throughput manufacturing of lightweight, flexible electronic, light-switching and light-emitting as well as energy-harvesting devices of extremely low cost. The fundamental aspects and the physical mechanisms controlling light–matter interaction, photo-conversion, and photo-generated charge-carrier transport in these nanotube–polymer composites, as well as the influence of the processing conditions on the electronic properties and device-related performance, are reviewed and discussed.

Introduction

Blends of conjugated polymers and high-performance carbon-based nano-semiconductors are an emerging class of easy-to-fabricate organic-inorganic nanocomposite materials with the potential to profoundly influence many electronic device market segments, including optoelectronics. The extraordinary characteristics of carbon nanotubes (CNTs) and the prevalence of interfacial regions and the nanoscopic phase are a source of drastic change and gain in the opto-electrical response of the polymer matrix that typically falls outside the classical scaling behavior of conventional polymer composites. These novel nanocomposites and their based devices can be fabricated using roll-to-roll techniques, which makes them ideally suited to industrial-scale, high-throughput manufacturing of lightweight, flexible electronic, light-switching and light-emitting as well as energy-harvesting devices at extremely low cost [1-6]. Conjugated polymers exhibit electronic and light emission properties that are similar to those of crystalline semiconductors and have already been implemented in organic optoelectronic devices such as organic light-emitting diodes (OLEDs), switches, and organic photovoltaic (OPV) cells [7]. Incorporating n-type dopants in the form of metallic CNTs into a p-type polymer matrix has been shown to greatly enhance the performance of such OPV cells by increasing the rate of non-radiative dissociation of excitons as well as the charge-carrier collection efficiency. The formation of an optimally loaded, electrically conductive nanotube network in turn entails detailed consideration of the influence of the process parameters on the physical characteristics and interaction of the polymer with the nanotubes in a liquid phase. As the absorption coefficient of photosensitive polymers remains large, light is typically absorbed within a very thin layer, which drastically benefits the efficiency-to-cost ratio for these cells [8,9]. The π-conjugation in polymers results in an energy separation of ~1-3 eV between the lowest unoccupied molecular orbital (LUMO) and the highest occupied molecular orbital (HOMO). As a result, the light absorption-emission spectrum falls in the visible to near-infrared (NIR) spectral range, which complements that of single-walled carbon nanotubes (SWNTs), that is, the near-IR to UV [2, 10-12]. An abrupt, type-II band alignment between the polymer matrix and the carbon phase is required, and can be realized for many nanotube-polymer composites, to achieve sufficiently fast interfacial charge separation and a pronounced photovoltaic effect [13,14].
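As a quick sanity check on the spectral ranges quoted above, the photon-energy relation E = hc/λ (≈1240 eV·nm/λ) maps the ~1-3 eV HOMO-LUMO separation onto absorption edges of roughly 1240-413 nm, i.e., the visible to NIR window. A minimal sketch of the conversion:

```python
# Conversion between optical gap (eV) and absorption-edge wavelength (nm),
# using E [eV] = 1239.84 / wavelength [nm] (i.e., E = hc / lambda).

HC_EV_NM = 1239.84

def gap_to_wavelength_nm(gap_ev: float) -> float:
    return HC_EV_NM / gap_ev

for gap in (1.0, 2.0, 3.0):  # typical conjugated-polymer gaps, eV
    print(f"{gap} eV -> {gap_to_wavelength_nm(gap):.0f} nm")
# 1.0 eV -> 1240 nm (NIR); 2.0 eV -> 620 nm (visible); 3.0 eV -> 413 nm
```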
Among different classes of nanomaterials, including semiconductor quantum dots and fullerenes, SWNTs have proven particularly suitable for use in OPV, photodetector, and light-emitting diode applications based on conjugated polymers because of their large aspect ratio and remarkable optoelectronic properties, including bandgap tunability, strong optical absorptivity, ballistic transport, solution-processability, and excellent chemical stability [15,16]. Owing to their quasi-one-dimensional structure and improved transport characteristics, a class of SWNTs has been confirmed to exhibit many favorable device functionalities which make them attractive for application in a variety of nano-electronic and mechanical devices and systems, among which are interconnects, rectifiers, field-effect transistors, and analyte and light sensors. Compared with other nanostructures, SWNTs are also known to exhibit strong multi-range absorption, in part associated with resonance-type interband electronic transitions (e.g., S11, M11, S22) as well as free-carrier and plasmonic excitations. Recent experiments further confirmed the presence of a strong photoconduction response in the infrared (IR), which can in turn afford many new opportunities in engineering nano-photovoltaic and optoelectronic organic polymer-SWNT-based devices operating over multiple spectral ranges, including the IR [17-19]. High conversion efficiencies of ~5 and 9% were extracted in the case of polymer-based OPV cells featuring C60 molecules and CNTs, respectively. Yet, unlike C60, polymers incorporating aligned CNTs demonstrate much larger intrinsic charge mobility at a lower percolation threshold/limit. At the same time, the increased photo-generated charge transport, and in turn collection efficiency, facilitates the development of OPV cells featuring larger light absorption (a thicker active device layer) and electrical power output, which translates into an overall higher efficiency-to-cost ratio for these cells. Combining SWNTs with electro-optically active polymers thus provides an attractive route to creating a new generation of multifunctional device-grade organic-inorganic electronic materials for use in sensors, OLEDs, PV cells, electromagnetic absorbers, and other electronic devices [20-22]. In this chapter, we review this progress while focusing on the fundamental aspects behind the light-matter interaction, photo-conversion, and photo-carrier generation as well as charge-carrier transport in SWNT-polymer composites. The fabrication, structural-mechanical, and transport characteristics of various nanotube-polymer-based composites are reviewed in Section 2. Key photo-physical processes that take place at the interface between SWNTs and polymer molecules, including energy transfer, exciton dissociation, charge transfer, and related effects, are reviewed in Section 3. Section 4 discusses the electronic and optoelectronic devices built based on SWNT/polymer composites, including OPV cells, light-emitting diodes, and IR sensors.

Carbon nanotubes/polymer composites: synthesis and properties

Recent studies involving the fabrication and the structural and device-level characterization of these materials have identified several processing-related challenges pertaining to producing polymer/nanotube composites of high purity, structural anisotropy/alignment, and uniform dispersion [23,24].
Because of the π-orbitals of the sp2-hybridized C atoms, CNTs show a tendency for strong intermolecular interaction and spontaneous aggregation (van der Waals interaction) into large-diameter bundles that are not readily dispersible in organic solvents or a polymer matrix. To address the dispersion- and mixing-related challenges, the use of surfactants [25-31], shear mixing [32-34], sidewall chemical modification [35,36], and in situ polymerization [37-41] have been proposed. Among all these strategies, covalent chemical functionalization and the introduction of defects into SWNT surfaces have proven highly effective in achieving stable SWNT suspensions in polar solvents, as discussed below.

Defect functionalization

In defect functionalization, nanotubes are treated by oxidative methods that also help remove metal particles and amorphous carbon deposits, that is, raise purity. The resultant SWNTs oftentimes gain in localized surface defect density, most of which is in the form of carboxyl, that is, -COOH attachments. Mawhinney et al. [42] studied the surface defect site density of oxidatively treated SWNTs by probing the amounts of CO2(g) and CO(g) released during heating up to 1273 K. The results indicated that as much as ~5% of the carbon atoms in such SWNTs can be defect-associated. An acid-base titration method [43] yielded similar results, that is, 1-3% of acidic sites in purified SWNTs. The density of defective sites created at the surfaces by this method is generally viewed as insufficient for good nanotube dispersion in the polymer matrix. However, the strategy can be used for covalent attachment of organic groups by first converting them into acid chlorides that can then be linked to amines to form amides. Such modified CNTs show significantly higher solubility in organic solvents as compared with unprocessed nanotubes [44].

Covalent functionalization

Despite the fact that sp2-hybridized C atoms form a chemically stable backbone, a number of strategies have been developed to covalently link chemical groups to CNTs [62-64]. In covalent functionalization, the translational symmetry of the nanotubes is disrupted by changing sp2 carbon atoms to sp3 carbon atoms, which has been reported to affect the electronic and transport properties of nanotubes [65,66]. This route is highly effective in increasing the solubility as well as the dispersion of nanotubes in many organic solvents as well as polymers. Covalent functionalization can be accomplished either by the modification of surface-bound carboxyl groups on the nanotubes or by direct elemental reaction with carbon atoms, such as in the case of CHx-modified nanotubes. Poly(ε-caprolactone) [67,68], poly(L-lactide) [69,70], poly(methyl methacrylate) [71-73], polystyrene [74-76], poly(N-isopropyl acrylamide) [77-80], polyimide [81,82], and poly(vinyl acetate-co-vinyl alcohol) [83] have all been covalently attached to CNTs. From the standpoint of device applications, non-covalent functionalization remains preferred over the covalent approach, as the latter has the propensity to induce strong structural damage [24,84]. The dispersion of nanotubes in polymer matrices is one of the most critical bottlenecks in the preparation of CNT/polymer composites. Additional strategies to enhance the dispersion of nanotubes include melt mixing and in situ polymerization; for example, Ni et al.
confirmed considerable improvement in the dispersion of multi-walled CNTs in a poly(vinyl alcohol) (PVA) matrix through gum Arabic treatment [24].

Energy transfer in carbon nanotube-polymer composites

Absorption of a photon by aromatic polymers leads to the formation of a bound electron-hole pair known as an exciton, which can decay radiatively by emitting a lower-energy photon. The presence of semiconducting SWNTs has been shown to strongly affect the rate of radiative recombination by inducing the transfer of either holes or electrons to the nanotubes, depending on the electronic band alignment between the SWNTs and the polymer [85]. Alternatively, resonant energy transfer from polymers to SWNTs has been confirmed experimentally [86,87]. In the study of Umeyama et al. [86], a conjugated polymer, poly[(p-phenylene-1,2-vinylene)-co-(p-phenylene-1,1-vinylidene)] (coPPV), was synthesized and used to study the influence of SWNTs on the light emission characteristics of the former. UV-vis-NIR absorption and AFM measurements revealed that the SWNTs were dispersed well in organic solvents, likely via π-π interaction. The composite solution of coPPV-SWNTs exhibited a strong NIR emission originating from the SWNTs when the polymer was subjected to direct optical excitation with a light source operating at ~400-500 nm. The efficiency and rate of the energy transfer from polymers to SWNTs have been shown to be strongly dependent on the polymer concentration/aggregation on the SWNTs [22,88]. Further studies indicate that the π-conjugation chain of the polymer, which governs the energy transfer in the polymer-SWNT system, remains more extended than in the pure polymer system [85]. Massuyeau et al. [89] studied energy transfer between the polymer and nanotubes by examining steady-state PL spectra of a series of composite films containing both metallic and semiconducting nanotubes. The results of these studies show that there is a substantial spectral overlap between the PL of the polymer and the optical absorption of the SWNTs, which favors Förster energy transfer between polymer chains and CNTs.

Charge transfer in carbon nanotube-polymer composites

Combining CNTs with polymers offers an attractive route not only to mechanically reinforcing polymer films but also to enhancing the polymers' charge transport properties and modifying their electronic properties through morphological modification or electronic coupling between the two [90]. The effect of nanotube doping has been systematically investigated by embedding nanotube powders in the emission, electron transport, and hole transport layers of OLEDs [91]. Such polymer/nanotube composites have been successfully exploited for various applications including OPV cells [92-95], OLEDs [96], and organic field-effect transistors [97,98]. Among different transport models [99-104], percolation of the nanotube network within the polymer matrix has been suggested to play the primary role behind improved charge mobility, of up to two orders of magnitude compared with that in the pristine polymer. This provides a technologically simple pathway to improving the performance of organic electronic and optoelectronic devices while keeping their fabrication costs as low as possible [90]. The low dielectric constant of conjugated polymers results in large Coulomb interactions between charge carriers, which increases the exciton binding energy and shapes the photo-response characteristics.
The majority of OPV devices operate based on exciton dissociation at the interface formed by two dissimilar materials with a type-II band alignment that favors interfacial charge separation and the formation of free polarons. If the rate of bound electron-hole pair separation is low, radiative and non-radiative recombination will prevail instead, which is a primary reason behind efficiency loss. Internal electric fields at the polymer-metal interfaces (interface dipoles) or dissociation centers, for instance oxygen impurities that can act as electron traps (monopoles), promote fast exciton dissociation. As the electron affinity of conjugated polymers remains smaller [105], percolated CNTs act as high-mobility electron extraction paths or excitonic antennas. Even at low doping levels, highly conductive pathways can still be established due to the large aspect ratio and propensity of SWNTs to bundle. While photo-generated electrons tend to transfer to the SWNTs, photo-generated holes remain in the polymer matrix, which helps lower the rate of internal recombination and mitigate charge-carrier losses [13,106]. The first solid evidence of charge transfer between SWCNTs and conjugated polymers (MEH-PPV) was provided by Yang et al. [107] using photoinduced absorption spectroscopy. In their study, photoinduced charge transfer was deduced by observing a reduction of the emission from the polymer accompanied by an increase of the polaron peak in the MEH-PPV-SWCNT hybrids. Bindl et al. [108] examined exciton dissociation and charge transfer at s-SWCNT heterojunctions formed with archetypal polymeric photovoltaic materials, including fullerenes, poly(thiophene), and poly(phenylenevinylene), using an exciton-dissociation-sensitive photo-capacitor measurement technique that is advantageously insensitive to optically induced thermal photoconductive effects. It was found that fullerene and polythiophene derivatives induce exciton dissociation, resulting in electron and hole transfer away from optically excited s-SWCNTs. Significantly weaker and almost no charge transfer was observed for large-bandgap polymers, largely due to insufficient energy band offsets. In another study, Ham et al. [109] fabricated a planar nano-heterojunction comprising well-isolated millimeter-long SWNTs placed underneath a poly(3-hexylthiophene) (P3HT) layer. The resulting junctions displayed photovoltaic efficiencies per nanotube in the range of 3-4%, which exceeded those of polymer/nanotube bulk heterojunctions by almost two orders of magnitude. The increase was attributed to the absence of aggregates in the planar device geometry. It was shown that the polymer/nanotube interface itself can be responsible for the exciton dissociation, with the best efficiency realized for a ~60-nm-thick P3HT layer. Among different classes of nanomaterials, semiconducting CNTs remain the primary candidates to enhance charge separation when interfaced with conjugated polymers. The difference in the behavior of semiconducting and metallic CNTs in a polymer was studied theoretically by Kanai et al. [110], who employed density functional theory. Case studies involving poly(3-hexylthiophene) (P3HT) interfaced with semiconducting and metallic CNTs were carried out. In the case of semiconducting nanotubes, the theory predicts the formation of a type-II heterojunction, critical to photovoltaic applications.
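The donor/acceptor picture discussed above can be summarized as a small band-alignment check. The sketch below encodes the usual textbook criteria (a staggered, type-II alignment, plus a LUMO-LUMO offset exceeding the exciton binding energy for dissociation); all energy levels here are hypothetical placeholders, not measured values for P3HT or any particular nanotube.

```python
# Hypothetical band-alignment check for a donor/acceptor (D/A) pair.
# Energies in eV, referenced to the vacuum level (more negative = deeper).
# All values are illustrative placeholders.

def is_type_ii(d_homo, d_lumo, a_homo, a_lumo):
    """Staggered-gap (type-II) alignment: both donor levels lie above
    the corresponding acceptor levels."""
    return d_lumo > a_lumo and d_homo > a_homo

def dissociation_favorable(d_lumo, a_lumo, exciton_binding_ev):
    """Exciton splitting is favorable when the LUMO-LUMO offset
    exceeds the exciton binding energy E_B."""
    return (d_lumo - a_lumo) > exciton_binding_ev

donor = {"homo": -5.0, "lumo": -3.0}     # polymer-like levels (hypothetical)
acceptor = {"homo": -5.5, "lumo": -4.0}  # s-SWNT-like levels (hypothetical)

print(is_type_ii(donor["homo"], donor["lumo"],
                 acceptor["homo"], acceptor["lumo"]))       # -> True
print(dissociation_favorable(donor["lumo"], acceptor["lumo"],
                             exciton_binding_ev=0.4))       # 1.0 eV > 0.4 eV
```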
In the case of metallic nanotubes, in contrast, the same calculations found that substantial charge redistribution occurred, the built-in potential was quite small, and the P3HT became electrostatically more attractive for electrons. These observations suggest that in the case of mixed single-walled nanotubes, a majority of interfaces would be made by metallic components, compromising device performance. Similar conclusions were drawn by Holt et al. [111] in a study of P3HT-polymer/SWNT blends containing varying ratios of metallic to semiconducting SWNTs.

Electronic and optoelectronic applications of carbon nanotube/polymer composites

4.1. Organic photovoltaic devices

OPV devices based on π-conjugated polymers have been suggested as low-cost alternatives to silicon-based solar cells [106,112]. Unlike energy conversion devices based on inorganic semiconductors, organic solar cells require a donor/acceptor (D/A) interface to break photo-generated excitons into free charge carriers before they can be collected by the electrodes [113,114]. The list of requirements for materials for application in bulk PV devices includes the following: (1) strong light absorption over the whole solar emission spectrum; (2) sufficient separation between HOMO and LUMO; (3) large electron and hole mobilities within the device active layer; and (4) low device fabrication cost [22,115]. In addition to a detailed consideration of the intrinsic electronic aspects of the constituent components, geometric aspects and chemical stability play an equally important role. For example, the dimensions of the active layer should not exceed the exciton diffusion length, reportedly on the order of ~10 nm [22,113]. In CNT/polymer photovoltaic devices, the dissociation of excitons can be accomplished through the formation of a staggered-gap donor/acceptor (type-II) heterojunction between the s-SWCNTs and the polymer, in which the energy offsets at the hetero-interface exceed the exciton binding energy, EB. Recent experimental and theoretical studies by Schuettfort [116] and Kanai [110], respectively, demonstrate that a type-II band alignment only exists for certain interfaces, such as between small-diameter semiconducting SWNTs and P3HT. Even for such blends, energy transfer from the polymer to the SWNTs remains one of the fastest de-excitation channels competing with the charge transfer processes, the former being facilitated by the larger surface area and electron affinity of the nanotubes vs. polymers [105,117]. Kymakis et al. [118] examined both dark and photo current-voltage (J-V) characteristics of poly(3-octylthiophene) (P3OT)/SWNT composite photovoltaic cells as a function of SWNT concentration. An open-circuit voltage (VOC) as high as 0.75 V was obtained for a 1%-doped SWNT/P3OT composite serving as the device active layer. An almost 500-fold increase in the photo-response was attributed partly to a 50-fold increase in the hole mobility, due to a reduction in the density of localized states in the P3OT matrix, and partly to enhanced exciton extraction at the polymer/nanotube junctions. Despite the improvement in the rate of charge separation, the power conversion efficiency was only 0.04% under 100 mW/cm2 illumination conditions. Poor dispersion of the SWNTs and the presence of a mixture of metallic and semiconducting tubes were believed to be the primary factors behind the low efficiency numbers.
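The efficiency figures quoted throughout this section follow from the standard relation PCE = VOC x JSC x FF / Pin. The snippet below illustrates the arithmetic; VOC and the illumination level are taken from the study above, while the JSC and fill-factor values are hypothetical, chosen only so that the result lands near the reported ~0.04%.

```python
# Standard photovoltaic efficiency arithmetic: PCE = Voc * Jsc * FF / Pin.
# Voc and Pin follow the study quoted above; Jsc and FF are hypothetical
# values chosen to reproduce the ~0.04% figure.

def pce_percent(voc_v: float, jsc_ma_cm2: float, ff: float,
                pin_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency in %, for Pin in mW/cm^2."""
    p_out = voc_v * jsc_ma_cm2 * ff  # V * mA/cm^2 = mW/cm^2
    return 100.0 * p_out / pin_mw_cm2

print(pce_percent(voc_v=0.75, jsc_ma_cm2=0.18, ff=0.30))  # -> ~0.04 (%)
```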
In 2011, the same group investigated the use of spin-coated SWNTs as a hole transport layer (HTL) in organic bulk heterojunction photovoltaic devices, shown schematically in Figure 2, to raise the conversion efficiency [119]. SWNT films of varying thickness were repetitively spin-coated with dichloroethane and then evaluated as the HTL in P3HT and 1-(3-methoxycarbonyl)-propyl-1-phenyl-(6,6)C61 (PCBM) photovoltaic devices. It was shown that insertion of a ~12-nm-thick SWNT layer led to power conversion efficiencies as high as 3.0%, compared with 1.2 and 2.8% for the devices without and with the traditional PEDOT:PSS acting as the HTL, respectively. The improved efficiency was attributed to improved hole transport in the polymer matrix due to a higher degree of crystallinity provided by the SWNTs. In another study, June et al. implemented homogeneously dispersed CNTs, chemically modified with alkyl-amide groups to improve their dispersion in an organic medium [16]. The resultant composites and their based OPV cells exhibited gains in their optical and electrical properties, with the device efficiency approaching ~4.4%. Figure 3 shows schematics of the functionalization of nanotubes with the alkyl-amide group for homogeneous dispersion in organic solvent and of the PV devices fabricated in [16]. In most OPV cells that host nanotubes, the open-circuit voltage (VOC) generally stayed below 1 V, another performance-limiting factor. Rodolfo et al. [120] were able to raise VOC by ~20% by inserting a continuous polymer layer between the electrode and the SWNTs, which helped address problems with electrical shorting and shunts caused by the metallic tubes. Some prior studies pointed out that uncontrolled interactions at the CNT-polymer interface can not only reduce the ability of the tubes to transport charge but also interfere with the photophysical processes, acting as a source of recombination centers for excitons (metallic tubes), energy quenchers (polymer-s-SWCNT interfaces), or electrical shorts (long tubes). From the standpoint of device engineering practices, a more rational design of the CNT-polymer interface across different length scales, that is, nano to meso, and careful consideration/control of intermolecular-level interactions via dispersion will be required [107,121,122]. On this front, Arranz-Andres and Blau [122] investigated the influence of the nanotube dimensions (length and diameter) and concentration on the performance of a CNT-polymer device. They found that adding 5% of nanotubes by weight increased the power conversion efficiency (PCE) by three orders of magnitude compared with that of the native polymer. The incorporation of nanotubes into the P3HT matrix favorably affected the energy levels of the P3HT and the morphology of the active layer. They also found that the nanotubes can act as nucleation sites for P3HT chains, improving charge separation and electron transport. Three-component architectures based on nanotube-fullerene-conjugated polymer composites have been proposed to achieve better photovoltaic efficiencies. Li et al. [125] suggested using C60 as an electron acceptor and nanotubes for the photo-generated charge transport. Two types of chemically functionalized nanotubes were tested: carboxylated and octadecylamine-functionalized multi-walled nanotubes, in short c-MWNT and o-MWNT.
All three photovoltaic parameters, namely the short-circuit current density, open-circuit voltage, and fill factor, of the P3HT:c-MWNT/C60-based cells showed improvements over those of the P3HT:o-MWNT/C60 cell as a result of faster electron transfer from C60 to the nanotube backbone. Derbal-Habak et al. [94] reported organic PV cells with a power conversion efficiency of 3.6% achieved by incorporating functionalized SWNTs within the P3HT:PCBM layer, which helped improve both the current density Jsc and the open-circuit voltage Voc, attributed to partial crystallization of the RR-P3HT as revealed by XRD studies. Nismy [126] probed the optical and electronic response of composite devices comprising a donor polymer and localized MWNTs, also featuring a triple heterojunction architecture/scheme. A significant improvement in photoluminescence quenching was observed for the devices with nanotubes embedded into the polymer matrix, the former facilitating the formation of trap states. The triple scheme is generally confirmed to yield a lower dark current and hence a significantly improved photovoltaic performance, with the PCE approaching ~3.8%. Relatively high PCEs of ~7.4% were demonstrated by introducing a copper-phthalocyanine derivative (TSCuPc)/SWNT layer into series-connected inverted tandem devices featuring front P3HT-ICBA and back PCBM-PCDTBT active layers (Figure 4). As summarized in Table 1, reported results from studies on CNT/polymer OPV devices reveal that the performance of nanotube-incorporated OPV cells depends on several factors, such as the device architecture, the treatment or functionalization method of the nanotubes, the type of CNTs, the concentration of nanotubes, and the thickness of the nanotube-incorporated active layer. To overcome the poor performance of bi-layer devices, which stems from the short exciton diffusion length in polymers, poor exciton dissociation, and the absence of a percolated network required for improved photo-generated charge transport, devices incorporating polymer-fullerene-based donor-acceptor (D-A) materials have been reconsidered. Comparative studies of bulk heterojunction devices vs. those with a nanotube-incorporated active layer formed by sequential deposition show that the latter architecture is prone to higher carrier recombination due to the introduction of trap states associated with the nanotubes. Photo-generated excitons are also quenched at the D/A material interface due to these additional energy levels, yielding lower Jsc values. On the other hand, the heterojunction scheme yields lower dark currents and better photovoltaic performance, confirming the very critical role of the heterojunction in devices with organic/hybrid architectures. For nanotube/polymer-based OPV cells, the nanotube type also influences device performance. While there is no clear link between the number of walls or the diameter of the nanotubes and the performance of the OPV device, semiconducting nanotubes were concluded to form the needed type-II heterojunction. In contrast, in the case of metallic nanotubes, substantial charge redistribution takes place at the interface. As a result, the built-in potential is quite small and unlikely to contribute significantly to the subsequent charge separation at this interface, leading to an inefficient PV device. The photovoltaic characteristics of the PV cells also depend on the concentration of nanotubes.
In particular, the incorporation of low concentrations of nanotubes in the photoactive layer leads to an increase of the current density Jsc. The functional groups as well as the preparation methodology are among the other factors found to influence the performance of OPV cells.

Organic light-emitting diodes (OLED)

OLEDs are indispensable to flexible light displays because of their excellent properties: they are lightweight and feature low power consumption, a wide viewing angle, fast response, low operational voltage, and excellent mechanical flexibility [127,128]. Light-emitting polymers demonstrate excellent quantum efficiencies and can be solution-processed to build electroluminescent devices of very low cost. OLEDs are generally considered "dual-injected" devices, as holes and electrons are injected from the anode and cathode, respectively, into an active molecular/macromolecular medium, where they form excitons that recombine radiatively [128,129]. Recent progress in OLEDs stems not only from the advancement of polymer science but also from achieving better control over the charge transport in the electroluminescent layers and the doping of the emissive materials [22]. A proper layer sequence in OLEDs ensures that the injected charges are properly balanced within the emissive layer to achieve high external efficiency. SWNTs introduced into conducting polymers lower the charge injection barrier formed at the electrode-organic interface and hence favorably affect the device performance [130]. One of the first studies to combine SWNTs with conjugated polymer-based OLEDs was attempted by Curran et al. [131]. The observed increase in the quantum yield was attributed to intermolecular π-π stacking interactions that take place between the polymer and the nanotubes. Polymer stiffening is another factor that can lead to an increase in the luminescence output. Moreover, when SWNTs are added, the polymer-polymer interaction, which is a source of self-quenching effects, becomes weaker. An SWNT concentration of 1% (by weight) is considered optimal/sufficient for the polymer strands to experience interaction with the nanotubes. Excess concentrations of SWNTs lead to a drop in the luminescence. Woo et al. [132] prepared double-emitting OLEDs (DE-OLEDs) based on SWNT-PmPV. Low-bias I-V characteristics obtained on the devices made from the composites were quadratic, while in the devices with pure PmPV the dependence was significantly more nonlinear, I ~ V^5; the result was explained by the presence of structural and chemical defects in the PmPV composite that favor continuous trap-limited charge transport. In a recent study, Gwinner et al. [134] investigated the influence of small amounts of semiconducting SWNTs on the characteristics of ambipolar light-emitting field-effect transistors (LEFETs) comprising polyfluorenes such as poly(9,9-di-n-octylfluorene-alt-benzothiadiazole) (F8BT) and poly(9,9-dioctylfluorene) (F8) conjugated polymers (Figure 5). Incorporating SWNTs within the semiconducting layer at concentrations below the percolation limit significantly augments both hole and electron injection, even for a large-bandgap semiconductor such as F8, without invoking significant luminescence quenching. In general, owing to the lower contact resistance and threshold voltage, larger ambipolar currents and in turn higher output/light emission can be realized. Divya et al.
[134] investigated the use of a diketone ligand, 4,4,5,5,5-pentafluoro-3-hydroxy-1-(phenanthren-3-yl)pent-2-en-1-one (Hpfppd), containing a polyfluorinated alkyl group, covalently immobilized onto the multi-walled CNT host via a carboxylic acid functionalization pathway. The resultant nanocomposite displayed intense red emission with an overall quantum yield of 27% under a wide excitation range from the UV to the visible (~330-460 nm), making it a prime candidate for application in OLEDs. Indium tin oxide (ITO) features a high transmittance at a low sheet resistance [127] and is ubiquitously employed as an OLED anode, but not without drawbacks. ITO is brittle and can suffer from cracks that lead to electrical shorting; it can serve as a source of oxygen that diffuses into emissive layers, while its work function of ~4.7 eV is insufficiently high [129,135]. On this front, SWNT sheets have been considered a viable alternative and were studied for possible use as anodes in OLEDs (Figure 6) [136]. Some recent prototypes exhibited a brightness of ~2800 cd/m2, comparable to that of OLEDs featuring ITO anodes. Zhang et al. [137] showed that arc-discharge nanotubes were overwhelmingly better electrodes than HiPCO-nanotube-based films in all of the critical aspects, including surface roughness, sheet resistance, and transparency. Arc-discharge nanotube films that were PEDOT-passivated showed high surface smoothness and featured a sheet resistance of ~160 Ω/sq at 87% transparency. Parekh et al. [138] were able to improve the conductivity of transparent SWNT thin films by treating the samples with nitric acid and thionyl chloride. Geng et al. [139] achieved a fourfold sheet conductance improvement by exposing SWNT films produced by the spray technique to nitric acid, with the treated samples demonstrating sheet resistances of ~40 and 70 Ω/sq at 70 and 80% transmittance, respectively. To break the interdependence of the sheet conductance and the transparency, a magnetic field was applied during drop-casting of SWNT-polymer films onto ITO-coated glass and ITO-coated PET substrates [140]. This led to sample de-wetting and an enhancement in the electrical conductivity of the films. For a functionalized SWNT-PEDOT:PSS film formed on an ITO-coated PET substrate, a sheet resistance of 90 Ω/sq at 88% transmittance was obtained. SWNT-PEDOT:PSS composite devices formed on a PET substrate were proposed as a way to combat the problem, with the films featuring a sheet resistance of 80 Ω/sq and a transmittance of 75% at ~550 nm. The ratio of DC to optical conductivity was higher for composites with mass fractions of 55-60 wt% than for nanotube-only films. For a ~80-nm-thick composite filled with 60 wt% arc-discharge nanotubes, this conductivity ratio was maximized at σDC/σOp = 1, with a transmittance (at 550 nm) of 75% and a sheet resistance of 80 Ω/sq. These composites also have excellent electromechanical stability, with <1% resistance change over 130 bend cycles. As outlined above, CNT/polymer composites can be incorporated into conducting polymers as a buffer layer, or used in the form of plain sheets as a flexible anode electrode in OLEDs. The characteristics exhibited by the CNT/polymer composite as the transport layer in OLEDs have been observed to change with the polymer system, as influenced by the nature of the polymer-nanotube interactions. Additionally, nanotube sheets can serve as transparent electrodes in OLEDs, which makes them a viable alternative to conventional ITO electrodes.
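The trade-off between sheet resistance Rs and transmittance T reported above is commonly condensed into the figure of merit σDC/σOp through the standard thin-film relation T = (1 + Z0·σOp/(2·Rs·σDC))^-2, where Z0 ≈ 377 Ω is the impedance of free space. The sketch below applies this textbook relation to the quoted (T, Rs) pairs for illustration; it is not taken from the cited studies themselves.

```python
# Figure of merit sigma_DC / sigma_Op for a transparent conducting film,
# inverted from T = (1 + Z0 * sigma_Op / (2 * Rs * sigma_DC))**-2.
# Standard thin-film relation, applied here for illustration only.

Z0 = 376.73  # impedance of free space, ohms

def sigma_ratio(transmittance: float, rs_ohm_sq: float) -> float:
    """sigma_DC / sigma_Op from optical transmittance (0-1, ~550 nm)
    and sheet resistance (ohm/sq)."""
    return Z0 / (2.0 * rs_ohm_sq * (transmittance ** -0.5 - 1.0))

# (T, Rs) pairs quoted above for SWNT-PEDOT:PSS films on PET:
print(round(sigma_ratio(0.75, 80.0), 1))  # T = 75%, Rs = 80 ohm/sq -> ~15
print(round(sigma_ratio(0.88, 90.0), 1))  # T = 88%, Rs = 90 ohm/sq -> ~32
```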
Infrared sensors

"Infra" in Latin means "below"; thus, IR refers to the spectral range beyond the red boundary of the visible electromagnetic spectrum, which corresponds approximately to ~0.8 μm. Since all objects emit IR radiation, an effect known as black-body radiation, seeing in the dark or through obscured conditions is possible by detecting the IR energy emitted by objects. IR imaging has therefore become a cornerstone technology for many military and civilian applications, including night vision, target acquisition, surveillance, and thermal photovoltaic devices. Biomedical imaging and light-activated therapeutics represent another critical area that particularly benefits from the high tissue transparency to IR light. Despite recent progress in the field of IR sensing and imaging, high cost, the requirement for cryogenic cooling, and spectrally limited sensitivity still remain the main disadvantages of this technology today. Two primary methods of IR detection exist: energy detection and photon detection. Energy detectors respond to temperature changes generated by incident IR radiation through changes in material properties. Energy detectors, the well-known examples of which are bolometers, pyroelectric detectors, and thermopile detectors, are normally low cost and primarily used in single-detector applications; such applications include fire and motion detection systems as well as automatic light switches and remote thermometers. In photon detectors, by contrast, light interacts directly with the semiconductor to generate electrical carriers. More specifically, incident light with energy greater than or equal to the energy gap of the semiconductor drives the semiconductor out of equilibrium by generating excess charge carriers. This translates into a change in the net resistance of the detector. Well-established examples of photon detector materials are lead sulfide (PbS) and lead selenide (PbSe). Since these detectors do not function by changing temperature, they respond much faster than energy detectors and in principle can be sensitive to a single photon if used, for instance, in conjunction with the emerging class of single-electron devices. Both the increased sensitivity and the reduced response time provided by the use of small-bandgap semiconductor materials have recently led to the development of advanced and very sophisticated IR detection systems, which are of high technological relevance today. The higher the temperature of an object, the larger the amount of thermal radiation it emits, while its peak intensity also shifts to a shorter wavelength. This strong spectral dependence of thermal radiation on temperature, also known as Wien's law, necessitates the use of materials with optimized sensitivity at multiple wavelengths for two primary reasons: (1) to increase sensitivity and (2) to enable highly selective military/civilian target identification and acquisition. Until recently, the problem was addressed through the simultaneous use of several materials with peak sensitivity at different wavelengths. As fabrication and processing change dramatically from one material system to another, the engineering of wavelength-specific and ultra-sensitive IR detectors currently remains uneconomical.
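Wien's law, mentioned above, can be written as λmax = b/T with b ≈ 2898 μm·K, which fixes the band a detector must cover for a given target temperature. A short worked example:

```python
# Wien's displacement law: peak emission wavelength lambda_max = b / T,
# with b ~= 2898 um*K. Warmer objects peak at shorter wavelengths.

WIEN_B_UM_K = 2898.0

def peak_wavelength_um(temperature_k: float) -> float:
    return WIEN_B_UM_K / temperature_k

for label, t in (("human body (~310 K)", 310.0),
                 ("room-temperature scene (~300 K)", 300.0),
                 ("hot engine part (~600 K)", 600.0)):
    print(f"{label}: peak near {peak_wavelength_um(t):.1f} um")
# ~9.3, ~9.7, and ~4.8 um respectively, i.e., mid-/long-wave IR, which is
# why multi-band sensitivity matters for target discrimination.
```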
Recent progress in the field of nanotechnologies, in particular in the area of non-lithographic fabrication of multi-functional nanomaterials such as quantum wells, wires, dots, and CNTs, opens new opportunities for advancing IR sensing technology beyond today's confines. Unlike that of semiconductor alloys, the effective energy bandgap of nanomaterials, and particularly of CNTs, can be easily tailored by simply changing their size, which enables the engineering of future IR devices with an expected spectral range of operation from ~15 to ~0.6 μm (i.e., from ~0.1 to ~2 eV). Furthermore, as electron scattering is suppressed in materials featuring one-dimensional electronic configurations, nanotube-based IR photo-detectors are expected to demonstrate orders-of-magnitude improved sensitivity at room temperature as compared with detectors based on thin films or quantum wells. This property could potentially mitigate the requirement for cryogenic cooling currently implemented in most IR photon-type sensing devices. For IR-detection applications, the alignment of many nanotubes would be highly critical from two points of view: to increase the packing density of nanotubes, and thus device sensitivity, and to realize polarization-sensitive IR optical devices. In contrast to conventional semiconductors, conjugated polymers provide dramatic benefits for engineering active optical nano-electronic and photonic devices; these include reduced processing cost, excellent physical flexibility, and large-area coverage. Until now, the application of polymers in electronic devices has been primarily limited to the visible range of the electromagnetic spectrum [142,143]. While the stability of most polymers represents a barrier to their use as UV sensors, extending their use into the IR range becomes possible by implementing CNTs for both light absorption and free carrier generation. The exciton dissociation rate can be increased by introducing heterojunctions or applying external electric fields. The former can be realized by incorporating p-type nanotubes into an n-type polymer matrix, such as the conjugated polymer PPy (pyridine-2,5-diyl), which is also known to exhibit relatively high resistance to oxidation. CNT/polymer composites feature relatively high absorption in a wide spectral range of 0.2-20 μm and an emissivity coefficient close to unity. Moreover, such composites are resistant to hard radiation damage and can work in high magnetic fields [144]. Unlike MWNTs and graphene, which possess featureless visible/NIR absorption, semiconducting SWNTs in particular exhibit strong and discrete absorption in the visible/NIR region owing to the first optically active interband transition (S11), whose energy scales inversely with the nanotube diameter. Lu et al. [148] reported a very large photocurrent in a device comprising a semiconducting single-walled carbon nanotube (s-SWCNT)/polymer blend with a type-II interface (Figure 7). The detector featured a significantly enhanced NIR detectivity of ~10^8 cm·Hz^(1/2)/W, which is comparable to that of many conventional uncooled IR sensors (Figure 8). Among other composites, polyaniline-CNT composite thin-film sensors showed an IR photosensitivity enhancement of more than two orders of magnitude under ambient conditions [144]. The attained enhancement in the sensitivity (bolometric effect) is attributed to higher heat generation by the CNTs and the large temperature dependence of the resistance of polyaniline.
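The bolometric responsivities quoted in this subsection can be related through the standard small-signal model of a biased resistive bolometer, Rv ≈ Ibias·R·α/Gth, where α is the temperature coefficient of resistance (TCR) and Gth the thermal conductance to the surroundings. The parameter values in the sketch below are hypothetical round numbers, not data from the cited devices; the point is only that a large TCR (as in polyaniline) and weak thermal anchoring both raise the responsivity.

```python
# Small-signal DC responsivity of a biased resistive bolometer (textbook
# model, neglecting electrothermal feedback): Rv = I_bias * R * alpha / G_th.
# All numbers are hypothetical, for illustration only.

def responsivity_v_per_w(i_bias_a: float, r_ohm: float,
                         tcr_per_k: float, g_th_w_per_k: float) -> float:
    """Voltage responsivity in V/W."""
    return i_bias_a * r_ohm * tcr_per_k / g_th_w_per_k

# A composite film with a 1 %/K TCR and ~1 mW/K thermal conductance:
print(responsivity_v_per_w(i_bias_a=10e-6, r_ohm=1e6,
                           tcr_per_k=0.01, g_th_w_per_k=1e-3))  # -> 100 V/W
# Raising the TCR or lowering G_th (better thermal isolation) scales the
# responsivity proportionally, toward the ~150 V/W to ~1.2 kV/W range above.
```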
In another study, Aliev [143] built an uncooled bolometric sensor based on a SWNT/polymer composite with a voltage responsivity of ~150 V/W. Another, all-printed NIR sensor was engineered by Gohier et al. [146] by depositing multi-walled CNTs on a flexible polyimide substrate; the sensor showed an ultra-high responsivity of ~1.2 kV/W. A strong dependence of the device response on the surrounding atmosphere was noted, however, and attributed to desorption of water molecules, which negatively affected the photosensitivity. Glamazda et al. [147] reported a strong bolometric response in a SWNT-polymer composite featuring a higher degree of internal alignment. Better alignment dramatically increased the temperature sensitivity of the resistance, an effect explained within the framework of fluctuation-induced tunneling theory. A spectrally flat mid-IR responsivity of 500 V·W⁻¹ was observed, which is among the highest reported for nanotube-based bolometers.
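Responsivity figures like those above are usually normalized into a specific detectivity, D* = R·sqrt(A·Δf)/v_n, before detectors are compared. A minimal sketch of that textbook definition, with hypothetical pixel area, bandwidth, and noise values that are not taken from the cited studies:

import math

def specific_detectivity(responsivity_v_per_w: float,
                         area_cm2: float,
                         bandwidth_hz: float,
                         noise_v_rms: float) -> float:
    """Specific detectivity D* in cm*Hz^0.5/W: D* = R * sqrt(A * df) / v_n."""
    return responsivity_v_per_w * math.sqrt(area_cm2 * bandwidth_hz) / noise_v_rms

# Hypothetical example: 1 mm^2 pixel (0.01 cm^2), 1 Hz bandwidth,
# 150 V/W responsivity, 50 nV rms noise -> D* of order 10^8 Jones.
print(f"D* ~ {specific_detectivity(150.0, 0.01, 1.0, 50e-9):.2e} cm*Hz^0.5/W")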
The Hybridisation of Higher Education in Canada

Canada's postsecondary institutions are becoming increasingly involved with technology enhanced learning, generally under the rubric of distance education. Growth and activity in distance education stems from rapid developments in communication and information technologies such as videoconferencing and the Internet. This case study focuses on the use of new technologies, primarily within the context of higher education institutions operating in Canada's English speaking provinces. Capitalising on the interactive capabilities of "new" learning technologies, some distance education providers are starting to behave more like conventional educational institutions in terms of forming study groups and student cohorts. Conversely, new telecommunications technologies are having a reverse impact on traditional classroom settings, and as a result conventional universities are beginning to establish administrative structures reflective of those used by distance education providers. When viewed in tandem, these trends reflect growing convergence between conventional and distance learning modes, leading to the hybridisation of higher education in Canada.

More and more postsecondary institutions in Canada have become involved with technology enhanced learning, generally under the rubric of distance education. Undoubtedly, a major reason for the growth of interest and activity in distance education is the rapid development of what Agre (2000, pg 5) calls "radically improved technologies of information" – essentially computer based telecommunications technologies (in particular, the Web/Internet and, to a lesser extent, interactive video-conferencing). In the case of the Web/Internet, the appeal of the technology is its ubiquity, affordability, ease of use and vast capacity for accessing and facilitating the flow of information. In the case of interactive video-conferencing, the apparent similarity to the traditional lecture format undoubtedly facilitates the involvement of teaching faculty (Shale and Kirek, 1997). In addition to extending access to geographically or circumstantially isolated students, the real-time interactive capabilities of the new technologies allow for increased interaction between teachers and students – addressing a widely recognised limitation of the older distance delivery modes, such as correspondence. Interestingly, capitalising effectively on this kind of interactive capability in the "new" learning technologies is leading some distance education providers to behave more like conventional educational institutions. For example, the three newest universities in British Columbia have incorporated mechanisms for creating cohorts of students to act as study groups – and for ensuring that group interaction can indeed occur. This implies set start and completion times for assignments, as well as for the course. Sometimes the group learning activity occurs through online sessions held at set times. However, such social aspects of learning will often take the form of some kind of face-to-face contact on a campus. This, of course, is contrary to the early philosophy of open learning systems, which sought, inter alia, to minimise constraints of time and place. The capabilities of the new telecommunications technologies to support multi-mediated learning are also having a reverse impact on classroom instruction (Newman and Scurry, 2001).
Traditional approaches to using multi-media in classrooms often required specialised skills in the various technologies, as well as specialised, expensive equipment. They were also generally so labour intensive that the time and effort required to use them limited their use. It is comparatively much easier to use the Web/Internet and the related software that supports word processing, desktop publishing and so on, and to make the results available to students. Moreover, classrooms are increasingly being equipped to support multi-mediated displays and Internet based information resources. So, one aspect of the hybridisation of conventional institutions of higher education has been an increase in distance education programming (Shale, 1999). Another aspect is the developmental convergence of face-to-face instructional methods (through multi-mediated delivery) and computer based interactive delivery technologies (through the Web, interactive video-conferencing and so on) (Newman and Scurry, 2001). As one result of these developments, higher education institutions also have to develop new structures in budgeting, instructional support, governance, and organisation – features which are analogous to what one finds in dedicated distance education providers. However, there are other ways in which the conventional higher education institutions are responding to the open learning ethos created by distance education. One of these is through participation in virtual universities. Although the concept of the virtual university has often been quite vague, in many instances it involves a coming together of providers of distance education courses and programs for purposes of leveraging the "offering power" of any single provider through an association of some sort with the others. Generally this banding together provides a sort of portal effect (in the Web sense) where students can access the full range of educational programs provided by the institutions in aggregate – whereas any given institution would be constrained by the range of distance education programming it can offer on its own. To the extent that the virtual university umbrella works in this way, a given conventional institution stands to become a more active and effective participant in distance education. The Open University Consortium of British Columbia (Open University Planning Council, 1995) has been an example of this kind of approach. More recently we have the example of the Canadian Virtual University (http://www.cvu-uvc.ca/english.html). However, in other instances there have been concerted attempts to develop "shared" programming in the sense of facilitated transfer of course credits and formalised articulation of programs – especially to bridge the college/technical institute and university gap. The Canadian "open universities", Athabasca University, the Open Learning Agency, and the Télé-université, have long had mechanisms in place to support this kind of activity. The University of Northern British Columbia, although essentially campus based, regionally as well as centrally, is an example of the conventional style of distance education provision derived from on-campus operations, but with strategic efforts to facilitate the transfer of previously earned credits and to articulate programs with colleges in the northern British Columbia region.
Finally, it needs to be said that there is a strong current of reactionary bandwagonism and financial opportunism that explicitly and implicitly underlies the interest of higher education institutions in alternative delivery methods. One apocalyptic view has the conventional institutions on the road to obsolescence, put out of business by the "new" technologically based educational enterprises. For example, Katz (1999, pg 15) states, "Some colleges and universities might disappear. Some might actually acquire other institutions. One might even imagine a Darwinian process emerging with some institutions devouring their competition in hostile takeovers." There are many reasons to regard this kind of rhetoric as far-fetched, even self-serving. However, this particular quote was taken up verbatim in the report of The Advisory Committee for Online Learning, a joint creation of the Consortium on Public Expectations for Postsecondary Education of the Ministers of Education, Canada (CMEC), and Industry Canada. The position of the Advisory Committee is not atypical, and resonance of their position can be heard in the recent (March, 2000) Report from the Task Force on Learning Technologies, an initiative of the Council of Ontario Universities. Others take the view that there is money to be made from the unbounded markets that purportedly are opened up by the remote delivery technologies. As we will see in a later section of the article, these sorts of expectations have led to some interesting "unintended consequences" for institutions attempting to capitalize on these perceived benefits. The discussion presented here assumes a pedagogic view of the hybridisation of higher education. It should, perhaps, be acknowledged that technology is also substantially affecting the administrative and student support functions available to distant and on-campus students. For example, online access to course and program calendars, course descriptions, online registration, online library/information services, and so on, are features valued whatever the modality of instructional delivery.

The Societal and Institutional Context

The Canadian constitution assigns responsibility for educational matters to the provinces – and, in the case of higher education, the institutions (within the context of their statutory missions) are free to choose how they will discharge their educational mandates. The federal government of Canada has only an indirect influence on educational matters through grants and the transfer of tax points – as well as by supporting a number of research granting agencies and some student financial support (although the provinces also do this). Funding is provided by each respective provincial government to the postsecondary institutions through the bureaucracy established by each province. In turn, the educational institutions are autonomous with respect to how they fulfill their statutory mandates.
Each respective institution, through the particular governance and budgetary processes it has put in place, determines its priorities and their associated budget allocations. With some exceptions, these processes are bicameral in nature, with financial matters the responsibility of a Board of Governors, many of whom are appointees drawn from the general public – and academic matters the responsibility of a committee comprised of ex-officio administration representatives and members elected from the wider academic community.
Historically, universities have essentially been differentiated from other kinds of postsecondary institutions by virtue of their statutory authority to grant degrees. However, there has been a recent trend for provincial governments to authorise colleges and technical institutes to offer what are usually referred to as "applied degrees." In addition, a number of colleges, particularly (but not exclusively) in British Columbia, have been designated as "university colleges," which allows them to grant some baccalaureate degrees equivalent to those granted by the respective provincial universities. Most postsecondary institutions in Canada are publicly funded and are accountable through duly constituted boards. As a result, postsecondary programming is of uniformly good quality and there has been no need for the kind of accreditation process and associated bodies that one sees in the United States. However, several provinces are now permitting private institutions (largely faith-based institutions) and business enterprises (such as DeVry) to offer similar kinds of programs and credentials. Generally there is some kind of licensing review (which may involve a form of quality appraisal) to ensure that there is some credibility to the programs offered and to provide some guarantee that students will receive what they pay for. Historically, the long established universities have chosen to address their public service responsibilities for outreach through extension services. Distance education courses and programs were typically added later and on an ad hoc basis. Some distance education initiatives did relatively well and have come to be well known (for example, at Queens University and the University of Waterloo). Although it is difficult to say conclusively, it would seem that in no case did distance education emerge as part of a strategic institutional initiative within the longer standing higher education institutions. This ad hoc approach has generally resulted in an awkward fitting of distance education within conventional institutional organisational frameworks – and especially within the universities. Academic regulations and approvals have been restrictive, and funding and staffing commitments (i.e., teaching faculty) have been problematic, often tenuous. As a result, any concerted distance education enterprise of the sort mounted by, say, Simon Fraser University and Waterloo has relied on some form of protective organisational and budgeting mechanisms to support its development and continuation. Some institutions were better or more fortunate in how they set up such arrangements. As a result, some have developed more extensive and longer-lived distance education operations than have others. In any event, hybridisation in this context was an almost subversive activity. It was an "add-on" to the traditional institutional mandates and existed as a function separate from the core institutional business of on-campus teaching.

Hybrids With A Difference

Of course, the statutory missions of any postsecondary institution over a decade or so old would have been formulated in the absence of a full appreciation of the capabilities of the "new" computer based telecommunications technologies. In the relatively recent past, however, four new universities have come on the scene in Canada that have deliberately shaped their mandates to incorporate technology supported delivery of education.
Interestingly, their strategic views of technology-supported learning are quite different. As a collective, they represent quite different faces of the hybridisation of higher education. Perhaps the most distinctive of these is the Technical University of British Columbia (Tech BC). There are many aspects to this uniqueness – all of them related to Tech BC's strategic view of technology-supported learning. One of these is the unicameral governance structure adopted by Tech BC. Another is its use of term-definite appointments in lieu of tenured academic appointments. Tech BC is also unique in its pedagogic philosophy, which takes the view that technology should be used in those teaching/learning circumstances where it is warranted. This view does not force a distinction between distance education and traditional on-campus education. If the pedagogic requirement is compatible with technologically supported delivery, then that is the determining consideration. If there is a pedagogic requirement for students to work collectively (either supported by technology or through face-to-face meetings), then that requirement is addressed through the instructional design of the course and students accommodate to it. Notwithstanding this epistemological seamlessness, Tech BC applies a guideline that fifty per cent of each course be available on the Web. In a sense, at the other end of the continuum is the University of Northern British Columbia (UNBC). Technology supported learning is a strategic consideration for UNBC, but within its over-arching mandate of educational delivery to the northern British Columbia region (UNBC, Planning for Growth, 1997). Distance education is just one of the means it uses to reach its geographically dispersed target population – but within a pedagogical view that face-to-face instruction is the preferred instructional format. As a result, distance education at UNBC is quite similar to the external studies format so commonplace in conventional institutions. Notwithstanding UNBC's institutional commitment through its mandate to distance delivery, UNBC experiences the same kinds of tensions that can be observed in the classical "external studies" style of distance delivery. In particular, there is the usual competition that naturally arises when there are two educational delivery modalities to support. Historically, the institutional politics and budgetary structures of bi-modal institutions have resulted in a systemic biasing toward the conventional on-campus operations – and a concomitant need for organisational mechanisms to address this imbalance of emphasis and power. The third institution in British Columbia is Royal Roads University, which is also distinctive in a variety of ways. It, too, has adopted a unicameral governance structure and largely term-definite academic appointments (i.e., no tenure). Moreover, the university is meant to be self-sufficient with respect to funding. Programmatically, Royal Roads University "specializes in degree programs aimed at mid-career professionals who want to advance their careers, while balancing the commitments of work and family" (www.royalroads.ca). Royal Roads uses distance delivery in different ways and to different extents depending on the program involved. Some programs require students to be on-site for instruction – other programs are essentially web delivered.
At the Masters level, all programs require some "residencies" to be spent on campus. Canada's newest university, The Ontario Institute of Technology, is still starting up and has yet to take on an operational persona. However, the language around its establishment speaks of being "on the leading edge of e-learning" through "the most advanced learning technology solutions in the country." Exactly what this implies is yet to be made clear. On the basis of the stated expectations, it would sound as though The Ontario Institute of Technology is positioned to be like the Technical University of British Columbia.

Intended and Unintended Consequences

Intentions and perceived consequences are very much in the eyes of the beholder – and it is usually very difficult to identify a beholder in the bureaucratic world of educational institutions. If one can imagine a collectivity of administrators or a Board of Governors being the beholder, then there are different kinds of responses that one sees. Any amount of distance education can be touted as a success – but often the public affairs effect is amplified if the effort can be viewed as collaborative – as, for example, would be the case if a course were team taught as a way of sharing expertise among different institutions (or if a program were jointly delivered, with one institution providing some courses and another institution providing others, thus making up a full program). However, when the expectations have been inclined towards rhetoric of "transforming" the educational experience, "opening new markets of learners," making money from distance education, and teaching more students more cheaply – then the experience overall to date has been a substantial disappointment. An associated unintended consequence has been covert and overt resistance to attempts to force the implementation of technology-based learning as a way of making education cheaper, of solving the dilemma of growing demand, or as a money-making proposition. The cause célèbre in this regard has been the case of York University, where the faculty negotiated a provision in their contract with the university that faculty members would not be forced to teach through the mediation of technology. This has been an issue elsewhere (i.e., Acadia University), so it is more than just a local idiosyncratic development. An issue somewhat related to this is the matter of intellectual ownership/copyright of courses and associated materials – particularly in the context of making a business out of distance education. Distance education is proving to be far more expensive and labour intensive than most people imagined, and this has had substantial implications for what programming is offered and for whatever cost recoveries are aimed for. From the point of view of the teaching faculty, the informal consensus has been: (1) It is a much more effective mode of teaching and learning than most instructors would have acknowledged before becoming involved – to the extent that many become active advocates of distance education. (2) Distance teaching requires far more work and advance planning than classroom lecturing. (3) The ease of interaction supported by computers and telecommunications technologies, and the apparent immediacy of communication, has resulted in a volume of email difficult to cope with and expectations with respect to responding that simply cannot be met in practice.
(4) Individual instructors interested in "doing" distance education are finding it difficult to obtain requisite infrastructure support from their institutions, whether it be computers, software or high speed internet access – moreover, services such as instructional design, media production, and technical support are typically not made available to teaching faculty. Perhaps the most surprising unintended consequence – if only because it is typically felt to be so mundane and even boring – has been the matter of copyright and intellectual ownership. Although this fire has always smoldered in the background, the prospect of making money from courseware has fuelled the fire. A large part of the debate concerns the legitimacy of the concept of commercialisation in higher education and its subverting effects. In addition, there is the matter of who gets paid what for courses and material developed by faculty members.

Implications

As with consequences, implications depend on the lens through which one views the situation. If we are to believe the futurists, then the implication of not becoming hybridised quickly enough is that the conventional higher education institutions as we know them will become "obsolete" – their functions having been taken over and discharged more responsively and effectively by those institutions that have adopted technological bases for teaching and learning. Effectively, virtual universities would supplant conventional campus based institutions. Millions, if not billions, of dollars are to be made, and are being made, by proprietary operations such as the University of Phoenix – the unstated implication being, again, that these kinds of enterprises will eventually put the conventional institutions out of business. As the bandwagon of technologically based teaching and learning has rolled along, it has become apparent that there is an important niche market for distance education – and the successful enterprises are effectively creaming this off; the University of Phoenix operation is an organisational example. One can also regard the burgeoning of money-making Executive MBA programs as another manifestation. However, it seems more than a little hyperbolic to claim that conventional higher education institutions will cease to exist or will be so transformed as not to be recognizable. Certainly there is no evidence of this in the current state of affairs – and the "dot.com" transformational view has been touted for at least the past half decade. At the level of the teacher and student, the new technologies can only improve the prospects of both distance delivered teaching and on-campus teaching – distance delivery because of the enhancement of the quality of interaction between teacher and student and among students, on-campus teaching because a multi-mediated, systematic approach can only improve classroom teaching (and some would argue that the facilitation of electronic interaction among on-campus students also makes instruction more effective).
However, the conventional institutions are going to have to change the way they currently organise themselves to deliver distance education, the way the function is budgeted, the quality of the infrastructure support provided, the advisory and instructional support made available, and the reward structures offered to faculty – as well as resolving the very substantial issues of technology adoption and intellectual property rights. Historically, this has proven to be very difficult because, as noted above, the governance and budgeting structures (as well as faculty reward systems) in conventional institutions are stacked against alternative delivery. To some degree Tech BC and Royal Roads University have attempted to address this tension through their particular approaches to statutory authority and governance structures. Another aspect of the organisational challenges faced by conventional institutions is the administrative framework needed to contend with the kind of integration required by technology-based teaching. In dedicated distance providers one finds subsystems dealing with courses and course production; students; regulation; and logistics (the last of which remains critical even though more material is being put online and distributed electronically). All of this is a substantial problem for the conventional institutions – especially if they do not have a strategic view of the use of technology in education, as is the case for almost all of them – because without an appropriate supporting technological framework, the institutions and their faculty will be substantially constrained in the extent to which they can implement seamless multi-mediated instruction or even stand-alone distance delivered programs. The issue of intellectual ownership is difficult and many faceted. In almost every issue of the Canadian Association of University Teachers Bulletin and The Chronicle of Higher Education there are one or more items pertaining to this matter. Where unionised or union-like associations are involved in terms and conditions of employment, ownership of intellectual property can be negotiated. To the extent that ownership is wholly or partly vested in the faculty member, the institution is potentially highly constrained in the extent to which it can offer technology-based education. For example, revision of course materials would require permission of the original author – failing this permission, a course would have to be developed de novo by a different faculty member, but with the same prospect to follow after that. This is a very difficult basis on which to build an abiding program – let alone one that would be cost effective or a revenue generator.

Limitations to Hybridisation?

One could argue that the logical extension of the kind of hybridisation mentioned here would be a complete merging of the two worlds of distance delivered and on-campus education – in essence, a "goodbye distance education, hello distributed learning." This may be possible in carefully configured environments like that created by Tech BC. However, it would take an entirely unrealistic amount of change (whether attitudinal, organisational, strategic, financial or whatever) to fully convert a traditionally styled institution.
What seems more likely is that we will see more of the same kind of adaptation that we have had to date. Some institutions will continue to have good reason to offer conventionally configured distance education – even though technology may be used more often and more effectively in the delivery. In other cases, we will see more examples of certain programs and/or courses that are fully hybridised. Moreover, the issue of intellectual ownership of courses and programs, as noted above, has the potential to be the ultimate limiting condition. Finally, change of all kinds and at all levels is almost always incremental. Very few innovations of any kind have resulted in fundamental, revolutionary change. There are those proponents who would maintain that, in fact, the advent of the telecommunications based technologies is having a revolutionary impact on society. Perhaps in some sense that is true. However, the educational process remains much as it has been for centuries. Easy access to "all the information in the world" should not be confused with a general advancement of individuals' education. As Katz (2001) argues, we should not confuse a tool, or a technology, with a goal. As useful as the technology can be, institutions of higher education must realise that their critical role is to provide for the basic things that all learners need: "access to communities where information can be shared and knowledge created, resources for access to local and distance communities, and widely accepted system for warranting the learner" (Brown and Duguid, 2000). At the end of it all, education is a social activity and not just a matter of information and its manipulation.
EFFECT OF TIME AND IBA CONCENTRATION ON THE PERFORMANCE OF BAY LEAF LAYERING

The experiment was conducted in a factorial Complete Randomized Design (CRD) with six levels of IBA, viz. 1000, 2000, 3000, 4000, 5000 ppm and control (without IBA), with five times of layering in the middle (15th) of each of April, May, June, July and August, at the Regional Spices Research Centre, BARI, Gazipur, during May 2017 to September 2018. Bay leaf layering was found very much unsuccessful with or without IBA treatment. Layering time and IBA concentration significantly influence the success and rooting of layers under Bangladesh conditions. May to July layering with 4000 ppm IBA treatment was found better for successful air layering for vegetative propagation of bay leaf in Bangladesh.

INTRODUCTION

Bay leaf is a high valued spice with a valuable aroma and high nutritive and medicinal value, widely used in Bangladesh for the preparation of many kinds of foods, beverages, cosmetics and medicines. The demand is usually met by importing from other countries, which requires huge expenditure. The quality of imported bay leaf is poor due to admixture, sub-standard processing, and long storage and transport durations. Production of bay leaf can be increased using the BARI released variety (BARI Bay leaf-1), which has better aroma, higher leaf yield and a stress tolerant habit. Homesteads, hilly areas and highlands are suitable for its cultivation. Lack of quality saplings is the major constraint to expanding the growing area. A standard vegetative propagation technology for bay leaf has not yet been established in Bangladesh. Development of appropriate vegetative propagation techniques for bay leaf, for rapid dissemination of high yielding varieties (BARI Bay leaf-1), is essential to ensure quality bay leaf production. To increase bay leaf production, intensive research on propagation is very much essential in our country. The study was therefore undertaken to standardize the propagation technique using the optimum strength of rooting hormone (IBA) concentration and layering time for rapid multiplication of bay leaf to ensure quality sapling production.

MATERIALS AND METHOD

The experiment was conducted in a factorial Complete Randomized Design (CRD) with six levels of IBA, viz. 1000, 2000, 3000, 4000, 5000 ppm and control (without IBA), with five times of layering in the middle (15th) of each of April, May, June, July and August, at the Regional Spices Research Centre, BARI, Gazipur, during May 2017 to September 2018. The proximal slanting cut ends of 30 cuttings for each treatment were placed in the hormone solutions for five minutes and then kept 15 minutes to discard extra solution. After treating with hormone, the cuttings were planted in 15 cm raised beds of soil, sand and compost mixture at a spacing of 15 × 10 cm. For better water holding capacity and root development in layering, a soil mixture was prepared with 50% loamy soil and 50% well decomposed cow dung and kept open for 2 weeks. Treatment-wise hormone solution was taken with a small glass dropper pot and applied to the cut surface (from where the bark was removed) of the shoot. No hormone was applied for the control treatment. Each replication of a single treatment consisted of 10 layering shoots, and a total of 30 shoots for 3 replications were used and tagged properly. The stool or gooti (air layering) was made by covering the cut portion with 160-180 g of moist soil mixture, covered by polythene and tied tightly with jute rope.
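For dilute aqueous solutions, 1 ppm corresponds to roughly 1 mg of solute per litre, so the treatment levels above translate directly into weighing targets. A minimal sketch of that arithmetic; the 0.5 L batch volume is a hypothetical example, not taken from the paper:

# 1 ppm ~ 1 mg/L in dilute aqueous solution, so a 4000 ppm IBA treatment
# needs 4000 mg (4 g) of IBA per litre of solvent.
def iba_mass_mg(ppm: float, volume_l: float) -> float:
    """Mass of IBA (mg) for a target concentration (ppm) and batch volume (L)."""
    return ppm * volume_l

for ppm in (1000, 2000, 3000, 4000, 5000):
    mg = iba_mass_mg(ppm, 0.5)  # hypothetical 0.5 L batch
    print(f"{ppm:5d} ppm x 0.5 L -> {mg:6.0f} mg ({mg / 1000:.1f} g)")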
When a number of roots are established and visible through the polythene, the air layering is ready to be separated from the mother plant. A half cut was given 1-2 cm below the stool of the air layering. After one week, the layering was separated by a gentle full cut from the previously cut place, and extra branches and leaves were trimmed out. The trimmed layering shoot was planted in a previously prepared polybag after removing the polythene, and kept one week under shade and then 3 weeks in partial shade for establishment. An air layer is ready to plant in the field when a number of roots and shoots have established in the polybag. Data on the length and number of roots per layering were taken by breaking the stool just after detachment of the layering. Samples of three live stools of layers were broken, the numbers of roots were counted, and the lengths of roots were measured with digital slide calipers at 60 days after separation. After separation from the mother plants, the success of detached layers and the number of leaves were counted at 60 days of planting in the polybag. Data were taken on success rate, days to bud break, leaf and shoot growth, vigour of the saplings, establishment rate, and disease and insect pest reaction.

EFFECT OF TIME ON SUCCESS OF AIR LAYERING

The time of layering significantly affected the success of layering (Table 1). Early rooting was observed in May layering (51.8 days), which was statistically similar to April, June and July, while August layering took longer (68.1 days) for root visibility. A similar result was observed in the separation of layers from the mother plant. Rooting success was similar in all five months, but the establishment rate declined after June (Table 1, Figure 1). Means having the same letter(s) or without a letter are not significantly different by DMRT; 'ns', '*' and '**' mean not significant, and significant at the 5% and 1% probability levels, respectively. The number of successful layers was slightly higher (5.56) in July layering (Figure 1), where the higher percentage of rooting (55.56%) also initiated, but the establishment rate was higher (81.82%) in April, and the lower establishment rate (51.04%) was recorded in August layering (Table 1). The number of successful layers was lower (2.72) in August layering (Figure 1). The death rate of detached layers was higher (47.94%) in August layering and minimum in April (17.09%). The lower success in August layering might be because the fall in temperature and lower humidity in October delayed and hampered rooting as well as shoot initiation. The time of layering significantly affected the death of detached layers, the number and length of roots, and the leaves per layer (Table 1.1). May and June layering gave a greater number (≥4) of longer (≥8 cm) roots, and more leaves (≥5), compared to April, July and August layering. Hot humid weather favours rooting and leaf initiation, which caused more rooting and leaves in May and June than in April and August layering.

EFFECT OF IBA CONCENTRATION ON BAY LEAF LAYERING

IBA concentration had a significant effect on the rooting and success of air layering in bay leaf (Table 2, Figure 2). The control treatment and lower doses of IBA took more time to initiate roots compared to higher doses of IBA. Days to separation of layers from the mother plant was not significantly affected by IBA concentration.
The number of successful layers was significantly higher (7.13), and the highest rooting (71.33%) was found, at 4000 ppm IBA, followed by 5000 ppm, where the number of successful layers was 6.87 and the highest establishment at 60 days (75.91%) was recorded; it was lowest (10%) in the control. The number of established layers was significantly higher (78.69%) with 4000 ppm IBA application, followed by 5000 ppm IBA (75.91%), and it was lower in the control (16.67%). These findings resemble those of Sharma and Aier (1989), who obtained the highest rooting percentage in plum with IBA treatment of cuttings at 2000 mg l−1 during summer. Means having the same letter(s) or without a letter are not significantly different by DMRT; 'ns', '*' and '**' mean not significant, and significant at the 5% and 1% probability levels, respectively. Southworth and Dirr (1996) obtained the maximum success (87.5%) of plum cuttings with 1500 ppm K-IBA solution. Neto et al. (2006) and Canli and Sefer (2009) obtained the highest success using 1000 ppm IBA in cherry cuttings and layering. Indole-butyric acid (IBA) enhanced root development and root growth by enhancing cell division, resulting in the maximum success of layering compared to the control and lower levels of IBA concentration (Mozumder et al., 2014). There was significant variation in the death of layers after detaching from the mother plant, rooting and leaf production with the various levels of IBA treatment (Table 2). A number of air layering shoots died in the polybag after separation from the mother plant. Application of IBA resulted in a greater number (≥4) of longer (≥8 cm) roots compared to the control. Numbers of leaves increased with increasing IBA concentration. The highest number of leaves (5.82/layer) was recorded with the application of 4000 ppm IBA, which was statistically similar across all IBA levels, and it was lowest (1.07/layer) in the control. IBA helps to accelerate cell division and root initiation in the upper part of the cut portion of the plant, resulting in more rooting and leaves with higher doses of IBA.

COMBINED EFFECT OF TIME AND IBA CONCENTRATION IN BAY LEAF LAYERING

Layering time and IBA concentration showed a significant effect on the rooting and success rate of layers (Tables 3a and 3b) in bay leaf. August layering with a lower concentration of IBA or the control treatment took 2-3 more days, compared to April, May, June and July layering, for root initiation and separation of layers from the mother plant. Early rooting (45.3 days) was found in May layering with 4000 ppm, while it was significantly delayed (83.7 days) in August layering without IBA treatment. Bay leaf layers took about 8 weeks to separation, which was slightly affected by layering time but greatly by hormone application. Singh and Ray (2011) opined that IBA concentration and layering time influence the success of layering in Ficus sp. The number of successful layers was significantly higher in July layering (7.67, 76.67%) with 4000 ppm IBA, closely followed by May and August (7.33, 73.33%) with the same level of IBA application, and success was almost nil (0.33, 3.33%) in April and August layering without IBA. The lower success in August layering with low IBA is due to the fall in temperature and lower hormonal activity hampering rooting. No layer finally survived from April, July and August layering without IBA application. There were significant variations in the survivability of layers after detaching from the mother plant, rooting and leaf production due to layering time with various IBA concentrations (Tables 3a and 3b).
The maximum number and rate of surviving layers (6.33, 83.31%) was found in May layering with 4000 ppm IBA treatment, and it was statistically similar to April layering (5.67, 89.61%) with 5000 ppm IBA, while it was nil (0%) in April, July and August layering without IBA application. Higher concentrations of IBA producing more roots in early June-July layering, compared to the control and lower IBA concentrations in later layering, might be the cause of such variation. The number and length of roots did not differ significantly, ranging from 2.53 to 4.67 roots and 7.17-9.33 cm per layer at 60 days across the combinations of IBA concentration and time of layering (Table 3b). Numbers of branches and leaves increased with increasing IBA concentration in all months of layering. Kakon et al. (2008) showed that among different varieties BARI guava-1 performed best, and that different concentrations of growth regulators had a significant effect on almost all parameters, with IBA at 1200 ppm showing the best performance among the treatments. The maximum number of leaves (6.31/layer) was recorded with the application of 4000 ppm IBA in May layering, and the lowest (2.0/layer) was found in the control at the same time of layering. There was no successful layering in April, July and August without IBA, which produced no roots or leaves. IBA accelerates cell division and root initiation, and high temperature and humidity resulted in more rooting and leaves with higher doses of IBA in May and June layering. Singh (2001) found that the use of IBA was beneficial in enhancing callus formation, the number, length and diameter of both primary and secondary roots, and the survival of air-layered twigs. These findings are broadly similar to the report of Rymbai and Reddy (2010) that air layers of guava have been successfully achieved by exogenous application of IBA at 4000 ppm. The results from these observations partially resemble some other findings: Sharma and Aier (1989) obtained maximum success with 2000 ppm IBA, Southworth and Dirr (1996) with 1500 ppm IBA, while Canli and Sefer (2009) and Neto et al. (2006) obtained the maximum success with 1000 ppm IBA in plum. AS (1989) found that the highest concentration of IBA (5000 ppm) proved significantly better for rooting and survival of air layers of Kagzi lime. The findings varied because the experiments were conducted on different plant species and in different environments, soils, climates and times.

CONCLUSION

Bay leaf layering was found very much unsuccessful with or without IBA treatment. Layering time and IBA concentration significantly influence the success and rooting of layers under Bangladesh conditions. May to July layering with 4000 ppm IBA treatment was found better for successful air layering for vegetative propagation of bay leaf in Bangladesh.
Alanine scanning mutagenesis of a type 1 insulin-like growth factor receptor ligand binding site.

The high resolution crystal structure of an N-terminal fragment of the IGF-I receptor has been reported. While this fragment is itself devoid of ligand binding activity, mutational analysis has indicated that its N terminus (L1, amino acids 1-150) and the C terminus of its cysteine-rich domain (amino acids 190-300) contain ligand binding determinants. Mutational analysis also suggests that amino acids 692-702 from the C terminus of the alpha subunit are critical for ligand binding. A fusion protein, formed from these fragments, binds IGF-I with an affinity similar to that of the whole extracellular domain, suggesting that these are the minimal structural elements of the IGF-I binding site. To further characterize the binding site, we have performed structure-directed and alanine-scanning mutagenesis of L1, the cysteine-rich domain, and amino acids 692-702. Alanine mutants of residues in these regions were transiently expressed as secreted recombinant receptors and their affinity was determined. In L1, alanine mutants of Asp(8), Asn(11), Tyr(28), His(30), Leu(33), Leu(56), Phe(58), Arg(59), and Trp(79) produced a 2- to 10-fold decrease in affinity, and alanine mutation of Phe(90) resulted in a 23-fold decrease in affinity. In the cysteine-rich domain, mutation of Arg(240), Phe(241), Glu(242), and Phe(251) produced a 2- to 10-fold decrease in affinity. In the region between amino acids 692 and 702, alanine mutation of Phe(701) produced a receptor devoid of binding activity, and alanine mutations of Phe(692), Glu(693), Asn(694), Leu(696), His(697), Asn(698), and Ile(700) exhibited decreases in affinity ranging from 10- to 30-fold. With the exception of Trp(79), the disruptive mutants in L1 form a discrete epitope on the surface of the receptor. Those in the cysteine-rich domain essential for intact affinity also form a discrete epitope, together with Trp(79).

The insulin-like growth factors I and II are essential for normal fetal and post-natal growth (1). They were originally identified as circulating polypeptides with potent mitogenic activity, which mediated many of the actions of growth hormone, and were later shown to be structurally homologous to proinsulin. It is now apparent that these growth factors are produced by many cell types and have paracrine and autocrine as well as endocrine functions. Targeted disruption of the gene for IGF-I in transgenic mice results in both embryonic and post-natal growth retardation (2). In contrast, the effects of disruption of the IGF-II gene are confined to growth retardation during the embryonic period (2). In addition to being mitogens, it is now evident that these peptides play a crucial role in cell survival (3) and contribute to transformation and the maintenance of the malignant phenotype in many tumor systems (4). However, despite extensive study, the signal transduction mechanisms underlying the biological effects of these peptides remain to be elucidated. The mitogenic effects of these growth factors appear to be mediated by receptors belonging to the insulin receptor subclass of receptor tyrosine kinases (for review, see Ref. 5). The type 1 IGF receptor binds both peptides with high affinity, the affinity for IGF-I being greater than that for IGF-II. Transgenic experiments indicate that the growth-promoting effects of both peptides can be mediated by this receptor (2,6).
Such studies also point to the role of a second receptor in mediating the mitogenic effects of IGF-II (2,6), and recent in vitro studies indicate that this is the A isoform of the insulin receptor (7); this receptor binds IGF-II with high affinity and can mediate the growth-promoting effects of the peptide (8). The receptors in this family are dimeric protein-tyrosine kinases with significant homology (5). In higher vertebrates there are three known members, the insulin receptor (9,10), the type 1 IGF receptor (11), and the orphan insulin receptor-related receptor (12). They are dimeric Mr 350,000 glycoproteins composed of two disulfide-linked monomers. Each monomer is in turn composed of an N-terminal α subunit and a C-terminal β subunit, which are linked by a single disulfide. The β subunit is both extracellular and intracellular with a single α helical transmembrane domain. The intracellular portion contains the tyrosine kinase catalytic domain. The structure of this domain of the insulin receptor has been determined at high resolution in both the basal and active states (13,14). Comparative homology modeling suggests that the extracellular portion of the receptors is composed of seven distinct structural domains (15-17). At the N terminus there are two homologous globular domains flanking a cysteine-rich domain. The remainder is formed from three fibronectin III repeats, the second of which contains a 100-amino acid insert of undetermined structure. Recently a high resolution crystal structure of an N-terminal fragment of the insulin-like growth factor I receptor (amino acids 1-460) has been reported (18). The molecule is an extended bilobed structure composed of the two globular L domains flanking the cysteine-rich domain, with dimensions of 40 × 48 × 105 Å. The N-terminal globular domain contacts the cysteine-rich domain along its length. In contrast there is minimal contact between the C-terminal domain and the cysteine-rich domain. In the crystal structure, L1 and L2 occupy very different positions relative to the cysteine-rich domain. However, this could be an artifact of crystal packing in this fragment, and the position of L2 may be very different in the native molecule (18). It is possible that it is rotated into a position similar to that of L1 in relation to the cysteine-rich domain. However, irrespective of this, a cavity of ~24-Å diameter occupies the center of the molecule and possibly represents a binding pocket. Each L domain resembles a loaf of bread with dimensions of 24 × 32 × 37 Å and is formed from a single right-handed β helix capped at the ends by short α helices and disulfide bonds. The base of the loaf is formed from a six-stranded β sheet five residues in length. Both sides are formed from β sheets three amino acids in length, and the top is composed of irregular loops connecting the short β strands. As predicted from sequence comparisons, the cysteine-rich domain is composed of repetitive modules resembling parts of laminin and the tumor necrosis factor receptor. These form a rod-like structure connecting the two globular L domains, from which a large mobile loop projects into the putative binding pocket. Despite this wealth of structural detail, very little is known about the precise location and nature of the ligand binding site(s).
Studies with chimeric insulin/IGF-I receptors suggest that the C terminus of the cysteine-rich domain is a major determinant of IGF-I binding specificity (19-22). Its location in the N-terminal fragment of the IGF-I receptor is consistent with it forming part of a ligand binding pocket (18). Furthermore, alanine mutagenesis of residues in the L1 N-terminal globular domain indicates that this also forms part of the ligand binding site (23). These studies also demonstrated that a C-terminal peptide of the α subunit, amino acids 692-702, is involved in IGF-I binding (23). Furthermore, fusion of this C-terminal fragment to the N-terminal 460 amino acids results in a recombinant protein which binds IGF-I with an affinity similar to that of the full-length secreted recombinant extracellular domain (24,25), suggesting that these elements are sufficient to form an intact ligand binding site. Alanine scanning studies of the structurally related insulin receptor have demonstrated that determinants in the L1 domain and in the C terminus of the α subunit (amino acids 705-715) are sufficient to form a ligand binding site (26-28). In the present study we have used alanine mutagenesis to localize the equivalent binding site of the IGF-I receptor. The results of the studies indicate that it is formed from three elements, the first in the L1 domain, the second predominantly in the cysteine-rich domain, and the third at the C terminus of the α subunit between amino acids 692 and 702.

MATERIALS AND METHODS

General Procedures-All molecular biological procedures, including agarose gel electrophoresis, restriction enzyme digestion, ligation, bacterial transformation, and DNA sequencing, were performed by standard methods (29). All oligonucleotides were purchased from DNA Technology (Aarhus, Denmark). Restriction and modifying enzymes were from New England BioLabs (Beverly, MA). Recombinant IGF-I (receptor grade) was from GroPep (Adelaide, Australia). High performance liquid chromatography-purified mono-iodinated [125I-Tyr31]IGF-I (30) was from Novo Nordisk A/S. Protease inhibitors were from Roche Molecular Biochemicals (Mannheim, Germany). Medium and serum for tissue culture were from Life Technologies A/S (Tåstrup, Denmark). PEAK Rapid cells (293 cells constitutively expressing SV40 large T antigen) were purchased from Edge Biosystems (Gaithersburg, MD). The mammalian expression vector pcDNA3-zeo(+) was from Invitrogen (San Diego, CA). The hybridoma secreting monoclonal antibody 24-31, directed toward the IGF-I receptor α subunit (31), was a generous gift of Drs. M. Soos and K. Siddle (University of Cambridge, UK). Protein A-purified IgG from the hybridoma medium was kindly provided by Dr. P. Jorgensen (Novo Nordisk A/S, Bagsvaerd, Denmark). cDNAs encoding both the full-length and the recombinant secreted extracellular domain of the IGF-I receptor were as previously described (23).

Oligonucleotide-directed Mutagenesis-Oligonucleotide-directed mutagenesis was performed by the method of Kunkel (32). Uracil-containing single-stranded DNA was prepared from phage rescued from Escherichia coli CJ236 transformed with a cDNA encoding the full-length IGF-I receptor cloned into the phagemid pTZ18U. Restriction sites were deleted or introduced with the specific mutation to facilitate screening of the mutants. Successful mutagenesis was confirmed by DNA sequencing.

Expression of Mutant Receptor cDNAs-Recombinant mutant secreted IGF-I receptor cDNAs were reconstructed in the plasmid pcDNA3-zeo(+) for expression.
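As an aside for readers unfamiliar with the bookkeeping of codon-level mutagenesis, the short Python sketch below shows how an alanine-substitution oligonucleotide can be laid out around a target codon. It is a generic illustration only: the coding sequence, residue number, flank length, and choice of alanine codon are invented, and it does not reproduce the authors' Kunkel-mutagenesis design (which also engineered restriction-site changes for screening).

```python
# Toy illustration: lay out a mutagenic oligonucleotide that converts one
# codon to alanine (GCT), keeping flanking sequence for annealing.
# The coding sequence and residue number are invented, not from the paper.

ALA_CODON = "GCT"

def alanine_oligo(cds: str, residue: int, flank: int = 12) -> str:
    """Return an oligo replacing the codon for `residue` (1-based)
    with an alanine codon, with `flank` bases on each side."""
    start = (residue - 1) * 3                # 0-based index of the codon
    if start + 3 > len(cds):
        raise ValueError("residue lies outside the coding sequence")
    left = cds[max(0, start - flank):start]
    right = cds[start + 3:start + 3 + flank]
    return left + ALA_CODON + right

# Hypothetical 20-codon open reading frame (invented for illustration).
cds = ("ATGGACAGCATTTACCTGCTGAACGAGTTT"
       "CGTAAGGTCCTGATCGAAGGCACCAAATGA")
print(alanine_oligo(cds, residue=5))         # oligo mutating codon 5 to Ala
```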
DNA for transfection was prepared from 10-ml overnight cultures by a boiling hexadecyltrimethylammonium bromide method (33) followed by purification using QIAwell strips. The mutant receptor cDNAs were expressed transiently in PEAK Rapid cells (293 cells constitutively expressing SV40 large T antigen) by transfection using Fugene 6 (Roche Molecular Biochemicals, Mannheim, Germany) according to the manufacturer's directions. Conditioned medium was harvested 4 days post-transfection and, if necessary, concentrated prior to assay using Centriprep 30 centrifugal concentrators (Millipore, Bedford, MA).

Receptor Binding Assays-Soluble IGF-I receptor binding assays were performed using a modification of the microtiter plate antibody capture assay that we have described previously (23). Microtiter plates (Nunc Maxisorb, Roskilde, Denmark) were incubated overnight at 4°C with anti-IGF-I receptor antibody 24-31 IgG (100 μl/well of a 46 μg/ml solution in phosphate-buffered saline). Washing, blocking, and receptor binding were as previously described. Competitive binding assays with labeled and unlabeled IGF-I were carried out as done previously, except that the incubation was for 16 h at 25°C. Binding data were analyzed by computer fitting to a one-site model to obtain the Kd of the expressed protein.

Western Blotting-Western blotting of conditioned media with an anti-IGF-I receptor α subunit peptide (amino acids 31-50) antibody (Santa Cruz Biotechnology, Santa Cruz, CA) was performed using previously described procedures (27). Blots were visualized by chemiluminescence using reagents from Pierce (Rockford, IL).

RESULTS

L1 Domain Mutagenesis-In previous alanine-scanning mutagenesis studies of the L1 domain of the structurally related insulin receptor, we demonstrated that the residues critical for insulin binding are the exposed side chains located in (i) the β sheet forming the wall of the central cavity, i.e. the base of the L1 domain, and (ii) the adjacent third β sheet (26). Thus we systematically mutated the exposed residues in the equivalent regions of the IGF-I receptor to alanine; glycines, prolines, and cysteines were not mutated, to avoid potential structurally deleterious effects. The mutant IGF-I receptor cDNAs were transiently expressed in 293 PEAK cells. To confirm and evaluate expression, conditioned medium from transfected cells was analyzed by immunoblotting with an antibody directed toward the N terminus of the α subunit of the IGF-I receptor. In conditioned medium from transfections with all mutant cDNAs except those with mutations of Leu32, Leu33, Ser35, Tyr54, and Thr93, a Mr 135,000 protein, representing the IGF-I receptor α subunit, was detectable in amounts comparable to that in conditioned media from cells transfected with wild type receptor cDNA (data not shown). The failure to detect receptor in blots of the medium from cells transfected with the Leu32, Leu33, and Ser35 mutant cDNAs, despite the presence of IGF-I binding activity in the medium (see below), is presumably because the epitope of the antibody used for these blots is directed toward amino acids 31-50 of the receptor α subunit.
In contrast, in medium from cells transfected with the Tyr54 and Thr93 mutant cDNAs, there was neither detectable IGF-I binding activity nor receptor detectable by blotting, despite the presence of an immunoreactive Mr 160,000 protein corresponding to receptor precursor in detergent lysates of transfected cells (data not shown), a finding that has previously been observed with mutations that impair appropriate folding of this form of the homologous insulin receptor (26). Equilibrium binding studies were performed on conditioned media from transfected cells to characterize IGF-I binding to wild type and mutant receptors. As previously described (23), IGF-I binding to recombinant secreted receptor displayed simple kinetics and could be best fitted to a single-site model (data not shown). Computer analysis indicated a single population of binding sites with a Kd of 0.67 ± 0.06 × 10⁻⁹ M (mean ± S.E., n = 8). It should be noted that this value is higher than we have previously reported (23) and probably reflects changes in the source of IGF-I, assay conditions, and computerized analysis of binding data. Because studies utilizing alanine-scanning mutagenesis have demonstrated that meaningful changes in affinity produced by a single alanine substitution range from 2- to 100-fold (34), in the experiments described below we regarded any mutant with a greater than 2-fold increase in Kd, i.e. a Kd greater than 1.3 × 10⁻⁹ M, as exhibiting a significant disruption of IGF-I-receptor interactions. The results of our analyses of the IGF-I receptor L1 domain alanine mutants are shown in Fig. 1. Data are expressed as a ratio of the dissociation constant of the mutant receptor to that of the wild type receptor. Conditioned medium from cells transfected with the two mutant cDNAs discussed above, Tyr54 and Thr93, failed to exhibit sufficient [125I-Tyr31]IGF-I binding to permit accurate quantitative analysis of the expressed receptors' binding properties. As discussed, immunoblotting failed to reveal any evidence of secretion of these receptors from the cell, indicating that the mutant proteins were malfolded. Of the other 27 alanine mutants, 10 caused a significant impairment of IGF-I binding, i.e. a greater than 2-fold increase in Kd. Eight of these 10 mutants (Asp8, Asn11, Tyr28, His30, Leu33, Leu56, Phe58, and Arg59) are located in the N-terminal half of the L1 domain and result in increases in Kd for IGF-I ranging from 3-fold (Phe58) to 9-fold (Asp8). One mutant (Trp79), which results in a 3-fold increase in Kd for IGF-I, is located in the bulge region (amino acids 78-85) in the fourth turn of the β helix. The last mutant (Phe90) is located in the C-terminal half of the domain and results in a 23-fold increase in Kd.

Cysteine-rich Domain Mutagenesis-With the exception of glycines, prolines, and cysteines, we mutated to alanine all the residues in the cysteine-rich domain that are predicted to be accessible to ligand on the basis of the published structure of the receptor N-terminal fragment (18), i.e. in the region between amino acids 240 and 284. When expressed in 293 PEAK cells, all alanine mutants in this region appeared to be secreted normally, on the basis of immunoblotting of conditioned media and cell lysates as described above (data not shown). Thus it is likely that there was no major perturbation of receptor structure attributable to the mutations.
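The fitting and thresholding logic described above is straightforward to reproduce with standard tools. The sketch below fits invented equilibrium binding data to a one-site model, B = Bmax·F/(Kd + F), and applies the paper's 2-fold-Kd significance criterion; the data points and starting values are hypothetical, and the authors' own analysis used competitive binding data and unspecified fitting software.

```python
# Minimal sketch of the one-site binding analysis described above.
# Data points are invented; the paper's own fits used competitive
# binding data and (unspecified) fitting software.
import numpy as np
from scipy.optimize import curve_fit

def one_site(free, bmax, kd):
    """Amount bound for a single class of sites."""
    return bmax * free / (kd + free)

free = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0])        # nM free IGF-I
bound = np.array([0.07, 0.13, 0.26, 0.41, 0.59, 0.77, 0.87])  # arbitrary units

(bmax, kd), _ = curve_fit(one_site, free, bound, p0=(1.0, 0.7))

KD_WT = 0.67  # nM, the wild type value reported in the text
ratio = kd / KD_WT
print(f"Kd = {kd:.2f} nM, Kd(MUT)/Kd(WT) = {ratio:.1f}")
if ratio > 2:  # the paper's significance threshold
    print("significant disruption of IGF-I binding")
```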
Equilibrium binding assays were performed on conditioned media from the transfected cells to characterize the IGF-I binding properties of the mutant receptors. Fig. 2 summarizes these results. Only 4 out of a total of 26 mutations produced significant decreases in affinity for IGF-I. These are all located at the N terminus of the region analyzed and produce decreases of 2- to 6-fold; the largest decreases, of 6- and 4-fold, are produced by the mutations of Phe241 and Glu242, respectively, to alanine.

Mutagenesis of the C Terminus of the α Subunit-We systematically mutated amino acids 692 to 702 to alanine and expressed the resulting mutant cDNAs in 293 PEAK cells. As previously reported, when analyzed by immunoblotting as described above, all mutants appeared to be folded and secreted normally (data not shown) (23). Equilibrium binding studies were performed on conditioned media to evaluate the IGF-I binding properties of the mutants. The results of these experiments are shown in Fig. 3. Three mutants, Phe695, Ser699, and Val702, appeared to be without effect on affinity for IGF-I. Mutation of Phe701 produced a receptor that had no detectable IGF-I binding activity. Mutation of Phe692, Glu693, Asn694, Leu696, His697, Asn698, and Ile700 to alanine results in decreases in affinity for IGF-I ranging from 10-fold (Asn698) to 29-fold (Phe692).

DISCUSSION

In the present study, using structure-directed and alanine-scanning mutagenesis, we have identified 22 amino acids in the α subunit of the IGF-I receptor which appear to be functional determinants of its ligand binding site. Ten of these are located in the L1 domain, four in the cysteine-rich domain, and eight at the C terminus of the α subunit. The amino acids in the L1 and cysteine-rich domains are organized into two discontinuous epitopes. The first of these is located in the N-terminal part of the L1 domain and is composed of the amino acids Asp8, Asn11, Tyr28, His30, Leu33, Leu56, Phe58, Arg59, and Phe90. They form a footprint on the β sheet which forms the base of the domain and the wall of the putative ligand binding cavity (Fig. 4). Mutation of each of these residues to alanine produces a 2- to 10-fold decrease in affinity, with the exception of Phe90, which produced a 23-fold decrease. The second epitope is formed from the L1 residue Trp79, which is located in the loop/bulge (amino acids 78-85) between the β sheets forming one of the sides and the base of the domain, and the cysteine-rich domain residues Arg240, Phe241, Glu242, and Phe251; Trp79 is in contact with Glu242 (Fig. 4). These form a small patch on the cysteine-rich domain adjacent to the base of the L1 domain. Mutation of each of these residues results in a 2- to 6-fold decrease in affinity. The observation that the disruptive mutants in both domains form contiguous patches on the protein surface surrounded by non-disruptive mutations provides strong evidence that they are contact sites for IGF-I (34). The third element of the binding site is formed from amino acids Phe692, Glu693, Asn694, Leu696, His697, Asn698, Ile700, and Phe701, located at the C terminus of the α subunit, a region of the receptor for which there is no structural information. Thus, whether all the side chains forming this binding element are directly involved in interaction with IGF-I, or whether the effects of the mutations are indirect, cannot be ascertained until the structure of this region of the receptor is determined.
Nonetheless, the magnitude of the observed effects of the mutations of amino acids in this region on affinity for IGF-I indicates that this element of the binding site appears to provide the majority of the free energy of the interaction with IGF-I, as we have previously observed for the homologous insulin receptor (28). The data summarized above imply that this ligand binding site of the IGF-I receptor is formed from 14-22 amino acids, and thus that a similar number of IGF-I side chains form the cognate receptor binding site. Although this may seem surprising, because it indicates that a quarter to a third of the surface of IGF-I is involved in binding to the receptor, a similar proportion of the surface of the homologous insulin molecule forms the receptor binding site (35,36). This is consistent with the crystal structure of the N terminus of the IGF-I receptor; the binding pocket is large enough to completely accommodate either molecule (18).

FIG. 2. Alanine-scanning mutagenesis of the cysteine-rich domain. 293 PEAK cells were transfected with cDNAs encoding alanine mutants of ligand-accessible amino acids of the cysteine-rich domain of the recombinant secreted IGF-I receptor (amino acids 240-284) prepared by oligonucleotide-directed mutagenesis. Four days after transfection, conditioned medium from the cells was harvested and the expression and IGF-I binding of the mutant receptors were evaluated as described under "Materials and Methods." The dissociation constant was determined by computer fitting to a single-site model. The dissociation constant of the wild type receptor determined under these conditions was 0.67 ± 0.06 × 10⁻⁹ M (mean ± S.E., n = 8). The results are expressed as the ratio of the dissociation constant of the mutant to that of the wild type and represent the mean ± S.E. of three independent determinations. The amino acids mutated to alanine are designated by the single-letter code.

FIG. 3. Alanine-scanning mutagenesis of amino acids 692-702. 293 PEAK cells were transfected with cDNAs encoding alanine mutants of amino acids 692-702 of the recombinant secreted IGF-I receptor prepared by oligonucleotide-directed mutagenesis. Four days after transfection, conditioned medium from the cells was harvested and the expression and IGF-I binding of the mutant receptors were evaluated as described under "Materials and Methods." The dissociation constant was determined by computer fitting to a single-site model. The dissociation constant of the wild type receptor determined under these conditions was 0.67 ± 0.06 × 10⁻⁹ M (mean ± S.E., n = 8). The results are expressed as the ratio of the dissociation constant of the mutant to that of the wild type and represent the mean ± S.E. of three independent determinations. The amino acids mutated to alanine are designated by the single-letter code. *, receptor affinity too low to be accurately determined (mutant Kd (MUT)/wild type Kd (WT) > 250).

FIG. 4. Structure of the functional epitopes of the L1 and cysteine-rich domains. The Cα backbone of the L1 and cysteine-rich domains is shown as a ribbon representation. The amino acids mutated are shown in space-filling representation. Alanine mutants of amino acids in yellow had no effect on affinity. Alanine mutants of those in orange produced a 2- to 10-fold reduction in affinity, and those in red had a >10-fold reduction. Alanine mutations of residues in white resulted in receptors that were not secreted in detectable amounts. This figure was prepared with the Swiss PDB Viewer (53).
This finding is also supported by recent molecular reconstructions of the insulin-receptor complex from electron microscopic studies, which indicate that the binding cavity engulfs the ligand molecule, with at least a third of the ligand side chains being involved in interactions with the receptor (37). Both the insulin and IGF-I receptors and insulin and IGF-I exhibit high homology (9-11), although each receptor is highly specific for its cognate ligand (19,23,38). It is therefore pertinent to consider whether the findings of the present study provide any clues either as to whether similar binding mechanisms are employed by each receptor/ligand pair or to the molecular basis for this specificity. Comparison with the results of previous studies from this laboratory (26-28), which used directed alanine scanning in the absence of structural information to characterize the ligand binding site of the insulin receptor, reveals that, in L1 of each receptor, there is an overlapping epitope that is located in the three N-terminal strands of the β sheet forming the base of the domain and some of the residues at the N terminus of the corresponding adjacent β strands forming the side of the domain. This is somewhat larger in the IGF-I receptor and is composed of the side chains of Asp8, Asn11, Tyr28, His30, Leu33, Leu56, Phe58, Arg59, and Phe90 (Fig. 5). In the insulin receptor (26), it consists of the side chains of Asp12, Arg14, Asn15, Gln34, Leu36, and Phe64 (Fig. 5). Arg10, Asn11, and Phe58, corresponding to Arg14, Asn15, and Phe64 in the insulin receptor, are conserved in both proteins but exhibit strikingly different functional properties when mutated to alanine; the IGF-I receptor mutations only result in a 2- to 10-fold decrease in affinity, whereas those in the insulin receptor lead to a greater than 100-fold decrease (28). In the insulin receptor, two other groups of side chains in the L1 domain, Phe39 and Tyr67, and Phe89, Asn90, and Tyr91, have been implicated in ligand binding (26). Although mutation of Phe39 and Tyr67 to alanine produces a significant decrease in affinity for insulin (10- to 20-fold for Phe39 and 2-fold for Tyr67 (26)), it is unlikely, on the basis of the IGF-I receptor structure (18) and that of the homologous model of the insulin receptor that we have produced, that they play a direct role in binding. They are both located in β sheets forming the side of the L1 domain and are two residues away from its base, in which all the other binding determinants are located (see Fig. 5). This is somewhat surprising because Phe39 has been implicated in conferring insulin specificity on the insulin receptor (39). It must therefore be concluded that this is an indirect effect and that further studies, including a high resolution structure of the ligand-receptor complex, will be necessary to resolve this issue. The insulin receptor residues Phe89, Asn90, and Tyr91 form a second epitope on L1 (26). These amino acids are located in the loop/bulge just N-terminal to the fourth strand of the β sheet forming the base of the domain (18). Asn90 and Tyr91 are conserved in the IGF-I receptor, but they do not appear to play a role in ligand binding. The only residue of the corresponding region of the IGF-I receptor that we have shown to be important for binding is Trp79. However, this participates in an epitope that is formed largely from cysteine-rich domain residues.
The IGF-I receptor cysteine-rich domain epitope is composed of the side chains of Arg240, Phe241, Glu242, and Phe251, in addition to Trp79 from the L1 domain. Of these residues, only Phe241 and Phe251 are conserved in the insulin receptor (Phe247 and Phe257, respectively (9,10)). Although we have not formally evaluated whether this region of the insulin receptor is involved in ligand binding, we feel it is unlikely. On the basis of the IGF-I receptor structure, the distance from the L1 residues involved in ligand binding is greater than 30 Å, which is significantly larger than the largest dimension (20 Å) of the putative receptor binding site of the insulin molecule (35,36). The C-terminal α subunit element of the ligand binding site is highly conserved in both receptors, yet the residues composing this epitope appear to play very different roles in ligand binding in each receptor (Fig. 6), as we have discussed in a previous study (23). However, despite these findings, chimeric minireceptors composed of amino acids 1-470 of the insulin receptor and the C-terminal epitope of the insulin receptor or IGF-I receptor bind insulin with nearly identical affinity (24). Minireceptors formed from amino acids 1-460 of the IGF-I receptor and the C-terminal epitope of either receptor also exhibit the same behavior, i.e. nearly identical affinity for IGF-I (24). Both receptors bind their cognate ligands with similar affinities but, as we have discussed, there appear to be significant quantitative differences between the alanine scanning results for each receptor; there appear to be significantly more amino acids in the functional epitopes of the insulin receptor whose mutation to alanine results in a profound impairment of insulin binding (26). One possible explanation for this finding is that many of these mutants of the insulin receptor produce their quantitative effects both indirectly, by causing intramolecular perturbation of the structure of the binding site, and directly, by perturbing side-chain interactions with the ligand. This appears to be likely, because we have previously demonstrated, in a quantitative characterization of these alanine mutants, that the sum of the changes in free energy of binding attributable to each mutant is far in excess of the free energy of the interaction of the receptor with ligand (28). Whereas we have not been able to obtain similar data for the IGF-I receptor, because we have been unable to accurately quantitate the free energy change attributable to the mutation of Phe701 to alanine, this does not appear to be the case for the IGF-I receptor, where the sum of the free energy changes attributable to the other mutations is more commensurate with the free energy of the interaction with IGF-I (data not shown). A second possibility is that main-chain interactions play a significant role in the IGF-I/receptor interaction; their contribution to the interaction would not be detectable by the methods used here.

FIG. 5 (legend fragment). [...] (53). Both receptor domains are shown viewed from the base, and the Cα backbone is shown in ribbon representation. Amino acids whose mutation to alanine compromises affinity for ligand are shown in space-filling representation. Residues are colored according to the magnitude of the effect of the mutation on affinity; yellow corresponds to a 2- to 10-fold decrease, orange to a 10- to 100-fold decrease, and red to a >100-fold decrease.
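The free-energy bookkeeping behind the additivity argument above is ΔΔG = RT·ln(Kd,mut/Kd,wt) for each mutant, compared against the total binding free energy ΔG = −RT·ln(Kd,wt) relative to a 1 M standard state. A minimal sketch, with invented fold changes standing in for measured values:

```python
# Sketch of the free-energy additivity check described above.
# Fold changes are invented placeholders, not the measured values.
import math

R = 1.987e-3     # kcal/(mol*K)
T = 298.0        # K
KD_WT = 0.67e-9  # M, wild type Kd from the text

# Hypothetical Kd(mut)/Kd(wt) ratios for a set of alanine mutants
fold_changes = [3, 5, 9, 2, 4, 23, 10, 30]

ddg = [R * T * math.log(f) for f in fold_changes]  # kcal/mol per mutant
dg_total = -R * T * math.log(KD_WT)                # total binding energy
print(f"sum of DDG  = {sum(ddg):.1f} kcal/mol")
print(f"DG(binding) = {dg_total:.1f} kcal/mol")
# If sum(DDG) far exceeds DG(binding), some mutations probably act
# indirectly (structural perturbation) rather than by removing contacts.
```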
It is perhaps noteworthy, in the context of possible main-chain interactions, that recent alanine scanning studies of the interaction of IGF-I with IGF binding protein-3 failed to implicate any IGF-I side chains in this interaction (40). From the above it is clear that, despite the homologies between insulin and IGF-I and between their cognate receptors, and despite the quantitative similarities between their interactions, each ligand-receptor pair seems to employ a distinct binding mechanism. This is surprising because of the close homology of the ligands (41), and particularly so in view of the degree of conservation in IGF-I (41) of the amino acids forming insulin's receptor binding site (35,36): 8 out of 13 residues are absolutely conserved and, of the remainder, only one is non-conservatively substituted. Mutational studies of IGF-I structure and function have been limited; the only conserved or conservatively substituted residues, corresponding to those critical for insulin binding, that have been studied are Val11 (42,43), Phe23 (44), and Tyr24 and Tyr60 (45), which are equivalent to ValB12, PheB24, PheB25, and TyrA19, respectively, in insulin. In addition, systematic mutation of these residues to alanine has not been undertaken. However, despite these limitations, these studies do provide some relevant insights. Mutation of Val11 to alanine reduces the affinity of IGF-I by only 60% (42), whereas the equivalent mutation of ValB12 of insulin reduces its affinity by 99% (36). In contrast, modification of Phe23 of IGF-I to glycine reduces its affinity by 98% (44), whereas the affinity of the equivalent insulin analogue, GlyB24 insulin, for the insulin receptor is nearly that of native insulin (46). This clearly confirms that the molecular mechanisms underlying interaction with the receptor are different for each ligand-receptor pair. Studies with chimeric insulin/IGF-I receptors have indicated that the specificity of the receptors for IGF-I is mediated by the sub-domain of the cysteine-rich domain between amino acids 190 and 290 (19,22). The cysteine-rich domain binding epitope that we have identified resides in this region. As already discussed, two of the five residues forming this epitope, Phe241 and Phe251, are conserved in the insulin receptor, and mutation of Trp79, Arg240, and Glu242 to alanine has only relatively small effects on affinity for IGF-I. This is certainly insufficient to account for the 100-fold difference in affinity observed for the non-cognate ligand (19). However, recent experimental evidence suggesting another mechanism has been presented (47). Hoyne et al. (47) have demonstrated that the loop 255-265, which is adjacent to the cysteine-rich domain functional epitope that we have identified, modulates insulin/IGF-I affinity. Substitution of this loop for the equivalent loop of the insulin receptor increases the affinity of that receptor for IGF-I. It is quite clear from the results of the present study that this loop does not participate directly in the binding of IGF-I, and by implication its role in modulating affinity must be indirect. The loop in the insulin receptor exhibits significant charge differences from that of the IGF-I receptor: two lysine and two arginine residues in the insulin receptor loop compared with two glutamate and one aspartate residues in the IGF-I receptor loop. It has been suggested that it might therefore produce an unfavorable charge environment in the putative binding cavity for IGF-I binding (18).
Also, in the insulin receptor this loop is significantly longer than that of the IGF-I receptor and could thus possibly sterically impair access of the bulkier IGF-I molecule to the binding site. This mechanism is supported by the finding that reduction in the length of the C-domain of IGF-I, reducing the bulk of the molecule, specifically increases its affinity for the insulin receptor (48). Further support is provided by the finding that shortening the insulin receptor loop significantly increases its affinity for IGF-I.² Further experimental study will be necessary to fully elucidate this mechanism. Several reports suggest that the full-length IGF-I receptor, like the related insulin receptor, has a higher affinity for IGF-I than we have reported here for the secreted recombinant receptor (49) and exhibits complex binding kinetics with curvilinear Scatchard plots (49,50) and negative cooperativity (50,51). Two models have been proposed to explain such behavior in this family of receptors and ligands (49,52). Both propose that there are two distinct ligand binding sites of differing affinities on each α subunit and that there are two receptor binding sites on each ligand molecule. Ligand sequentially binds to one site on the first α subunit and then to the second site on the other α subunit, cross-linking the two heterodimers and generating the high affinity component of the receptor-ligand interaction. Subsequent binding of a second ligand molecule disrupts the cross-linking of the first and accelerates its dissociation (negative cooperativity). In the recombinant secreted form of the receptor, ligand binds only to one of the binding sites (the higher affinity binding site) and thus displays a lower affinity than the native receptor and simple binding kinetics. Data presented by Schaffer (49) for the holoreceptor indicate that the affinity of the high affinity binding site of the solubilized receptor is significantly greater than that of the low affinity site, and thus it would be expected that mutations compromising the affinity of the soluble receptor, i.e. of the higher affinity binding site, would also compromise the affinity of the native receptor. Although we have not yet formally compared the effects of mutations on the affinities of the secreted and native IGF-I receptors, our previous study (26) with the insulin receptor indicates that this is indeed the case.

² J. Whittaker and A. V. Groth, unpublished observations.

FIG. 6. Comparison of the C-terminal epitopes of the IGF-I and insulin receptors. The effects of alanine mutations of amino acids 692-702 of the IGF-I receptor and of amino acids 705-715 of the insulin receptor on affinity for their cognate ligands are compared. Results are presented as ratios of the dissociation constant of the mutant receptor to that of the wild type receptor. Data for the insulin receptor mutants were taken from Ref. 27. *, receptor affinity too low to be accurately determined (mutant Kd (MUT)/wild type Kd (WT) > 250).
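Before the summary, the contrast drawn above between the holoreceptor and the secreted receptor can be illustrated numerically: a single site class gives a linear Scatchard plot (B/F versus B), while two site classes of differing affinity give a curvilinear one. The sketch below uses two independent site classes with arbitrary Kd values purely for illustration; true negative cooperativity would require a more elaborate kinetic model than this.

```python
# Sketch: Scatchard behavior for one-site vs two-site binding.
# Kd values and site concentrations are arbitrary illustrations.
import numpy as np

free = np.logspace(-2, 2, 9)                 # free ligand, nM

def bound(free, sites):
    """Total bound ligand for independent site classes [(Bmax, Kd), ...]."""
    return sum(bmax * free / (kd + free) for bmax, kd in sites)

one_site = [(1.0, 1.0)]                      # one class, Kd = 1 nM
two_site = [(0.5, 0.1), (0.5, 10.0)]         # high- and low-affinity classes

for label, sites in [("one-site", one_site), ("two-site", two_site)]:
    b = bound(free, sites)
    print(label, "B:", np.round(b, 3), "B/F:", np.round(b / free, 2))
# For one site, B/F vs B is a straight line (slope -1/Kd); for two site
# classes the Scatchard plot is concave up ("curvilinear"), as reported
# for the full-length receptor.
```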
The third, located at the C terminus of the α subunit, is the most conserved and provides the majority of the free energy of the interaction. Further structural studies will be necessary to determine how these elements cooperate in the interaction of the receptor with IGF-I.
v3-fos-license
2020-11-08T14:07:08.890Z
2020-11-06T00:00:00.000
218644047
{ "extfieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1128/jvi.01969-20", "pdf_hash": "215a851e6f5622f28bd4a1f3b4ba46f50fd22a1e", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41481", "s2fieldsofstudy": [ "Biology" ], "sha1": "e9d5a3b060b7c45d7b75fa6dbd6853c22f7ea202", "year": 2020 }
pes2o/s2orc
The SARS-CoV-2 Conserved Macrodomain Is a Mono-ADP-Ribosylhydrolase

SARS-CoV-2 has recently emerged into the human population and has led to a worldwide pandemic of COVID-19 that has caused more than 1.2 million deaths worldwide. With no currently approved treatments, novel therapeutic strategies are desperately needed.

The recently emerged pandemic outbreak of coronavirus disease 2019 (COVID-19) is caused by a novel coronavirus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (1,2). As of 2 November 2020, this virus has been responsible for ~46 million cases of COVID-19 and >1.2 million deaths worldwide. SARS-CoV-2 is a member of the subgenus Sarbecovirus of the genus Betacoronavirus (β-CoVs), with overall high sequence similarity to other severe acute respiratory syndrome-related coronaviruses, including SARS-CoV. While most of the genome is >80% similar to SARS-CoV, there are regions where amino acid conservation is significantly lower. As expected, the most divergent proteins in the SARS-CoV-2 genome relative to SARS-CoV include the spike glycoprotein and several accessory proteins, including 8a (absent), 8b (extended), and 3b (truncated). However, somewhat unexpectedly, several nonstructural proteins also show significant divergence from SARS-CoV, including nonstructural proteins 3, 4, and 7, which could affect the biology of SARS-CoV-2 (3,4). Coronaviruses encode 16 nonstructural proteins that are processed from two polyproteins, 1a and 1ab (pp1a and pp1ab) (5). The largest nonstructural protein is nonstructural protein 3 (nsp3), which contains multiple modular protein domains. These domains in SARS-CoV-2 diverge in amino acid sequence from SARS-CoV by as much as 30%. The SARS-CoV-2 nsp3 includes three tandem macrodomains (Mac1, Mac2, and Mac3) (Fig. 1A) (3). The individual macrodomains of SARS-CoV-2 show similar, if not more, amino acid divergence compared to the other domains of nsp3, and more divergence than all nonstructural proteins except nsp4 and nsp7. Mac1 diverges 28% from SARS-CoV and 59% from Middle East respiratory syndrome coronavirus (MERS-CoV), while Mac2 and Mac3 diverge 24% from SARS-CoV. It is feasible that these significant sequence differences could impact the unique biology of SARS-CoV-2. However, macrodomains have a highly conserved structure, and thus sequence divergence may have little impact on their overall function. Mac1 is present in all CoVs, unlike Mac2 and Mac3, and early structural and biochemical data demonstrated that it contains a conserved three-layered α/β/α fold and binds to mono-ADP-ribose (MAR) and other related molecules (6-10). This is unlike Mac2 and Mac3, which fail to bind ADP-ribose and instead appear to bind to nucleic acids (11,12). ADP-ribose is buried in a hydrophobic cleft of Mac1, where it binds to several highly conserved residues, such as an aspartic acid at position 1022 of SARS-CoV pp1a (D1022; D22 of SARS-CoV and SARS-CoV-2 Mac1) and an asparagine at position 1040 of pp1a (N1040; N40 of SARS-CoV and SARS-CoV-2 Mac1) (Fig. 1B) (6). Mac1 homologs are also found in alphaviruses, hepatitis E virus, and rubella virus, and structural analysis of these macrodomains has demonstrated that they are very similar to CoV Mac1 (13,14). All are members of the larger MacroD-type macrodomain family, which includes the human macrodomains Mdo1 and Mdo2 (15).
The CoV Mac1 was originally named ADP-ribose-1″-phosphatase (ADRP), based on data demonstrating that it could remove the phosphate group from ADP-ribose-1″-phosphate (6-8). However, the activity was rather modest, and it was unclear why this would impact a virus infection. More recently it has been demonstrated that CoV Mac1 can hydrolyze the bond between amino acid side chains and ADP-ribose molecules (16-18), indicating that it can reverse protein ADP-ribosylation (6,8). ADP-ribosylation is a posttranslational modification catalyzed by ADP-ribosyltransferases [ARTs; also known as poly(ADP-ribose) polymerases (PARPs)] through the transfer of an ADP-ribose moiety from NAD⁺ onto target proteins (19). The ADP-ribose is transferred as a single unit of MAR, or single units of MAR are transferred consecutively to form a PAR chain. Several Mac1 proteins have been shown to hydrolyze MAR but have minimal activity toward PAR (16,17). Several MARylating PARPs are induced by interferon (IFN) and are known to inhibit virus replication, implicating MARylation in the host response to infection (20). Several reports have addressed the role of Mac1 in the replication and pathogenesis of CoVs, mostly using the mutation of a highly conserved asparagine to alanine (N41A, SARS-CoV). This mutation abolished the MAR-hydrolase activity of SARS-CoV Mac1 (18). The mutation has minimal effects on CoV replication in transformed cells but reduces viral load, leads to enhanced IFN production, and strongly attenuates both murine hepatitis virus (MHV) and SARS-CoV in mouse models of infection (7,18,21,22). MHV Mac1 was also required for efficient replication in primary macrophages, which could be partially rescued by the PARP inhibitors XAV-939 and 3-AB or small interfering RNA [...].

[FIG. 1B legend, fragment] ...and from hepatitis E virus (HEV). Sequences were aligned using the ClustalW method from the Clustal Omega online tool with manual adjustment. Identical residues are boldface, shaded in gray, and marked with asterisks; semiconserved residues are shaded in gray and marked with two dots (one change among all viruses) or one dot (two changes or conserved within the CoV family).

Based on the close structural similarities between viral macrodomains, we hypothesized that SARS-CoV-2 Mac1 has binding and hydrolysis activities similar to those of other CoV Mac1 enzymes. In this study, we determined the crystal structure of the SARS-CoV-2 Mac1 protein bound to ADP-ribose. Binding to and hydrolysis of MAR were tested and directly compared to those of a human macrodomain (Mdo2) and the SARS-CoV and MERS-CoV Mac1 proteins by several in vitro assays. All CoV Mac1 proteins bound to MAR and could remove MAR from a protein substrate. However, the initial rate associated with the loss of substrate was highest for the SARS-CoV-2 Mac1 protein, especially under multiturnover conditions. In addition, none of these enzymes could remove PAR from a protein substrate. These results indicate that Mac1 protein domains likely have similar functions and will be instrumental in the design and testing of novel therapeutic agents targeting the CoV Mac1 protein domain.

RESULTS

Structure of the SARS-CoV-2 Mac1 complexed with ADP-ribose. To create recombinant SARS-CoV-2 Mac1 for structure determination and enzyme assays, nucleotides 3348 to 3872 of SARS-CoV-2 isolate Wuhan-Hu-1 (accession number NC_045512), representing amino acids I1023 to K1197 of pp1a, were cloned into a bacterial expression vector containing an N-terminal 6-His tag and TEV (tobacco etch virus) protease cleavage site.
We obtained large amounts (>100 mg) of purified recombinant protein. A small amount of this protein was digested with TEV protease to obtain protein devoid of any extra tags for crystallization, and this was used to obtain crystals from which the structure was determined. Our crystallization experiments resulted in the same crystal form (needle clusters) from several conditions, but only when ADP-ribose was added to the protein. This represents an additional crystal form (P2₁) among the recently determined SARS-CoV-2 macrodomain structures (29,30). The structure of SARS-CoV-2 Mac1 complexed with ADP-ribose was obtained using X-ray diffraction data to 2.2 Å resolution and contained four molecules in the asymmetric unit that were nearly identical (Table 1). The polypeptide chains could be traced from V3 to M171 for subunits A/C and V3 to K172 for subunits B/D. Superposition of subunits B-D onto subunit A (169 residues aligned) yielded root mean square deviations (RMSD) of 0.17 Å, 0.17 Å, and 0.18 Å, respectively, between Cα atoms. As such, subunit A was used for the majority of the structure analysis described herein. The SARS-CoV-2 Mac1 protein adopted a fold consistent with the MacroD subfamily of macrodomains, containing a core composed of a mixed arrangement of 7 β-sheets (parallel and antiparallel) flanked by 6 α-helices (Fig. 2A and B). As mentioned above, apo crystals were never observed for our construct, though the apo structure has been solved by researchers at the Center for Structural Genomics of Infectious Diseases (PDB code 6WEN) (30) and the University of Wisconsin-Milwaukee (PDB code 6WEY) (31). Further analysis of the amino acid sequences used for expression and purification revealed that our construct had 5 additional residues at the C terminus (MKSEK) and differs slightly at the N terminus as well (GIE versus GE) relative to 6WEN. In addition, the sequence used to obtain the structure of 6WEY is slightly shorter than SARS-CoV-2 Mac1 at both the N- and C-terminal regions (Fig. 3A). To assess the effect of these additional residues on crystallization, chain B of the SARS-CoV-2 Mac1, which was traced to residue K172, was superimposed onto subunit A of the protein with PDB code 6W02 (30), a previously determined structure of ADP-ribose-bound SARS-CoV-2 Mac1. Analysis of the crystal packing of 6W02 indicates that the additional residues at the C terminus would clash with symmetry-related molecules (Fig. 3B). This suggests that the presence of these extra residues at the C terminus likely prevented the generation of the more tightly packed crystal forms obtained for 6W02 and 6WEY, which diffracted to high resolution. The ADP-ribose binding pocket contained large regions of positive electron density consistent with the docking of ADP-ribose (Fig. 4A). The adenine forms two hydrogen bonds with D22-I23, which make up a small loop between β2 and the N-terminal half of α1. The side chain of D22 interacts with N6, while the backbone nitrogen atom of I23 interacts with N1, in a fashion very similar to that of the SARS-CoV macrodomain (6). This aspartic acid is known to be critical for ADP-ribose binding in alphavirus macrodomains (26,27). A large number of contacts are made in the highly conserved loop between β3 and α2, which includes many highly conserved residues, including a GGG motif and N40, which is completely conserved in all enzymatically active macrodomains (32).
N40 is positioned to make hydrogen bonds with the 3′ OH groups of the distal ribose, as well as a conserved water molecule (Fig. 4B and C). K44 and G46 also make hydrogen bonds with the 2′ OH of the distal ribose, G48 makes contact with the 1′ OH and a water that resides near the catalytic site, and the backbone nitrogen atom of V49 hydrogen bonds with the α-phosphate. The other major interactions with ADP-ribose occur in another highly conserved region consisting of residues G130, I131, and F132, which are in the loop between β6 and α5 (Fig. 4B). The α-phosphate accepts a hydrogen bond from the nitrogen atom of I131, while the β-phosphate accepts hydrogen bonds from the backbone nitrogen atoms of G130 and F132. The phenyl ring of F132 may make van der Waals interactions with the distal ribose to stabilize it, which may contribute to binding and hydrolysis (33). Loops β3-α2 and β6-α5 are connected by an isoleucine bridge that, following ADP-ribose binding, forms a narrow channel around the diphosphate which helps position the terminal ribose for water-mediated catalysis (6). Because there are only a few studies testing the activity of mutant forms of the macrodomain, it is not exactly clear which of these residues are important for ADP-ribose binding, hydrolysis, or both. Additionally, a network of direct contacts of ADP-ribose to solvent, along with water-mediated contacts to the protein, is shown (Fig. 4C).

[Table 1 footnotes] (a) Values in parentheses are for the highest-resolution shell. (b) R_merge = Σ_hkl Σ_i |I_i(hkl) − ⟨I(hkl)⟩| / Σ_hkl Σ_i I_i(hkl), where I_i(hkl) is the intensity measured for the ith reflection and ⟨I(hkl)⟩ is the average intensity of all reflections with indices hkl. (c) R_meas = redundancy-independent (multiplicity-weighted) R_merge (47,54). (d) R_pim = precision-indicating (multiplicity-weighted) R_merge (55,56). (e) CC_1/2 is the correlation coefficient of the mean intensities between two random half-sets of data (57,58). (f) R_factor = Σ_hkl ||F_obs(hkl)| − |F_calc(hkl)|| / Σ_hkl |F_obs(hkl)|; R_free is calculated in an identical manner using 5% of randomly selected reflections that were not included in the refinement.

Comparison of SARS-CoV-2 Mac1 with other CoV macrodomain structures. We next sought to compare the SARS-CoV-2 Mac1 to other deposited structures of this protein. Superposition with the apo (6WEN) and ADP-ribose-complexed (6W02) proteins yielded RMSD of 0.48 Å (168 residues) and 0.37 Å (165 residues), respectively, indicating a high degree of similarity (Fig. 5A and B). Comparison of the ADP-ribose binding site of SARS-CoV-2 Mac1 with that of the apo structure (6WEN) revealed minor conformational differences that accommodate ADP-ribose binding. The loop between β3 and α2 (H45-V49) undergoes a change in conformation, and the side chain of F132 is moved out of the ADP-ribose binding site (Fig. 5C). Our ADP-ribose-bound structure is nearly identical to 6W02, except for slight deviations in the β3-α2 loop and an altered conformation of F156, where the aryl ring of F156 is moved closer to the adenine ring (Fig. 5C and D).

[FIG. 2 legend, fragment] ...The structure was rendered as ribbons and colored using the visible spectrum from the N terminus (blue) to the C terminus (red). (B) The structure was colored by secondary structure, showing sheets (magenta) and helices (green). The ADP-ribose is rendered as gray cylinders, with oxygens and nitrogens colored red and blue, respectively.
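The RMSD values quoted throughout this section come from least-squares superposition of Cα coordinates. For readers who want to reproduce such numbers, the standard Kabsch procedure is sketched below with random coordinates standing in for real chains; in practice the coordinates would be parsed from the PDB entries discussed here (e.g., 6W02, 6WEN), and this is generic structural bioinformatics rather than the authors' code.

```python
# Sketch of Calpha RMSD after optimal (Kabsch) superposition.
# Coordinates here are random stand-ins; real use would parse PDB files.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between Nx3 coordinate arrays after optimal superposition."""
    P = P - P.mean(axis=0)             # center both structures
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)  # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T                 # optimal rotation
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(0)
A = rng.normal(size=(170, 3)) * 10           # fake 170-residue Calpha trace
B = A + rng.normal(scale=0.2, size=A.shape)  # slightly perturbed copy
print(f"RMSD = {kabsch_rmsd(A, B):.2f} A")   # ~0.35 A at this noise level
```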
The altered conformation of F156, however, is likely a result of crystal packing, as F156 adopts this conformation in each subunit and would likely clash with subunit residues related by either crystallographic or noncrystallographic symmetry. We next compared the ADP-ribose-bound SARS-CoV-2 Mac1 structure with that of the SARS-CoV (PDB code 2FAV) (6) and MERS-CoV (PDB code 5HOL) (34) Mac1 proteins (Fig. 6). Superposition yielded RMSD of 0.71 Å (166 residues) and 1.06 Å (161 residues) for 2FAV and 5HOL, respectively. Additionally, the ADP-ribose binding modes in the SARS-CoV and SARS-CoV-2 structures almost perfectly superimposed (Fig. 6A and C). The conserved aspartic acid residue (D22 in SARS-CoV-2 Mac1) that binds to adenine is localized in a similar region in all 3 proteins, although there are slight differences in the rotamers about the Cβ-Cγ bond. The angles between the mean planes defined by the OD1, CG, and OD2 atoms relative to SARS-CoV-2 Mac1 are 23.1° and 46.5° for the SARS-CoV and MERS-CoV Mac1 structures, respectively. Another notable difference is that the SARS-CoV and SARS-CoV-2 macrodomains have an isoleucine (I23) following this aspartic acid, while MERS-CoV has an alanine (A22) (Fig. 6C and D). Conversely, SARS-CoV-2 and SARS-CoV Mac1 have a valine instead of an isoleucine immediately following the GGG motif (V49/I48). From these structures, it appears that having two isoleucines in this location would clash and that the Merbecovirus and Sarbecovirus β-CoVs have evolved in unique ways to create space in this pocket (Fig. 6D and data not shown). Despite these small differences in local structure, the overall structures of CoV Mac1 domains remain remarkably conserved, indicating that they likely have similar biochemical activities and biological functions.

SARS-CoV, SARS-CoV-2, and MERS-CoV bind to ADP-ribose with similar affinities. To determine if the CoV macrodomains had any noticeable differences in their ability to bind ADP-ribose, we performed isothermal titration calorimetry (ITC), which measures the energy released or absorbed during a binding reaction. Macrodomain proteins from human (Mdo2), SARS-CoV, MERS-CoV, and SARS-CoV-2 were purified and tested for their affinity for ADP-ribose. All CoV Mac1 proteins bound to ADP-ribose with low micromolar affinity (7 to 16 μM), while human Mdo2 bound with an affinity at least 30 times stronger (~220 nM) (Fig. 7A and B). As a control, we tested the ability of the MERS-CoV macrodomain to bind to ATP and observed only minimal binding, with millimolar affinity (data not shown). At higher concentrations, the SARS-CoV-2 macrodomain caused a slightly endothermic reaction, potentially the result of protein aggregation or a change in conformation (Fig. 7A). The MERS-CoV Mac1 had a greater affinity for ADP-ribose than SARS-CoV or SARS-CoV-2 Mac1 in the ITC assay (Fig. 7A and B); however, our results found the differences between these macrodomain proteins to be much smaller than previously reported (9). As an alternate method to confirm ADP-ribose binding, we conducted a thermal shift assay. All 4 macrodomains tested denatured at higher temperatures with the addition of ADP-ribose (Fig. 7C). We conclude that the Merbecovirus and Sarbecovirus Mac1 proteins bind to ADP-ribose with similar affinities.

CoV macrodomains are MAR-hydrolases. To examine the MAR-hydrolase activity of CoV Mac1, we first tested the viability of using ADP-ribose binding reagents to detect MARylated protein.
Previously, the use of radiolabeled NAD⁺ has been the primary method for labeling MARylated protein (16,17). To create a MARylated substrate, the catalytic domain of PARP10 (glutathione S-transferase [GST]-PARP10 CD) was incubated with NAD⁺, leading to its automodification. PARP10 CD is a standard substrate that has been used extensively in the field to analyze the activity of macrodomains (16,18,26,27). PARP10 is highly upregulated upon CoV infection (23,35) and is known to primarily auto-MARylate on acidic residues, which are the targets of the MacroD2 class of macrodomains (27). We then tested a panel of anti-MAR, anti-PAR, or both anti-MAR and anti-PAR binding reagents/antibodies for the ability to detect MARylated PARP10 by immunoblotting. The anti-MAR and anti-MAR/PAR binding reagents, but not the anti-PAR antibody, bound to MARylated PARP10 (Fig. 8A). Therefore, in this work we utilized the anti-MAR binding reagent to detect MARylated PARP10. We next tested the ability of SARS-CoV-2 Mac1 to remove ADP-ribose from MARylated PARP10. SARS-CoV-2 Mac1 and MARylated PARP10 were incubated at equimolar amounts of protein at 37°C, and the reaction was stopped at 5, 10, 20, 30, 45, or 60 min (Fig. 8B). As a control, MARylated PARP10 was incubated alone at 37°C and collected at similar time points (Fig. 8B). Each reaction had equivalent amounts of MARylated PARP10 and Mac1, which was confirmed by Coomassie blue staining (Fig. 8B). An immediate reduction of more than 50% in band intensity was observed within 5 min, and the ADP-ribose modification was nearly completely removed by SARS-CoV-2 Mac1 within 30 min (Fig. 8B). The MARylated PARP10 band intensities were calculated, plotted, and fitted using nonlinear regression (Fig. 8C). This result indicates that the SARS-CoV-2 Mac1 protein is a mono-ADP-ribosylhydrolase enzyme. The initial rate of substrate loss was greatest for the SARS-CoV-2 Mac1 protein, especially under multiple-turnover conditions, and all 3 viral macrodomains gave rise to a more rapid loss of substrate than the human Mdo2 enzyme (Fig. 9B). However, further enzymatic analyses of these proteins are warranted to more thoroughly understand their kinetics and binding affinities associated with various MARylated substrates.

CoV Mac1 proteins do not hydrolyze PAR. To determine if the CoV Mac1 proteins could remove PAR from proteins, we incubated these proteins with an auto-PARylated PARP1 protein. PARP1 was incubated with increasing concentrations of NAD⁺ to create a range of modification levels (Fig. 10A). We incubated both partially and heavily modified PARP1 with all four macrodomains, and with poly-ADP-ribose glycohydrolase (PARG) as a positive control, for 1 h. While PARG completely removed PAR, none of the macrodomain proteins removed PAR chains from PARP1 (Fig. 10B). We conclude that macrodomain proteins are unable to remove PAR from an automodified PARP1 protein under these conditions.

ELISAs can be used to measure ADP-ribosylhydrolase activity of macrodomains. Gel-based assays such as those described above suffer from significant limitations in the number of samples that can be tested at once. A higher-throughput assay will be needed to more thoroughly investigate the activity of these enzymes and to screen for inhibitor compounds. Based on the success of our antibody-based detection of MAR, we developed an enzyme-linked immunosorbent assay (ELISA) that can detect de-MARylation similarly to our gel-based assay, but in a higher-throughput manner (Fig. 11A).
First, MARylated PARP10 was added to ELISA plates. Next, the wells were washed and then incubated with different concentrations of the SARS-CoV-2 Mac1 protein for 60 min. After incubation, the wells were washed and treated with the anti-MAR binding reagent, followed by horseradish peroxidase (HRP)-conjugated secondary antibody and the detection reagent. As controls, we detected MARylated and non-MARylated PARP10 proteins bound to glutathione plates with an anti-GST antibody and the anti-MAR binding reagent and their corresponding secondary antibodies (Fig. 11B). SARS-CoV-2 Mac1 removed the MAR signal in a dose-dependent manner, and the results were plotted with a nonlinear regression-fitted line (Fig. 11C). Based on these results, we believe that this ELISA will be a useful tool for screening potential inhibitors of macrodomain proteins.

DISCUSSION

Here, we report the crystal structure of SARS-CoV-2 Mac1 and its enzyme activity in vitro. Structurally, it has a conserved three-layered α/β/α fold typical of the MacroD family of macrodomains and is extremely similar to other CoV Mac1 proteins (Fig. 2 and 6). The conserved CoV macrodomain (Mac1) was initially described as an ADP-ribose-1″-phosphatase (ADRP), as it was shown to be structurally similar to yeast enzymes that have this enzymatic activity (7,36). Early biochemical studies confirmed this activity for CoV Mac1, though its phosphatase activity toward ADP-ribose-1″-phosphate was rather modest (6-8). Later, it was shown that mammalian macrodomain proteins could remove ADP-ribose from protein substrates, indicating protein de-ADP-ribosylation as a more likely function for the viral macrodomains (32,37,38). Shortly thereafter, the SARS-CoV, HCoV-229E, feline infectious peritonitis virus (FIPV), several alphavirus, and hepatitis E virus macrodomains were demonstrated to have de-ADP-ribosylating activity (16-18). However, this activity had not yet been reported for the MERS-CoV or SARS-CoV-2 Mac1 protein. In this study, we show that the Mac1 proteins from SARS-CoV, MERS-CoV, and SARS-CoV-2 hydrolyze MAR from a protein substrate (Fig. 6). Their enzymatic activities were similar despite sequence divergence of almost 60% between SARS-CoV-2 and MERS-CoV. However, the initial rate associated with the loss of substrate was largest for the SARS-CoV-2 Mac1 protein, particularly under multiple-turnover conditions. It is unclear what structural or sequence differences may account for the increased activity of the SARS-CoV-2 Mac1 protein under these conditions, especially considering the pronounced structural similarities between these proteins, specifically with the SARS-CoV Mac1 (0.71 Å RMSD). It is also unclear whether these differences would matter in the context of virus infection, as the relative concentrations of Mac1 and its substrate during infection are not known. We also compared these activities to that of the human Mdo2 macrodomain. Mdo2 had a greater affinity for ADP-ribose than the viral enzymes but had significantly reduced enzyme activity in our experiments. Due to its high affinity for ADP-ribose, it is possible that the Mdo2 protein was partially inhibited by rebinding to the MAR product in these assays. Regardless, these results suggest that the human and viral proteins likely have structural differences that alter their biochemical activities in vitro, indicating that it may be possible to create viral macrodomain inhibitors that do not impact the human macrodomains.
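The initial-rate comparisons referred to above come from nonlinear regression of de-MARylation time courses such as the band intensities in Fig. 8C. The sketch below fits invented intensities to a single-exponential decay and reports the rate at t = 0; the paper states only that the data were fitted by nonlinear regression, so the specific model here is an assumption.

```python
# Sketch: fit a de-MARylation time course (e.g., Fig. 8C band
# intensities) to a single-exponential decay and report the initial
# rate. Intensities are invented; the exponential model is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, k, plateau):
    """Exponential loss of MARylated substrate toward a plateau."""
    return plateau + (i0 - plateau) * np.exp(-k * t)

t = np.array([0, 5, 10, 20, 30, 45, 60])                  # min
intensity = np.array([1.00, 0.45, 0.25, 0.10, 0.05, 0.03, 0.02])

(i0, k, plateau), _ = curve_fit(decay, t, intensity, p0=(1.0, 0.1, 0.0))
initial_rate = k * (i0 - plateau)   # slope at t = 0, intensity units/min
print(f"k = {k:.2f} /min, initial rate = {initial_rate:.2f} units/min")
```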
We also compared the ability of these macrodomain proteins to hydrolyze PAR. None of the macrodomains were able to hydrolyze either partially or heavily modified PARP1, further demonstrating that the primary enzymatic activity of these proteins is to hydrolyze MAR (Fig. 10). When viral macrodomain sequences are analyzed, it is clear that they have at least 3 highly conserved regions (Fig. 1B) (24). The first region includes the NAAN (residues 37 to 40) and GGG (residues 46 to 48) motifs in the loop between β3 and α2. The second region includes a GIF (residues 130 to 132) motif in the loop between β6 and α5. The final conserved region is a VGP (residues 96 to 98) motif at the end of β5 that extends into the loop between β5 and α4. Both of the first two regions have well-defined interactions with ADP-ribose (Fig. 3). However, no one has addressed the role of the VGP residues, though our structure indicates that the glycine may interact with a water molecule that makes contact with the β-phosphate. Identifying residues that directly contribute to ADP-ribose binding, hydrolysis, or both by CoV Mac1 proteins will be critical to determining the specific roles of ADP-ribose binding and hydrolysis in CoV replication and pathogenesis.

While all previous studies of macrodomain de-ADP-ribosylation have primarily used radiolabeled substrate, we obtained highly reproducible and robust data utilizing ADP-ribose binding reagents designed to specifically recognize MAR (39,40). The use of these binding reagents should enhance the feasibility of this assay for many labs that are not equipped for radioactive work. Utilizing these binding reagents, we further developed an ELISA for de-MARylation that has the ability to dramatically increase the number of samples that can be analyzed compared to the gel-based assay. To our knowledge, previously developed ELISAs were used to measure ADP-ribosyltransferase activities (41), but no ELISA has been established to test the ADP-ribosylhydrolase activity of macrodomain proteins. This ELISA should be useful to those in the field to screen compounds for macrodomain inhibitors that could be either valuable research tools or potential therapeutics.

The functional importance of the CoV Mac1 domain has been demonstrated in several reports, mostly utilizing the mutation of a highly conserved asparagine that mediates contact with the distal ribose (Fig. 3B) (18,21,22). However, the physiological relevance of Mac1 during SARS-CoV-2 infection has yet to be determined. In addition, the proteins that are targeted by the CoV Mac1 for de-ADP-ribosylation remain unknown. Unfortunately, there are no known compounds that inhibit this domain that could help identify the functions of this protein during infection. The outbreak of COVID-19 has illustrated an urgent need for developing multiple therapeutic drugs targeting conserved coronavirus proteins. Mac1 appears to be an ideal candidate for further drug development based on (i) its highly conserved structure and biochemical activities within CoVs and (ii) its importance for multiple CoVs to cause disease. Targeting Mac1 may also have the benefit of enhancing the innate immune response, as we have shown that Mac1 is required for some CoVs to block IFN production (18,23).
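As a hedged aside, conserved-motif positions like those listed above can be located with a simple sequence scan. The R sketch below uses a short hypothetical placeholder string rather than the actual SARS-CoV-2 Mac1 sequence, so the reported positions are purely illustrative.

```r
# Scan a protein sequence for the conserved macrodomain motifs noted in
# the text (NAAN, GGG, VGP, GIF). The sequence is a made-up placeholder.
mac1_seq <- "MSKNAANLLGGGAQVGPDDKGIFEDKLTRS"

for (motif in c("NAAN", "GGG", "VGP", "GIF")) {
  hits <- gregexpr(motif, mac1_seq, fixed = TRUE)[[1]]
  cat(motif, "at position(s):", hits, "\n")  # -1 means not found
}
```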
Considering that Mac1 proteins from divergent αCoVs such as HCoV-229E and FIPV also have de-ADP-ribosylating activity (16,17), it is possible that compounds targeting Mac1 could prevent disease caused by a wide variety of CoVs, including those of veterinary importance like porcine epidemic diarrhea virus (PEDV). Additionally, compounds that inhibit Mac1 in combination with the structure could help identify the mechanisms it uses to bind to its biologically relevant protein substrates, remove ADP-ribose from these proteins, and potentially define the precise function for Mac1 in SARS-CoV-2 replication and pathogenesis. In conclusion, the results described here will be critical for the design and development of highly specific Mac1 inhibitors that could be used therapeutically to mitigate COVID-19 or future CoV outbreaks.

MATERIALS AND METHODS

Plasmids. The SARS-CoV macrodomain (Mac1) (residues 1000 to 1172 of pp1a) was cloned into the pET21a(+) expression vector with an N-terminal His tag. The MERS-CoV Mac1 (residues 1110 to 1273 of pp1a) was also cloned into pET21a(+) with a C-terminal His tag. SARS-CoV-2 Mac1 (residues 1023 to 1197 of pp1a) was cloned into the pET30a(+) expression vector with an N-terminal His tag and a TEV cleavage site (Synbio). The pETM-CN Mdo2 Mac1 (residues 7 to 243) expression vector with an N-terminal His-TEV-V5 tag and the pGEX4T-PARP10-CD (residues 818 to 1025) expression vector with an N-terminal GST tag were previously described (32). All plasmids were confirmed by restriction digestion, PCR, and direct sequencing.

Protein expression and purification. A single colony of Escherichia coli cells [C41(DE3)] containing plasmids harboring the constructs of the macrodomain proteins was inoculated into 10 ml LB medium and grown overnight at 37°C with shaking at 250 rpm. The overnight culture was transferred to a shaker flask containing 1 liter 2× Terrific broth (TB) medium at 37°C until the optical density at 600 nm (OD600) reached 0.7. The proteins were induced with 0.4 mM IPTG (isopropyl-β-D-thiogalactopyranoside) either at 37°C for 3 h or at 17°C for 20 h. Cells were pelleted at 3,500 × g for 10 min and frozen at −80°C. Frozen cells were thawed at room temperature, resuspended in 50 mM Tris (pH 7.6)-150 mM NaCl, and sonicated with the following cycle parameters: amplitude, 50%; pulse length, 30 s; number of pulses, 12, with incubation on ice for >1 min between pulses. The soluble fraction was obtained by centrifuging the cell lysate at 45,450 × g for 30 min at 4°C. The expressed soluble proteins were purified by affinity chromatography using a 5-ml prepacked HisTrap HP column on an ÄKTA Pure protein purification system (GE Healthcare). The fractions were further purified by size exclusion chromatography (SEC) with a Superdex 75 10/300 GL column equilibrated with 20 mM Tris (pH 8.0)-150 mM NaCl, and the protein was sized as a monomer relative to the column calibration standards. To cleave off the His tag from the SARS-CoV-2 Mac1, purified TEV protease was added to purified SARS-CoV-2 Mac1 protein at a ratio of 1:10 (wt/wt) and then passed back through the nickel-nitrilotriacetic acid (Ni-NTA) HP column. Protein was collected in the flowthrough and equilibrated with 20 mM Tris (pH 8.0), 150 mM NaCl. The SARS-CoV-2 Mac1, free from the N-terminal 6-His tag, was used for subsequent crystallization experiments.
For the PARP10 CD protein, the cell pellet was resuspended in 50 mM Tris-HCl (pH 8.0), 500 mM NaCl, 0.1 mM EDTA, 25% glycerol, 1 mM dithiothreitol (DTT) and sonicated as described above. The cell lysate was incubated with 10 ml of glutathione Sepharose 4B resin (GE Healthcare), equilibrated with the same buffer for 2 h, and then applied to a gravity flow column to allow unbound proteins to flow through. The column was washed with the resuspension buffer until the absorbance at 280 nm reached baseline. The bound protein was eluted from the column with resuspension buffer containing 20 mM reduced glutathione and then dialyzed back into the resuspension buffer overnight at 4°C.

Isothermal titration calorimetry. All ITC titrations were performed on a MicroCal PEAQ-ITC instrument (Malvern Panalytical Inc., MA). All reactions were performed in 20 mM Tris (pH 7.5)-150 mM NaCl using a 100 µM concentration of all macrodomain proteins at 25°C. Titration of 2 mM ADP-ribose or ATP (MilliporeSigma) contained in the stirring syringe included a single 0.4-µl injection, followed by 18 consecutive injections of 2 µl. Data analysis of thermograms was carried out using the "one set of binding sites" model of the MicroCal ITC software to obtain all fitting model parameters for the experiments.

Differential scanning fluorimetry. The thermal shift assay with differential scanning fluorimetry (DSF) involved the use of a LightCycler 480 instrument (Roche Diagnostics). In total, a 15-µl mixture containing 8× SYPRO orange (Invitrogen) and 10 µM macrodomain protein in buffer containing 20 mM HEPES-NaOH (pH 7.5) and various concentrations of ADP-ribose was mixed on ice in a 384-well PCR plate (Roche). Fluorescent signals were measured from 25 to 95°C in 0.2°C/30-s steps (excitation, 470 to 505 nm; detection, 540 to 700 nm). The main measurements were carried out in triplicate. Data evaluation and melting temperature (Tm) determination involved use of the Roche LightCycler 480 protein melting analysis software, and data-fitting calculations involved the use of single-site binding curve analysis on GraphPad Prism.

(ii) PARP10 CD ADP-ribose hydrolysis. All reactions were performed at 37°C for the designated time. A 1 µM solution of MARylated PARP10 CD and purified Mac1 protein was added in the reaction buffer (50 mM HEPES, 150 mM NaCl, 0.2 mM DTT, and 0.02% NP-40). The reaction was stopped with addition of 2× Laemmli sample buffer containing 10% β-mercaptoethanol. Protein samples were heated at 95°C for 5 min before loading and separated on SDS-PAGE cassettes (Thermo Fisher Scientific Bolt 4 to 12% bis-Tris Plus gels) in MES (morpholineethanesulfonic acid) running buffer. For direct protein detection, the SDS-PAGE gel was stained using InstantBlue protein stain (Expedeon). For immunoblotting, the separated proteins were transferred onto a polyvinylidene difluoride (PVDF) membrane using an iBlot 2 dry blotting system (ThermoFisher Scientific). The blot was blocked with 5% skim milk in phosphate-buffered saline (PBS) containing 0.05% Tween 20 and probed with the anti-mono- or poly-ADP-ribose binding reagents/antibodies MABE1076 (anti-MAR), MABC547 (anti-PAR), and MABE1075 (anti-MAR/PAR) (MilliporeSigma) and the anti-GST tag monoclonal antibody MA4-004 (ThermoFisher Scientific). The primary antibodies were detected with secondary infrared anti-rabbit and anti-mouse immunoglobulin antibodies (LI-COR Biosciences). All immunoblots were visualized using an Odyssey CLx imaging system (LI-COR Biosciences).
The images were quantitated using ImageJ (National Institutes of Health [NIH]) or Image Studio software.

(iii) Kinetic analysis of ADP-ribose hydrolysis. To quantify the initial rate of substrate decay (k) associated with the four macrodomains, each data set represented in the substrate decay immunoblots in Fig. 6C was quantified and fitted, as shown in Fig. 6D.

(iv) ELISA-based MAR hydrolysis. ELISA Well-Coated glutathione plates (G-Biosciences, USA) were washed with phosphate-buffered saline (PBS) containing 0.05% Tween 20 (PBS-T) and incubated with 50 µl of 100 nM automodified MARylated PARP10 CD in PBS for 1 h at room temperature. Following four washes with PBS-T, various concentrations of SARS-CoV-2 Mac1 were incubated with MARylated PARP10 CD for 60 min at 37°C. Purified macrodomains were 2-fold serially diluted starting at 100 nM in reaction buffer prior to addition to MARylated PARP10 CD. Subsequently, ELISA wells were washed four times with PBS-T and incubated with 50 µl/well of anti-GST (Invitrogen MA4-004) or anti-MAR (MABE1076; MilliporeSigma) diluted 1:5,000 in 5 mg/ml bovine serum albumin (BSA) in PBS-T (BSA5-PBS-T) for 1 h at room temperature. After four additional washes with PBS-T, each well was incubated for 1 h at room temperature with 50 µl of anti-rabbit-HRP (SouthernBiotech, USA) or anti-mouse-HRP (Rockland Immunochemicals, USA) conjugate diluted 1:5,000 in BSA5-PBS-T. The plate was washed four times with PBS-T, and 100 µl of TMB (3,3′,5,5′-tetramethylbenzidine) peroxidase substrate solution (SouthernBiotech, USA) was added to each well and incubated for 10 min. The peroxidase reaction was stopped with 50 µl per well of 1 M HCl before proceeding to reading. Absorbance was measured at 450 nm, with the absorbance at 620 nm subtracted as background, using a BioTek PowerWave XS plate reader (BioTek). As controls, MARylated PARP10 CD and non-MARylated PARP10 were detected with both anti-MAR and anti-GST antibodies. The absorbance of non-MARylated PARP10 CD detected with anti-MAR antibody was used to establish the background signal. The percentage of signal remaining was calculated by dividing the experimental signal (with enzyme) minus background by the control (no enzyme) minus the background.

(v) PARP1 ADP-ribose hydrolysis. To evaluate the PAR hydrolase activity of CoV macrodomains, 200 ng of PARP1 slightly automodified with 5 µM NAD+ or highly automodified with 500 µM NAD+ was used as the substrate for the de-PARylation assays. Recombinant macrodomain protein (1 µg) was added to the reaction buffer (100 mM Tris-HCl [pH 8.0], 10% [vol/vol] glycerol, and 10 mM DTT) containing automodified PARP1 and incubated for 1 h at 37°C. Recombinant PARG (1 µg) was used as a positive control for PAR erasing (43). Reaction mixtures were resolved on 4 to 12% Criterion XT bis-Tris protein gels, transferred onto nitrocellulose membrane, and probed with the anti-PAR polyclonal antibody 96-10.

Structure determination. (i) Crystallization and data collection. Purified SARS-CoV-2 Mac1 in 150 mM NaCl-20 mM Tris (pH 8.0) was concentrated to 13.8 mg/ml for crystallization screening. All crystallization experiments were set up using an NT8 drop-setting robot (Formulatrix, Inc.) and UVXPO MRC (Molecular Dimensions) sitting drop vapor diffusion plates at 18°C. One hundred nanoliters of protein and 100 nl of crystallization solution were dispensed and equilibrated against 50 µl of the latter. The SARS-CoV-2 Mac1 complex with ADP-ribose was prepared by adding the ligand, from a 100 mM stock in water, to the protein at a final concentration of 2 mM.
Crystals were obtained in 1 to 2 days from the Salt Rx HT screen (Hampton Research), condition E10 (1.8 M NaH2PO4/K2HPO4, pH 8.2). Refinement screening was conducted using the additive screen HT (Hampton Research) by adding 10% of each additive to the Salt Rx HT E10 condition in a new 96-well UVXPO crystallization plate. The crystals used for data collection were obtained from Salt Rx HT E10 supplemented with 0.1 M NDSB-256 (nondetergent sulfobetaine) from the additive screen. Samples were transferred to a fresh drop composed of 80% crystallization solution and 20% (vol/vol) polyethylene glycol 200 (PEG 200) and stored in liquid nitrogen. X-ray diffraction data were collected at the Advanced Photon Source, IMCA-CAT beamline 17-ID, using a Dectris Eiger 2X 9M pixel array detector.

(ii) Structure solution and refinement. Intensities were integrated using XDS (44,45) via Autoproc (46), and the Laue class analysis and data scaling were performed with Aimless (47). Notably, a pseudotranslational symmetry peak was observed at (0, 0.31, 0.5) that was 44.6% of the origin. Structure solution was conducted by molecular replacement with Phaser (48) using a previously determined structure of ADP-ribose-bound SARS-CoV-2 Mac1 (PDB 6W02) as the search model. The top solution was obtained in the space group P21 with four molecules in the asymmetric unit. Structure refinement and manual model building were conducted with Phenix (49) and Coot (50), respectively. Disordered side chains were truncated to the point for which electron density could be observed. Structure validation was conducted with MolProbity (51), and figures were prepared using the CCP4MG package (52). Superposition of the macrodomain structures was conducted with GESAMT (53).

Statistical analysis. All statistical analyses were done using an unpaired two-tailed Student's t test to assess differences in mean values between groups, and graphs show means and SD. P values of ≤0.05 were considered significant.

Data availability. The coordinates and structure factors for SARS-CoV-2 Mac1 were deposited in the Worldwide Protein Data Bank (wwPDB) with the accession code 6WOJ.
Extracellular Matrix- and Integrin Adhesion Complexes-Related Genes in the Prognosis of Prostate Cancer Patients' Progression-Free Survival

Prostate cancer is a heterogeneous disease, and one of the main obstacles in its management is the inability to foresee its course. Therefore, novel biomarkers are needed that will guide the treatment options. The extracellular matrix (ECM) is an important part of the tumor microenvironment that largely influences cell behavior. ECM components are ligands for integrin receptors, which are involved in every step of tumor progression. An underlying characteristic of integrin activation and ligation is the formation of integrin adhesion complexes (IACs), intracellular structures that carry information conveyed by integrins. By using The Cancer Genome Atlas data, we show that the expression of ECM- and IACs-related genes is changed in prostate cancer. Moreover, machine learning methods revealed that they are a source of biomarkers for progression-free survival of patients that are stratified according to the Gleason score. Namely, low expression of FMOD and high expression of PTPN2 genes are associated with worse survival of patients with a Gleason score lower than 9. The FMOD gene encodes a protein that may play a role in the assembly of the ECM, and the PTPN2 gene product is a protein tyrosine phosphatase activated by integrins. Our results suggest potential biomarkers of prostate cancer progression.

Introduction

Prostate cancer is among the most common cancers with regard to incidence and mortality [1,2]. According to the Global Cancer Observatory, in 2020, there were 1,414,259 new prostate cancer cases diagnosed (7.3% of all sites) and 375,304 deaths from this disease (3.8% of all sites) [3]. Surgical intervention (radical prostatectomy) and radiotherapy are the usual treatment options for localized prostate cancer [4,5]. However, biochemical recurrence, which is defined by a rise in the blood level of prostate-specific antigen (PSA), occurs within 10 years in a fraction of patients treated with radical prostatectomy (20-40% of cases) and radiotherapy (30-50% of cases) [6]. Biochemical recurrence is usually a sign of a progressive disease, which is accompanied by symptoms or evidence of disease progression on imaging [7]. Although the five- and ten-year survival rates in prostate cancer are favorable in comparison to some other more aggressive cancer types, the recurrence of the disease is fatal for a substantial number of patients. The probability of developing prostate cancer increases markedly with age, and it is considered that 30-40% of men older than 50 years of age have prostate cancer, but not all cases are clinically significant [8]. In line with these observations, one of the greatest obstacles in prostate cancer treatment is the inability to foresee the course of the disease and to recognize the tumors that will be indolent and require no or minimal intervention and those that are more malignant and will progress fast. Therefore, novel biomarkers of disease progression and therapeutic targets are needed [9]. In this article, we used recursive partitioning and survival trees for the establishment of prognostic subgroups. Considering the prostate cancer heterogeneity, we trust that our approach better describes its characteristics. Additionally, survival trees are easier to interpret and present by clinicians than the Cox regression results.

Materials and Methods

The main methodological workflow of this article is presented in Figure 1 and described in the following sections. Briefly, after the TCGA PRAD dataset was downloaded, differentially expressed genes (DEGs) were analyzed. Subsequently, the enrichment analysis was performed on the DEGs.
All the mentioned steps were performed with the TCGAbiolinks R package [27,28]. After that, the rpart R module (version 4.1.19) [29,30] was used to perform recursive partitioning and the progression-free survival analysis. Furthermore, the R commander (version 2.8-0) and EZR packages (version 1.61) [31,32] were used to establish the Kaplan-Meier estimate of individual nodes determined by rpart. The reason why we performed survival analysis with all the matrisome and adhesome genes and not only DEGs is that rpart analysis defines risk subgroups, so the changes of gene expressions in a subgroup of patients could be masked by global levels of gene expression in pooled prostate cancer samples.

Figure 1. The workflow of this study. The conducted steps are shown in green rectangles. The software used, and the method that each performs, is shown in red rectangles. ECM, extracellular matrix; IAC, integrin adhesion complex; EZR, Easy R.
ECM- and IACs-Related Genes' Retrieval

The matrisome is the ensemble of genes encoding the extracellular matrix (ECM) and ECM-associated proteins, which was predicted bioinformatically in the genome of various model organisms by using the characteristic domain-based organization of ECM proteins [33,34]. The matrisome genes (N = 1027) were retrieved from: http://matrisome.org/ (accessed on 1 September 2022) [33,34]. These genes can be further divided into genes encoding core matrisome proteins and matrisome-associated proteins. The consensus adhesome consists of the 60 most common proteins that are extracted from quantitative proteomic datasets, in which IACs were induced by the canonical ligand fibronectin. These proteins are likely to represent the core cell adhesion machinery and were retrieved from [37]. The final combined list of matrisome, adhesome, and consensus adhesome genes had 1286 genes in total, and is provided in the Supplementary Material.

Data Preparation

The TCGAbiolinks R package [27,28] was used to download, prepare, and analyze The Cancer Genome Atlas (TCGA) [38] prostate adenocarcinoma (PRAD) dataset. This dataset contains gene expression data for 497 prostate cancer patients and corresponding non-transformed prostate tissues for a subset of 52 patients. The same R package was used to pre-process, normalize, and filter the dataset and prepare it for the differential gene expression, functional enrichment, and survival analyses.

Differential Gene Expression and Functional Enrichment Analyses

To gain insight into differentially expressed genes (DEGs) in prostate cancer in comparison to non-transformed prostate tissue, we set the following criteria in the TCGAbiolinks R package: |log2FC| ≥ 1 (corresponding to |fold change| ≥ 2) and FDR (false discovery rate) p-value < 0.01. These conditions yielded 2037 DEGs. Among these 2037 genes, we singled out ECM- and IACs-related genes with changed expression in prostate cancer. The functional enrichment analysis for the Gene Ontology Cellular Component (GO CC) category using 2037 DEGs was performed by using the TCGAbiolinks R package.

Clinical Data Retrieval

The clinical data in Table 1 were downloaded from the cBioPortal [39] and NCI Genomic Data Commons (GDC, TCGA) portals [40]. The downloaded data were combined in a single file according to the patients' unique TCGA codes. In total, there were 493 patients with clinical information available. The event that we considered was progression-free survival (PFS, N = 93), because, fortunately, only a small percentage of patients had the event needed for overall survival analyses, which makes an overall survival analysis in prostate cancer suboptimal. Some variables in our analysis contained missing data. However, the decision trees that we obtained in the survival analysis by using recursive partitioning hold an advantage in comparison to traditional statistical methods, as they are not as affected by missing data [41].

The Survival Analysis

Variables from Table 1 (age, Gleason score, TNM staging, and residual tumor information) were supplemented with gene expression data for matrisome and adhesome genes, and their prognostic value was determined through recursive partitioning. The American Joint Committee on Cancer (AJCC) recommends recursive partitioning for the analysis in prognostic studies [42,43]. We used the rpart package [29,30] in the programming language R (version 4.2.1) [44] for the creation of survival trees.
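A minimal sketch of this survival-tree step is given below, assuming a data frame `pfs` with a PFS time in months, a progression indicator, and columns for the clinical variables and gene expressions (these names are placeholders, not the authors' code); the complexity parameter shown is the value reported for the matrisome genes in the next section.

```r
# Survival tree with rpart, followed by a log-rank comparison of its
# terminal nodes (the data frame `pfs` and its columns are assumed).
library(rpart)
library(survival)

tree <- rpart(Surv(pfs_months, progressed) ~ gleason + FMOD + PTPN2 + age,
              data = pfs,
              control = rpart.control(cp = 0.0592))
print(tree)

# Kaplan-Meier curves and log-rank test across the terminal nodes.
pfs$node <- factor(tree$where)                      # terminal node per patient
survdiff(Surv(pfs_months, progressed) ~ node, data = pfs)
plot(survfit(Surv(pfs_months, progressed) ~ node, data = pfs))
```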
Rpart is an abbreviation for Recursive PARTitioning, and it is a frequently used method for the construction of survival trees. Survival trees obtained through the rpart method enable visual inspection and comparison of prognostic factors [42,43]. The basic principles of the rpart method are elaborated more closely in our previous publications [26,45]. Briefly, first we calculated the importance of individual variables. Second, we generated the survival tree, which is defined by its decision nodes and terminal nodes (leaves). The analysis began with all patients, who were then further divided into prognostic subgroups at each decision node. At the first decision node (the root node), a logical check was conducted. If the criterion imposed by that node was met, the left side of the tree was followed, and if not, the right side was followed. This action was repeated at each decision node through to the point at which the terminal node was reached. At each decision node, a variable was used to subdivide patients into two subgroups, with maximum differences in their hazard ratios (HR). If no further improvement in subdivision was possible, the terminal nodes were reached. Patients in the first decision node (the root node) had a hazard ratio of 1, and the hazard ratio for patients in each further node was assigned in comparison to this value. Overfitting is a frequent problem in machine learning which, in this case, can lead to an extensive fragmentation of the tree, for which it is hard to find a biological meaning. To avoid overfitting, we set the complexity parameter (CP) to 0.0592 and 0.0636 for the ECM and for the IAC genes, respectively.

Table 1. Clinical information of The Cancer Genome Atlas patients. The number (N) and the percentage (in parentheses) of patients that belong to a certain category are shown. Some categories contain unknowns (NAs). The table was modified and adapted from our recent publication [26].

The log-rank test was used to analyze the difference in survival between patients in terminal nodes, and the results were presented as survival curves showing the Kaplan-Meier survival estimate [46]. The analysis was performed by using the EZR package [32], an add-on in R commander (a basic-statistics graphical user interface to R) [31]. The obtained data were statistically significant since the log-rank test p-value was <0.001.

Results

The Expression of Matrisome and Adhesome Genes Appears to Be Aberrant in Prostate Cancer

Gene expression analysis of prostate tissue from prostate cancer patients described in the Materials and Methods Section revealed 2037 differentially expressed genes (DEGs) when compared to non-transformed prostate tissue. The result of the functional enrichment analysis for the Gene Ontology Cellular Component (GO CC) category using these 2037 genes and the TCGAbiolinks R package is provided in Figure 2. The top-20 GO Cellular Compartment terms are shown. The enrichment analysis on these genes showed that the GO terms 'extracellular matrix' (N = 35 genes) and 'integrin complex' (N = 12 genes) were among those that were highly enriched in the Gene Ontology Cellular Component (GO CC) category (Figure 2). In the GO Biological Process (GO BP) category, we detected the 'cell adhesion' term (N = 62 genes) among the top-20 categories. The ECM- and IACs-related DEGs are listed in Table 2.
Figure 2. The enrichment analysis of differentially expressed genes (N = 2037) in prostate cancer. The TCGA PRAD dataset was used with the following criteria: |log2FC| ≥ 1 (corresponding to |fold change| ≥ 2) and FDR (false discovery rate) p-value < 0.01. The enrichment analysis was performed by using the TCGAbiolinks R programming language package.

Table 2. Matrisome and adhesome genes up- (red; N = 71) and down-regulated (green; N = 177) in prostate cancer in comparison to healthy prostate tissue according to TCGA PRAD data. The numbers in parentheses represent the fold change (FC; threshold |FC| ≥ 2x). The adjusted p-value is <0.01 for each gene. The genes are shown in descending FC values' order. ECM glycoproteins, collagens, and proteoglycans belong to the core matrisome category, and ECM-affiliated proteins, ECM regulators, and secreted factors belong to the category of matrisome-associated proteins [33,34].

Among the ECM- and IACs-related DEGs are many proteins that give a structure to the ECM, such as collagens, various ECM glycoproteins, and ECM proteoglycans (Table 2). Additionally, the expression of ECM regulators, involved in organization of the ECM, is also perturbed. The genes encoding for secreted factors that stimulate the crosstalk between tumor and host cells, (lymph)angiogenesis, and the hijack and recruitment of immune cells, also change expression. With such an extensive perturbation in the ECM composition, it is hard to speculate which characteristics of the ECM changed. However, it is known from the literature that the tumor ECM in general gains an increase in density and mechanical stiffness [47] due to the changed quantity of ECM structural proteins and the extent of crosslinking. Integrins are a link between the ECM and intracellular machinery that are highly alerted to the changes in the ECM. It is interesting to note that in prostate cancer, integrins and adhesome genes mainly show decreased expression (Table 2). It would be important to relate these differences to phenotypes of prostate cancer and to decipher whether there are compensatory mechanisms, such as, for example, the increase in the expression of some of the integrin ligands (e.g., collagens). The expression of the two genes that we show to be involved in the prognosis of PFS of prostate cancer patients, PTPN2 and FMOD, did not change between prostate cancer and non-transformed tissue according to the criteria used (|log2FC| ≥ 1 and FDR p-value < 0.01).
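As a hedged illustration of how these thresholds are applied, the R sketch below outlines a TCGAbiolinks differential-expression call on TCGA-PRAD. The count matrices `normal_counts` and `tumor_counts` are assumed placeholders (sample splitting is omitted), and argument names may differ slightly between package versions.

```r
# Sketch of the TCGA-PRAD differential expression step with TCGAbiolinks,
# using the thresholds reported in the text: |log2FC| >= 1, FDR < 0.01.
library(TCGAbiolinks)

query <- GDCquery(project = "TCGA-PRAD",
                  data.category = "Transcriptome Profiling",
                  data.type = "Gene Expression Quantification",
                  workflow.type = "STAR - Counts")
GDCdownload(query)
data <- GDCprepare(query)

# `normal_counts` / `tumor_counts`: count matrices split by sample type
# from `data` (placeholders here, not shown).
dataDEGs <- TCGAanalyze_DEA(mat1 = normal_counts,
                            mat2 = tumor_counts,
                            Cond1type = "Normal",
                            Cond2type = "Tumor",
                            fdr.cut = 0.01,
                            logFC.cut = 1,
                            method = "glmLRT")
```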
ECM- and IACs-Related Genes Are Involved in Prognosis of Progression-Free Survival in Prostate Cancer Patients

Recursive partitioning is the method recommended by the AJCC for the analysis of prognostic studies [42,43]. Therefore, we used the rpart method to determine the prognostic value of the following variables (Table 1): age, Gleason score, TNM staging, residual tumor information, and the gene expression data for the ECM- and IACs-related genes. The ECM- and IACs-related genes were separately analyzed. The importance of individual variables is shown in Figures 3A and 4A. By performing the rpart analysis, our result from a previous publication, which found the Gleason score to be the strongest prognostic factor in prostate cancer among the studied variables, was confirmed [26]. The five most informative variables in Figure 3A in addition to the Gleason score were the expressions of FMOD, MMP11, COL1A1, COL3A1, and COL5A2 genes. Among them, only FMOD emerged on the survival tree. In Figure 4A, the five most informative variables in addition to the Gleason score were the expressions of PTPN2, RPL23A, MRTO4, PTPN1, and BRIX1. Among those, only the PTPN2 gene expression variable emerged on the survival tree. From the variable importance analysis, it was evident that even the most informative individual variable (the Gleason score) had a score of only 36 (matrisome data) and 27 (adhesome data) in comparison to the whole model, bearing the score of 100. Therefore, the multivariate approach to survival analysis is the only way to correctly describe the patients' prognosis. AJCC guidelines for prognostic studies suggest that the prognostic value of a single variable is evaluated by considering the other variables [42,43]. The rpart method follows this criterion because rpart uses all variables in the analysis. The results of the rpart algorithm performed on our data are presented on a survival tree (Figures 3B and 4B). Figures 3B and 4B show that, by using two variables in each survival tree, patients were subdivided into two decision nodes and three terminal nodes (leaves) in each tree. Variables used in the decision nodes in Figures 3B and 4B are the Gleason score and the FMOD and PTPN2 gene expressions. FMOD and PTPN2 refined the prognosis of patients with a Gleason score < 9, respectively. The importance of the variables in Figures 3B and 4B was determined by their position in the survival tree: the topmost variable (the Gleason score) holds the largest amount of information, the variable below the topmost is the second largest by the content of information, and so on. It is obvious from Figures 3B and 4B that there were three prognostic subgroups in each. For Figure 3B, they were: (a) low Gleason score and high FMOD expression, (b) low Gleason score and low FMOD expression, and (c) high Gleason score. The HR gradually increased from the left to the right of the survival tree. By using the complexity parameter (CP) = 0.0592, we did not find a variable that further refined the high Gleason score patients (≥9). However, when the CP was set at CP = 0.0371, we obtained a separation in that group of patients according to the expression of the MFAP3 gene.
Namely, MFAP3 high expression (≥1389) was associated with worse survival (HR 2.8 vs. 0.37). In Figure 4B, we also established three prognostic subgroups: (a) low Gleason score and low PTPN2 expression, (b) low Gleason score and high PTPN2 expression, and (c) high Gleason score. In this survival tree, the HR also gradually increased from the left to the right. To conclude, by using the Gleason score information supplemented with the expression of FMOD and PTPN2 genes, a stratification of prostate cancer patients into several prognostic subgroups with significantly different hazard ratios (low, medium, and high risk of progression) was achieved. The results of recursive partitioning (Figures 3B and 4B) were further supplemented by survival curves obtained using the Kaplan-Meier method for subgroups from each decision node. The difference in survival for subgroups defined by the left and the right branches of decision node 1 (the Gleason score) is shown in our previous publication [26]. The subgroups from decision node 2 are shown in Figure 5 (FMOD expression) and Figure 6 (PTPN2 expression). The log-rank test p-value was statistically significant (p < 0.001) for both genes (Figures 5 and 6).

Figure 6. Difference in patients' survival for the left and the right branches of the second decision node from Figure 4B, which uses the PTPN2 gene expression as a separation criterion.

Discussion

The driving processes in prostate cancer progression encompass intertwined actions of several signaling pathways, which are potentiated by genetic and epigenetic alterations, changes in gene expression, and post-transcriptional and post-translational modifications [1,2,48]. However, although a large amount of data exists regarding the mentioned processes, one of the greatest barriers in prostate cancer treatment is still the inability to precisely foresee the course of a disease, and therefore, to define the risk subgroups which would guide the treatment options. In our previous work, we added to the efforts which try to reveal prostate cancer PFS prognosis biomarkers [26]. In that work, the Gleason score emerged as the most informative prognostic factor among all the clinical and the gene expression variables studied. Herein, we extended the analysis to the ECM- and IACs-related genes. Our results are based on the TCGA PRAD dataset, and they dissect the differential expression of ECM- and IACs-related genes and their value as prognostic factors in the progression-free survival of prostate cancer patients. In this article, based on the TCGA PRAD dataset, ECM (matrisome) gene expression appeared to be highly aberrant in prostate cancer tissue. The enrichment analysis on the DEGs showed that the GO term 'extracellular matrix' (N = 35 genes) was among those that were enriched in the Gene Ontology Cellular Component (GO CC) category (Figure 2). Genes from all the ECM categories (Table 2) showed changed expression. As mentioned in the Results Section, with such a comprehensive change in the expression of individual components, it is hard to speculate which of the ECM general characteristics are changed in prostate cancer.
However, it is known from the literature that the cancers' ECMs in general gain an increase in density and mechanical stiffness [47]. In a search for prognostic factors among the ECM-related genes, the expression of the FMOD gene appeared to refine the prognosis based on the Gleason score. Namely, the patients with a Gleason score lower than 9 were further subdivided into two prognostic subgroups based on the FMOD gene expression. The patients with high FMOD expression had better survival (Figure 5). The FMOD gene encodes the fibromodulin protein, which belongs to the family of small interstitial proteoglycans [61]. This protein interacts with type I and type II collagen fibrils and inhibits fibrillogenesis in vitro. Therefore, fibromodulin may play a role in the assembly of the extracellular matrix. It may also regulate TGF-beta activities by sequestering TGF-beta into the extracellular matrix (www.genecards.org, accessed on 1 December 2022). In the prostate cancer setting, FMOD was shown to be overexpressed in human prostate epithelial cancer cell lines in vitro. Additionally, the authors showed that the cancerous tissue expressed significantly higher levels of intracellular fibromodulin compared to matched, benign tissue from the same patients. Higher levels were also detected in cancerous tissue in comparison to tissue from patients with only a benign disease [62,63]. Furthermore, in a study based on Brazilian individuals, FMOD gene variants were suggested to be potential biomarkers for prostate cancer and benign prostatic hyperplasia [64]. However, in a recent article, it was shown that higher FMOD expression was associated with better disease-free survival of prostate cancer patients, a finding that agrees with our results [65]. This would mean that, although the cancerous tissue has higher FMOD expression than non-transformed prostate tissue, in prostate cancer, higher FMOD expression bears a better prognosis. Here, it needs to be remembered that, besides FMOD, our analysis showed that the COL1A1, COL3A1, and COL5A2 genes also have high informative value in the prognosis of PFS when individually analyzed (Figure 3A). It would be interesting to investigate their functional role and to further delineate whether FMOD and these three collagen genes interact in the architecture of certain prostate cancer phenotypes that affect the patients' survival.

Integrin Adhesion Complexes-Related Genes Expression and Prognostic Significance in Progression-Free Survival of Prostate Cancer Patients

Integrin receptors are involved in almost every process of cancer formation and progression [66]. Therefore, it is not surprising that numerous preclinical studies on targeting integrins in different cancer types revealed encouraging results. However, there are still obstacles in translating these results into the clinics [67,68]. In addition to all the difficulties [69,70], in our recent paper [19], we suggested that integrin crosstalk could potentially complicate and undermine the effects of targeting integrins. Integrin crosstalk is a phenomenon in which the modulation of the activity and/or expression of one integrin (subunit or a heterodimer) affects the activity and/or expression of other integrin(s) (subunit(s) or heterodimer(s)). To circumvent integrin crosstalk, but to keep the advantages of targeting the integrin pathway, we suggest that the analysis of proteins downstream of integrin ligation and activation could reveal effective therapeutic targets.
Therefore, in this paper, we focused on integrin adhesion complexes (IACs), in a search for potential prognostic biomarkers and therapeutic targets in prostate cancer. IACs are essential protein-composed adhesion structures whose components were also detected outside of Metazoa, confirming their ancient evolutionary origin [71]. There are several types of IACs recognized [21], which include nascent adhesions [72], focal complexes [73], focal adhesions [74], fibrillar adhesions [75], reticular adhesions [76], and hemidesmosomes [77]. Although IACs vary in their appearance, size, dynamics, and composition, the core components of the integrin adhesome have been identified by several groups [35-37]. The integrin adhesome consists of proteins that are affiliated with the structure and signaling activity of integrin-mediated adhesions [36]. By analyzing the core integrin adhesome components, we found that their expression is highly perturbed in prostate cancer. Namely, the category 'integrin complex' appeared among the top functionally enriched Gene Ontology Cellular Component (GO CC) terms (Figure 2). Furthermore, we detected 44/264 (16.7%) adhesome genes whose expression was significantly changed by ≥2 times (either up- or down-regulated) in prostate cancer, in comparison to non-transformed prostate tissue (Table 2). An important notion is that the majority of these genes are downregulated in the prostate cancer tissue. Their functional role and the potential compensatory mechanisms remain to be investigated. In addition to changes in gene expression, in our analysis, we found that the expression of some of the adhesome genes was correlated with PFS in univariate and multivariate approaches. The examples of genes implicated in the univariate approach are PTPN2, RPL23A, MRTO4, PTPN1, and BRIX1 (Figure 4A). However, except for PTPN2, those genes did not emerge on the survival tree. This would mean that the expression of the mentioned genes is probably correlated with some of the variables which already hold a prognostic value, such as, for example, the Gleason score. It is interesting to note that three genes (PTPN1, PTPN2, and PTPN12) from the PTPN family of protein tyrosine phosphatases emerged in the univariate analysis. Tyrosine phosphorylation is an important post-translational modification in cell adhesion that is dynamically regulated by the protein tyrosine phosphatases and kinases [78]. While PTPN1 [79-81] and PTPN12 [82,83] were implicated in prostate cancer biology, the involvement of PTPN2 in prostate cancer is not documented [84]. Regarding integrin signaling, complex roles for PTPN1 [85-88], PTPN2 [89], and PTPN12 [90] have been documented. Despite this, it needs to be mentioned that PTPN proteins have other, broader roles [84]. Therefore, it cannot be ruled out that some of these other roles are also important for the biology of prostate cancer. The PTPN2 gene expression appeared on the survival tree as a variable that refines the PFS of lower (<9) Gleason score patients. Our results suggested that its higher expression bears a poorer prognosis. PTPN1 and PTPN2 are highly related PTPs [84], but, as mentioned previously, PTPN2 has not been implicated in prostate cancer. However, PTPN2 is a key predictor of prognosis for pancreatic adenocarcinoma, and its higher expression is associated with a poor prognosis [91]. Overexpression of PTPN2 also predicted a poor survival in clear cell renal cell carcinoma [92], which agrees with our results.
However, low PTPN2 expression was associated with poor overall survival in ovarian serous cystadenocarcinoma [93], indicating its versatile roles in different cancer types. The connection of PTPN2 with integrin signaling was confirmed by several articles, which indicate activation of PTPN2 by integrins. Namely, it was recently shown that the catalytic activity of PTPN2 is auto-regulated by its intrinsically disordered tail and activated by ITGA1 [89]. An earlier article also documented that PTPN2 is activated by the integrin ITGA1/ITGB1 and that it subsequently dephosphorylates EGFR and negatively regulates EGF signaling [94]. In line with this, the same group showed that PTPN2 activity was induced upon integrin-mediated binding of endothelial cells to the collagen matrix [95]. However, the potential role of PTPN2 activation by integrins in prostate cancer remains to be investigated. To conclude, PTPN2 might be a potential target in prostate cancer treatment, whose targeting is achievable because PTPN2 inhibitors are available. An interesting notion is that neither ECM- nor IACs-related genes defined risk subgroups for the Gleason score ≥ 9, according to the conservative complexity parameters that we selected. It could be that high Gleason score cancers show aberrant ECM- and IACs-related gene expression that is of great importance for cancer progression and, therefore, common to all patients. This would mean that aberrant expression of ECM- and IACs-related genes is an underlying feature in all high Gleason score (≥9) patients.

Methodological Considerations

In this article, we used recursive partitioning to define the risk subgroups of prostate cancer patients in the analysis that included clinical information and the gene expression data. Recursive partitioning is the method recommended by the AJCC for the analysis of prognostic studies [42,43]. Due to the prostate cancer heterogeneity, it is to be expected that this method better describes its diversity than the Cox regression analysis, which is used by the majority of papers dealing with similar questions. Moreover, the survival tree, obtained by recursive partitioning, is easier to interpret than the Cox regression results. Therefore, we believe that our approach is more appropriate to analyze the prostate cancer survival data.

Conclusions

The ECM is the first frontier of the cell towards its surroundings, and it is among the main determinants of the cell's behavior. Therefore, important roles of the ECM in cancer development, progression, and prognosis were documented. By using the TCGA PRAD dataset, in this article, the expression of ECM genes in prostate cancer was analyzed and correlated with progression-free survival of prostate cancer patients. We revealed that the expression of ECM-related genes changed in prostate cancer. Moreover, the ECM-related genes showed prognostic significance for the prostate cancer patients, who were stratified according to the Gleason score. Our results confirmed the important roles for the ECM-related genes in prostate cancer and suggested potential biomarkers of prostate cancer progression from the list of the ECM-related genes. Integrins are among the main receptors for the ECM ligands. Several unique characteristics, including integrin crosstalk and the formation of IACs, make integrins exceptional among the signaling receptors. Therefore, their roles in tumor formation, progression, and drug resistance were noted early on [96].
In this paper, we showed that the expression of integrin and IAC genes changed in prostate cancer. Moreover, some of these genes appear in univariate and multivariate approaches in the prognosis of PFS, suggesting their potential role in the discovery of biomarkers of prostate cancer progression. Consequently, our results support the early notion that considered integrins (and their downstream proteins) attractive therapeutic targets, a strategy that is still hotly debated [68,70].

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomedicines11072006/s1, Table S1: A list of ECM- and IACs-related genes used in this study.

Conflicts of Interest: The authors declare no conflict of interest.
EatWellNow: Formative Development of a Place-Based Behavioral "Nudge" Technology Intervention to Promote Healthier Food Purchases among Army Soldiers

Approximately 17% of military service members are obese. Research involving army soldiers suggests a lack of awareness of healthy foods on post. Innovative approaches are needed to change interactions with the military food environment. Two complementary technological methods to raise awareness are geofencing (delivering banner ads with website links) and Bluetooth beacons (real-time geotargeted messages to mobile phones that enter a designated space). There is little published literature regarding the feasibility of this approach to promote healthy behaviors in retail food environments. Thus, we conducted a formative feasibility study of a military post to understand the development, interest in, and implementation of EatWellNow, a multi-layered interactive food environment approach using contextual messaging to improve food purchasing decisions within the military food environment. We measured success based on outcomes of a formative evaluation, including process, resources, management, and scientific assessment. We also report data on interest in the approach from a Fort Bragg community health assessment survey (n = 3281). Most respondents agreed that they were interested in receiving push notifications on their phone about healthy options on post (64.5%) and that receiving these messages would help them eat healthier (68.3%). EatWellNow was successfully developed through cross-sector collaboration and was well received in this military environment, suggesting feasibility in this setting. Future work should examine the impact of EatWellNow on military service food purchases and dietary behaviors.

Introduction

A 2020 report suggests that one in five military service members are classified as obese (BMI of 30.0 and above) and have difficulty meeting the Army Body Composition Program (ABCP) standard [1]. A 2016 study found that active-duty military service members with obesity were 33% more likely to suffer musculoskeletal injury, leading to over 3.6 million injuries between 2008 and 2017 [2]. This problem also causes a substantial economic burden for the Department of Defense, which spends over $1 billion a year on healthcare costs among active-duty service members, veterans, and their families [3]. Additionally, it is estimated that there are over 650,000 days of lost work per year among active-duty military due to obesity-related health issues [4]. Research suggests that soldiers have low adherence to the Dietary Guidelines for Americans [5]. Obesity and poor dietary choices have been associated with poor attentiveness, reduced vision, adversarial work relationships, and reduced physical fitness [6-8]. A high level of body fat has been shown to harm performance in several military-critical tasks [9,10]. Thus, strategies are needed to optimize diet-related health in military service members. US military installations often have abundant unhealthy food options, including fast food, which has led to military officials seeking changes to improve the food environment [11-15]. Through programs like the DoD Go For Green initiative [16], the Army's Performance Triad program [17], Healthy Base Initiative [13], and Holistic Health and Fitness [18], the military has made efforts to improve the food environment in military installations.
While improving availability is essential, service members must be aware of this availability and feel accommodated during their food purchase (i.e., the environment is set up to make shopping easy and enjoyable) [19]. Previous research has found that US Army Soldiers perceive a lack of healthy options on the installation, despite these efforts [20]. Thus, there is a need for approaches that make military service members more aware of healthy options and "nudge" them to purchase and consume these healthier options. Nudging is a key component of Behavioral Economics as a part of choice architecture [21]. Behavioral Economics uses choice architecture to design environments to influence consumer decision-making [21]. A practical approach uses cues or environmental triggers that remind customers to make healthier choices [21]. A recent meta-analysis of choice architecture nudging interventions found that such interventions promote behavior change, particularly when influencing food choices [22]. One emerging approach is "geofencing", a real-time targeted marketing approach [23-25]. A geofence is a virtual perimeter in a real-world geographic area. These virtual perimeters can be established in multiple ways, including using cellular data signals [25,26]. A person crosses into a virtual geofence with their mobile device (i.e., smartphone). If the location service on their device is on (in an app or website that they are using), the person's location can be detected and they can receive the geofence messaging. From this point, as part of the "user audience", this person can receive a banner, display, and/or video advertisement. A similar but different approach to geofencing is geotargeted messaging through Bluetooth beacons, which are hardware transmitters that broadcast their identifier to nearby portable electronic devices (e.g., smartphones) when within a set distance (e.g., 10 feet) [27]. Once within that distance, the beacon can trigger an associated mobile phone app to post a push notification message to the smartphone. These mobile phone messaging approaches are rapidly increasing in the retail sector as a targeted advertisement technique for customers within a particular geographic area [26]. Research has found that 60% of American adult consumers look for local information on their mobile devices, 40% look for information while on the go, and 70% are willing to share their location for something in return [28]. A recent study found that geofencing doubled engagements with retail stores, increasing awareness and customer visits [24]. Another marketing study with a grocery store chain and Starbucks found a 60% increase in in-store visits post-campaign exposure [24]. Military service members are also plugged into their smartphones, with over 60% of their digital content views occurring on their smartphones [29]. Geofencing and Bluetooth beacon messaging are conducive to military installations because the demographics and geolocations of service members are relatively well defined, making targeted promotion easier. Therefore, this population and setting is a prime unit for testing the feasibility of this type of technology. Recently, a text-message-based intervention in a non-military environment aimed to reduce sodium intake by sending just-in-time adaptive messages to participants when they entered a grocery store, restaurant, or home to promote behavior change related to high-sodium foods.
This study indicated that those receiving the just-in-time messages reduced their sodium intake by 1537 mg, compared with a 233 mg reduction in the comparison group [30]. Despite this potential, we found no published research combining geofencing and Bluetooth beacons as a multi-layered environmental change intervention to improve dietary behaviors in a military context. Therefore, the purpose of our feasibility project was to understand the potential of a multi-layered complementary smartphone messaging system ("EatWellNow") in which geofencing smartphone banner advertisements and Bluetooth beacon-triggered smartphone push notifications are sent to military service members outside (banner advertisements) and inside (push notifications) the retail food site (an Army dining facility, now officially referred to as a "Warrior Restaurant"). The system has an associated mobile phone application and website for additional content delivery. Our study included a secondary review of a previously collected community survey, an examination of the process of designing and developing EatWellNow, and an assessment of the feasibility of implementing this system to improve dietary behaviors at a military installation. Materials and Methods This formative feasibility study utilized a mixed-methods approach, including the following: (1) a secondary aggregated data review of a previously collected community assessment survey of the Fort Bragg community (military service members, family members, veterans, etc.) to understand interest in a geofencing and beacon push notification approach, (2) development of EatWellNow, (3) examination of the feasibility of this approach at a military installation, and (4) field observations of the technology to determine device functionality. Secondary Data Review of a Military Community Health Assessment Survey In collaboration with the Fort Bragg Department of Public Health, we utilized the aggregate results of the Fort Bragg Community Survey, which was part of the installation's Community Health Assessment, collected in April-May 2021. The survey was voluntary and took 10-15 min to complete. The survey was open to anyone in the Fort Bragg community, including service members and family members of a current service member, military retirees and family members of military retirees, civilian employees, contractors, and others. Survey promotional materials were distributed through members of the Community Survey Working Group (heads of offices; individuals who were informed of the creation of the survey), who were asked to share them with their networks. The survey was publicized through the Womack Army Medical Center and Garrison Public Affairs offices, which posted weekly on Facebook and through their other social media platforms (Facebook and Facebook groups are a significant way for Fort Bragg to communicate with social groups on post and with families). Cumberland County (where Fort Bragg is largely located) paid for Facebook promotions of the survey throughout April to distribute it across the county. Fort Bragg public health leaders had flyers printed (with a QR code linking to the survey) and passed them out at their vaccine drive-throughs on post (April 2021). Individuals could take the survey while sitting in their cars during the 15 min post-vaccination observation period. The survey included questions about sociodemographic information, including biological sex, civilian or military status, age (age range in years), race/ethnicity, marital status, the highest level of education completed, and military rank or sponsor's military rank.
Respondents were asked if they would be interested in receiving push notifications about healthy foods on post and if they would be more likely to buy healthy food if they received these push notifications. They were also asked if they would be more likely to buy healthy food if they received advertisements regarding healthy foods on post within websites and apps they already use. Items related to interest in receiving notifications/messages were rated using a 4-point Likert-type scale that ranged from 1 = strongly disagree to 4 = strongly agree. The survey opened on 1 April 2021 and closed on 15 May 2021. Aggregated survey data were summarized through statistical analysis. This work is exempt through DHHS 46.101 (b) relating to unidentifiable survey or interview data for research and demonstration projects that are conducted by or subject to the approval of department or agency heads (Fort Bragg Department of Public Health), and which are designed to study, evaluate, or otherwise examine public benefit or service programs (reference: DHHS, Code of Federal Regulations TITLE 45, 2009, available at https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/#46.101 (accessed on 13 January 2022)). Partnership Development For this interactive food environment project to be successful, we initiated a collaborative partnership between the project team and relevant expert stakeholders on post, including public health officials, health promotion/wellness officials, and food service management. For digital marketing expertise and implementation of cellular geofencing, we partnered with a digital marketing firm (Propellant Media [31]) focused on geotargeted advertising. To develop and implement the Bluetooth beacon-based geofencing system, the project team partnered with an independent software programmer with expertise in developing mobile phone applications for beacon geofencing. Each of these partners had a crucial role in helping to guide the development of this interactive food environment approach. Development of the EatWellNow Geotargeted Messaging System Our goal was to create a complementary multi-layer system called EatWellNow. Cellular data signal geofencing banner advertisements were the first layer of interaction with service members, outside the retail food site. Bluetooth beacon messaging was the second layer of interaction with service members, taking place within the retail food environment. Cellular Data Signal Geofencing Banner Advertisements The first layer of interaction is the virtual geofence set around the location of the retail food site. When a person crosses into the cellular data signal-based virtual fence, they begin passively receiving advertisements for the retail food site in smartphone apps and on websites, both while within the geofence and after they leave it, for up to 30 days. Stakeholders from Nutrition Care (dining services), public health, the project team, and the digital marketing company met to develop the smartphone geofencing advertisements. These planning meetings focused on developing messages and images for banner advertisements, the radius of the geofencing target areas, and the campaign's duration. The goal was to create messages and use imagery relevant to military service members using the retail food site. The study team also set the geofence radius around the retail food site to efficiently target potential customers.
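To make the mechanism concrete, the sketch below illustrates in Python how entry into a circular geofence might be detected from a device location and how a device could be added to the advertising "user audience". The coordinates, radius, and function names are illustrative assumptions for this sketch, not details drawn from the study or from the marketing platform's actual API.

```python
import math

# Hypothetical coordinates and radius -- the study does not publish the
# actual geofence geometry, so these values are placeholders.
FOOD_SITE_LAT, FOOD_SITE_LON = 35.14, -79.00   # assumed venue location
GEOFENCE_RADIUS_M = 800.0                       # assumed radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon):
    """True if a device location falls within the circular geofence."""
    return haversine_m(lat, lon, FOOD_SITE_LAT, FOOD_SITE_LON) <= GEOFENCE_RADIUS_M

def on_location_update(user_audience, device_id, lat, lon):
    """Add a device to the ad 'user audience' once it enters the fence."""
    if inside_geofence(lat, lon):
        user_audience.add(device_id)  # device is now eligible for banner ads
    return user_audience
```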
The geofence radius was determined in collaboration with the marketing company and was based on the goal of reaching soldiers using the specific commercial area where the retail food site was located, as well as those traveling near the venue via surrounding transportation networks. This is a passive advertising approach; as long as users of the smartphone devices opt in to location services on their devices, they receive the banner advertisements. Due to the strategic placement of the geofence, those receiving these messages are affiliated in some way with the military. No identifiable information was collected from users. We implemented the cellular data geofencing campaign for 30 days, a period decided on in collaboration between the study team and the marketing agency as a reasonable window in which to see consumer interaction with the messages. Website Landing Page In collaboration with our stakeholders, we developed a dedicated website landing page where users could learn more about the facility. The website could be accessed when a person clicked on either the geofenced banner advertisements or the beacon push notifications. The website could also be accessed directly through the web address but was not indexed by search engines for this feasibility study. This website aimed to provide users with information about the military installation's healthy venues and encourage healthier eating decisions. It had information about the retail food site (hours of operation, location, and menus), as well as subpages on how to eat for "Athletic Performance" and "Heart Health" and how to create healthy meals and salads ("Healthy Meals", "Salad Bar"). We developed website content based on input from our stakeholders' expertise regarding the interests of military service members and the healthy food promotion items at the retail food site. Bluetooth Beacons After several meetings with relevant stakeholders to discuss the development of the system, our research team and stakeholders developed an interactive food environment Bluetooth signaling system. The goal of the system was to nudge users towards healthier decisions at each decision point with simple messaging and graphics. The project team and food service management personnel collaborated with a software programmer to develop an interactive Bluetooth messaging system for sending healthy nudges within the retail food site. This interactive experience was based on a dedicated smartphone app we developed, called "EatWellNow", which served as the interaction hub between the user and the beacons; the beacons act as relay devices to the user's phone. First, a user downloads the EatWellNow app. After downloading the app and coming within a certain distance (for this study, 10 feet) of the beacons placed in different locations within the retail food site (e.g., entrance, checkout, soda machine), the user can receive push notifications on their phone from the programmed Bluetooth beacons. There is a recommended minimum of 3 feet of spacing between the beacon and the customer for a Bluetooth beacon approach, but beacons do not interfere with each other, as they are programmed to "wait their turn". The entire system is shown in Figure 1.
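As a rough illustration of the beacon-triggering logic just described, the Python sketch below shows how an app might map a ranged beacon to a location-specific push notification. The beacon identifiers, messages, cooldown, and the `push_notify` callback are all invented for this sketch; the study's actual app and payloads are not public.

```python
import time

# Illustrative beacon registry: identifiers and messages are invented for
# this sketch and differ from the study's actual beacon payloads.
BEACONS = {
    "beacon-entrance": "Welcome! Plan a healthy plate today.",
    "beacon-checkout": "Plate healthy enough? Click here.",
    "beacon-soda":     "Water fuels performance. Grab a bottle?",
}
TRIGGER_DISTANCE_FT = 10.0   # distance threshold used in the study
COOLDOWN_S = 300             # assumed per-beacon cooldown to avoid spamming

_last_fired = {}

def on_beacon_ranged(beacon_id, distance_ft, push_notify):
    """Fire a push notification when the phone ranges a known beacon.

    `push_notify` stands in for the platform's notification API; beacons
    broadcast one at a time, so no extra arbitration is modeled here.
    """
    msg = BEACONS.get(beacon_id)
    now = time.time()
    if msg is None or distance_ft > TRIGGER_DISTANCE_FT:
        return
    if now - _last_fired.get(beacon_id, 0) < COOLDOWN_S:
        return  # this beacon already notified the user recently
    _last_fired[beacon_id] = now
    push_notify(title="EatWellNow", body=msg, deep_link=f"app://screen/{beacon_id}")
```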
The project team met with the Army Department of Public Health, Army Health and Wellness, and food service staff to design the messages and determine the placement of beacons in the retail food site. The criteria for developing the messages were (1) to tailor each message to the location within the retail food site where behaviors were to be changed, (2) to use simple, goal-oriented messaging that was quickly interpretable while shopping, and (3) to make messages relevant to a military audience. Each beacon had a specific message notification to send to the user's phone (e.g., "Plate healthy enough? Click Here"). When users click on the message notification, they are routed to an app screen specific to that beacon location (see Figure 2). Formative Feasibility Study Assessment Methods We measured the success of the proposed project through outcomes of the process and formative evaluation measurement. We tracked the timeline to implement the technology and compared it to the proposed timeline using records of project progress. The geofencing analytics identified clicks into the advertisement, click-through rates (the percentage of people visiting a web page who access a hypertext link to a particular advertisement), the timing of clicks, the types of devices used, and the topic categories of websites where users were seeing and clicking the advertisements (e.g., Hobbies and Special Interests, Arts and Entertainment, etc.). The website analytics provided information on clicks into the website, time spent on the website, and features of the website where users showed the most interest through clicks and time spent. We tracked the resources needed and the amount of scheduled time allowed for the program's implementation, including resources needed in terms of equipment, personnel, and time, using project records (logs, budget expenses, reports, etc.). We informally documented (without use of a structured, validated instrument) institutional willingness, motivation, and capacity to carry through project-related tasks, including documenting challenges and resources for fulfilling research commitments, using unstructured project records and funding agency reporting materials. For management assessment, we assessed challenges and strengths of research team capacities through project records and funding agency reporting materials. For scientific assessment, we assessed whether the procedures used protected respondent privacy and prevented potential threats to validity through monitoring of data collection and data storage. Descriptive statistics, including frequencies, means, and standard deviations, were generated for all relevant data, particularly the geofencing and website analytics data. For this formative feasibility study, we examined the feasibility of the beacon system within the project team and not among end-users, due to a restriction on implementation stated in the funding mechanism.
Feasibility was based on the ability to develop and implement the system within the existing military food service infrastructure, demand for this type of approach from potential consumers, meeting the market needs of food service and public health stakeholders interested in implementing the approach long term, the cost of the approach, and the ability to implement in a timely manner. Summary of Community Needs Assessment Survey Findings There was a total of 3281 respondents to the community needs assessment survey. A majority (64.5%) of respondents were interested in receiving push notifications to their phones about healthy options on post. The majority (68.3%) also agreed that receiving these notifications would help them to eat healthier. Most respondents (74.6%) agreed that if they received advertisements within websites and apps they already use on their phones about healthy options on post, they would be more likely to buy healthy food. Most were interested in receiving push notifications about healthy options on post at least once per day (51%). A summary of participant demographics and survey findings can be found in Table 1.
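The agreement percentages above are simple proportions over the 4-point scale described in the Methods. A minimal Python sketch of that tabulation, using placeholder responses rather than the actual survey data, might look like this:

```python
from collections import Counter

# Placeholder responses on the survey's 4-point scale
# (1 = strongly disagree ... 4 = strongly agree); not real survey data.
responses = [4, 3, 2, 4, 1, 3, 3, 4, 2, 3]

def pct_agree(likert_responses, threshold=3):
    """Percent of respondents answering 'agree' (3) or 'strongly agree' (4)."""
    n = len(likert_responses)
    agree = sum(1 for r in likert_responses if r >= threshold)
    return 100.0 * agree / n if n else 0.0

counts = Counter(responses)  # frequency table for a "% (n)"-style summary
print(counts)
print(f"Agreement: {pct_agree(responses):.1f}% (n = {len(responses)})")
```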
Implementation We were able to successfully deploy and make the beacons and associated app operational within the retail food site. We also created a connected data server that allowed the project team to understand user interaction with the beacons. The server was cloud-based, created through the Google Firebase SDK mobile application development platform used to develop the EatWellNow mobile app. In the future, this will allow the team to examine user engagement based on different messaging approaches. Timeline Assessment The team developed the messages, geofencing advertisements, landing page website, and Bluetooth beacon programs within a nine-month period, which was within the study period, despite disruptions related to COVID-19. Usage For the cell signal geofencing notifications, we had 587 clicks on the advertisements in one month. The overall clickthrough rate was 0.11%, with the highest being 0.84% on day one. Clicks into the advertisement remained steady during the 30 days, with a mean of 19 clicks per day (SD = 7.73). Most clicks (79.3%) were from cell phones, followed by tablets (16.7%) and desktops/laptops (3.9%). Most of the clicks into the geofencing advertisement occurred on websites in the "Hobbies and Special Interests" (35.9%) contextual category, followed by "Arts and Entertainment" (17.9%) and "Computer and Video Games" (6.1%). The website analytics demonstrated 703 site visits (mean of 29.5 per day, SD = 12.9) and 578 unique visitors during the one-month test period. Most (70%) of the sessions were from mobile phones. The most visited subpages were the home page, the athletic performance tab, and the healthy meals tab. The mean time spent on the site was 12 min and 5 s. We successfully tested the beacon system among the project team. Specifically, the project team set the beacons within the retail food environment and went through the beacon system to ensure that each beacon sent the appropriate message to mobile smartphones. We also tested clicking through the notifications to the infographics and the website to ensure they worked within the retail food site. All aspects of the beacon messaging system worked successfully within the retail food environment. Resources Assessment The resources shared between the various stakeholders meant that all the necessary resources to complete the project were available. The grant mechanism provided funds to purchase the beacons ($100 for a pack of four beacons), build the app ($3000), and purchase the cellphone-based geofencing ($250 for setup, $300 for creative asset development, and $4400 for 550,000 total impressions). The team met weekly over the project period to plan and discuss progress and had four dedicated hour-long meetings for message development and planning around beacon placement. Development of the EatWellNow app and beacon system took 200 h of dedicated time from the programmer. Stakeholders discussed strategies for having adequate resources for this approach's long-term sustainability, including integration into dining services and public health efforts. Institutional Willingness We found high institutional willingness among relevant stakeholders across the military installation.
This included Army Garrison leadership, who gave approval for the project and its implementation; food service leadership, who met with us regularly and allowed testing of the program at retail sites; and public health and health and wellness partners, who regularly attended meetings and were engaged in development and implementation. One challenge was determining action steps for obtaining approval for integrating new communication devices in the installation, but this was resolved through communication with relevant project stakeholders. Management Assessment Our team had all the necessary expertise to complete the study. The diversity of having military, academic, food service, nutrition, computer science, and communication expertise was a strength of the project. We used a graduate student computer programmer, given our budget constraints. Future development may utilize a professional programmer to expedite the process. Scientific Assessment We ensured that all processes protected the privacy of end-users. We developed and incorporated a legal privacy statement for the beacon system mobile phone application. We did not conduct direct human subject research, but we stored the data on clicks into advertisements, websites, and push notifications on a secure server. Discussion This study found that an interactive food environment experience using a multilayered geofencing approach is feasible on a military installation. This feasibility is based on the ability to develop and implement the EatWellNow approach within the existing military food service infrastructure, the demand for this type of approach from potential consumers, the market of food service and public health stakeholders interested in implementing the approach long term, the relatively low cost of the approach, and the ability to implement in a timely manner. The EatWellNow approach was successful due to the participatory approach of including strategic partners across multiple relevant sectors on and off post, including in the areas of public health, nutrition, dining services, computer science, and marketing. This aligns with the literature suggesting the importance of collaboration in public health intervention work [32]. Future work should more closely examine how these cross-sector partnerships can help facilitate the modernization of military approaches to positively impact daily life activities. Based on the community survey findings, individuals were interested in receiving app push notifications on their phones about healthy options on post. They also agreed that receiving these notifications would help them eat healthier. These results align with previous research showing that a geofencing campaign may positively influence dietary habits among adults [29] and with the larger literature around the effectiveness of mobile phone apps to promote healthier dietary and physical activity behaviors [33]. Our findings also align with a recent meta-analysis that found that nudging approaches for food choices can be particularly effective [22]. More specifically, this analysis found that decision structure (e.g., changing defaults, adjusting physical or financial efforts, social consequences, and micro-incentives) is a more impactful approach than other approaches such as decision assistance (e.g., providing reminders) or decision information (e.g., providing social norms, providing information) [22]. 
Future testing of EatWellNow will integrate more decision structure approaches and test the effectiveness of the different nudging approaches in the military setting. Industry reports suggest that 93% of active-duty military either visited a website, researched a product, or bought the product after they had seen an on-post advertisement for that product [34]. This suggests that on-post advertising is a promising channel and that further testing is needed. Customer interaction with the cell signal geofencing advertisement suggests it may be a viable approach to promote healthy eating on a military installation. Clicks represent the customer's willingness to engage with our healthy eating advertisement. The clickthrough rate (the percentage of people on a website displaying the healthy eating advertisement who clicked the advertisement link for more information) was within the industry average [35]. We used a single general health promotion message, "Eat well, perform well", for the cellular geofencing over one month, as the primary goal was to test implementation. This message was presented in multiple formats that varied in size, shape, and accompanying images. Using a single message rather than a variety of messages may explain the relatively high clickthrough rate on day one and the lower average over the rest of the month. Those who were initially attracted to the message and clicked through likely did not feel the need to click again on the same message and website landing page. This is reflected in the fact that most of the visitors to the website landing page were unique visitors rather than repeat visitors. The retail food site where this was tested currently serves around 20,000-30,000 customers per week. It is unclear whether the website visitors were existing or potential customers, or how much of the available consumer market would use this venue given its characteristics and location. Examining this approach at a variety of venues in different geographic areas, with different store characteristics and varying market potential, may elucidate the broader impact of this approach. Future work should examine changes in the number of meals served and in new customers using this and other retail food sites as a result of the geofencing messaging. Including messages around sales promotions of healthy items, as well as having a less static landing page, may be even more impactful and create sustained use. Since most clicks occurred in the "Hobbies and Special Interests", "Arts and Entertainment", and "Computer and Video Games" contextual categories, these may be strategic advertising contexts for reaching military service members with this type of intervention. The higher interest in the "Athletic Performance" and "Healthy Meals" sections suggests that these topics may be of particular interest to this audience. More work should be done to better understand use of the website, including how use may evolve when the food environment is represented more comprehensively and how the website could be optimized to increase engagement and sustained use in conjunction with the geofencing and geotargeted messaging.
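For readers unfamiliar with the metric, the click-through figures discussed above reduce to simple arithmetic. A minimal Python sketch follows; the totals come from the Usage and Resources Assessment sections, while the per-day helper is illustrative only.

```python
# Totals reported in the Usage and Resources Assessment sections.
clicks_total = 587
impressions_total = 550_000  # purchased impressions for the campaign

ctr = 100.0 * clicks_total / impressions_total
print(f"Overall click-through rate: {ctr:.2f}%")  # ~0.11%, matching the text

def daily_ctr(daily_clicks, daily_impressions):
    """Per-day CTR series; days with no impressions are reported as 0."""
    return [100.0 * c / i if i else 0.0
            for c, i in zip(daily_clicks, daily_impressions)]
```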
The strengths of this study include the strategic development of a multi-sector partnership to develop and assess the feasibility of this approach, the use of a structured feasibility assessment approach, community engagement through the use of a community survey with a relatively large sample, and the use of digital marketing analytics to understand interaction with the geofencing and website landing page. The limitations include the convenience sample for the community survey and the lack of direct testing of the beacon system with end-users due to funding mechanism restrictions regarding project implementation. Conclusions This project demonstrates a clear opportunity to build a scalable and impactful digital healthy eating interactive food environment experience to encourage military service members to eat healthier on a military installation. The involvement of the military dining services and public health collaborators in this feasibility study demonstrates the collective interest in and potential of using this approach on a military installation. We believe that there are many opportunities to further develop the interactive experience, including enhancing the app interface to allow for more tailored content based on the soldiers' goals and integrating the geofencing and Bluetooth analytics to create a cohesive food environment experience. The project's next steps are to further develop the system, with direct feedback from military service members, to optimize relatability and usability through focus groups and iterative user-centered testing, and then examine the impact of the program through an efficacy trial. Future work will examine the impact of the intervention on food purchases, dietary behaviors, and health outcomes among military service members interacting with the intervention. Institutional Review Board Statement: This work is exempt through DHHS 46.101 (b) relating to unidentifiable survey or interview data for research and demonstration projects that are conducted by or subject to the approval of department or agency heads, and which are designed to study, evaluate, or otherwise examine public benefit or service programs (reference: DHHS, Code of Federal Regulations TITLE 45, 2009, available at https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/#46.101) (accessed on 13 January 2022). Informed Consent Statement: Patient consent was waived due to this work being exempt through DHHS 46.101 (b) relating to unidentifiable survey or interview data for research and demonstration projects that are conducted by or subject to the approval of department or agency heads, and which are designed to study, evaluate, or otherwise examine public benefit or service programs (reference: DHHS, Code of Federal Regulations TITLE 45, 2009, available at https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/#46.101) (accessed on 13 January 2022). Data Availability Statement: Data are available upon request from the corresponding author.
Campylobacter jejuni capsule types in a Peruvian birth cohort and associations with diarrhoeal disease severity Abstract Campylobacter jejuni is a leading cause of bacterial diarrhoea worldwide. The objective of this study was to examine the association between C. jejuni capsule types and clinical signs and symptoms of diarrhoeal disease in a well-defined birth cohort in Peru. Children were enrolled in the study at birth and followed until 2 years of age as part of the Malnutrition and Enteric Infections birth cohort. Associations between capsule type and clinical outcomes were assessed using the Pearson's χ2 and the Kruskal-Wallis test statistics. A total of 318 C. jejuni samples (30% from symptomatic cases) were included in this analysis. There were 22 different C. jejuni capsule types identified, with five accounting for 49.1% of all isolates. The most common capsule types among the total number of isolates were the HS4 complex (n = 52, 14.8%), the HS5/31 complex (n = 42, 11.9%), HS15 (n = 29, 8.2%), HS2 (n = 26, 7.4%) and HS10 (n = 24, 6.8%). These five capsule types accounted for the majority of C. jejuni infections; however, there was no significant difference in prevalence between symptomatic and asymptomatic infection (all p > 0.05). The majority of isolates (n = 291, 82.7%) were predicted to express a heptose-containing capsule. The predicted presence of methyl phosphoramidate, heptose or deoxyheptose on the capsule was common. Introduction Diarrhoeal disease is the leading infectious disease cause of morbidity and the second leading infectious cause of death globally in children under 5 years of age [1]. Morbidity resulting from enteric infection can have significant consequences, including acquired malnutrition and linear growth deficits from repeated enteric infections before the age of two [2,3]. Malnutrition in early childhood may predispose children to more severe and prolonged infections, and can result in impaired cognitive development that yields negative societal outcomes and long-term health effects, including cardiovascular and metabolic diseases [3][4][5]. One of the leading causes of bacterial diarrhoea and enteric infection worldwide is Campylobacter jejuni [6,7]. In developed regions of the world, C. jejuni is a common foodborne pathogen associated with poultry and contaminated dairy products. Infection can cause an acute dysentery and/or febrile illness in children and adults [6]. In developing countries, C. jejuni is endemic. Poor hygiene, lack of sanitation and living in close proximity to animals contribute to recurrent infection. C. jejuni is frequently isolated from the stools of healthy children, and the rates of infection between symptomatic and asymptomatic cases are often similar. Clinical disease from C. jejuni in developing countries primarily affects the paediatric population and typically results in acute watery diarrhoea with concomitant signs and symptoms such as fever, abdominal pain and vomiting [6,7]. Additionally, recent studies have also found an association of Campylobacter infection with malnutrition and growth stunting in paediatric populations in the developing world [7]. To date, only a few virulence factors have been characterised in C. jejuni. Nevertheless, the polysaccharide capsule (CPS) present on the surface of the bacterium has been demonstrated to be one of the most important virulence determinants [8]. The capsule is composed of repeating saccharide units attached to the outer membrane via a phospholipid anchor [9].
Many C. jejuni CPSs are characterised by the presence of heptoses in unusual configurations (e.g. altro, ido, gulo) and non-stoichiometric modifications to the sugars, including ethanolamine and methyl phosphoramidate (MeOPN). The levels of MeOPN are non-stoichiometric due to phase variation of the genes encoding MeOPN transferases. Recent studies have shown that MeOPN contributes to complement resistance [8,10,11]. The capsule is the major serodeterminant of the Penner serotyping scheme, which comprises 47 serotypes of C. jejuni [12]. However, genomic analyses of the variable regions of the CPS loci of the 47 Penner serotypes indicated that there are 35 distinct CPS types [13]. Strains belonging to the same serotype group are predicted to express identical CPS structures. Nevertheless, differences in Penner serotype can also be due to differences in lipooligosaccharide (LOS) structures; the LOSs have been suggested to be a minor serodeterminant in the Penner typing scheme [14]. The LOSs are additional saccharide structures present on the surface of the bacterium. Compared with the CPS, LOSs are composed of only a few repeating saccharide units and are anchored in the membrane via a different family of lipids. In this study, we used a multiplex polymerase chain reaction (PCR) method for determination of CPS types. This method can discriminate the 35 CPS types, shown in Table 1. Unlike the Penner typing scheme, the capsule multiplex PCR method is not subject to LOS interference in the attribution of CPS type. Nonetheless, the attribution is based on the presence of gene sequences and does not provide information on the modulation of structure and expression level of the capsule. C. jejuni presents homopolymeric G/C tracts in the nucleotide sequences of genes involved in capsule biosynthesis. During replication, insertion/deletion of G or C nucleotide(s) induces a frame shift that can render the encoded enzyme non-functional. This mechanism adds an additional layer of complexity to the modulation of the capsule structure [15]. It is believed that these modulations confer an advantage in escaping the host immune response. Currently, there are no vaccines against C. jejuni; however, efforts are ongoing to develop a CPS conjugate vaccine [16,17]. Capsule-based vaccines have been successfully developed for other encapsulated mucosal pathogens, including type B Haemophilus influenzae, Neisseria meningitidis and Streptococcus pneumoniae. Given these successes, and the fact that the CPS is a major virulence determinant, a capsule conjugate C. jejuni vaccine is a rational strategy that could significantly reduce the global burden of disease. The Interactions of Malnutrition & Enteric Infections: Consequences for Child Health and Development (MAL-ED) project was established in 2009 as a worldwide collaboration to further investigate the impact of enteric infection, including C. jejuni, on child health outcomes, growth and development [5]. The MAL-ED network collected data on diarrhoeal illness in poverty-stricken communities in eight developing countries: Peru, Brazil, Bangladesh, India, Pakistan, Tanzania, South Africa and Nepal. Initial results from the global MAL-ED project revealed Campylobacter as the most frequently isolated pathogen and highlighted Campylobacter as a cause of significant morbidity in children in Loreto, Peru [7,18].
This study also demonstrated an association between Campylobacter infection and linear growth shortfalls, increased intestinal permeability and inflammation in children [19]. Here we apply the multiplex method of determination of CPS types to C. jejuni strains isolated as part of the MAL-ED project in Peru. Materials and methods Clinical data were collected from three rural communities in the Department of Loreto in Peru: Santa Clara de Nanay, Santo Tomás and La Unión, as previously described [20]. Briefly, 198 Peruvian children were enrolled within 17 days after birth and followed until 2 years of age. Field researchers visited each participant's home twice a week and collected information from the mother or caregiver regarding the child's dietary intake, general health and surveillance for infectious diseases since the previous visit [20]. Stool samples were collected during diarrhoeal episodes, classified as three or more loose stools in a 24-h period if onset was after two or more diarrhoea-free days. Staff members obtained data related to diarrhoeal illness, including associated symptoms and whether any treatment or hospitalisations were needed. Associated symptoms included the presence and duration of dehydration, fever >37.5°C, anorexia, vomiting, dysentery, the need for oral rehydration therapy or hospitalisation and any episode with four or more semi-liquid/liquid stools, among others. Routine, non-diarrhoeal stool samples were also obtained at monthly surveillance visits to assess for asymptomatic shedding during the first year of life and quarterly during the second year of life. Definitions Diarrhoea was defined as greater than or equal to three loose stools in a 24-h period or the presence of blood in at least one stool sample [20]. Two diarrhoeal episodes were considered to be distinct if separated by at least 2 days of normal stool. C. jejuni diarrhoeal illness was defined as any diarrhoeal episode in which C. jejuni was isolated. Asymptomatic infection was defined as a non-diarrhoeal stool sample that was positive for C. jejuni. The first C. jejuni-positive sample, obtained from either a symptomatic or asymptomatic sample, was considered to be the first infection. Dysentery was defined as the mother's observation of blood in a stool during an episode of diarrhoea. Microbiology Campylobacter strains were isolated as previously described and underwent oxidase and catalase phenotypic testing to differentiate C. jejuni from C. coli [7]. Isolates were stored at −80°C in Mueller-Hinton (MH) broth containing 15% glycerol. Strains were revived by plating onto MH agar plates and incubation under a microaerobic atmosphere (85% N2, 10% CO2 and 5% O2) at 37°C for 24-72 h. Capsule typing DNA from the revived strains was extracted using the DNeasy extraction kit (Qiagen, USA). Capsule typing was performed using a multiplex PCR assay following the Poly et al. protocol. The typing system is able to discriminate all of the 35 CPS types described above for C. jejuni. It also includes a C. jejuni-specific positive primer set to confirm speciation [13]. Samples that tested negative with this species-specific primer set were excluded from analysis. Non-typeable C. jejuni isolates were those confirmed by multiplex PCR as C. jejuni but with an unidentifiable capsule type. Samples that contained multiple C. jejuni CPS types were counted within the total number of isolates for each CPS type. These individuals were assumed to be co-infected.
Analyses Analyses were performed based on capsule phenotype across the total number of infections and the number of first infections. Descriptive analyses were performed to assess the prevalence of each capsule type for all, first and subsequent infections and for diarrhoea-associated infections. Diarrhoeal illness characteristics were assessed across capsule types for all and first symptomatic infections. To determine if specific C. jejuni capsule structures were associated with diarrhoeal disease severity, we compared the prevalence of multiple clinical parameters by infection with specific capsule structures. Statistical comparisons were made using Pearson's χ2 tests with a two-sided α = 0.05. All statistical analyses were performed using SAS version 9.3 (Cary, NC). Results A total of 318 C. jejuni infections were identified, of which 171 (53.8%) were characterised as 'first infections' (Table 2). Approximately 70% of all infections were asymptomatic, with no significant difference in the proportion of diarrhoeal cases between the first and subsequent infections (29.2% vs. 32.0%, respectively; p = 0.6). From the 318 infections, 352 C. jejuni isolates were identified due to co-infection with two C. jejuni CPS types in 30 subjects and three CPS types in two subjects. Co-infection with multiple Campylobacter isolates was equally prevalent among first and subsequent infections (11.4% and 7.5%, respectively; p = 0.2). The five most common serotypes among all infections were the HS4 complex (14.8%), the HS5/31 complex (11.9%), HS15 (8.2%), HS2 (7.4%) and HS10 (6.8%), with no significant differences in the frequency of isolation by the presence or absence of clinical illness (Figs 1 and 2). The HS4 complex, which encompasses eight related CPS types, was the most common capsule type identified among all (14.8%), first (16.1%) and subsequent (13.2%) infections. The HS4 complex was also the most common among total diarrhoea-associated infections (18.2%). However, the HS5/31 complex was the most common capsule type identified in diarrhoea-associated first infections (15.0%). The majority of children (95/172; 55.2%) had between two and seven C. jejuni infections during the 2 years of the study. The remaining children (77/172; 44.8%) had a single C. jejuni infection. It is interesting to note that in the group with more than one infection, 97.9% of the subsequent infections were of a distinct CPS type. In the few cases in which a child shed a strain of the same CPS type seen in a previous infection, this may have been due to re-infection with a new strain or recrudescent infection with the original strain [21]. The data were stratified by individual features of the different CPS types, as shown in Table 1. We then examined whether the presence (or predicted presence based on gene content) of MeOPN, heptose or deoxyheptose residues was associated with disease outcome. Specifically, across all symptomatic infections, 52.0% of children infected with a C. jejuni strain with a capsule containing MeOPN, heptose and deoxyheptose suffered from fever, 57.5% developed anorexia, 50.0% had vomiting, 56.2% needed oral rehydration therapy, 54.8% had at least 1 day with greater than or equal to four semi-liquid/liquid stools and 46.2% suffered from dysentery (Table 3).
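As an illustration of the kind of comparison reported above (sketched in Python with scipy rather than the SAS software actually used), a Pearson χ2 test on a 2x2 contingency table might look as follows. The counts are placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = capsule group (with vs. without
# MeOPN/heptose/deoxyheptose), columns = symptom present vs. absent.
table = [[52, 48],   # group with the structures:    fever yes / fever no
         [20, 25]]   # group without the structures: fever yes / fever no

# correction=False gives a plain Pearson chi-square (no Yates correction).
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f} (two-sided, alpha = 0.05)")
```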
When comparing CPS types that contained MeOPN, heptose and deoxyheptose vs. those that did not contain one of those three structures, there were no significant differences across all signs, symptoms and management for all or first symptomatic infections (all p-values >0.1). Furthermore, there were no significant differences in the duration of any of the diarrhoeal signs and symptoms across capsule types (data not shown). Discussion The most common serotypes identified in this cohort were the HS4 complex, the HS5/31 complex, HS15, HS2 and HS10, which collectively accounted for ∼50% of all isolates. To our knowledge, these data represent the first report of CPS types of C. jejuni strains from South America. We assessed the association between capsule type and signs and symptoms of clinical disease among all, first and subsequent infections and found only minor, non-statistically significant variations, suggesting little to no variability in the proportion of isolates responsible for infections across a range of epidemiologically important strata. In a recent systematic review of global C. jejuni Penner serotype distribution, which covered >21 000 sporadic cases of C. jejuni diarrhoea, eight C. jejuni serotypes accounted for >50% of all isolates globally [22]. Interestingly, the major CPS/serotype found worldwide and in the current study was the HS4 complex, and HS2, HS5/31, HS15 and HS10 were also among the most frequent. These data also support the current understanding that the majority of C. jejuni infections are attributed to a limited number of CPS types [16,22]. The overwhelming majority of isolates in our sample population (n = 291, 82.7%) are predicted to express a heptose-containing CPS. We observed a trend towards more disease signs, symptoms and need for clinical management in CPS types with heptose; however, this association is likely due to the high proportion of C. jejuni CPS types with the potential to express heptose in our sample. Our findings do not further explain the contribution of heptose to the pathogenesis of diarrhoeal disease, which to date remains elusive [23]. While we were unable to determine if specific CPS types were more virulent than others, we observed trends of increased disease severity when subjects were infected with serotypes that contained MeOPN, heptose or deoxyheptose. Overall, our results do not appear to reveal any capsule-specific variability in disease signs, symptoms or need for treatment. Approximately 70% of all infections in this study were asymptomatic, with no significant difference in the proportion of cases or capsule type distribution between the first and subsequent infections. This supports previous studies that have also found asymptomatic C. jejuni infection to be common in children living in developing countries around the world [19,24]. Infection with or without diarrhoeal illness is an important factor in the development of short- and long-term sequelae [19]. Evidence suggests that acquired immunity and resistance to C. jejuni colonisation are possible after several exposures to various capsule types and with increasing age [17,25]. However, the low incidence of re-infection with the same CPS type described in this study is consistent with a role for CPS in natural immunity. Efforts to develop a C. jejuni vaccine are underway, and a capsule-conjugate vaccine based on the polysaccharide CPS has yielded promising results [16]. Early studies with a recently developed monovalent capsule conjugate vaccine showed it to be 100% effective against Campylobacter-associated diarrhoeal disease in primates.
This form of the vaccine eliminates the risk of several chronic sequelae, including Guillain-Barré syndrome, seen with other forms of vaccination such as oral whole cell vaccines [17]. However, a licensable capsule-based conjugate vaccine will have to be multivalent, targeting the most common and pathogenic capsule types [17]. These data contribute to the global understanding of CPS diversity and facilitate the prioritisation of CPS targets. If developed, a vaccine could be marketable to populations living in endemic regions and to travellers from developed countries, with the goal of significantly reducing the overwhelming burden of disease. The results presented here are based on the genotypic characterisation of C. jejuni without confirmation of phenotypic expression of specific epitopes. In particular, capsule structure data are based on the genes present in specific capsules. Based on this testing, we assumed that certain phenotypic characteristics were present on the C. jejuni capsule when specific serotypes were detected by the multiplex PCR assay. However, biological variability may modify if and how these structures are expressed during infection and during clinical illness. This study adds to the increasing understanding of the distribution of the C. jejuni capsule types associated with acute infection. These data, along with the increased recognition of Campylobacter as a global cause of morbidity in paediatric populations in low-middle income countries and as a causative agent of travellers' diarrhoea, highlight the potential utility of a capsule-based C. jejuni vaccine as one measure in reducing the burden of enteric infections. Disclaimer The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense, nor the U.S. Government. This is a U.S. Government work. There are no restrictions on its use. There were no financial conflicts of interest among any of the authors. The study protocol was approved by the Naval Medical Research Center Institutional Review Board in compliance with all applicable Federal regulations governing the protection of human subjects. This work was supported by work unit number 6000.RAD1.DA3.A0308. Copyright statement FP, PG and CKP are employees of the U.S. Government. This work was prepared as part of official duties. Title 17 U.S.C. §105 provides that 'Copyright protection under this title is not available for any work of the United States Government.' Title 17 U.S.C. §101 defines a U.S. Government work as a work prepared by a military service member or employee of the U.S. Government as part of that person's official duties.
Management of uterine sarcomas and prognostic indicators: real world data from a single institution Background Uterine sarcomas constitute a heterogeneous group of mesenchymal gynecological malignancies with unclear therapeutic recommendations and a nonspecific but poor prognosis, since they usually metastasize and tend to recur very often, even in early stages. Methods We retrospectively analyzed all female patients with uterine sarcomas treated in our institution over the last 17 years. Clinico-pathological data, treatments and outcomes were recorded. Kaplan-Meier curves were plotted and time-to-event analyses were estimated using Cox regression. Results Data were retrieved from 61 women with a median age of 53 (range: 27-78) years at diagnosis. Fifty-one patients were diagnosed with leiomyosarcoma (LMS), 3 with high-grade endometrial stromal sarcoma (ESS), 5 with undifferentiated uterine sarcoma (UUS), 1 with Ewing sarcoma (ES) and 1 with rhabdomyosarcoma (RS). 24 cases had stage I, 7 stage II, 14 stage III and 16 stage IV disease. Median disease-free survival (DFS) in the adjuvant setting was 18.83 months, and median overall survival (OS) was 31.07 months. High mitotic count (> 15 mitoses/10 HPF) was significantly associated with worse OS (P < 0.001) and worse DFS (P = 0.028). Conclusions Mitotic count appears to be an independent prognostic factor, while further insights are needed to improve adjuvant and palliative treatment of uterine sarcomas. Background Sarcomas form a heterogeneous group of malignant tumors of mesenchymal origin. Occasionally these tumors may originate from the uterus (uterine sarcomas), mainly comprising leiomyosarcomas (LMS), endometrial stromal sarcomas (ESS) and undifferentiated uterine sarcomas (UUS), according to the College of American Pathologists' classification for uterine sarcomas (Table 1) [1]. Uterine sarcomas account for 3-7% of all uterine cancers and affect women of all ages, with higher incidence between the 5th and 7th decades of life [2]. The prognosis of these tumors remains poor, with the 5-year survival rate reaching 40%. Further insights are needed in order to predict the course of uterine sarcomas and improve their treatment. Up to now, several characteristics of uterine sarcomas have been identified as prognostic factors, including tumor grading, FIGO staging (International Federation of Gynecology and Obstetrics), mitotic count, age and necrosis [3][4][5]. The French Federation of Anticancer Centers (FNCLCC) has developed a scoring system for grading of soft tissue sarcomas, evaluating 3 histologic criteria: tumor differentiation, mitotic count and necrosis [6]. However, its use has not been generalized as a prognostic tool for uterine sarcomas [7]. Apart from their common origin, these tumors present with distinct biological and molecular profiles that may also determine their behavior under treatment [8]. Several different pathways with oncogenic importance are implicated in the evolution of these sarcomas [9]. For instance, LMSs harbor complex karyotypes and genetic alterations with gains and losses of several genetic loci [10], but lack genetic changes within specific genes. In addition, ESSs are characterized by the existence of specific fusion genes: (a) YWHAE-FAM22 and ZC3H7B-BCOR for high-grade ESS [11,12] or (b) JAZF1 rearrangements and PHF1 rearrangements for low-grade ESS [13]. In contrast, UUSs demonstrate complex genetic alterations totally distinct from the other two histological subsets [13].
The therapeutic approach to uterine sarcomas is similar to that of other soft tissue sarcomas [14,15]. Surgery remains the mainstay of therapy, but recurrence rates in operable disease (stages I-III) are high [16]. The role of adjuvant therapy in these women is still a matter of debate, with controversial results from small studies [16][17][18][19][20][21]. Thus, adjuvant chemotherapy for uterine sarcomas is under consideration with a low level of evidence in the existing guidelines [15,22]. Despite the recent advances in the treatment of metastatic or unresectable sarcoma with the addition of olaratumab (a human anti-platelet derived growth factor receptor-α monoclonal antibody) to doxorubicin [15,22] and the introduction of eribulin in patients with advanced LMS [23], conventional adriamycin-based chemotherapy remains the gold therapeutic standard in the advanced setting of the disease [14]. Several other chemotherapeutic agents, including trabectedin and pazopanib, have also been investigated but without significant survival benefits [24,25], and the prognosis of women with metastatic disease remains dismal, with 2-year survival roughly reaching 30%. Under this perspective, we reviewed the medical files of patients diagnosed in our institution during the last 15 years and retrospectively analyzed their clinicopathological characteristics in order to recognize parameters that affect their prognosis. Selection of patients We retrospectively analyzed all female patients with uterine sarcomas treated in our institution from 2001 to 2016. All patients included in our analysis had a histological diagnosis of uterine sarcoma and had undergone staging of their disease. The ethics committee of the Hospital approved the study and patients signed informed consent for the analysis of their data. Data collection-definition of survival times For each patient, the following data were collected: i) clinicopathological characteristics of sarcoma patients at the time of diagnosis, including age, sex, PS (performance status), histologic subtype, grade, stage and mitotic count; ii) local and systemic therapies received, such as the type of surgery, adjuvant or 1st-line chemotherapy for metastatic or recurrent disease and later regimens, and the use of radiotherapy; as well as iii) the clinical outcomes, including disease progression or death, the site of recurrence/metastasis and the times of overall survival (OS), progression-free survival (PFS) and disease-free survival (DFS). OS was defined as the time period from the date of diagnosis of gynecological sarcoma to the date of the last follow-up or death, and PFS was defined as the time during and after the primary treatment (surgery and adjuvant or 1st line) with no clinical or imaging signs of sarcoma relapse/progression. RFS (recurrence-free survival) was defined as the time to recurrence after the adjuvant treatment. Data completeness exceeded 95%. Statistical analysis For categorical variables, data are presented as frequencies with their corresponding 95% confidence intervals (95%CIs), and for continuous variables as medians with observed ranges (minimum-maximum). The 95% CIs of proportions were computed using the modified Wald method. To compare categorical variables, we used the Chi-square test or Fisher's exact test where appropriate. To compare continuous variables, the Mann-Whitney (two-tailed) test was used. Survival curves were plotted and time-to-event analyses were estimated using the Kaplan-Meier method; differences between curves were analyzed using the log-rank test. The median follow-up times were computed using the reverse Kaplan-Meier method. Unadjusted and adjusted hazard ratios (HR) with the respective 95% CIs were estimated using univariate and multivariate Cox regression analysis, respectively. The multivariate Cox regression analysis examined the effect on OS and PFS after adjustment for all already known prognostic parameters at baseline. These baseline parameters were included in the final multi-regression analysis as dichotomous variables as follows: (1) age, using 65 years as the elderly cutoff (> 65 years = 1, ≤65 years = 0); (2) advanced disease (tumor stage III or IV = 1, tumor stage I or II = 0); (3) tumor size, using 10 cm as the cutoff (> 10 cm = 1, ≤10 cm = 0); (4) grading (grade 3 = 1, grade 1 or 2 = 0); (5) mitotic index, using the median value of the mitotic index of our cohort as the dichotomous threshold (high mitotic index = 1, low mitotic index = 0). Moreover, unadjusted and adjusted odds ratios (OR) with the respective 95% CIs were estimated using logistic regression in order to examine the effect of baseline parameters on the events of death and progression without taking into account the time effect. Statistical analyses were performed using the SPSS software package, version 21 (Computing Resource Centre, Santa Monica, California, USA) and GraphPad Prism software (GraphPad Software Inc., La Jolla, California, USA). Statistical significance was defined as a P-value of less than 0.05 for all comparisons.
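As a rough sketch of the time-to-event analysis described above (using the open-source Python lifelines package rather than SPSS, and entirely synthetic placeholder records instead of the cohort data), the dichotomized-covariate Cox model and log-rank comparison might look as follows.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic placeholder records; covariates are coded as described above
# (1 = risk category, 0 = reference). These are not data from the cohort.
df = pd.DataFrame({
    "os_months":    [31, 12, 54, 9, 40, 23, 18, 60, 7, 35, 15, 48],
    "death":        [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # 1 = event, 0 = censored
    "high_mitotic": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],  # > median mitotic index
    "stage_3_4":    [1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
    "size_gt10":    [0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
})

# Log-rank comparison of OS between high and low mitotic-index groups.
hi = df[df["high_mitotic"] == 1]
lo = df[df["high_mitotic"] == 0]
result = logrank_test(hi["os_months"], lo["os_months"],
                      hi["death"], lo["death"])
print(f"log-rank p = {result.p_value:.4f}")

# Multivariate Cox model; a small ridge penalty stabilizes the fit on tiny
# samples. The exp(coef) column of the summary gives the adjusted hazard
# ratios with their 95% CIs.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()
```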
Statistical analysis
For categorical variables, data are presented as frequencies with their corresponding 95% confidence intervals (95% CIs), and for continuous variables as medians with observed ranges (minimum-maximum). The 95% CIs of proportions were computed using the modified Wald method. To compare categorical variables, we used the Chi-square test or Fisher's exact test where appropriate. To compare continuous variables, the Mann-Whitney (two-tailed) test was used. Survival curves were plotted and time-to-event analyses were estimated using the Kaplan-Meier method; differences between curves were analyzed using the log-rank test. Median follow-up times were computed using the reverse Kaplan-Meier method. Unadjusted and adjusted hazard ratios (HR) with the respective 95% CIs were estimated using univariate and multivariate Cox regression analysis, respectively. The multivariate Cox regression analysis examined the effect on OS and PFS after adjustment for all already known prognostic parameters at baseline. These baseline parameters were included in the final multivariate analysis as dichotomous variables as follows: (1) age, using 65 years as the elderly cutoff (> 65 years = 1, ≤ 65 years = 0); (2) advanced disease (tumor stage III or IV = 1, tumor stage I or II = 0); (3) tumor size, using 10 cm as the cutoff (> 10 cm = 1, ≤ 10 cm = 0); (4) grading (grade 3 = 1, grade 1 or 2 = 0); (5) mitotic index, using the median value of the mitotic index of our cohort as the dichotomous threshold (high mitotic index = 1, low mitotic index = 0). Moreover, unadjusted and adjusted odds ratios (OR) with the respective 95% CIs were estimated using logistic regression in order to examine the effect of baseline parameters on the events of death and progression without taking the time effect into account. Statistical analyses were performed using the SPSS software package, version 21 (Computing Resource Centre, Santa Monica, California, USA) and GraphPad Prism software (GraphPad Software Inc., La Jolla, California, USA). Statistical significance was defined as a P-value of less than 0.05 for all comparisons.

Baseline characteristics
From 2001 to 2016, 61 consecutive cases of uterine sarcomas treated in our Department were included in the retrospective analysis.

Primary treatment
The majority of our patients (59 of 61 patients, 96.72%) underwent bilateral salpingo-oophorectomy, including 15 patients with already known metastatic disease. In the latter cases, the aim of the operation was palliative, either to alleviate abdominal discomfort or to control uterine bleeding. In the two cases that did not undergo surgery, already advanced disease (stage IV) was confirmed histologically by laparoscopic biopsies. Based on their postoperative CT scans, 42 women (68.85%) were considered free of residual disease and received adjuvant chemotherapy.

Baseline prognostic factors
The effect of baseline dichotomized parameters (age at diagnosis, tumor size, histological subtype, grade, mitotic index and initial disease stage) on OS was examined in univariate and multivariate settings. In time-to-event analysis, a high mitotic index was significantly associated with worse OS (log-rank p = 0.0002, HR = 3.441, 95% CI: 1.649-7.181) (Fig. 1b) (Table 3). In the multivariate analysis, after adjustment for all baseline parameters, a high mitotic index (> 15 mitoses/10 HPF) retained its prognostic significance for OS (adjusted HR = 3.283, 95% CI: 1.426-7.559, p = 0.005). In order to predict any DFS benefit, all the aforementioned parameters as well as the type of adjuvant chemotherapy were also re-examined.
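The dichotomization scheme and the Cox models described in the statistical analysis section can be reproduced, for example, with the lifelines package; a sketch assuming a DataFrame df that holds the raw baseline variables plus the os_months/os_event columns from the previous snippet:

```python
from lifelines import CoxPHFitter

# Dichotomize the baseline covariates exactly as specified in the text.
df["elderly"] = (df["age"] > 65).astype(int)                  # > 65 years = 1
df["advanced"] = df["stage"].isin(["III", "IV"]).astype(int)  # stage III/IV = 1
df["large_tumor"] = (df["tumor_size_cm"] > 10).astype(int)    # > 10 cm = 1
df["grade_3"] = (df["grade"] == 3).astype(int)
cohort_median = df["mitotic_index"].median()                  # 15/10 HPF in this cohort
df["high_mitotic"] = (df["mitotic_index"] > cohort_median).astype(int)

covariates = ["elderly", "advanced", "large_tumor", "grade_3", "high_mitotic"]

# Univariate (unadjusted) Cox regressions, one covariate at a time.
for cov in covariates:
    cph = CoxPHFitter()
    cph.fit(df[[cov, "os_months", "os_event"]],
            duration_col="os_months", event_col="os_event")
    print(cov, float(cph.hazard_ratios_[cov]))

# Multivariate model: all baseline covariates adjusted simultaneously.
cph = CoxPHFitter()
cph.fit(df[covariates + ["os_months", "os_event"]],
        duration_col="os_months", event_col="os_event")
cph.print_summary()  # adjusted HRs with 95% CIs
```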
In addition to a high mitotic index (HR = 2.687, 95% CI: 1.160-6.471, p = 0.028), grade 3 differentiation (HR = 3.426, 95% CI: 1.014-11.569, p = 0.047) was also associated with worse DFS in the univariate setting (Table 3). However, in the multivariate setting, grade 3 differentiation did not reach statistical significance, while large tumor size (> 10 cm) (adjusted HR = 4.071, 95% CI: 1.205-13.752, p = 0.024) joined the baseline high mitotic index (adjusted HR = 3.041, 95% CI: 1.127-8.204, p = 0.028) as an important predictor of DFS (Table 3). In an attempt to identify responders based on baseline characteristics, no significant differences were found between patients who relapsed and those who did not relapse after adjuvant treatment. Notably, all identified parameters reflect the strong effect of intrinsic cellular behavior on sarcoma outcome.

Discussion
Uterine sarcomas are rare tumors with highly malignant behavior [2]. They tend to metastasize and recur early, compromising patient survival. Due to the rarity of the disease and the heterogeneity of the population, the optimal treatment is still a matter of debate. Surgery remains the mainstay of treatment for localized disease, while radiotherapy and chemotherapy have a role as adjuvant treatments as well as palliative treatments for de novo metastatic or recurrent disease [15]. Despite accumulating data on the role of adjuvant chemotherapy in gynecological sarcomas [17,20,26,27], the significance of this treatment approach is not yet established and its application in clinical practice remains controversial [16][17][18][19][21].

Several prognostic factors have been recognized from retrospective data to guide therapeutic decisions [4,5,7,28]. In our study, an increased mitotic index was the only independent significant prognostic factor in the multivariate analysis. This is in accordance with previous publications in LMS, ESS and UUS [3][4][5][29]. Especially for UUS, a recent report by Hardell et al. concluded that UUS should be subdivided into mitogenic and not otherwise specified, according to mitotic index [30]. Mitotic index was also shown to be of prognostic importance even after neoadjuvant chemotherapy for primary, localized, high-grade soft tissue sarcomas [31]. Differences in mitotic index are, however, associated with different molecular subtypes of the disease, which may explain the recognized prognostic significance of the mitotic index in these patients. For example, ESS harboring the YWHAE-FAM22 rearrangement is characterized by significant mitotic activity and clinical aggressiveness, in contrast to ESS associated with JAZF1 rearrangements [32]. Not surprisingly, the YWHAE-FAM22 rearrangement in ESS is associated with Cyclin D1 overexpression [30], although the mechanism of Cyclin D1 upregulation remains unknown.

Apart from mitotic index, higher tumor grade was associated with shorter DFS in our analysis. The fact that FNCLCC grading assesses tumor differentiation, mitotic count and necrosis indicates an interdependence between grading and mitotic index; this may explain our finding that grade did not retain its significance in the multivariate analysis. Adjuvant chemotherapy is not the standard of care for uterine sarcomas, and several clinical guidelines cannot define its role in the adjuvant setting, even for patients at high risk of recurrence [15,22].
Adjuvant chemotherapy in stage I and II leiomyosarcomas failed to prolong overall survival in a retrospective study of 140 women [16]. Following the NCCN and ESMO guidelines on adjuvant chemotherapy in high-risk uterine sarcoma patients, our multidisciplinary team strongly supported the adjuvant approach independently of disease stage, while tailoring its administration to each individual case. Thus, only three of the 24 patients in our study with stage I disease did not receive adjuvant chemotherapy. The identification of patients with unfavorable characteristics might be important for both clinicians and patients in making the best choice regarding adjuvant chemotherapy. Our analysis identified mitotic index and tumor size > 10 cm as predictors of worse DFS in patients receiving adjuvant chemotherapy, in accordance with the prognostic significance of the mitotic index. Our study, though, has the limitation that almost all stage I-III patients included received adjuvant chemotherapy; therefore, we cannot assess the benefit of adjuvant chemotherapy in specific subgroups of patients. Furthermore, it is noteworthy that the time to progression after 1st-line chemotherapy was similar between patients who were de novo metastatic and those who recurred after adjuvant chemotherapy. However, no solid conclusion can be drawn on the impact of prior adjuvant treatment, since this was a very heterogeneous population that received different chemotherapeutic agents.

The molecular drivers and prognostication of the uterine sarcoma subtypes are evidently different. miRNA profiles of LMS and ESS reveal unique gene signatures [33]. The presence of different fusion genes in low- and high-grade ESS implies that the molecular pathways involved are distinct [34]. An example of this difference is the high expression of Cyclin D1 in ESS harboring the YWHAE-FAM22 fusion gene, which can also be used as a diagnostic marker and reflects the aggressive behavior of this entity [35]. Mitotic index is an indicator of proliferation that is no longer used for the classification of ESS according to the WHO 2003 criteria [36]. However, the NCCN classification of uterine sarcomas and the WHO 2014 classification of tumors of the female reproductive organs include mitotic index as a factor that defines tumor grade [22].

Conclusion
Although our analysis is limited by its retrospective nature and the relatively small number of included patients, owing to the rarity of this disease we present real-world data on the management of these tumors in a reference center in Greece. Our data indicate that mitotic index is an important prognostic factor for uterine sarcomas: it is an indicator of the aggressive behavior of these tumors, which carry a high probability of recurrence independent of disease stage at diagnosis. Although our study could not add clearer evidence on the role of adjuvant chemotherapy in these patients, it provides further insights into baseline factors that affect the prognosis of these rare and aggressive tumors.

Availability of data and materials
The datasets generated and analyzed during the current study are not publicly available due to a restriction of the ethics committee of General Hospital Alexandra, Athens, Greece, but are available from the corresponding author on reasonable request.

Authors' contributions
AK was the primary writer of the article and, with ML, co-directed the study, including data analysis and interpretation. DCZ with ML performed the statistical analysis.
FZ, KK, GT, MK, AT and RZ contributed to conception and design and to the acquisition of data. NT, DH, IP and AR made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data. AB with MAD contributed to conception and design and were involved in drafting the manuscript and revising it critically for important intellectual content. All authors have read and approved the final manuscript.

Ethics approval and consent to participate
The ethics committee of General Hospital Alexandra approved the study and patients signed informed consent for the analysis and publication of their data.

Consent for publication
Not applicable.
Real-world use of avatrombopag in patients with chronic liver disease and thrombocytopenia undergoing a procedure

This phase 4 observational cohort study assessed the effectiveness and safety of the thrombopoietin receptor agonist avatrombopag in patients with chronic liver disease (CLD) and thrombocytopenia undergoing a procedure. Patients with CLD may have thrombocytopenia, increasing the risk of periprocedural bleeding. Prophylactic platelet transfusions used to reduce this risk have limitations, including lack of efficacy and transfusion-associated reactions. Prophylactic thrombopoietin receptor agonists have been shown to increase platelet counts and decrease platelet transfusions. Effectiveness was assessed by the change from baseline in platelet count and the proportion of patients needing a platelet transfusion. Safety was assessed by monitoring adverse events (AEs). Of 50 patients enrolled, 48 were unique patients and 2 patients were enrolled twice for separate procedures. The mean (standard deviation) change in platelet count from baseline to procedure day was 41.1 × 10⁹/L (33.29 × 10⁹/L, n = 38), returning to near baseline at the post-procedure visit (change from baseline −1.9 × 10⁹/L [15.03 × 10⁹/L], n = 11). The proportion of patients not requiring a platelet transfusion after baseline and up to 7 days following the procedure was 98% (n = 49). Serious AEs were infrequent (n = 2 [4%]). No treatment-emergent AEs were considered related to avatrombopag. There were 2 mild bleeding events, no thromboembolic events or deaths, and no patients received rescue procedures (excluding transfusions). This study found that, in a real-world setting, treatment with avatrombopag was well tolerated, increased the mean platelet count by procedure day, and reduced the need for intraoperative platelet transfusions in patients with CLD and thrombocytopenia.

Introduction
Chronic liver disease (CLD) is associated with hematological abnormalities [1], the most common being thrombocytopenia [2,3], defined as a platelet count < 150 × 10⁹/L, with severe thrombocytopenia defined as a platelet count < 50 × 10⁹/L [2]. The main causes of thrombocytopenia in patients with CLD are splenic sequestration and decreased production of thrombopoietin by the liver [2]. Reduced hepatic thrombopoietin synthesis in patients with CLD results in a reduction of megakaryocytopoiesis, thrombopoiesis, and platelet release into the circulation [2]. Thrombocytopenia in patients with CLD is nearly always linked to cirrhosis [2], which is the common final stage of CLD progression, with platelet counts decreasing as the severity of cirrhosis increases [4]. There are an estimated 4 million adults with CLD in the USA (1.6% of the population) [5].
Moreover, the incidence of thrombocytopenia in patients with CLD without cirrhosis is 6% [5], whereas in patients with cirrhosis it can be as high as 92% [3].

Invasive surgical and therapeutic procedures, such as liver biopsies, variceal band ligation, or percutaneous procedures for hepatocellular carcinoma, are common in patients with CLD [6]. However, thrombocytopenia increases the risk of periprocedural bleeding [6,7], which can result in hospitalizations, disability, and absenteeism [8]. Peer-reviewed literature and expert guidance on the management of thrombocytopenia in patients with CLD are limited [5], and historically treatment options were limited to platelet transfusion [9]. Prophylactic platelet transfusion is commonly used to improve thrombocytopenia in patients with CLD [2,10], but this approach is limited, as the response decreases with each subsequent transfusion [11]. Platelet transfusions also expose patients to risks of transfusion reactions and infections [2,5,9], development of antiplatelet antibodies [2,5,9], and increased portal hypertension [5].

Given the drawbacks of traditional treatments, focus has shifted to the use of thrombopoietin receptor agonists (TPO-RAs), which interact with the thrombopoietin receptor on megakaryocytes, resulting in an increase in platelet production [5]. Initially, TPO-RAs were developed to increase platelet counts in patients with immune thrombocytopenia [12,13]; more recently, 2 TPO-RAs, avatrombopag and lusutrombopag, were approved by the US Food and Drug Administration (FDA) [14,15] and the European Medicines Agency [16,17] for the prophylactic treatment of patients with CLD-associated thrombocytopenia who are undergoing a procedure [18,19]. Recent meta-analyses indicated that the use of TPO-RAs prior to procedures results in increased platelet counts and a decreased incidence of platelet transfusions compared with placebo, while having no significant effect on the rate of portal vein thrombosis [18,20]. Avatrombopag is an oral TPO-RA indicated for the treatment of thrombocytopenia in patients with CLD prior to a scheduled procedure [14]. In 2 phase 3 randomized, placebo-controlled trials, ADAPT-1 and ADAPT-2, avatrombopag was shown to be superior to placebo in reducing the need for platelet transfusions and rescue procedures for bleeding [21] and was well tolerated, with a safety profile generally comparable to that of placebo [22]. This phase 4 observational cohort study was designed to assess the real-world effectiveness, safety, and treatment patterns of avatrombopag in patients with thrombocytopenia associated with CLD undergoing a procedure.

Study design
This was a phase 4, multicenter, observational cohort study (NCT03554759) conducted from July 2018 to January 2019. The study planned to enroll a total of 500 subjects from sites in the USA after avatrombopag was approved for this indication by the FDA in May 2018 [14].
All treatment decisions were at the discretion of the treating physician as per routine medical care and were not mandated by study design or protocol. The protocol, informed consent form, and any appropriate related documents were submitted to the Institutional Review Boards (IRBs; the 2 central IRBs were Copernicus Group IRB and Western IRB, with 12 local IRBs utilized at the local level) by the study principal investigators for review and approval, and the study was initiated after the principal investigators and the Sponsor received approval of the protocol and the informed consent form.

Data were collected prospectively or retrospectively from information routinely recorded in a patient's medical records and from laboratory data. Visits, examinations, laboratory tests, or procedures were not mandated or recommended as part of this study. The duration of patient participation and collection of clinical data was up to 6 weeks from the initial (baseline) visit, or data were extracted from an approximately 6-week window of patient visits. Data were entered into the electronic data capture system based on patient visits occurring within approximately 7 calendar days of first avatrombopag use (baseline visit), during any visits while taking avatrombopag (treatment period), on procedure day, on discharge day (if applicable), and at any visit performed up to 30 days post-procedure (follow-up period).

Inclusion criteria
All patients enrolled were ≥ 18 years old, had thrombocytopenia associated with CLD, and were planned to undergo, or underwent, treatment with avatrombopag prior to a procedure. For retrospective enrollment (patients enrolled and consented after procedure day), patients must have had, at a minimum, a platelet count from approximately 7 days prior to starting avatrombopag and a platelet count on procedure day, to enable evaluation of the study endpoints. All patients provided written informed consent, and there were no exclusion criteria for participation in this observational study, including no exclusion of patients for concomitant medications before or during the study.

Treatment
Avatrombopag was taken orally, and doses were determined by the treating physician in conjunction with the FDA-approved US prescribing information [14]. The recommended dosing of avatrombopag is 5 consecutive days of treatment starting 10 to 13 days prior to a scheduled procedure (with the procedure occurring 5 to 8 days after the last dose), at a dose of 40 mg for patients with a platelet count ≥ 40 × 10⁹/L and < 50 × 10⁹/L and 60 mg for patients with a platelet count < 40 × 10⁹/L. Avatrombopag was not provided by the sponsor; participating patients received the commercially available drug through a prescription written by a healthcare provider as per standard of care.

Effectiveness analysis
The effectiveness of avatrombopag was assessed by the change from baseline in platelet count on procedure day and the proportion of patients who received a platelet transfusion after the baseline visit and up to 7 days after procedure day. Effectiveness was also assessed by subgroup analyses of the change in platelet count from baseline to procedure day by baseline platelet count group and Child-Turcotte-Pugh (CTP) Grade.
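The label-driven dose selection and scheduling described above amount to a simple threshold rule; a sketch (the thresholds and timings are taken from the text, while the function names are ours):

```python
from datetime import date, timedelta

def avatrombopag_daily_dose_mg(baseline_platelets_e9_per_l: float) -> int:
    """Daily dose for the 5-day course, per the labeled thresholds above."""
    if baseline_platelets_e9_per_l < 40:
        return 60
    if baseline_platelets_e9_per_l < 50:
        return 40
    # Counts >= 50 x 10^9/L fall outside the labeled dosing table, although
    # such patients were nevertheless enrolled in this observational study.
    raise ValueError("no labeled dose for platelet count >= 50 x 10^9/L")

def first_dose_window(procedure_day: date) -> tuple[date, date]:
    """Start dosing 10-13 days before the procedure, so the procedure falls
    5-8 days after the last of the 5 daily doses."""
    return procedure_day - timedelta(days=13), procedure_day - timedelta(days=10)

print(avatrombopag_daily_dose_mg(34))        # 60
print(first_dose_window(date(2018, 9, 20)))  # (2018-09-07, 2018-09-10)
```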
Additional ad hoc analyses, not prespecified in the final statistical analysis plan, were also performed. These included a responder analysis based on patients achieving a platelet count ≥ 50 × 10⁹/L on procedure day, divided into 2 groups: patients with a baseline platelet count < 40 × 10⁹/L and patients with a baseline count ≥ 40 to < 50 × 10⁹/L. The same responder analysis was performed on the subset of patients receiving correct dosing of avatrombopag per the US prescribing information (excluding patients who received off-label avatrombopag).

Safety analysis
The safety of avatrombopag was assessed by recording adverse events (AEs). These were reported by the patient or, when appropriate, by a caregiver, surrogate, or the patient's legally authorized representative, and/or collected from data recorded in the patient's medical record. The severity of each AE was recorded: mild AEs were defined as transient, requiring minimal treatment or intervention and not interfering with daily living; moderate AEs as alleviated with specific therapeutic intervention, causing some impairment of daily activities and discomfort but posing no significant or permanent risk of harm; and serious AEs as AEs resulting in death, a threat to life, hospitalization (even if admitted and discharged on the same day, although an emergency room attendance that did not result in admission was not included) or prolongation of existing hospitalization, a persistent or significant disability, a congenital anomaly, or a requirement for medical or surgical intervention to prevent any of the aforementioned outcomes.

AEs were deemed treatment-emergent AEs (TEAEs) or serious TEAEs when the time course between the administration of avatrombopag and the occurrence or worsening of the AE was consistent with a causal relationship and no other cause (concomitant drugs, therapies, complications, etc.) could be identified. AEs of special interest were defined as thromboembolic events (any thrombotic or embolic event, whether arterial or venous) and bleeding events (any clinically significant blood loss).

Statistical analysis
The sample size for this study was based on clinical rather than statistical rationale and was considered adequate to address the study objective, which was to observe the treatment patterns and effects of avatrombopag in real-world practice. This objective was neither related to the testing of a specific hypothesis nor to the precision of a particular estimate. The analysis population and the safety population were defined as all enrolled patients. The analysis of effectiveness endpoints was descriptive and based on data entered in the electronic case report form for enrolled patients. A 95% exact confidence interval (CI) (using the Clopper-Pearson method) was computed for the proportion of patients who received a platelet transfusion after the baseline visit and up to 7 calendar days following procedure day.

Patient population
This phase 4 observational registry study was conducted at 43 sites in the USA and was terminated by the sponsor, due to enrollment challenges, prior to completing the planned enrollment. When terminated, 29 of the 43 active sites had screened a total of 65 patients, and 25 of the sites had enrolled a total of 50 patients. Of these 50 patients, 48 were unique patients, with 2 patients having been re-enrolled into the study and receiving 2 regimens of avatrombopag for separate procedures, as allowed by the protocol.
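The exact CI used for the transfusion-free proportion (reported in the Results as 98%, 89.4%-99.9%) can be checked directly: statsmodels' proportion_confint with method="beta" implements the Clopper-Pearson interval.

```python
from statsmodels.stats.proportion import proportion_confint

# 49 of the 50 enrolled patients required no platelet transfusion.
low, high = proportion_confint(count=49, nobs=50, alpha=0.05, method="beta")
print(f"{49 / 50:.0%} (95% CI: {low:.1%}-{high:.1%})")  # 98% (89.4%-99.9%)
```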
All patients completed a 5-day course of once-daily avatrombopag, with 1 (2%) receiving 20 mg (off-label), 27 (54%) receiving 40 mg, and 22 (44%) receiving 60 mg. All patients received at least 1 concomitant medication during the study, including medications administered during the procedure (41 [82%] receiving a concomitant medication on procedure day) and for the treatment of AEs. Most concomitant medications were within the pharmacological subclasses of anesthetics and drugs for acid-related disorders.

[Table 1 footnotes: Percentages are based on the number of enrolled patients. SD = standard deviation. *Patients may be counted in more than one category. †Other etiologies of CLD included cryptogenic cirrhosis (n = 4), autoimmune hepatitis (n = 3), hepatic cirrhosis (n = 2), primary sclerosing cholangitis (n = 1), biliary cirrhosis (n = 1), primary biliary cholangitis (n = 1), hepatoportal sclerosis (n = 1) and cirrhosis of the liver (n = 1). ‡Barcelona Clinic Liver Cancer Grade for hepatocellular carcinoma: Grade 0 (n = 1), Grade A (n = 2), Grade B (n = 1) and unknown (n = 1).]

Types of procedures
Procedures that occurred during the study were defined as either the primary procedure or the secondary procedure when multiple procedures occurred at the same time. The most common procedure was upper gastrointestinal (GI) endoscopy (56% [n = 28] of primary procedures and 4% [n = 2] of secondary procedures). Of the 7 procedures classified as "other," 2 were right inguinal hernia repairs, 1 a cervical epidural injection, 1 a right L3 to L4 microdiscectomy and 1 an endometrial curette (all primary), and 1 an umbilical hernia repair and 1 a sigmoidoscopy (secondary) (Table 2). A total of 2 patients were enrolled twice for separate procedures, with 1 patient undergoing a GI endoscopy with variceal sclerotherapy followed approximately 6 weeks later by a GI endoscopy without biopsy, and the other patient undergoing a GI endoscopy with variceal banding that was repeated approximately 8 weeks later. No patient had a delayed discharge due to postoperative bleeding or thrombocytopenia.

Effectiveness analysis
Treatment with avatrombopag resulted in an increased platelet count, with a mean (SD) change in platelet count from baseline to procedure day (days 8 to 15 after the first dose of avatrombopag, n = 38) of 41.1 × 10⁹/L (33.29 × 10⁹/L) (Fig. 1). The platelet count decreased upon cessation of avatrombopag treatment, with a mean (SD) change in platelet count at the follow-up visit (days 11 to 56 after the first dose of avatrombopag, n = 11) of −1.9 × 10⁹/L (15.03 × 10⁹/L). A subgroup analysis by baseline platelet count also revealed that the mean platelet count nearly doubled, or more than doubled, from baseline to procedure day in all subgroups except the ≥ 100 × 10⁹/L group (Fig. 2). The proportion of patients not requiring a platelet transfusion after baseline and up to 7 calendar days following procedure day was 98% (n = 49; 95% CI: 89.4%-99.9%). One patient with a baseline platelet count of 34 × 10⁹/L received 2 units of platelets 2 days prior to the procedure (with a pre-transfusion platelet count of 46 × 10⁹/L) and 2 units of platelets on procedure day (with a platelet count of 59 × 10⁹/L after 5 days of daily treatment with 60 mg avatrombopag). This was reported as a serious AE (SAE) of thrombocytopenia as a result of the unplanned hospitalization to administer platelet transfusions.
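The headline effectiveness figures are per-visit summaries of the change from baseline. A sketch of that aggregation on a long-format table; the layout and column names are hypothetical, and the values shown are only two illustrative patients:

```python
import pandas as pd

visits = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2],
    "visit": ["baseline", "procedure", "followup", "baseline", "procedure"],
    "platelets": [34.0, 59.0, 32.0, 47.0, 96.0],  # x 10^9/L
})

baseline = visits.loc[visits["visit"] == "baseline"].set_index("patient")["platelets"]
visits["change"] = visits["platelets"] - visits["patient"].map(baseline)

# Mean (SD) change from baseline by visit, as reported for procedure day
# and the post-procedure follow-up visit.
summary = (visits.loc[visits["visit"] != "baseline"]
                 .groupby("visit")["change"]
                 .agg(["mean", "std", "count"]))
print(summary)
```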
Additional effectiveness analysis
A subgroup analysis by severity of cirrhosis, using the reported CTP Grade at baseline, was performed. The change in platelet count from baseline to procedure day within each CTP Grade was consistent with the overall analysis (Fig. 3), although data were combined for CTP Grades B and C due to the limited numbers of patients.

When evaluated by baseline platelet count cohort, 9 of 18 patients with a baseline platelet count < 40 × 10⁹/L (64.3% of patients with a platelet count recorded on procedure day; data for 4 were missing) and 14 of the 17 patients with a baseline platelet count ≥ 40 × 10⁹/L to < 50 × 10⁹/L (100% of patients with a platelet count recorded on procedure day; data for 3 were missing) achieved a platelet count ≥ 50 × 10⁹/L on procedure day.

Safety analysis
All enrolled patients received 5 days of once-daily exposure to avatrombopag with no significant safety issues observed. No deaths occurred, and 5 serious AEs were reported in 2 (4%) patients, none considered related to avatrombopag treatment. Most TEAEs were mild (n = 3 [6%]) or moderate (n = 3 [6%]), with 1 (2%) severe TEAE of pyrexia; none led to discontinuation or were considered related to avatrombopag (Table 3). The most commonly reported TEAEs were associated with GI disorders (n = 4 [8%]), and the only TEAE reported in more than 1 patient was abdominal pain (n = 2 [4%]). Of the 7 (14%) patients who experienced 1 or more TEAEs, 4 (8%) received 60 mg and 3 (6%) received 40 mg daily doses of avatrombopag. Neither of the 2 patients (4%) enrolled in the study twice, who received 2 subsequent courses of avatrombopag, reported any TEAEs during the study period.

Two mild bleeding events were reported, both considered unrelated to avatrombopag by the investigator. One patient, prescribed 60 mg avatrombopag, had a baseline platelet count of 36 × 10⁹/L, which increased to 80 × 10⁹/L by study day 10, and had no post-procedural bleeding. On study day 47 the patient experienced hemoptysis, which was mild and resolved the same day. Another patient, prescribed 40 mg avatrombopag, had a baseline platelet count of 47 × 10⁹/L, which increased to 167 × 10⁹/L by study day 10, and had no post-procedural bleeding. On study day 22 the patient experienced mouth hemorrhage, which was mild and resolved the same day. No thromboembolic events were reported, and no patients received any rescue procedures (excluding transfusions) for bleeding during the study period.

[Table 2 footnotes: Total percentages sum to > 100% as both primary and secondary procedures are listed. GI = gastrointestinal. *One patient received avatrombopag treatment and completed the study but did not have a procedure performed (the planned procedure was canceled). †Secondary procedures were performed at the same time as the primary procedure.]
Discussion
The objective of this phase 4 study was to collect real-world data on the ability of avatrombopag to increase platelet counts and reduce the need for platelet transfusions or rescue procedures for bleeding in patients with CLD scheduled to undergo a procedure. Avatrombopag was effective and well tolerated by all patients, with the mean platelet count nearly doubling by procedure day and only 1 patient (2%) requiring a platelet transfusion.

The progression of CLD frequently results in cirrhosis [4], which, like CLD itself, is associated with an increased incidence of thrombocytopenia [2,4]. Any invasive procedure performed on a patient with CLD and thrombocytopenia carries an inherent risk of procedure-related bleeding. Historically this has been managed by platelet transfusions, but this approach has certain limitations [2,5,9]. It is also important to note that the primary causes of thrombocytopenia in CLD are splenic platelet sequestration and breakdown, and decreased production of thrombopoietin in the liver [2,4]; administering steroid treatment (the first-line treatment for immune thrombocytopenia) [12,13] is therefore inappropriate.

The advent of TPO-RAs has produced a new management strategy that allows procedures carrying a risk of periprocedural bleeding to be performed safely in this vulnerable patient population [5]. TPO-RAs have been shown to be effective at raising platelet counts, reducing the need for transfusions, and reducing periprocedural bleeding [18]. Avatrombopag is a TPO-RA that has been shown, in both the ADAPT-1 and ADAPT-2 studies, to be superior to placebo in achieving a target platelet count of 50 × 10⁹/L, with a safety profile comparable to that of placebo [21].

In this study, avatrombopag consistently increased platelet counts in patients with CLD and thrombocytopenia. The mean (SD) baseline platelet count was 46.9 × 10⁹/L (24.52 × 10⁹/L), and the mean (SD) change in platelet count from baseline to procedure day was an increase of 41.1 × 10⁹/L (33.29 × 10⁹/L), demonstrating that most patients achieved the generally recommended platelet count > 50.0 × 10⁹/L on their procedure day [7]. The effectiveness of avatrombopag observed in this phase 4 study in patients with a baseline platelet count < 50 × 10⁹/L is consistent with the data collected in the phase 3 ADAPT-1 (change from baseline to procedure day in platelet count of 32.0 × 10⁹/L) and ADAPT-2 (31.3 × 10⁹/L) studies [21]. Likewise, the platelet count increased following initiation of avatrombopag treatment, peaked at procedure day, and returned to near-baseline levels at the post-procedure follow-up visit.

[Figure 1 caption: Platelet count by study visit. Treatment is defined as the day of initiation of avatrombopag up to and including the day before procedure day. For patients who did not have a platelet count assessment on the day of the procedure but had one the day prior, the platelet count assessed the day prior was summarized as the procedure platelet count. N = number, SD = standard deviation.]

Although the subgroups were small, neither the baseline platelet count nor the baseline CTP Grade appeared to affect the observed trend in platelet counts over the study period, suggesting that avatrombopag can be used to increase platelet counts prior to a scheduled procedure irrespective of the degree of thrombocytopenia or cirrhosis. Avatrombopag was also found to reduce the need for platelet transfusions in this patient population. Forty-nine of 50 enrolled patients did not require a platelet transfusion after the baseline visit and up to 7 days following procedure day (98%; 95% CI: 89.4%-99.9%), and no patients required a rescue procedure for bleeding during the study period.
[Figure 2 caption: Platelet count at baseline and procedure day by baseline platelet count. For patients who did not have a platelet count assessment on the day of the procedure but had one the day prior, the platelet count assessed the day prior was summarized as the procedure platelet count. N = number, SD = standard deviation.]

Although direct comparisons cannot be made due to the heterogeneity of patient populations and study designs, the proportion of patients not requiring a platelet transfusion was higher than that in the phase 3 ADAPT studies [21], higher than that reported for eltrombopag (72%) [23], and equivalent to (97.5%) [24] or higher than (65%-94%) [25][26][27][28] that reported for lusutrombopag. Similar efficacy results were recently reported in a real-world retrospective study of avatrombopag in patients with CLD scheduled to undergo a procedure (n = 29), which had similar patient population demographics and in which the most common procedure was upper GI endoscopy with planned esophageal band ligation (86%) [29]. Patients in that study also had an approximately 2-fold increase in platelet count prior to their procedure, and no patients required rescue therapy [29].

Our study provides limited initial data on the safety and effectiveness of avatrombopag in patients with baseline platelet counts greater than those previously evaluated in the phase 3 ADAPT studies. Increasing platelet counts in patients with baseline platelet counts ≥ 50 × 10⁹/L may be useful prior to more invasive surgical procedures in patients with CLD that require a higher platelet count (e.g., > 100 × 10⁹/L), such as elective orthopedic surgery, craniotomy, and neurosurgery [10]. In this study, 9 patients had a baseline platelet count ≥ 50 × 10⁹/L, of whom 5 had a baseline platelet count ≥ 50 × 10⁹/L and < 100 × 10⁹/L, and 4 had a baseline platelet count ≥ 100 × 10⁹/L. The use of avatrombopag could facilitate invasive procedures that would otherwise be postponed or not performed due to the severity of the patient's thrombocytopenia. In the real-world setting, patients with CLD require diverse interventions, and severe thrombocytopenia can exclude these patients from life-saving procedures such as percutaneous radiofrequency ablation of malignant lesions [30]. Several operative procedures, such as inguinal hernia repairs, that were not evaluated in the phase 3 ADAPT studies were included in this study. In this varied population of patients with CLD undergoing a range of invasive procedures, no significant bleeding events were noted, and a platelet transfusion was required in only 1 instance.

The safety profile observed in this study is comparable to the pooled phase 3 ADAPT data and similar to the reported safety of lusutrombopag [25,26,28]. One of the 2 mild bleeding events was likely procedure-related, and none were considered related to avatrombopag by the investigator. Importantly, no thromboembolic events were reported in any patient, and no new safety signals were identified in this broader group of patients with CLD, which included patients with a platelet count ≥ 50 × 10⁹/L and patients receiving concomitant medications that were prohibited in the ADAPT studies.
The TEAEs identified in this study were similar across all doses of avatrombopag and do not suggest any new or unexpected safety concerns compared with the 2 large, multicenter, phase 3 ADAPT studies. The safety and effectiveness profiles did not differ in the 2 patients who received 2 subsequent courses of treatment with avatrombopag for separate procedures, in line with data published for lusutrombopag [28], indicating that TPO-RAs are suitable for repeated use in this population.

Challenges and limitations
A limitation of this study is the small sample size, which limits the conclusions, especially those regarding observations drawn from subgroups of patients. Nonexperimental observational studies typically involve a more diverse group of patients than experimental and interventional clinical studies and more accurately reflect real-world medical practice. Future, larger real-world studies should help to elucidate the potential use of TPO-RAs prior to procedures in this challenging patient population. Further evidence may allow a shift in the risk/benefit calculation for more invasive procedures in patients with CLD-associated thrombocytopenia.

Conclusion
The results of this real-world study indicate that avatrombopag is effective in a patient population with CLD of diverse etiologies and severity. Importantly, avatrombopag was well tolerated and effective in increasing platelet counts, allowing procedures to be performed with greater confidence. The limited data presented in this study also suggest that avatrombopag is suitable for repeated use and can be used to prevent periprocedural bleeding in surgical interventions not previously studied. In a real-world setting in patients with CLD and thrombocytopenia, treatment with avatrombopag consistently increased platelet counts to nearly double the baseline value by the day of the procedure and reduced the need for platelet transfusions.

[Figure 3 caption: Platelet count at baseline and procedure day by Child-Turcotte-Pugh Grade. Baseline CTP Grade B and C groups were combined due to the small sample size (Grade B: baseline n = 11, procedure n = 10; Grade C: baseline n = 3, procedure n = 2). For patients who did not have a platelet count assessment on the day of the procedure but had one the day prior, the platelet count assessed the day prior was summarized as the procedure platelet count. CTP = Child-Turcotte-Pugh, SD = standard deviation.]

Data availability
The datasets generated and/or analyzed during the current study are not publicly available, but are available from the corresponding author on reasonable request.
Kinetic and Structural Consequences Derived from Ageing Effects on Electrochemically Formed Layers

The influence of different ageing processes on electrochemical reactions is analysed. Three main types of ageing processes are described: open circuit ageing, potentiostatic ageing and potentiodynamic ageing. The data derived from different electrode processes show that the films are composite systems themselves. They involve various non-equilibrated species which accordingly react to attain either a single equilibrium configuration or a configuration involving equilibria among the various surface species. Surface restructuring and cluster-type reactions are important contributions toward understanding the dynamic behaviour of electrochemical interfaces.

INTRODUCTION
Until recent years most of the experimental data on electrochemical kinetics were confined to reactions under stationary state conditions. The information on diffusional kinetics and Tafel plot relationships is very useful, particularly from the operational viewpoint, but it is rather limited in furnishing the detailed mechanisms of electrochemical reactions. Thus, the influence of electrode surface structure on electrochemical reactions, the contribution of the solvent to the electrochemical interface configuration, and the participation of adsorption and electrosorption processes, including ion adsorption, can only be indirectly deduced from stationary state data. At present, however, this situation is rapidly changing with the introduction of new, powerful electrochemical relaxation methods. These can be applied either singly or coupled to optical techniques, which allows finer details to be distinguished and new processes and intermediates involved in electrochemical reactions to be identified. Thus, through different types of electrical perturbation it becomes possible to confine the response of the electrochemical interface mainly to one particular contribution out of the various ones entering the electrode reaction. Despite the large amount of knowledge acquired from the application of new techniques, the number of unsolved problems is so exceedingly large that a complete understanding of electrochemical reactions seems still far away. Therefore, the feeling exists that most of the earlier simple formalisms applied to explain the dynamic behaviour of the different types of electrode processes should be thoroughly revised in the light of the new and recent findings.
The response of the electrochemical system to a particular stimulus (perturbation) depends on the coupling between the characteristics of the perturbation and those defined as the system's own inertia. Then, for each particular system, it is reasonable to expect a particular response for each perturbation programme imposed on it. This basic concept implies two possibilities which require special attention. The first is that the kinetics of a particular physicochemical system under relaxation conditions should, in principle, differ from that observed under stationary state conditions. A coincidence of responses, which may occasionally exist under special circumstances, should not be taken for granted. The second question regards the fact that each type of perturbation has its own characteristic response. Thus, the change of the system's characteristics produced by a particular stimulus usually brings out only partial information about it. So, by adjusting the experimental conditions adequately, it would be possible to obtain maximum coupling between a particular step of the electrochemical process and the perturbation programme. Therefore, to arrive at a reasonably detailed and sound explanation of the physicochemical behaviour of the system, the correlation of data derived from the application of a wide variety of perturbation programmes seems unavoidable.

The preceding approach is exemplified through the study of the ageing processes occurring at electrochemical interfaces, in order to tackle basic problems of electrochemical reactions covering from corrosion and passivity to electrocatalysis. The idea of ageing was introduced in electrochemistry some time ago as a possible contribution when either a new phase (multilayer) or a monolayer was formed on the electrode surface in the course of the reaction. The word ageing was coined to define the effect of different processes or side effects which promote a change of the electrochemical reaction response involving any kind of layer or film formation, as if the products initially formed suffered additional changes to attain more stable energetic configurations. The ageing effects, which are associated with a relatively large number of electrochemical processes, were not formerly considered except in a few cases involving oxygen-containing films formed on noble metal electrodes. Consequently, the kinetic data of many electrode reactions were obtained under strictly non-comparable conditions. The ageing effects of many reactions are now recognized. Most of these effects are related to corrosion and passivation of metals in aqueous solutions. Hence, the mechanistic conclusions which were derived by ignoring the ageing effects are open to criticism.

The existence of either long range or short range ageing processes is very straightforwardly demonstrated in many electrochemical systems by the use of different complex multiple potential-time perturbation programmes, although at present the mechanisms of the ageing processes are not fully understood.

AGEING PROCEDURES
The simplest ageing process is denoted as open circuit ageing. It results, for instance, when a new species is produced on the electrode surface under an anodic linear potential sweep up to a potential value at which the electrolysis is immediately switched off for a certain length of time. Finally, the anodically formed species is electroreduced after ageing with a linear cathodic sweep. The open circuit ageing manifests itself, under constant perturbation parameters, by a shift of the electroreduction potential towards more cathodic values as the lapse of time from switch-off increases. Usually, a limiting electroreduction potential can be achieved. The charge taking part in the electroreduction process is equal to that participating in the electroformation process. This is so if the number of electrons per reacting species is the same for both electrode reactions and the electroformed species undergoes no chemical dissolution.

A different and more complex type of ageing is observed when the interface is perturbed in the following way. As in the former case, the anodic species is produced on the electrode surface by an anodic potentiodynamic sweep. However, once the anodic potential limit is reached, the potentiodynamic sweep is reversed towards the cathodic direction until the null current is attained. The null current potential may be located at any potential from the anodic potential limit downwards, depending on the degree of reversibility of the electrode reaction. Once the null current potential is attained, the system is held at this potential for a certain length of time, and afterwards the cathodic potential sweep is continued. At the null current potential different processes are possible. As the film electroformation and electrodissolution occur at the same rate, only partial ageing effects result. Then, a film of average characteristics different from those corresponding to the open circuit aged film is produced. In this case, the participation of chemical dissolution processes is also feasible. The electroreduction of the film aged at the null current potential also comprises a shift of its electroreduction potential towards more negative values than those corresponding to the non-aged film. The corresponding E/I contour, however, should be more complicated than that of the open circuit aged film. For null current potential ageing the cathodic charge may be either equal to, lower than, or higher than that recorded during the electroformation process. In general, this type of ageing procedure is referred to as potentiostatic ageing. It is especially suitable for studying multilayer formation processes involving the participation of different species with relatively close electroformation potentials.

The so-called ageing effect can also be promoted in a more elaborate way, as in the case of potentiodynamic ageing. In this case a film is first produced at the electrochemical interface by means of an anodic potentiodynamic sweep at a certain rate. Immediately afterwards, the system is perturbed with a repetitive triangular potential sweep of either the same or another sweep rate during a certain preset length of time. The repetitive triangular potential perturbation is confined to potential limits which are associated just with the removal and the reforming of a fraction of the surface layer. Immediately after the repetitive triangular perturbation, a cathodic potentiodynamic excursion at the same sweep rate as the initial one serves to determine the changes induced by the repetitive triangular perturbation, after a straightforward comparison with the initial E/I display. In this way it is possible to produce most of the changes induced by the types of ageing previously described, although in some cases in an amplified way. Furthermore, additional insight into the behaviour of the first layers of the metal lattice is gained from the potentiodynamic ageing data. This involves the possible surface reconstruction processes and the penetration of foreign atoms into the metallic lattice.

Data derived from the different ageing procedures involving various electrochemical systems are described further on. Attempts are made to establish significant conclusions, which include: the condition for correlating the potentiodynamic behaviours of metals, the existence of non-equilibrium configurations in film-forming reactions related to the corrosion and passivity of metals, the surface reconstruction and reaccommodation of atoms on the electrode surface with the possible penetration of atoms into the metal lattice, the metastable states at a metal surface, and the hydrogen dissolution assisted process.

Open Circuit Ageing
Typical examples of open circuit ageing are shown both by monolayers and multilayers of oxygen-containing species on several metals. Figure 1 shows the single sweep triangular E/I displays obtained with the Pt(polycrystalline)/KHSO4(melt) interface at 227°C.6,7 The electrochemical interface consists of a polished polycrystalline platinum wire (spectroscopically pure, Johnson and Matthey) immersed in the melt prepared from a.r. chemicals (Mallinckrodt). During each anodic potential excursion up to EA, the electrode surface becomes covered by a monolayer of an oxygen-containing species approaching the PtO stoichiometry, as compared to the hydrogen adatom monolayer on platinum, taking the roughness factor correction into account. The same amount of PtO species electroformed at va = 0.05 V/s is open circuit aged during 1.25 min. Afterwards, the anodic product is electroreduced at different potential sweep rates (0.05 V/s ≤ vc ≤ 0.20 V/s). The electroreduction process is characterized by a cathodic current peak at ca.
0.2 V (vs. the Pt/H2(1 atm)/KHSO4(melt) electrode). The open circuit ageing produces a time-independent E/I profile, and the total charge remains constant, as no chemical dissolution exists. The cathodic current peak height increases linearly with the cathodic sweep rate, and the current peak potential shifts towards more negative potentials according to a linear potential vs. log vc relationship, as likewise predicted by the linear potential sweep theory of simple electrochemical monolayer reactions. The potentiodynamic profile of the electroreduction of the aged species can then be kinetically analysed in terms of relatively simple irreversible reaction mechanisms. The formation of the monolayer of PtO species is accounted for by the overall reaction:

Pt + H2O = PtO + 2H+ + 2e−  (1)

The occurrence of reaction (1) implies the following equilibrium in the melt:

2 HSO4− = S2O7²− + H2O  (2)

which participates in the thermal decomposition of the melt. At 227°C the potentiodynamic electroreduction of the open circuit aged species anodically formed is interpreted through a reaction scheme involving an irreversible second order rate determining step, where the aged species apparently satisfies the Langmuir adsorption conditions. For this purpose a formal reaction scheme is considered [scheme not reproduced in the source]. The equation derived from this reaction scheme for the transient current under a linear potential sweep, under the assumption that the faradaic current follows a Tafel equation with the slope RT/2F, reproduces the experimental E/I profiles within 1% over nearly the whole potential range, except at the more negative potentials where the hydrogen evolution reaction begins to interfere. Usually the current peaks related to the electrochemical reduction of the aged species, probably a single type of species, become thinner and very symmetric in shape.

The potentiodynamic E/I display, considered as the electrochemical spectrum of the system, indicates that the aged species are confined to an energy range much more restricted than the wide range usually found for the non-aged surface species. In the absence of ohmic resistance and double-layer contributions, the steepness and symmetry of the potentiodynamic E/I display are in part determined by the conventional Tafel slope of the irreversible reaction: the thinner the E/I profile, the smaller the Tafel slope. This has been verified both for the Pt(polycrystalline)/KHSO4(melt) interface and for the Au(polycrystalline)/1 M H2SO4(aq) and Au(polycrystalline)/1 M HClO4(aq) interfaces.
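The model invoked above, an irreversible electroreduction of a Langmuir-adsorbed monolayer with a second-order rate-determining step and a Tafel slope of RT/2F, is straightforward to integrate numerically for a linear cathodic sweep. A sketch with illustrative, not fitted, parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

F, R = 96485.0, 8.314      # C/mol, J/(mol K)
T = 500.0                  # K, of the order of the 227 degC melt
v = 0.10                   # V/s, cathodic sweep rate
E_start, E0 = 0.80, 0.45   # V: sweep start and a reference potential (illustrative)
k0 = 1.0e-3                # 1/s, rate constant at E0 (illustrative)
Gamma = 2.2e-9             # mol/cm^2, notional PtO monolayer surface excess

def dtheta_dt(t, theta):
    E = E_start - v * t
    # A Tafel slope of RT/2F corresponds to a factor of 2F/RT in the exponent.
    k = k0 * np.exp(-2.0 * F * (E - E0) / (R * T))
    return -k * theta**2   # second-order decay of the aged-species coverage

sol = solve_ivp(dtheta_dt, (0.0, E_start / v), [1.0], max_step=1e-3)
E = E_start - v * sol.t
j = 2.0 * F * Gamma * np.gradient(sol.y[0], sol.t)  # A/cm^2 (cathodic, j < 0)
print(f"peak at {E[np.argmin(j)]:.3f} V, j_peak = {j.min():.3e} A/cm^2")
```

As in the experiments quoted above, the simulated peak current scales with the sweep rate and the peak potential shifts cathodically with log v, while a smaller Tafel slope sharpens and symmetrizes the peak.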
This occurs when this type of experiment is carried out over a wide temperature range with any of the above mentioned electrochemical systems.The phenomenological Tafel slopes are then even less than the RTI2F ratio.If common conventional reaction mechanisms are valid in the whole temperature range for the cathodic reactions occurring on those electrochemical interfaces, it is reasonable that the Tafel slope increases accordingly with the temperature.Therefore, the straightforward application of conventional kinetic analysis, which probably by chance satisfies a single temperature experiment, cannot be extended to other temperatures.The kinetic data are derived for the electrodesorption of open circuit aged oxygen-containing monolayers on platinum and gold in the absence of chemical dissolution.It suggests that a term for the energy distribution of reactants in addition to the coverage terms should enter the kinetic equation representing the corresponding reaction model.The approach of the equilibrium energy distribution by the monolayer species implies a finite rate process for open circuit ageing, which interferes, not only all along the oxygen electrodesorption period under uncontrolled ageing but also during the monolayer electroformation process.When the ageing reaction rate is taken into account, the anodic E/I profiles for the oxygen monolayer e1ectroformation on platinum at the Pt(polycrystalline)/KHSO.(melt) interface can be precisely computed." At present only formal kinetic treatments in terms of reaction mechanisms have been advanced to explain the Ell responses for the electrochemically sorbed and desorbed oxygen monolayer on platinum and gold under the ageing controlled conditions, but open circuit ageing clearly shows the complexity of the phenomena at the molecular level.A relatively fast mobility of the first layers of species operates, not only at the solution side of the electrochemical interface, but even at the metal side.This is a fact which is also demonstrated through the application of the ageing techniques described further on. The open circuit potential ageing also furnishes the possible existence of either chemical or electrochemical dissolution processes occurring simultaneously with the electrochemical formation of the layer.The rate of chemical dissolution can be derived from the residual cathodic charge left after open circuit ageing at different times.The probable rate controlling process can also be determined.The layer dissolution, however, may also occur through a corrosion-type mechanism in the electrolyte solution.?" Finally, open circuit ageing offers the possibility of normalizing the potentiodynamic response of different electrode/electrolyte interfaces where any type of layer is formed, through a careful choice of the electrical perturbation variables. Potentiodynamic Ageing The perturbation programmes suitable to produce potentiodynamic ageing in the oxygen monolayer region are depicted in Fig. 2. The potential sweep rates as well as the switching potentials (EA,' and EA,.) of the wide potential range triangular perturbations and low potential range triangular perturbation (E:., and E~.) are conveniently adjusted.Figure 3 gramme depicted in Fig. 2. The same quality of platinum already described is employed.The electrolyte is made with a.r.H 2SO. and triple distilled water, which satisfied the purity criteria recommended for electrochemical kinetic studies."The E/I profiles (Fig. 
3) obtained at four different triangular perturbation times (T) are easily compared with the conventional single triangular potential sweep E /I display (full trace).The regions of the hydrogen adatoms and of the oxygen containing species are well distinguished in Fig. 3. The potentiodynamic ageing of the oxygen elec- trosorbed monolayer within the potential range between E~c and E~.produces two main effects.At relatively short times, there is a net transformation of species, which are characterized by the electroreduction current peak at high positive potentials, into another more stable species.The electroreduction potential of the latter is not only more negative than that of the initial profile, but even more negative than the potential of the open circuit aged species obtained for the same system.As described elsewhere," potentiodynamic ageing attains its maximal efficiency when the intermediate perturbation entails the removal and reformation of nearly one half of the oxygen-containing monolayer.Under these circumstances the total cathodic charge remains constant during the experiments.But when the intermediate perturbation lasts longer, the cathodic charge increases causing a noticeable increase of the cathodic charge associated with the most stable oxygen-containing surface species.The potential range corresponding to the electroreduction of the latter overlaps to a considerable extent the potential range where the hydrogen electrosoption begins on platinum.This result corresponds well with the E /1 cathodic potentiodynamic display, run with the same electrochemical system after a prolonged anodization at high positve potentials." The single triangular potential sweep E /I profiles run immediately after the intermediate triangular potential perturbation reveal the multiplicity of current peaks associated with the electroreduction process.It is evident that for a particular set of perturbation conditions a definite average structural pta configuration is reached on the electrode surface.Therefore, the electrochemical processes can be more realistically represented by means of a more complex reaction scheme such as: Dynamic ageing is principally related to either reaction (lIla) or even more likely to the sum of reactions (III) and (lIla).The various forms of electrosorbed oxygen imply the occurrence of the complex cathodic Ell potentiodynamic display involving three different electroreduction current peaks if, in principle, the potentials corresponding to reactions (IV), (IVa) and (V) are sufficiently different.The potential of the more negative electrodesorption current peak, found after dynamic ageing, coincides with the ones obtained for the reduction of multilayer oxide films on platinum."These results imply a possible penetration of oxygen into the metal, probably assisted by the reconstruction of the surface through dynamic ageing, that is by the redistribution of the surface metal atoms brought about by forming and breaking Pt-O bonds. 
The results of the potentiodynamic ageing experiments furnish a definite idea about the movements and surface restructuring on the electrode during potentiodynamic ageing. This has been interpreted in terms of an assisted penetration of oxygen atoms at the metal lattice sublayer. The frequency dependence of this phenomenon and the rate equation of potentiodynamic ageing clearly indicate the mobile nature of the metal surface atoms coupled to the mobile oxygen-containing entities on the surface. The coupling of the reacting species mobility with the perturbation frequency results in the maximal efficiency of the dynamic process.

Potentiodynamic ageing furnishes additional kinetic data when the individual intermediate perturbation cycles are considered. The main information coming from this analysis points to the possible accumulation or dissolution of some of the species participating in the reaction during the intermediate perturbation and to the possible type of conduction mechanism at the interface. The former is immediately derived from the charge balance. The latter information comes directly from the slope of the individual cycles, which may be either time-dependent or time-independent. By properly adjusting the switching potentials of the intermediate perturbation, the conduction characteristics of the surface species participating in the reaction along the progressively changing potential are visualized.

The E/I displays of the Pt(polycrystalline)/1 M H₂SO₄ interface in the hydrogen adatoms potential range after potentiodynamic ageing (Fig. 4) reveal the change of the distribution of sites of different energies for the hydrogen adatoms as a consequence of the dynamic perturbation.
The potentiodynamic ageing of the electrochemical interface in the hydrogen adatoms potential range (Fig. 4) furnishes another example of the change of the distribution of the sites of different energies as a consequence of the dynamic perturbation. The overall charge during the intermediate perturbation remains practically constant, but the rapid removal and refilling of the hydrogen adatoms occurs with the simultaneous reorganization of the surface in favour of the adsorption sites of higher adsorption energies. It is worthwhile to note that a similar effect can be obtained by running the potentiodynamic E/I display after a long cathodization of the interface. This remarkable coincidence supports the idea that at least the first layers of the electrode side of the interface follow up the electrochemical changes without recovering their initial energetic configuration.

The third interesting example of potentiodynamic ageing is that of the Au(polycrystalline)/1 M H₂SO₄ interface (Fig. 5). From thermodynamic data of the gold/hydrogen gas system it is deduced that the existence of hydrogen adatoms on gold is unfavourable. Despite these predictions, it has recently been found that, after a vigorous cathodization in the hydrogen evolution region, the immediately following anodic potential sweep shows an anodic current peak just in the region where the electrooxidation of either hydrogen adatoms or hydrogen dissolved in the metal is expected. The latter response is considerably enhanced after potentiodynamic ageing of the Au/1 M H₂SO₄ interface (Fig. 5). In this case the ageing effect is promoted on the positive potential side, just in the potential range of the oxygen-containing monolayer electroformation and electrodissolution. Dramatic changes in the potential range of the hydrogen electrode are clearly observed, both in the cathodic reaction, whose rate increases one order of magnitude, and in the hydrogen electrooxidation reaction, which is characterized by a relatively wide and asymmetric current peak. The maximum effect is again obtained when 80 percent of the oxide film is removed and reformed during potentiodynamic ageing. The successive potential cycles run after potentiodynamic ageing clearly show the progressive decrease of the charge pertaining to the hydrogen (dissolved and adsorbed) electrooxidation. The same E/I display depicted in Fig. 5 can also be obtained in a hydrogen gas saturated solution. However, the cathodic potential limit is located slightly more positive than the potential of the net hydrogen evolution. Furthermore, under comparable perturbation conditions, the height of the electrooxidation current peak decreases under hydrogen gas stirring.

Thus, the activity of gold for the hydrogen electrode reaction can be explained in terms of the structural and energetic characteristics acquired by the first layers of gold atoms in the metal side of the electrochemical double layer. During potentiodynamic ageing the properties of those layers are definitely different from those which can be predicted from the bulk properties of the metal. Unfortunately, a quantitative description of the true surface structure produced by potentiodynamic ageing remains to be developed.

Potentiostatic Ageing

Potentiostatic ageing can be conceived of in at least two different ways, namely, by keeping the interface at a fixed potential during a certain lapse of time either with a net flow of current or with practically no net flow of current. The former situation usually entails a further accumulation of reaction products, a fact which makes the interpretation of the results rather more difficult. The latter case is apparently simpler but may still have additional complications as compared to the ageing procedures described before. Thus, the null current condition contains the possibility of the occurrence of an electrochemical reaction at the interface. This is the case, for instance, of a corrosion process where the base metal is electrodissolved and a second species in solution is electroreduced. Furthermore, if the metal electrodissolution is determined by the slow dissolution of the film, the continuous electrochemical formation of the latter takes place during potentiostatic ageing. In any case, the actual composition of the film corresponds to a mixture of species formed at different times. Therefore, the E/I response obtained in the potentiodynamic electroreduction of the film should be intermediate between that of the non-aged film and that of the film aged under open circuit conditions. These behaviours are comparatively illustrated for the case of the Pt(polycrystalline)/KHSO₄(melt) interface in Fig. 6. The electroreduction E/I profiles become wider and more asymmetric in shape in the order open circuit ageing, potentiostatic ageing and non-ageing conditions.

Potentiostating at a definite potential has been applied directly or indirectly to study the adsorption of different types of species, either neutral or ionic. The interaction of these species with the electrode surface depends primarily on the location of the potential of zero charge and on the potential applied to the electrochemical interface.

The application of potentiostatic ageing is particularly useful to study the responses of the nickel hydroxide electrode in the potential range related to the overall reaction:

Ni(OH)₂ = NiOOH + H⁺ + e⁻ (5)

When reaction (5) proceeds under potentiodynamic conditions without any ageing (within the potential range of water stability), an apparently simple E/I display is obtained. But when the electrochemical reaction proceeds from right to left after the system has been maintained at a certain ageing potential E_A during different times (Fig. 7), the electroreduction profile splits into two cathodic current peaks and a shoulder. For each E_A the change from the single cathodic current peak to the multiple current peak display involves a constant cathodic charge. The overall electroreduction process occurs at more negative potentials as the potentiostating time at E_A increases. Therefore, the potentiostatic ageing complemented by the potentiodynamic ageing reveals (Fig. 7) that the NiOOH electroformed species consist of at least three energetically different species. These, although probably involving a similar stoichiometry, are changing to attain an equilibrium configuration at the corresponding E_A.
Further information is obtained when the experiment is carried out including a potentiostatic ageing at E_A during different times (Fig. 8). One immediately shows that, depending on the ageing time of the Ni(OH)₂ species at E_A, a clear split of the anodic E/I display is observed. This reveals the existence of an interconversion of at least two Ni(OH)₂ reactant species during ageing, yielding definite electrochemical reactions at different positive potentials. In any case, the interconversion processes occurring at different E_A can be estimated from the charges taking part in each individual process at different potentiostating times. These results clearly demonstrate that many species are involved in the overall electrochemical reaction (5) and that the reaction follows a complex pattern. These data, together with the results derived from the triangularly modulated triangular potential sweep perturbations, allow a reaction pathway for reaction (5) in the form of a square reaction model, in which [Ni(OH)₂]* represents the simplest stoichiometry for the compound bridging both main electrochemical and chemical processes, and in which the dashed arrows refer to possible reactions which are not detected under the present experimental conditions. This explains the apparent irreversible E/I potentiodynamic displays of the nickel hydroxide electrode, the dependence of the current peak potential location on the perturbation conditions, and the fast response of the electrochemical system as far as its application to alkaline batteries is concerned. A similar type of behaviour has also been established for the Fe/alkaline electrolyte interface.

CONCLUSIONS

The results derived from the different ageing experiments are quite relevant to electrochemical kinetics. The existence of films, either monolayers or multilayers, is recognized for a relatively large number of electrochemical reactions. These films are certainly composite systems involving a number of non-equilibrated species. They have commonly been considered as static, consisting of well-structured stoichiometric species. Accordingly, to attain either a single equilibrium configuration or a configuration involving equilibria among the various surface species, different chemical reactions occur simultaneously during either the film electroformation or its electroreduction. Therefore, as the rate of these processes may either keep up with or lag behind the proper electrochemical reactions, it is obvious that direct comparisons made with results obtained under different perturbation conditions, either static or dynamic, are, in principle, not strictly valid for deriving sound general mechanistic conclusions. Therefore, most of the kinetic data reported in the literature obtained at a certain preset time without controlling other perturbation variables are useless for fundamental electrochemical kinetics. These types of measurements have often been made in the study of electrocatalysis as well as in the corrosion and passivation of metals. Hence, it is of paramount importance for the mechanistic interpretation of the kinetic results to take into account the time scale of the function used to perturb the electrochemical interface.
The electrochemical response of the different aged layers indicates that the stoichiometric formalism of the surface species is actually more complex than that given by the stoichiometries of isolated molecules. Thus, for instance, the electrochemical reactions involving the electroformation of the oxygen-containing monolayer at the Pt(polycrystalline)/KHSO₄(melt) interface are interpreted in terms of a reaction sequence in which n represents the number of available platinum surface atoms, with n ≥ m, n ≥ p₁, n ≥ p₂, m ≥ p₁ and n = m + p₁ + p₂. Pt_m(O)_m and Pt_p(O)_p denote respectively the unaged and the aged oxygen species contributing to the oxygen monolayer formation. The k's denote rate constants of the different reactions; subscript numbers indicate electrochemical steps, and "chem" and "ag" mean chemical and ageing reactions respectively. The dotted arrows are unlikely processes. At the initiation of the potentiodynamic formation of the monolayer, the n surface atoms are simultaneously available sites for the reaction with the water molecules. The reaction should be considered as a community process. Then, as the reaction proceeds, at any instant before the monolayer is completed, the surface layers consist of at least four different species, whose concentration ratio is time-dependent both during the electroformation process and after the anodic process has been completed. When the monolayer is completed, the limiting compositions correspond to either n = p₁ or n = p₂, if the monolayer formation implies a Pt/O atomic ratio equal to unity. Therefore, under no ageing, the average composition of the surface is time-dependent, while after the ageing reaction is completed it becomes time-independent. The electrosorption and electrodesorption processes certainly imply an alteration of the metallic bonding of the first layers of metal atoms in the lattice as compared to that of the bulk of the metal at equilibrium. As seen from potentiodynamic ageing, the first layers of atoms in the metal lattice are relatively mobile. Apparently, under certain conditions, they move in phase with the dynamic perturbation. This suggests that for the present type of reactions the coupled bulk diffusion and surface diffusion processes may contribute to the penetration enhancement of the hydrogen and oxygen into the bulk of the metal. Although the ageing effects are evidenced through the layer formed on the electrode, they are simultaneously reflected in the activation of the electrode surface, so that the ageing effect can also be referred to the surface of the electrode, especially when the potential perturbation used removes a large percentage of the anodically formed layer.

The ageing effects indicate that surface restructuring and cluster-type reactions play a very important role in the kinetic characteristics of the electrochemical reactions. Structural rearrangement and disordering of solid surfaces, such as surfaces of single crystals, is frequently encountered in the literature of surface science. Further advances in the knowledge of those contributions are undoubtedly important to define the true energetics of the electrochemical interfaces under working conditions. Electrode surface restructuring involves at least two contributions: the first one resulting from the mobility of metal atoms because of their delocalization during the electrosorption of the oxygen-containing species, and the second one related to the anisotropy of the surface energy produced by the electrosorption process.
The preceding analysis can be extended to most of the electrochemical processes involving film formation. Obviously, this means that most of the simple reaction formalisms postulated to interpret the mechanisms of these reactions must be critically revised. It should be pointed out, however, that many conventional reaction formalisms are useful as operational reaction models which can be easily handled through simple equations.

Fig. 1. Potentiodynamic E/I displays covering a constant potential amplitude for the Pt/KHSO₄(melt) at 227°C at 0.05 V s⁻¹ anodic potential sweep and 0.05, 0.07, 0.10, 0.15 and 0.20 V s⁻¹ cathodic potential sweep, including the open circuit ageing during τ = 1.25 min. E_s is the potential of the cathodic sweep initiation. The potential is referred to the hydrogen electrode potential in the molten electrolyte. The potential/time perturbation programme, including the cathodic (E_λ,c) and anodic (E_λ,a) switching potentials, is indicated in the figure. Electrode area 0.524 cm².

Fig. 2. Different schemes of potential/time perturbation programmes employed for the anodic and cathodic potentiodynamic ageings, respectively.

Fig. 3. Potentiodynamic E/I profiles run for the Pt/3.7 M H₂SO₄ interface at 55°C. Intermediate perturbation at 0.4 V s⁻¹. Influence of the anodic switching potential. (1) τ₁ = 1 min; (2) τ₂ = 5 min; (3) τ₃ = 30 min; (4) τ₄ = 90 min. The full trace corresponds to the conventional triangular potential sweep E/I display. The potential/time perturbation programme is included in the figure. The potentials are referred to the RHE.

Fig. 4. Potentiodynamic E/I displays for the Pt/1 M H₂SO₄ interface at 25°C, at 0.3 V s⁻¹, according to the E/t programme depicted in the figure; (full trace) stable profile; (dotted trace) the first trace obtained after the fast repetitive triangular potential sweeps at 3 V s⁻¹ between 0.05 V and 0.13 V during a lapse τ = 1 min; (dashed trace) the second trace. The hydrogen electrooxidation current peaks and shoulders are numbered I, II, III and IV, respectively. The potentials are referred to the RHE.

Fig. 5. Potentiodynamic E/I displays obtained with the Au/1 M H₂SO₄ interface at 25°C, at 0.2 V s⁻¹. (1) Without ageing; (2) potentiodynamic ageing at 0.2 V s⁻¹ during 30 min. The hydrogen electrooxidation current peaks are shown in further detail at the right-hand side of the figure. The anodic current increase obtained after 10 min (a), 20 min (b) and 30 min (c) of the potentiodynamic ageing is depicted. The perturbation programme is also included in the figure. The potentials are referred to the RHE.

Fig. 6. Single triangular potential sweep E/I displays covering a constant potential amplitude for the Pt/KHSO₄(melt), at 250°C, at a constant anodic potential sweep (0.05 V s⁻¹) and different cathodic potential sweeps, as indicated in the figure. (a) Potentiostatic ageing at E_s during 1 min. (b) Non-ageing conditions (conventional single triangular potential sweep E/I displays).
v3-fos-license
2018-11-15T22:18:45.180Z
2018-09-02T00:00:00.000
53291569
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1155/2018/9160793", "pdf_hash": "5e5cb224b1e4192123bbd9cce6c578dc892cc336", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41494", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "sha1": "5e5cb224b1e4192123bbd9cce6c578dc892cc336", "year": 2018 }
pes2o/s2orc
Parameter Estimation in Ordinary Differential Equations Modeling via Particle Swarm Optimization

Researchers using ordinary differential equations to model phenomena face two main challenges among others: implementing the appropriate model and optimizing the parameters of the selected model. The latter often proves difficult or computationally expensive. Here, we implement Particle Swarm Optimization, which draws inspiration from the optimizing behavior of insect swarms in nature, as it is a simple and efficient method for fitting models to data. We demonstrate its efficacy by showing that it outstrips evolutionary computing methods previously used to analyze an epidemic model.

Introduction

In mathematical biology, parameter estimation is one of the most important components of the model fitting procedure. With poorly estimated parameters, even the most appropriate models perform poorly. Although there is already an abundance of traditional algorithms in the literature, such as Gauss-Newton methods, the Nelder-Mead method, and simulated annealing (see [1] for a thorough review), as models get more complex, researchers need more versatile and capable optimization methods. Evolutionary computing methods are becoming more frequently used tools, reducing the computational cost of model fitting, and starting to attract interest among mathematical biologists.

Compartmental models that are frequently used in infectious disease modeling have not escaped the computational cost versus complex model dilemma either. Since parameter estimation for compartmental models can be tackled by such evolutionary computing methods, it is reasonable to examine the goodness of fit versus the price of fit for these commonly employed models. Therefore, we will focus on the lesser-used evolutionary algorithms in comparison to Particle Swarm Optimization (PSO). In particular, Genetic Algorithms (GA) have been frequently used to optimize the parameters of ordinary differential equation (ODE) models [2][3][4][5][6][7]. PSO has thus far been underutilized in this area. It is an optimization algorithm inspired by swarms of insects, birds, and fish in nature. Although PSO has been applied in a number of different scenarios [8], its performance on compartmental models has not yet been studied.

In optimization, the computational cost of an algorithm is just as important as the quality of its output. Unfortunately for us, cost increases with quality. Evolutionary computing algorithms are especially susceptible to this effect, wherein small reductions in error must be paid for by disproportionate amounts of additional computation. To reduce cost or improve accuracy without compromising the other requires the application of an innovative technique to the problem. For example, Hallam et al. [9] implemented progressive population culling in a genetic algorithm to hit a given error target in fewer CPU cycles.
Here, we intend to show that PSO is not only a viable technique for fitting ODE models to data, but also that it has the potential to outperform GA, which was proposed in [2] as a superior optimization method to the traditional ones for fitting ODE models. Hence, with this study, we aim to establish an even more viable tool for ODE model fitting. Specifically, we apply PSO to kinetic parameter optimization/fitting in the context of ODEs, and we contrast the PSO and GA algorithms using the cholera SIRB example as common ground. The organization of this article is as follows: In Section 2, we introduce PSO. Section 3 contains a description of how PSO can be applied to ODE models. We illustrate an implementation of PSO to optimize an ODE model of cholera infections during the recent Haitian epidemic in Section 4. Finally, we provide concluding remarks in Section 5.

Particle Swarm Optimization

We are given an objective function f: R^d -> R with the goal of minimization. Based on reasonable parameter bounds for the problem at hand, we constrain our search space to the Cartesian product of the intervals [l_i, u_i], i = 1, ..., d, for lower bounds {l_i} and upper bounds {u_i}. We release a "swarm" of point particles within the space and iteratively update their positions. The simulation rules allow the particles to explore and exploit the space all the while communicating with one another about their discoveries. Since PSO does not rely on gradients or smoothness in general, it is well suited to optimizing chaotic and discontinuous systems. This flexibility is an essential property in the domain of ODE models, whose prediction errors may change drastically with minute parameter variations. The swarm nature of PSO makes it amenable to parallelization, giving it a sizable advantage over sequential methods. The algorithm used for our computations is largely based on SPSO 2011 as found in [10]. We provide implementation details in Section 2. Then, we discuss ODE models, as they are ubiquitous in the realm of modeling natural phenomena. We provide a justification of our procedure for hyperspherical sampling in the Appendix.

Particle Swarm Optimization Implementation

The particles search within the unit hypercube [0, 1]^d, which provides better hyperspherical sampling performance than does a search space whose dimensions could be orders of magnitude apart. The particles' positions are affinely mapped back to the original bounds via x -> l + Dx, where l = (l_1, ..., l_d) and D = diag(u_1 - l_1, ..., u_d - l_d), before evaluation by the objective function.

A swarm consists of an ordered list of S particles. This value is manually adjusted to fit the problem at hand, although more advanced methods may attempt to select it with a metaoptimizer or adaptively adjust it during execution. If S is too low, then the particles will not be able to explore the space effectively; if it is too high, then computational costs will rise without yielding a significant improvement in the final fitness. Each particle has attributes which are updated at every iteration. The attributes are shown in Table 1, and their initial values (t = 0) are given. We define U(a, b) to mean a value sampled uniformly from [a, b]. The particles are initially placed in the search space via Latin Hypercube Sampling (LHS). This helps avoid the excessive clustering that can occur with truly random placement, as we want all areas of the search space to be considered. The reader is directed to [11] for more details on LHS.
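To make the initialization step concrete, the following is a minimal Python sketch of Latin Hypercube Sampling in the unit hypercube. It assumes numpy is available; the function name and shapes are our own illustration, not code taken from the paper's C implementation.

import numpy as np

def latin_hypercube(num_particles, dim, rng):
    # Place points in [0, 1]^dim so that each of the num_particles
    # equal-width bins along every axis contains exactly one point.
    samples = (rng.random((num_particles, dim))
               + np.arange(num_particles)[:, None]) / num_particles
    for d in range(dim):          # shuffle the bin order per dimension
        rng.shuffle(samples[:, d])
    return samples

# Example: 40 particles in an 8-dimensional unit cube, as in Section 4.
positions = latin_hypercube(40, 8, np.random.default_rng(1))

Each column then holds one sample in every bin, which is exactly the anti-clustering property described above.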
The PSO algorithm derives its power from the communication of personal best positions among particles. During the initialization phase, each particle randomly and uniformly selects K natural numbers in [1, S] (with replacement) and adds its own index to the list. Once again, K is a tunable parameter of the algorithm; one might use K = 3 as a good starting point. These numbers, with repeats discarded, form a set. They tell the particle which of its peers to communicate with when it discovers lower values of the objective function during the execution of the algorithm. This communication is also performed once to initialize the neighborhood-best values before any motion has taken place. In addition, these sets are continually updated based on the performance of the current configuration; this aspect of the algorithm is known as its adaptive random topology.

Exploration and Exploitation. The degree of exploration in a given PSO configuration refers to the propensity of the particles to travel large distances within the search space and visit different regions. This must be properly balanced with exploitation, which is the tendency of particles to stay within a given region and examine local features to seek out even lower values of the objective function. The PSO specification provides a parameter to this end, and it may have to be manually or automatically tuned for best results on a particular function landscape.

Algorithm. We present a description of the algorithm after initialization below (a code sketch of the per-particle update appears at the end of this section). Each step is followed by a comment that may describe its motivation, practical implementation details, or pitfalls. Table 1 contains descriptions of the particle attributes. The parameters c and w (not to be confused with the symbols used in the SIRB model) used in the algorithm must be selected beforehand. They control the exploration-exploitation balance and the "inertia" of the particles, respectively.

(1) Apply a random permutation to the indices of the particles. Permutations can be selected uniformly at random with the simple Fisher-Yates shuffle.

(2) For i = 0 to S - 1:

(a) Let x be the particle's position, p its personal best position, and l its neighborhood best position. We define G to be the "center of gravity" of the three points x, x + c(p - x), and x + c(l - x). This provides a mean position around which the particle can search for new locations. The larger c is, the farther the particle will venture on average. Previous versions of SPSO had detectable biases along the axes of the coordinate system. This coordinate-free definition elegantly avoids such a problem.

(b) Sample a point x' uniformly from the volume of a hypersphere with center G and radius r = ||G - x||. One might naively attempt this by generating a vector representing displacement from G whose coordinates are U(0, 1) each and scaling it to a length of U(0, r). However, this does not produce a constant probability density. The proper method is to generate a vector whose coordinates are standard normal variates and scale it to a length of r * U(0, 1)^(1/d). Normal variates can be generated from a uniform source with methods such as the Box-Muller transform and the Ziggurat algorithm.

(c) Set v(t + 1) = w * v(t) + (x' - x(t)). The position x' represents where the particle intends to travel next, and the second term of v(t + 1) contains the corresponding displacement from its current position. However, we would also like to retain some of the particle's old velocity to lessen the chance of getting trapped in a local minimum. To this end, we add an inertial term with a "cooldown" factor w that functions analogously to friction.

(d) Set x(t + 1) = x(t) + v(t + 1).
The next position of the particle is its current position plus its updated velocity.

(e) Clamp the position if necessary. If any component of a particle's position falls outside [0, 1], we clamp it to the appropriate endpoint and multiply the corresponding component of its velocity by -0.5. This inelastic "bouncing" contains particles, while still allowing exploitation of the periphery of the search space.

(f) Increment t.

(g) If the value of the objective function at the current position is better than the personal best value from the previous iteration, set the personal best position to the current position and the personal best value to the current value. Else, carry forward the values from the previous iteration.

(h) Communicate with neighbors. When a particle finds a value of the objective function lower than its neighborhood best value (which corresponds to the position l), it informs all of its neighbors by updating their values of these attributes to its current objective function value and position.

(i) Regenerate the topology if the global fitness did not improve from the previous step. If the global fitness of the swarm has not improved at the end of the iteration, then the neighbor sets are updated by regenerating them as during the initialization phase.

(j) Check the stopping condition. A multitude of conditions can be used to decide when to stop. Some common ones are as follows:

(i) The iteration count exceeds a threshold. For a fixed threshold, this method gives more computation time to larger swarms.
(ii) The product of the iteration count and the swarm size (the number of objective function evaluations) exceeds a threshold. In this case, the threshold is a good representation of how much total computation time the swarm utilizes, as the computational resource limit is nearly independent of swarm size.
(iii) The globally optimal objective function value falls below some threshold.
(iv) The globally optimal objective function value has not improved by more than a given tolerance within a given number of recent iterations.
(v) Any combination of the above.

The output of the algorithm is then the position corresponding to the minimum recorded objective function value. The flowchart (Figure 1) summarizes the process of Particle Swarm Optimization.

Discrete Parameters. If some (not necessarily all) of the parameters in the problem are discrete, PSO can be adapted by snapping the appropriate continuous coordinates in the search space to the nearest acceptable values during each iteration. This feature gives PSO great versatility, in that it can solve problems with entirely continuous, mixed, or entirely discrete inputs.

Parallel Computing. Most real-world objective functions are sufficiently complex so that PSO will spend the vast majority of its time evaluating them, as opposed to executing the control logic. In our case, profiling showed that numerically solving an ODE at many points consumed more than 95% of the total CPU time. Therefore, in order to parallelize the algorithm, one needs only to evenly distribute these evaluations among different cores or possibly computers. The remaining logic of the algorithm can be executed in a single control thread that collates the fitness values and updates particle attributes before offloading evaluations for the next iteration.

Randomness. Far too often in our experience, probabilistic numerical simulations are run with little thought to the underlying random number generator (RNG). As Clerc states in [10], the performance of PSO on some problems is highly dependent on the quality of the RNG. The default RNG in one's favorite programming language is likely not up to the task. By default, our code uses the fast, simple, and statistically robust xorshift128+ generator from [12]. It outputs random 64-bit blocks, which are transformed using standard algorithms into the various types of random samples required. The use of a seed allows for exactly reproducible simulations (this is more difficult to ensure if a single generator is being accessed by multiple threads).
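As referenced above, the per-particle update of steps (a)-(e) can be summarized in a short sketch. This is our own Python illustration of an SPSO-2011-style move, assuming numpy and the suggested parameter values; it is not the authors' C implementation.

import numpy as np

def pso_step(x, v, p, l, w=0.721, c=1.193, rng=np.random.default_rng()):
    # x, v: current position and velocity; p, l: personal and
    # neighborhood best positions, all arrays of shape (d,).
    d = x.size
    # (a) Center of gravity of x, x + c(p - x) and x + c(l - x).
    G = (3.0 * x + c * (p - x) + c * (l - x)) / 3.0
    # (b) Uniform sample from the ball of radius ||G - x|| around G:
    # random direction from normal variates, length r * U(0,1)^(1/d).
    r = np.linalg.norm(G - x)
    n = rng.standard_normal(d)
    n /= np.linalg.norm(n)
    x_prime = G + n * r * rng.random() ** (1.0 / d)
    # (c) Inertia plus displacement toward the sampled point.
    v_new = w * v + (x_prime - x)
    # (d) Move.
    x_new = x + v_new
    # (e) Inelastic bounce off the walls of the unit hypercube.
    out = (x_new < 0.0) | (x_new > 1.0)
    x_new = np.clip(x_new, 0.0, 1.0)
    v_new[out] *= -0.5
    return x_new, v_new

A full driver wraps this in the loop of step (2), updates p and l after each objective evaluation, and regenerates the topology whenever the global best stalls.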
ODE Models

Introduction. Suppose we have data and a corresponding system of ordinary differential equations which we believe models the underlying process. The system takes parameters and initial values (some known, some unknown) as input and outputs functions, which can then be compared to the data using some goodness of fit (GOF) metric (e.g., mean squared error). Competing models can be evaluated using frameworks introduced in [3]. If we take the vector of unknown inputs to be a position within the search space, then we can apply PSO in order to minimize the model's error. The choice of error function is an important consideration. Those GOF metrics that are superlinear in the individual deviations will preferentially reduce larger deviations over smaller ones. Similarly, sublinear functions will produce good correspondence on most data points at the expense of some larger errors.

Compartmental Models. In epidemiology, a common tool for studying the spread of an infectious disease is the compartmental model. The prototypical example is the SIR model, in which the population is divided into three compartments: Susceptible, Infected, and Recovering. Differential equations describe how individuals flow between the compartments over time. The behavior of a specific disease is captured by parameters, and it is here that we use model fitting techniques to derive these parameters from data. The simplest form of SIR does not include birth or death (the dynamics of the disease are assumed to be sufficiently rapid), so it holds that the population size N = S + I + R is constant.

More advanced models include more compartments and incorporate factors such as demography. In the field, it is often easiest to measure the number of infected individuals; thus a viable GOF metric may compute the distance between time series infection data and the corresponding points on the I(t) curve for a given set of parameters.

Chubarova [13] implemented GA for fitting ODE models to cholera epidemic data. Although better errors were achieved than in similar studies that used traditional methods, the computational cost required was substantial. This led to our consideration of a technique which could achieve similar accuracy with fewer cycles, namely PSO. One of the models studied in [13] was SIRB, which is a refinement of SIR: it includes birth and death rates for the population and a compartment representing the concentration of bacteria in the water supply. We present equations for SIRB and describe the roles of its parameters in Section 4.

SIRB. The SIRB model is a system of four nonlinear first-order ODEs that capture how quickly individuals progress through the susceptibility-infection-recovery cycle and how bacteria are exchanged between the population and the water supply (one common formulation is sketched in code below). The parameters and their ranges are listed in Table 2 (derived from [2] with some later corrections from the authors). An upper bound of NA means that the value is constant and not optimized by PSO. The bacteria being studied in this case are of the genus Vibrio. They decay from a state of hyperinfectivity (HI) to a non-HI state (only the former is considered infectious in this model) at a constant rate. The half-saturation parameter has units of cells per milliliter, while the rest have units of inverse days.
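Since the SIRB equations themselves do not survive in this text, the sketch below uses one common SIRB formulation from the cholera-modeling literature. The system and all parameter names (mu, beta, kappa, gamma, xi, delta) are illustrative placeholders rather than the paper's exact model; the initial values and the mean-absolute-deviation objective follow the description in the Model Fitting subsection.

import numpy as np
from scipy.integrate import solve_ivp

def sirb_rhs(t, y, mu, beta, kappa, gamma, xi, delta):
    # Generic SIRB right-hand side; an assumed textbook formulation,
    # not necessarily the exact system fitted in the paper.
    S, I, R, B = y
    N = S + I + R
    force = beta * S * B / (kappa + B)     # saturating force of infection
    return [mu * N - force - mu * S,       # susceptibles
            force - (gamma + mu) * I,      # infected
            gamma * I - mu * R,            # recovered
            xi * I - delta * B]            # hyperinfective bacteria

def mad_fitness(theta, t_data, i_data, s0=549_486):
    # theta = (B0, h, mu, beta, kappa, gamma, xi, delta); h scales I(t)
    # to the recorded hospital counts, as described below.
    b0, h, *params = theta
    y0 = [s0, 1.0 / h, 0.0, b0]
    sol = solve_ivp(sirb_rhs, (t_data[0], t_data[-1]), y0,
                    t_eval=t_data, args=tuple(params), rtol=1e-6)
    return np.mean(np.abs(h * sol.y[1] - i_data))

A PSO driver would affinely map each particle from the unit hypercube to the bounds of Table 2 and minimize mad_fitness.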
Model Fitting. We apply PSO to fit the SIRB model to epidemic data from [14] using our own C code and the rkf45 ODE solver in the GNU Scientific Library. Since the data were obtained from hospitals, we must correct for the fact that not every infected person received care. We introduce the parameter h as the proportion of recorded infections. Instead of presuming a certain value, we allow h to be optimized by PSO. The initial values for the system for Department Nord are S(0) = 549,486, I(0) = 1/h, R(0) = 0, and B(0) = B0, where B0 is another parameter to be optimized by PSO. The particles explore an 8-dimensional search space whose generic position vector collects B0, h, and the six SIRB model parameters.

We measure GOF using the mean absolute deviation between the time series infection data {(t_i, y_i)} and the corresponding points of the scaled model curve h * I(t). Figure 2 shows a plot of h * I(t) and the data versus t for an example run of the PSO algorithm. The tail behavior of our model is typical in that it fails to account for cyclical recurrence of the disease. Akman et al. address this phenomenon in [2]. We performed 12 runs of the algorithm with the parameters c = 1.193, w = 0.721, S = 40, and K = 3 (suggested values from [10]). We chose a stopping condition of two million objective function evaluations. We found this condition experimentally by observing that fitness values rarely, if ever, improved beyond this point. Our results are given in Table 3. An individual run only takes around 8 minutes on a quad-core Intel Celeron J1900 @ 1.99 GHz (compiled with icc -O2). The same optimization problem, approached using GA in [13], took between approximately 1 and 6.4 days to run on two hexa-core Intel Xeon X5670s @ 2.93 GHz (see Table 4). Considering the similarity of the fitness values to the evolutionary computing results, PSO appears to be an attractive method for model fitting, especially for the case of expensive objective functions. It uses smaller populations and fewer timesteps due to the fast convergence of the fitness values, which distinguishes it from GA. Code for PSO and the SIRB model with sample data is available at [11].

Concluding Remarks

The use of PSO is a novel approach in ODE modeling, particularly in the field of epidemiology. Where previous algorithms such as GA have taken excessively long to run in the face of complex models, PSO demonstrates its effectiveness by providing comparable results in an impressively small fraction of the time. It has a structural advantage over GA that cannot be attributed to the complexity of the control logic; PSO requires many fewer objective function calls to reach the same level of accuracy. We implemented PSO to optimize an ODE model given cholera data. Although there are other global optimization methods that are also used in fitting parameters of ODE models, such as simulated annealing or scatter search, we restricted our comparison only to the performance of GA, since it was extensively used for the same infectious disease model. We achieved similar model errors in a substantially shorter period of time, as explained above, due to the fundamental differences of operation between PSO and GA. Additionally, we compared the performance of PSO with that of DIRECT [15][16][17][18][19] per the suggestion of a referee. DIRECT evaluated the objective function approximately six thousand times in the same amount of time it took PSO to perform two million evaluations, and it appeared to converge to a worse fitness value.
As with most optimization algorithms, the remaining challenge of PSO is algorithm parameter selection. In the context of our study, this is similar to selecting the right GA parameters, such as population size, number of generations, cross-over rate, and mutation rates, all of which may have an impact on the accuracy and runtime. One possible improvement is to add a metaoptimizer to choose the algorithm parameters without human intervention. A fruitful potential topic of investigation is to extend PSO to approximating unknown functions (such as probability distributions) rather than merely selecting parameters.

Appendix. By the spherical symmetry of the standard normal distribution, a vector n of independent standard normal variates, once normalized, must be uniformly distributed on the surface of the unit d-sphere. Since a d-ball with radius r <= 1 has volume proportional to r^d, we would like to find a random variable R supported on [0, 1] such that

P(||X|| <= r) proportional to r^d  implies  P(R <= r) proportional to r^d. (A.1)

It is easily shown that R = U(0, 1)^(1/d) has a cumulative distribution function of F(r) = r^d. Setting X = R * n/||n|| produces the desired result. The same distribution can also be produced in a simpler fashion by repeatedly generating a vector with components of U(-r, r) each and rejecting it if ||X|| > r. However, the expected number of tries per sample is equal to the ratio of the volume of a d-cube to that of an inscribed d-ball, which grows too rapidly with d.

Table 1: Particle attributes indexed by position in list.
Table 2: Parameter ranges for cholera model.
Table 3: Parameter values from PSO and corresponding fitness.
Table 4: GA run statistics (time to make elites (s), time to make parameter sets (s), total time (s), sample size, stopping threshold, number of elites, max generations, parameter sets created, and fitness).
v3-fos-license
2020-05-21T09:13:37.052Z
2020-05-20T00:00:00.000
218765707
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1128/jcm.00348-20", "pdf_hash": "5dc792097b090b01375337155dede9a16546f60c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41495", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d7f099784ac14eb9171bc48b318cb016d2c3275e", "year": 2020 }
pes2o/s2orc
Evaluation of Serological Tests for Detection of Antibodies against Lumpy Skin Disease Virus

Lumpy skin disease (LSD) is an emerging, transboundary viral pox disease affecting cattle of all ages and breeds. The serological assay for monitoring immunity following vaccination is a virus neutralization test (VNT/OIE) that determines the neutralization index (NI). The first validated enzyme-linked immunosorbent assay (ELISA; IDVet) has become commercially available, facilitating large-scale serosurveillance for LSD. Although the VNT is labor intensive and time consuming, it is still the test recommended by the OIE.

Immunity to CaPV infection is predominantly cell mediated (11), because most progeny viruses remain inside the infected cells. By spreading locally and directly from cell to cell, the virus is out of the reach of circulating antibodies. The extracellular enveloped virions, which are released by budding from infected cells, may infect neighboring cells or escape into the blood and be disseminated throughout the body (11). Transmission of LSDV is achieved mechanically by blood-feeding arthropods such as mosquitoes (Aedes aegypti) (12), stable flies (Stomoxys calcitrans), and ticks (Amblyomma hebraeum and Rhipicephalus appendiculatus) (13, 14). A natural resistance to LSDV infection is known in cattle, and subclinical LSDV infections are common (11). LSDV causes significant economic losses, mainly by inducing severely reduced milk production, weight loss, abortion, infertility, and hide damage (15). For successful LSD control, vaccination of all susceptible animals is considered to be the main pillar, supported by other control measures such as stamping out, animal movement restrictions, and vector control (16). In vaccinated animals, antibodies appear 10 days postvaccination and reach a peak 30 days later. A local response to the vaccine usually correlates with good antibody production (4). As with infection with virulent wild-type virus, some bovines are refractory to LSD vaccination, failing to develop a local reaction or detectable levels of antibodies (4).

The serological assay for monitoring immunity following vaccination recommended in the OIE terrestrial manual is a virus neutralization test (VNT/OIE) that determines the neutralization index (NI). The first validated enzyme-linked immunosorbent assay (ELISA, manufactured by IDVet) has become commercially available, facilitating large-scale serosurveillance for LSD (11). This ELISA is able to detect antibodies against capripoxviruses (LSDV, SPV, and GPV) from approximately 20 days until 7 months postvaccination. In this study, we modified the virus neutralization test by using Madin-Darby bovine kidney (MDBK) cells and compared the performance of the method with the recommended VNT/OIE test and the available ELISA. For this purpose, we used blood sera received for a surveillance program for lumpy skin disease in 2018. The cattle population in Croatia was vaccinated with two live vaccines, Lumpyvax (MSD) and Lumpy skin disease vaccine for cattle (Onderstepoort Biological Products), in 2016 and 2017 during the preventive vaccination campaign approved by the European Commission on 1 January 2016. At the beginning of 2018, the preventive vaccination campaign was stopped and a surveillance program for LSD was started, with the aim of monitoring immunity in vaccinated cattle (17).

MATERIALS AND METHODS

Animal ethics.
Blood samples were taken from privately owned bovines within the frame of the monitoring for LSD ordered by the Ministry of Agriculture, the Veterinary and Food Safety Directorate. The sampling was performed in line with the principles of good veterinary practice and in full respect of animal welfare. Since this study does not include any animal experiments, the Board of Ethics (Croatian Veterinary Institute) decided that no formal approval was required and that the study is in accordance with national legislation (18).

Blood samples. Bovine blood samples (n = 291) received for the LSD surveillance program in 2018 were used for the evaluation of the VNT on MDBK cells (VNT/MDBK) and its comparison with ELISA (IDVet, France). Of the 291 samples, 80 were tested by VNT/OIE and used for comparison to VNT/MDBK. Blood samples were centrifuged at 2,000 rpm (939 × g) for 10 min. Serum was transferred to 2-ml tubes (Eppendorf, Germany) and stored at −20°C until analysis. Serum samples were inactivated at 56°C for 30 min for the VNTs.

Virus. LSDV (Neethling vaccine strain) isolated from a skin nodule on a vaccinated animal as previously described (19) was used for the VNT tests. In brief, the sample of nodule skin obtained by biopsy was homogenized in Dulbecco's modified Eagle's medium (DMEM) with sterile sand and centrifuged at 3,000 rpm (2,100 × g). The supernatant was filtered through 0.45-μm pores (Merck Millipore, USA). A suspension of 5 × 10⁶ MDBK cells placed in T25 flasks (Nunc, Thermo Fisher Scientific, USA) was infected with 1 ml of supernatant. Flasks containing infected cells were incubated for 1 h at 37°C. After the incubation period, a medium composed of 89% DMEM high glucose (Thermo Fisher Scientific, USA) supplemented with 10% fetal bovine serum (FBS) (Thermo Fisher Scientific, USA) plus 1% penicillin-streptomycin-amphotericin B (Sigma-Aldrich, USA) (basal medium) was added and the flasks were incubated at 37°C. The cells were examined daily for the presence of a viral cytopathic effect (CPE). After 72 h, the flasks were frozen and thawed, and the content was centrifuged at 3,000 rpm (2,100 × g) for 10 min. The supernatant (5 ml) containing virus was used to infect 8 × 10⁶ cells in suspension in T75 flasks (Nunc, Thermo Fisher Scientific, USA) according to the above-described procedure. The virus was passaged eleven times in total. Aliquots of supernatant containing virus were titrated in 96-well plates according to standard procedures.

Virus titration. Virus stock titration was performed in 96-microwell plates (Nunc, Thermo Fisher Scientific, USA). The virus was titrated using 10-fold dilutions (10⁻¹ to 10⁻¹⁰). Two hundred seventy microliters of DMEM high glucose was added to the wells in six columns. To six wells in the first row, 30 μl of virus stock was added. After thorough mixing with an automatic pipette, 30 μl of the first dilution was transferred to the next row. The procedure was repeated until a final dilution of 10⁻¹⁰ was reached. One hundred microliters of the prepared dilutions was transferred to the corresponding wells of a new 96-microwell plate (Nunc, Thermo Fisher Scientific, USA). The complete medium for MDBK cell cultivation was removed from the T75 flask (Nunc, Thermo Fisher Scientific, USA). The cells were trypsinized, and 100 μl of the cell suspension (10⁵/ml) was added to each well. Four wells were used for the cell control and four wells for the virus control. The plate was incubated at 37°C with 5% CO₂ for 72 h and inspected daily for the presence of a cytopathic effect (CPE).
Cell control wells had to demonstrate the absence of CPE, and a characteristic CPE in the form of lumps on the cell layer had to be present in the virus control wells. The virus titer was calculated according to the Spearman-Karber method (sketched in code later in this section). The results are expressed as decimal logarithms (D₅₀).

Positive- and negative-control sera. As positive- and negative-control sera, we used national positive and negative controls. National negative-control serum was prepared using serum samples obtained from the animals before vaccination against LSDV during our previous study (15). National positive-control serum was prepared using serum samples obtained from the animals 4 weeks after vaccination against LSDV. Negative- and positive-control serum samples were tested with ELISA and VNT/MDBK.

ELISA. Serum samples (n = 291) were tested using the ID Screen Capripox Double Antigen ELISA (IDVet) for the detection of antibodies against capripox viruses according to the manufacturer's instructions.

Virus neutralization test. Serum samples (n = 80) were tested using the VNT procedure prescribed by the OIE terrestrial manual (2017), which determines the neutralization index, as the preferred method. The virus strain was titrated against a constant dilution of test serum. In that way, a larger volume of test serum is required, but the difficulty of ensuring an exact dose of 100 TCID₅₀ (50% tissue culture infective dose) is avoided. The test was performed in 96-well plates (Nunc, Thermo Fisher Scientific, USA) using lamb testis (LTe) cells according to the available OIE procedure.

Virus neutralization test on MDBK cells. Bovine serum samples (n = 291) were analyzed using a modified VNT procedure to determine the LSDV antibody titer. The test was performed in 96-microwell plates (Nunc, Thermo Fisher Scientific, USA). Each test contains a control plate, used for virus back titration and for positive and negative serum titration, and a plate for testing serum samples. For the virus back titrations, 150 μl of DMEM high glucose (Thermo Fisher Scientific, USA) was added to the appropriate wells in columns 1 to 4 of the control plate. One hundred microliters of DMEM high glucose (Thermo Fisher Scientific, USA) was added to the appropriate wells in columns 5 to 12. Fifty microliters of the working virus was added to four wells in row H (columns 1 to 4), and 4-fold serial dilutions (up to 1/65,536) were made. Fifty microliters of the positive serum sample was added to four wells in row H (columns 5 to 8), and 50 μl of the negative serum sample was added to four wells in row H (columns 9 to 12). Threefold dilutions were made (one-third up to 1/6,561). The serum samples to be tested were examined on new 96-microwell plates. One hundred microliters of DMEM high glucose (Thermo Fisher Scientific, USA) was added to each well of the 96-microwell plate for serum sample dilution. Fifty microliters of each serum sample under examination was added to duplicate wells in row H, and 3-fold dilutions were made (up to 1/6,561). It is possible to test 6 samples per plate. After all dilutions were made, 50 μl of the working virus was added to all wells containing diluted positive, negative, and tested serum samples and to the wells for infection control. The plates were incubated at 37°C with 5% CO₂ for 1 h. Following the incubation period, 50 μl of MDBK cells (5 × 10⁵/ml) was added, and the plates were incubated at 37°C with 5% CO₂ for 72 h. The plates were inspected for the presence of a cytopathic effect (CPE).
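As referenced above, the Spearman-Karber endpoint estimate can be computed as in the following minimal sketch, assuming equal log10 dilution steps and a 100% response at the most concentrated dilution. The function and the example figures are illustrative only, not data from this study.

def spearman_karber_log10_endpoint(cpe_fractions, log10_top, step):
    # cpe_fractions: fraction of replicate wells showing CPE at each
    # dilution, ordered from most concentrated to most dilute.
    # log10_top: log10 of the most concentrated dilution (e.g., -1).
    # step: positive log10 distance between successive dilutions.
    return log10_top + 0.5 * step - step * sum(cpe_fractions)

# Hypothetical plate: 10-fold dilutions from 10^-1, CPE fractions
# 1.0, 1.0, 0.75, 0.25, 0.0 -> endpoint dilution 10^-3.5, i.e. a
# titer of 10^3.5 infectious doses per inoculated volume.
print(spearman_karber_log10_endpoint([1.0, 1.0, 0.75, 0.25, 0.0], -1.0, 1.0))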
The absence of CPE in wells at a dilution of one-third was considered positive for the presence of antibodies. The titer was calculated according to the Spearman-Karber method, and the results are expressed as decimal logarithms (D₅₀).

RESULTS

Virus neutralization test. Of 80 bovine serum samples, 59 (73.75%) tested positive and 21 (26.25%) tested negative.

Comparison of the performance of the ELISA, VNT/MDBK, and VNT/OIE. The performances of VNT/MDBK and ELISA were compared to that of VNT/OIE on 80 samples in total. The highest number of positives was detected by ELISA (n = 64), followed by VNT/OIE (n = 59) and VNT/MDBK (n = 56). The compatibility of the results obtained by VNT/MDBK and VNT/OIE resulted in a kappa index of 0.90 with an overall proportion agreement of 0.96. No false positives were detected with VNT/MDBK. Three samples had an NI of 1.5 in VNT/OIE, and in VNT/MDBK their neutralization activity was detected in 50% of the wells at a dilution of one-third. Since that value was below the chosen cutoff, they were considered negative. Agreement between VNT/MDBK and VNT/OIE was achieved in 56 positive and 21 negative samples. The sensitivity of the VNT/MDBK compared to that of VNT/OIE was 95%, and the specificity was 100% (Table 2). The compatibility of the results obtained by ELISA and VNT/MDBK was compared on 291 samples in total and resulted in a kappa index of 0.834 with an overall proportion agreement of 0.955. Agreement between ELISA and VNT/MDBK was achieved in 238 positive and 40 negative samples (Table 2). In total, 12 positives were detected with ELISA that were VNT/MDBK negative, and one sample that was ELISA negative was VNT/MDBK positive. The sensitivity of VNT/MDBK compared to that of ELISA was 95%, while the specificity was 97.56% (Table 2).

DISCUSSION

We described a modified VNT using MDBK cells and compared it with the current gold standard VNT/OIE and a commercially available ELISA. Although VNT is labor intensive and time consuming and requires biosafety level 3 (BSL3) containment in disease-free countries (16), it is still the recommended test in the OIE terrestrial manual. Our aim was to modify the VNT that measures LSDV antibody titers and to evaluate its performance in comparison with VNT/OIE and the commercially available ELISA. The aim was also to adopt a VNT method that overcomes certain difficulties related to the use of LTe cells in VNT/OIE. Specifically, the use of LTe cells makes the VNT/OIE more time consuming than the use of MDBK cells in the modified VNT. It has already been described that OA3Ts cells exhibit a shallow exponential growth phase until 96 h postseeding (19), resulting in longer expansion times. Furthermore, besides differences in growth kinetics, differences in the morphology of cells with low and high passage numbers were described. Cells with lower passage numbers (p17 to p27) were observed in greater numbers and had a uniform cell morphology, while cell viability at each passage level was determined to be consistently greater than 97% (19). Under our conditions, similar morphological changes and expansion characteristics were observed. The VNT/OIE test procedure is time consuming, and the incubation period takes up to 9 days. Under our conditions, in VNT/OIE, the first signs of the CPE occurred 72 h postinfection in wells infected with high levels of virus (log₁₀ 4 and log₁₀ 5). The specific changes were not clearly evident in wells with lower levels of virus (log₁₀ 1.5 to log₁₀ 3.5) at 72 h postinfection (p.i.); the specific CPE became evident at 96 h p.i.
In VNT/MDBK, the first signs of CPE were evident 48 h p.i., and after 72 h the specific changes were clearly evident, as also described by Samojlović et al. (20). Furthermore, the test results 72 h p.i. are in line with the results observed 96 h p.i. (data not shown). The CPE caused by LSDV in MDBK cells was clearly different from the LSDV CPE in LTe cells. The effects of LSDV on MDBK cells manifested as cell proliferation and accumulation in the form of lumps on the cell monolayer, as already demonstrated (20). Two previous studies (16, 20) also described VNTs for LSDV antibody titer estimation. The methods were technically similar to the VNT/MDBK used here, but the VNT/MDBK is the only one whose results were compared to the gold standard VNT/OIE. Differences from the VNT described by Samojlović et al. (20) concern several factors, such as the virus used, the dilution fold, and a slight difference in cell number. Samojlović et al. (20) used a wild-type virus isolated during an LSD outbreak in Serbia, while we used a vaccine strain isolated from a skin nodule on a vaccinated animal (19), which was passaged eleven times in MDBK cells with clear cytopathic effect. The use of an isolated vaccine strain for VNT proved to be very useful when there are no cases of infection with the wild type in the country. For sample dilutions (2-fold), Samojlović et al. (20) used Eagle MEM with HEPES buffer and 10% fetal bovine serum (FBS). The use of DMEM high glucose for sample dilution (3-fold), as described here, proved satisfactory and lowered the amount of FBS used and thus the overall costs of the test. Additional differences are related to the cell number used and the use of 8-well replicates instead of the two-well replicates described here. The chosen cutoff dilution also differed among the VNT described by Samojlović et al. (20), the VNT used by Milovanović et al. (16), in which it was 1/10, and our method, in which it was 1/3. The dilution of 1/10 as the cutoff value might cause low-positive samples to be missed. The highest number of positives detected by ELISA was not unexpected and was already reported. A mismatch of detected positive and negative cattle between ELISA and VNT was seen in 26 cases in the study by Milovanović et al. (16). The same nonconformity of serological tests was reported by Babiuk et al. (21), which was explained by the detection of different anti-capripoxvirus antibodies with the different tests used. The strong correlation between VNT/OIE and VNT/MDBK indicates the suitability of VNT/MDBK for the detection of LSDV-specific neutralizing antibodies. We are aware that the results would be more robust if the comparison had included a larger sample size. Three samples tested by VNT/OIE expressed values at the limit of positivity (NI, 1.5). The neutralization activity of those samples was also detected by VNT/MDBK in 50% of the wells. Since the detected neutralization activity occurred below the chosen cutoff value, those samples were considered negative. In addition, the antibody level following vaccination can be low or below the limits of detection of serological methods. As already stated by Samojlović et al. (20), the virus neutralization test is considered to be the most specific serological method, but it is not sensitive enough to detect antibodies in every animal that has been in contact with the virus. Furthermore, when assessing the results of serological tests for LSD diagnostics, some specifics of immune responses to LSDV infection must be kept in mind.
The immunity is predominantly cell-mediated, and vaccinated animals or those showing mild disease may develop only low levels of neutralizing antibodies, which are often below the detection limits of currently available serological tests (11). Vaccinated animals do not mount an antibody response following virulent challenge, supporting the role of cell-mediated immunity in protection from capripoxvirus disease (16). Despite these immunological specificities, serological assays are recommended as suitable methods to investigate relatively recent outbreaks and can be used to demonstrate the disease-free status of a country, provided that testing is carried out at regular intervals. In addition to VNT as the gold standard, immunoperoxidase monolayer assays (IPMAs) and indirect fluorescent antibody tests (IFATs) can be used for serological surveys. Although VNT is labor- and time-consuming, it can be modified slightly to increase the number of samples tested on one plate and to reduce the time needed to read the results (22). The current knowledge on the time span of antibody detection after vaccination is heterogeneous (16). A significant increase of capripoxvirus-specific antibody titers has been described from day 21 up to day 42 after vaccination, and antibodies remain detectable for approximately 7 months. An immunological study conducted in cattle after vaccination with LSDV showed that detection of specific antibodies is limited to 40 weeks post-vaccination (23), while Milovanović et al. (16) reported the detection of antibodies up to 46 to 47 weeks after vaccination. The maximum duration of protection has been reported to be 22 months, and the immune status of a previously infected or vaccinated animal cannot be related directly to serum levels of neutralizing antibodies. Antibodies against CaPV can usually be detected for 3 to 6 months after infection, but further studies are required to investigate the long-term persistence of CaPV antibodies post-infection (11). Conclusions. Serological assays are recommended as suitable methods to investigate relatively recent outbreaks and can be used to demonstrate the disease-free status of a country, provided that testing is carried out at regular intervals. Results obtained with VNT/MDBK demonstrated a strong correlation to those of VNT/OIE and ELISA, which indicates its suitability for the detection of LSDV-neutralizing antibodies.
The University of California-Los Angeles (UCLA) shoulder scale: translation, reliability and validation of a Thai version of UCLA shoulder scale in rotator cuff tear patients Background The UCLA Shoulder Scale is a useful evaluation tool to assess the functional outcome of the shoulder after treatment. It has been translated into several languages. The objectives of this study were to translate the UCLA Shoulder Scale into Thai and to validate the translated version in patients with rotator cuff tear. Methods This study consists of 2 phases: 1) development of the Thai version of the UCLA Shoulder Scale and 2) validation of the translated version. The UCLA Shoulder Scale was translated into Thai according to the international guideline. Seventy-eight subjects with a mean age of 71 ± 11.5 years took part in the study. All had shoulder pain and rotator cuff tear confirmed by MRI between 2019 and 2020. Four patients were excluded due to incomplete questionnaires. The data from 21 patients whose shoulder symptoms had not changed within 14 days were analyzed for UCLA Shoulder Scale test-retest reliability using the intraclass correlation (ICC), the Standard Error of Measurement (SEM) and the Minimal Detectable Change (MDC). The Thai version of the UCLA Shoulder Scale was compared to the validated Thai versions of the American Shoulder and Elbow Surgeons (ASES), Western Ontario Rotator Cuff (WORC) and shortened version of the Disability of the Arm, Shoulder and Hand (QuickDASH) shoulder scores. Results The Thai version of the UCLA Shoulder Scale was developed following the guideline. Moderate to strong correlations were found using Spearman's correlation coefficient between pain, function and the total score of the Thai version of the UCLA Shoulder Scale. The reliability of the total UCLA Shoulder Scale was excellent (ICC = 0.99, 95% CI 0.97-1.00), whereas agreement assessed with SEM and MDC (0.18 and 0.50, respectively) demonstrated a positive rating. The validity analysis of the total UCLA Shoulder Scale (Thai version) showed moderate to strong correlations with the total ASES, total WORC and QuickDASH (Thai versions). The Thai version of the UCLA Shoulder Scale showed no floor or ceiling effects. Conclusion The Thai version of the UCLA Shoulder Scale is a reliable and valid tool for assessing the function and disability of the shoulder in Thai patients who have rotator cuff tear. Background Functional improvement is the most important goal after rotator cuff tear treatment. Pain and restricted range of motion can lead to patients' disability. Since reliability and accuracy are important for measuring tools, developing an appropriate one is needed to evaluate patients receiving shoulder treatment. During the past 20 years, various scoring systems have been used in clinical evaluation and research to represent treatment outcomes, such as the American Shoulder and Elbow Surgeons (ASES) shoulder score, the Western Ontario Rotator Cuff (WORC), the Disability of the Arm, Shoulder and Hand questionnaire (DASH), the shortened version of DASH (QuickDASH), and especially the University of California-Los Angeles (UCLA) Shoulder Scale [1][2][3][4]. A few Thai versions of shoulder scoring systems have been used to evaluate shoulder function [5][6][7]. The UCLA Shoulder Score, originally published in 1981 in Clinical Orthopaedics and Related Research, was initially intended to assess clinical outcomes after total shoulder arthroplasty [8].
This assessment tool has since been thoroughly studied and widely used in research. As the UCLA Shoulder Scale is now mainly used to evaluate outcomes in patients after surgery, it has been translated into many different languages such as Portuguese, Italian, Turkish and Polish [9][10][11][12]. However, to the best of our knowledge, it had not been translated into Thai following the international guidelines, the Linguistic Validation Manual for Patient-Reported Outcomes Instruments [13]. The objectives of this study were to develop the Thai version of the UCLA Shoulder Scale from the English version and to evaluate its psychometric properties, as the Thai version of the UCLA Shoulder Scale could be useful for clinical and research purposes concerning the Thai population. Methods Our study was divided into two phases (Fig. 1). The first was to develop the Thai version of the UCLA Shoulder Scale from the standard English version [14]. Permission for the translation was granted by the publisher. The second was to validate the Thai version of the UCLA Shoulder Scale and compare its psychometric properties with common shoulder scoring systems, including the Thai versions of the ASES, WORC and QuickDASH. The UCLA Shoulder Scale contains two parts of questions: physician and patient sections. The physician section is based on physical examination. It consists of two single-item sub-scales, "active forward flexion" and "strength of forward flexion" (each with a maximum of five points and completed by physicians). In contrast, the patient self-completed section of the UCLA Shoulder Scale consists of three single-item sub-scales: "pain" (maximum of ten points), "satisfaction" (maximum of five points), and "function" (maximum of ten points), all completed by patients. Scores range from 0 to 35, with a score of 0 indicating worst shoulder function and 35 indicating best shoulder function. Phase 1: Development of the Thai version of UCLA Shoulder Scale The patient self-completed section of the UCLA Shoulder Scale was translated and adapted in accordance with the guidelines [13]. The process comprised six stages. 5. Consistency assessment - The team assessed the consistency of the original English version and the Thai version. Consistency was assessed via a 6-level scale from 0 to 5 (0 = inadequate, 5 = fully adequate). When questions were marked level 3 or lower, they were discussed by the team to make appropriate changes to the translation. 6. Cognitive debriefing - The clarity, understandability and acceptability of the Thai version of the UCLA Shoulder Scale were tested on five Thai patients who had had rotator cuff tear for a minimum period of 3 months. This group of patients filled out a comprehension assessment score, which was used to assess whether the given questions were fully comprehensible. Comprehension was assessed via a four-level scale (0 = totally incomprehensible, 3 = fully comprehensible). In cases where questions were considered incomprehensible, the assessing patients were asked to give the reasons for the lack of understanding. The group of authors (ST, TW and NS) analyzed and revised the questions to create the final Thai version of the UCLA Shoulder Scale. Phase 2: Validation of the UCLA Shoulder Scale (Thai version): tests for psychometric properties Study participants were enrolled from the department of orthopaedics of a single university-based hospital.
Seventy-eight subjects who were diagnosed with rotator cuff tear according to MRI and had failed conservative treatment between 2019 and 2020 were eligible for the study. All patients had at least 6 months of consistent shoulder symptoms. They were above 18 years old, native Thai speakers, and signed the informed consent to participate in the study. Excluded were patients who had previous shoulder fractures or surgeries, osteoarthritis, shoulder dislocations, scapular fractures, or clavicle or upper limb fractures; patients who had rheumatoid arthritis or neurological conditions; and patients who could not fully understand or sign the informed consent. Four patients were excluded due to incomplete questionnaires. Each patient was evaluated pre-operatively with both the Thai version of the UCLA Shoulder Scale and the WORC score. The study included only patients who had rotator cuff tear because they present both pain and disability, which require accurate evaluation tools. Test-retest reliability was assessed in the 21 patients who completed the UCLA Shoulder Scale twice. The interval between test and retest was 2 weeks. Internal consistency The internal consistency of multi-item sub-scales was not assessed for the UCLA pain and function sub-scales because these two sub-scales each consist of a single item. However, correlations between the measurements in the UCLA Shoulder Scale were evaluated using Spearman's rank correlation coefficient (SCC). Test-retest reliability and agreement The intraclass correlation (ICC) was used to assess the reliability of the UCLA Shoulder Scale. This was calculated from the group of 21 patients who had completed the UCLA Shoulder Scale twice. According to guidelines from the literature, we assumed a positive rating for reliability when the ICC is ≥0.70 [13]. Agreement is the property related to the absolute measurement error of the instrument when two or more measurements are repeated under the same conditions. The Standard Error of Measurement (SEM) and the Minimal Detectable Change (MDC) were calculated to assess agreement [15]. SEM was calculated using the formula SEM = SD × √(1 − R), where SD represents the standard deviation of the sample and R represents the reliability parameter (ICC). MDC was calculated using the formula MDC = SEM × 1.96 × √2, where 1.96 derives from the 95% CI of no change and √2 accounts for the two measurements assessing the change. We gave a positive rating for agreement if the MDC was smaller than the Minimal Important Change (MIC). Additionally, we defined MIC = 1, which is the smallest scale difference in this scoring system [16]. The calculation was done in the group of 21 patients who had completed the UCLA Shoulder Scale twice. Content validity Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. The Index of Item-Objective Congruence (IOC) was used to evaluate the content validity of the Thai version of the UCLA Shoulder Scale. The IOC of each item was calculated as the sum of the scores from each orthopaedic surgeon (ST, TW, KC, PT and SV) divided by the number of surgeons. A floor or ceiling effect was considered to be present if more than 15% of the respondents achieved the lowest or highest possible score, respectively [15]. Floor and ceiling effects were calculated from the group of 74 patients for the UCLA Shoulder Scale, ASES shoulder score, WORC and QuickDASH.
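The SEM and MDC formulas above reduce to two lines of arithmetic, illustrated in the minimal sketch below. The ICC of 0.99 and SEM of 0.18 are taken from the results; the sample SD of 1.8 is back-calculated from them (SD = SEM / √(1 − ICC)) and is therefore an assumption for illustration, not a value reported in the paper.

```python
import math

def sem(sd, icc):
    """Standard Error of Measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

def mdc(sem_value, z=1.96):
    """Minimal Detectable Change: MDC = SEM * z * sqrt(2) (two measurements)."""
    return sem_value * z * math.sqrt(2)

s = sem(sd=1.8, icc=0.99)                      # sd = 1.8 is back-calculated, not reported
print(f"SEM = {s:.2f}, MDC = {mdc(s):.2f}")    # SEM = 0.18, MDC = 0.50 < MIC = 1
```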
Construct validity Construct validity was evaluated to ensure that scores on the Thai version of the UCLA Shoulder Scale are consistent with the concepts being measured [15]. To evaluate construct validity, we analyzed the correlation between the Thai version of the UCLA Shoulder Scale and the Thai versions of the ASES Shoulder Scale, WORC and QuickDASH [7, 17, 18]. Construct validity of the Thai version of the UCLA Shoulder Scale was evaluated with Spearman's correlation coefficient (SCC). Correlation coefficients of r < 0.30 = low, 0.30 < r < 0.70 = moderate, and r > 0.70 = high were used to assess validity [19]. Statistical analysis The level of statistical significance was set a priori at α < 0.05. The Shapiro-Wilk test showed that the results had a non-normal distribution. Spearman's correlation was used to evaluate the correlation between the measurements in the UCLA Shoulder Scale. Test-retest reliability was analyzed using the intraclass correlation (ICC), two-way random-effects model [20]. Based on a systematic review [21], an appropriate sample size is at least 15 subjects for our test-retest reliability analysis (a sample size of about 5 times the number of items). Internal consistency was measured using Cronbach's α. The sample size was based on the general recommendation of Altman of at least 50 subjects in a method comparison study [22]. Statistical analyses were performed using SPSS 11.0 for Windows (SPSS, Chicago, IL, USA). A p-value < 0.05 was considered statistically significant. Phase 1: Development of the Thai version of UCLA Shoulder Scale According to the guidelines [13], the translation and adaptation to develop the Thai version of the UCLA Shoulder Scale was carried out in six stages. Among the stage-IV adaptations was the wording used for medication for mild to moderate pain in Thailand (Table 1). - Stage V: The consistency of the Thai version was assessed by the team of five orthopaedic surgeons, who currently treat patients with shoulder pain. The assessment was made via a 6-level scale: when the consistency level was marked 3 or lower, the words used in the questionnaire were corrected according to the agreement of the authors and orthopaedic surgeons. - Stage VI: The Thai version of the UCLA Shoulder Scale was tested with a group of five Thai patients who were diagnosed with rotator cuff tear and had been suffering from it for at least 6 months. The group was composed of three women and two men aged from 62 to 75 years old. After analysis of the answers received from the group of five patients, an average comprehension assessment score of 2.41 was obtained. Phase 2: Validation of the UCLA Shoulder Scale (Thai version) Seventy-four participants completed the Thai version of the UCLA Shoulder Scale as well as the Thai versions of the ASES and QuickDASH questionnaires. Their demographic data are shown in Table 2. The correlations between the measurements in the UCLA Shoulder Scale were moderate to strong (0.43-0.76) (Table 3). The UCLA Shoulder Scale was compared between test 1 and test 2 (re-test) in a group of 21 patients. There was a significant difference in the total UCLA Shoulder Scale score (p < 0.05). However, the difference was small in relation to the initial result (−0.62). The ICC for the total UCLA Shoulder Scale was 0.99, and the ICCs of the domains ranged from 0.93 upward (Table 4).
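As a concrete illustration of the two statistics named above (the two-way random-effects ICC and Spearman's correlation), here is a minimal Python sketch using the pingouin and scipy packages, one of several possible alternatives to the SPSS workflow used in the study. All data values and column names are hypothetical and merely mirror the long-format layout such an analysis expects.

```python
import pandas as pd
import pingouin as pg
from scipy.stats import spearmanr

# Hypothetical test-retest data (the real study used n = 21 and a 2-week interval).
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "session": ["test", "retest"] * 6,
    "ucla_total": [20, 21, 28, 28, 14, 15, 31, 30, 24, 25, 18, 18],
})

# Two-way random-effects model; ICC2 is the single-rater absolute-agreement estimate.
icc = pg.intraclass_corr(data=df, targets="patient", raters="session",
                         ratings="ucla_total")
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])

# Spearman's rank correlation between UCLA totals and a comparison score (invented values).
rho, p = spearmanr([20, 28, 14, 31, 24, 18], [75, 88, 50, 95, 80, 62])
print(f"rho = {rho:.2f}, p = {p:.3f}")  # < 0.30 low, 0.30-0.70 moderate, > 0.70 high
```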
The Index of Item-Objective Congruence (IOC) was used to evaluate the content validity of the patient self-completed section of the Thai version of the UCLA Shoulder Scale (Table 5). Floor and ceiling effects were not present (<15%) in the group of 74 patients for the UCLA, ASES, WORC and QuickDASH (Table 6). The construct validity of the Thai version of the UCLA Shoulder Scale was assessed using Spearman's correlation coefficient. The total UCLA Shoulder Scale correlated moderately with the total ASES, WORC and QuickDASH scores (p < 0.01). There were moderate correlations between the UCLA domain of pain, the ASES domain of pain, the WORC domain of symptoms and QuickDASH. Also, there were moderate correlations between the UCLA domain of function, the ASES domain of function, the WORC domain of work and QuickDASH (Table 7). Figures 2, 3 and 4 show scatter plots of total UCLA vs total ASES, total WORC and QuickDASH, respectively. Discussion The study demonstrated the process of translation and validation of the Thai-language version of the UCLA shoulder score. The results show good validity and reliability of the translated version. Furthermore, in terms of construct and convergent validity, there were moderate to strong correlations between the items of the UCLA Shoulder Scale. The scale also had a moderate correlation with the ASES regarding the pain vs pain dimension (SCC = −0.536, p < 0.01). These results are comparable to previous studies [23, 24], which showed significant correlations of the original UCLA Shoulder Scale with other shoulder scoring systems, including the ASES, DASH, Oxford Shoulder Score, and Constant Shoulder Score. A reliable and comprehensible questionnaire is needed for effective evaluation of patients' functional status and treatment outcomes. The UCLA Shoulder Scale itself raises some concerns regarding double-barreled items and the points allocated to each item, which might make it difficult for respondents to pick an appropriate answer for certain items. Despite the fact that the UCLA Shoulder Scale was developed at a time when modern psychometric testing had not yet been established, it is still widely used for functional evaluation of the shoulder in clinical practice and research. There are studies on the psychometric properties of other language versions of the UCLA Shoulder Scale. Their results show reliability and validity comparable to the original English version [9, 10, 12]. The UCLA Shoulder Scale is accepted as a useful tool because the questionnaire is relatively quick and easy for respondents to complete compared to other tools. The UCLA Shoulder Scale has a patient self-completed section, which requires patients to understand each question fully. For this reason, the questionnaire for patients is usually translated into the local language. However, to standardize the questionnaire, the translation process comprises multiple steps. This study focused on the process of literal translation of the UCLA Shoulder Scale into Thai. Both the translation of the questionnaire into Thai and the backward translation were performed in compliance with international guidelines [13]. We assigned orthopaedic surgeons to do the translation in stage I of development because they understand and are familiar with the language used in physician-patient conversations. To minimize translation errors, a professional English-language translator, orthopaedic surgeons and patients suffering from rotator cuff tear were involved in the translation process.
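For completeness, the two simple checks reported above (floor/ceiling effects and the IOC) can be written as one-liners. The sketch below is illustrative only: the score lists are invented, the 0-35 UCLA range comes from the text, and the -1/0/+1 expert-rating convention assumed for the IOC is a common practice rather than something stated in this paper.

```python
def floor_ceiling(scores, min_score=0, max_score=35, threshold=0.15):
    """Flag a floor/ceiling effect when >15% of respondents hit an extreme score."""
    n = len(scores)
    floor = sum(s == min_score for s in scores) / n
    ceiling = sum(s == max_score for s in scores) / n
    return {"floor": floor, "ceiling": ceiling,
            "floor_effect": floor > threshold, "ceiling_effect": ceiling > threshold}

def ioc(expert_ratings):
    """Index of Item-Objective Congruence for one item: sum of the experts'
    ratings (conventionally -1, 0 or +1) divided by the number of experts."""
    return sum(expert_ratings) / len(expert_ratings)

print(floor_ceiling([12, 20, 35, 8, 27]))  # invented totals; 1 of 5 at 35 flags a ceiling
print(ioc([1, 1, 1, 0, 1]))                # five hypothetical surgeon ratings -> 0.8
```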
In the process of questionnaire development, three items were changed during stage IV. These issues were identified and resolved in the course of the team's discussions. In the process of content validation, we examined floor and ceiling effects and the skewness of the distribution. In previous studies translating other shoulder scoring systems into Thai, good content validity with negligible floor and ceiling effects was reported [5, 6, 25]. The Thai version of the UCLA Shoulder Scale in our study likewise showed good content validity, with no floor or ceiling effects, as demonstrated in Table 4. In this study, the floor and ceiling effects ranged between 0 and 1.4% and skewness ranged between −0.295 and 0.309. The strength of this study is that it presents the first Thai-language version of the UCLA Shoulder Scale translated in compliance with the international guidelines [13]. The study also provided evidence of the accuracy and psychometric properties of the Thai version of the UCLA Shoulder Scale. Nevertheless, our study had some limitations. First, a relatively small number of participants completed the test-retest reliability assessment; a larger sample size could increase the reliability and accuracy of the study results. Second, the participants were enrolled at a single university-based institute, which might not represent the entire Thai population. Despite the mentioned limitations, the study showed that the Thai version of the UCLA Shoulder Scale had fair to good correlation with other scoring systems. This correlation level is similar to that of the English version of the UCLA Shoulder Scale [16, 23, 26]. Conclusion The Thai-language version of the UCLA Shoulder Score constitutes a valuable tool to evaluate shoulder function in patients who have rotator cuff tear. The study demonstrated good validity and reliability of the Thai version of the UCLA Shoulder Scale. This shoulder functional scoring system could be a useful evaluation tool for further clinical and research use because of its clarity and comprehensibility for Thai patients.
Neuroinflammation in Amyotrophic Lateral Sclerosis and Frontotemporal Dementia and the Interest of Induced Pluripotent Stem Cells to Study Immune Cells Interactions With Neurons Inflammation is a shared hallmark between amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). For a long time, studies were conducted on post-mortem patient tissues, and neuroinflammation was thought to be only a bystander result of the disease, with the immune system reacting to dying neurons. In the last two decades, thanks to improving technologies, the identification of causal genes and the development of new tools and models, the involvement of inflammation has emerged as a potential driver of the diseases and evolved into a new area of intense research. In this review, we present the current knowledge about neuroinflammation in ALS, ALS-FTD, and FTD patients and animal models, and we discuss reasons for the failures of therapeutic trials with immunomodulatory drugs. We then present the induced pluripotent stem cell (iPSC) technology and its interest as a new tool for a better immunopathological understanding of both diseases in a human context. The iPSC technology, which gives the unique opportunity to study cells across differentiation and maturation times, brings hope of shedding light on the different mechanisms linking neurodegeneration and activation of the immune system. Protocols available to differentiate iPSC into different immune cell types are presented. Finally, we discuss the interest of studying monocultures of iPS-derived immune cells, co-cultures with neurons and 3D cultures with different cell types, as more integrated cellular approaches. The hope is that future work with human iPS-derived cells helps not only to identify disease-specific defects in the different cell types but also to decipher the synergistic effects between neurons and immune cells. These new cellular tools could help to find new therapeutic approaches for all patients with ALS, ALS-FTD, and FTD.
INTRODUCTION Inflammation is a pathological hallmark shared by many neurodegenerative diseases. Its physiological function is to defend the organism against various insults, involving different cell types and molecular pathways. In many neurodegenerative diseases, insults may come from the different disease-affected cells, which can degenerate and die or secrete abnormal proteins that become immunogenic. Today, it is well documented that inflammation is not just an inert, secondary bystander reaction. Its modulation could be of therapeutic interest, especially for patients with sporadic forms of the diseases. Modulating the inflammatory response could then be a strategy to slow disease progression. This review focuses on amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD), two neurodegenerative diseases with some overlapping clinical presentations, pathological mechanisms, and genetics. First, we will present the current knowledge about inflammation in patients, new hypotheses brought by animal studies and the current state of clinical trials targeting inflammation. Next, we will discuss the involvement of both innate and adaptive immune responses and the sequence of inflammatory events. This sequence of events could be a key to identifying a time window that could be precisely targeted to steer the immune system in the right direction and slow down disease progression. With a detailed picture of inflammatory events in ALS and FTD, the current possibilities offered by induced pluripotent stem cell (iPSC) technology to generate different human immune cell types and to study their intrinsic defects will be described. With the emergence of more integrated cellular approaches combining different iPS-derived cell types, interactions and synergistic effects between immune cells and neurons could be deciphered and bring new insights for innovative therapeutic approaches for ALS and FTD. AMYOTROPHIC LATERAL SCLEROSIS AND FRONTOTEMPORAL DEMENTIA PATHOLOGIES ALS and FTD are two ends of a spectrum of neurodegenerative diseases. The clinical presentation of ALS is characterized by progressive paralysis of voluntary muscles due to loss of both upper and lower motor neurons (MNs), leading in most cases to the death of patients by respiratory failure. ALS shares clinical and pathological features with FTD, a type of dementia characterized by impaired judgment and executive skills. In FTD, the loss of neurons in the frontal and temporal cortices, sometimes accompanied by a loss of cortical MNs, correlates clinically with the symptoms of FTD (Neumann et al., 2006; Burrell et al., 2016). Mean survival is 3-5 years for ALS patients, and up to 50% of patients develop frontal lobe dysfunction or language impairment. Among FTD cases, some studies report only 5-10% of patients with ALS signs (Rosso et al., 2003; Goldman et al., 2005; Johnson et al., 2005; Seelaar et al., 2007), while others reach 50% (Lipton et al., 2004; Mackenzie and Feldman, 2005). These differences are likely due to variability in disease definitions and assessments by clinicians.
Some more recent studies indicate that FTD-ALS has a particularly poor prognosis with a survival of 2-5 years (Kansal et al., 2016). At the genetic level, several genes with numerous pathogenic variants, are causal for ALS, ALS-FTD, and FTD (Belzil et al., 2016). The same genetic mutation can result in either ALS, FTD or both pathologies, suggesting roles of disease-specific modifiers. Also, as most mutated genes encode ubiquitously expressed proteins, all cell types can in theory be affected by the expression of the mutated protein, thus contributing to the complexity of the disease (Ferrari et al., 2011). While mutations in the progranulin gene (GRN) and the microtubule-associated protein tau (MAPT) have been identified as major causes of familial FTD (Greaves and Rohrer, 2019), some other genes were linked only to ALS, including kinesin family member 5A (KIF5A) and SOD1 (superoxide dismutase 1). Of particular interest in the ALS-FTD spectrum, mutations in the C9orf72 gene which were identified in 2011, and made the link between both disorders (Ferrari et al., 2019). In the Western hemisphere (and not in Asia), hexanucleotide repeat expansions in the C9orf72 gene have been identified in up to 40% of familial ALS (fALS) patients and 20% of fFTD patients, and in ∼6% of sporadic ALS (sALS) and sFTD patients (DeJesus-Hernandez et al., 2011;Renton et al., 2011). Moreover, recent studies shed light on the role of some genes mutated in both ALS and FTD (TBK1, OPTN, and SQSTM1) and that are implicated in innate immune related functions (Abramzon et al., 2020), warranting the importance of the contribution of immune cells in the pathology. INFLAMMATION IN AMYOTROPHIC LATERAL SCLEROSIS AND FRONTOTEMPORAL DEMENTIA: A SHARED PATHOLOGICAL HALLMARK First Observations in Post-mortem Tissues of Patients Post-mortem studies brought the earliest observations suggesting of the presence of inflammatory signs in ALS and FTD patients. Several ALS post-mortem case reports have described lower numbers of MNs in the spinal cord and to a lower extend of Betz cells in the cerebral cortex, accompanied by increased microglial activation and astrogliosis (Brownell et al., 1970;McGeer et al., 1993;Nihei et al., 1993;Schiffer et al., 1996;Saberi et al., 2015;Chiot et al., 2020). There was no microgliosis in the dorsal horns of the spinal cord, strongly suggesting a specific response of microglial cells toward degenerating MNs. More recent studies identified the presence of immature and activated dendritic cells (DC) in ventral horns and corticospinal tracts of ALS patients (Henkel et al., 2004) as well as the presence of activated CD68+ monocytes/macrophages/microglial cells and of CD4+ and CD8+ lymphocytes in the vicinity of MNs (Kawamata et al., 1992;Henkel et al., 2004). At the periphery, demyelination, axonal degeneration, macrophage activation, abnormal motor end plates, axonal sprouting, and atrophic muscles were described (Bjornskov et al., 1984;Tandan and Bradley, 1985;Chiot et al., 2020). In FTD patients, inflammatory signs are less obvious compared to ALS patients. Asymmetrical convolutional atrophy in frontal and anterior lobes were observed (Ikeda, 2000;Tolnay and Probst, 2001;Bright et al., 2019). In the gray matter, microvacuolation and gliosis in laminae I-III were seen in conjunction with neuron loss, while neurons of lamina V were reported to be only mildly affected. Rare dystrophic neurites were described. 
In the white matter, mild gliosis was observed in subcortical fibers and loss of myelin was sometimes observed. Thanks to these first studies, inflammatory signs were identified in ALS and FTD post-mortem tissues. Nevertheless, it is difficult to define when inflammation begins and if this is an early or late event as post-mortem tissues represent rather an end-stage of the pathology. However, the question of the sequence of the inflammatory events is crucial as inflammation can be beneficial or harmful for neurons depending on the disease stage. Today, this question is an open question that needs more investigation. Human Studies to Decipher the Involvement of Inflammation in Amyotrophic Lateral Sclerosis and Frontotemporal Dementia In humans, studies were conducted at several levels including brain imaging studies and biofluids analysis. Imaging studies are valuable tools to assess cerebral changes, spreading patterns, network-wise propagations, and can also be used to detect inflammation. These imaging tools could even be used to detect biomarkers to define the conversion transition from a pre-symptomatic stage to a clinically manifest disease (Chew and Atassi, 2019;El Mendili et al., 2019). Positron Emission Tomography (PET) is a functional imaging technique using radioactive ligands to measure, amongst others, changes in metabolic processes. Different radioligands targeting specific cellular substrates are available for different imaging purposes depending on the studied cellular event. TSPO, which corresponds to the 18 kD translocator protein, is highly expressed on activated microglia and astrocytes (Lavisse et al., 2012;Betlazar et al., 2018) and is helpful to visualize inflammation and/or gliosis. Several generations of TSPO radio-ligands exist, which differed regarding their binding specificities. The latest in use are [11C]PBR28 and [18F]DPA-714 which bind with higher specificities to TSPO in comparison to the previous generation of radioligands (Kreisl et al., 2010;Narayanaswami et al., 2018). However, recent studies showed the presence of a polymorphism affecting the TSPO binding affinity, assessing the importance to take into account this parameter in studies involving heterogeneous populations of ALS or FTD patients. A new PET probe [18F]CB251 was recently published and seems to be more specific for TSPO regardless of polymorphisms (Kim et al., 2020). This new probe will be of particular interest for future studies. The overwhelming majority of imaging studies in patients are done in the brain. Studies in control subjects and ALS patients revealed an increased binding of [18F]DPA-714 (Corcia et al., 2012) or [11C]-PBR28 (Schain and Kreisl, 2017) only to the motor cortex regions with positive correlations with the Upper Motor Neuron Burden Scale (UMNB) and negative ones with the Amyotrophic Lateral Sclerosis Functional Rating Scale -Revised (ALSFRS-R) (Zürcher et al., 2015;Alshikho et al., 2016Alshikho et al., , 2018Ratai et al., 2018). In FTD and ALS-FTD patients, increased TSPO binding were observed in cortical frontal, mesial temporal, subcortical regions, prefrontal cortex, hippocampal, and para-hippocampal regions Turner et al., 2004;Miyoshi et al., 2010;Chew and Atassi, 2019). Despite numerous attempts to image the spinal cord in vivo (Bede et al., 2012) technological constraints (i.e., respiration, cardiac movements, and small cross-sectional area) have hindered reliable quantitative spinal cord imaging in patients (El Mendili et al., 2019). 
Amongst the scarce imaging studies assessing ALS patients' spinal cords, most used magnetic resonance imaging (MRI) techniques. Studies of metabolic changes in ALS patients' spinal cords using whole-body PET/computed tomography (CT) images are recent. Two independent studies identified increased [18F]fluorodeoxyglucose (FDG) uptake in the spinal cord of patients. [18F]-FDG is thought to reflect cell metabolism without specificity for glial cells, but the observed hyper-metabolism in the spinal cord was suggested to represent increased inflammation and gliosis due to glial cells accumulating in reaction to degenerating neuronal cells (Bauckneht et al., 2020). Further studies using specific radioligands will have to confirm these observations. PET studies opened new perspectives regarding our understanding of pathology and inflammation in patients. To go a step further, it might now be possible to assess temporal changes during the disease course. The objectives are to observe cell reactivity as well as the spreading of inflammation and gliosis in ALS and FTD patients' brains and spinal cords. In the largest longitudinal ALS PET study, 10 patients underwent [11C]-PBR28 PET scans twice over a 6-month period. Results showed stable [11C]-PBR28 uptake over this period, suggesting a plateau of glial reactivity shortly after symptom onset. These first results are very encouraging, suggesting that inflammation does not increase with time. Other longitudinal studies are now necessary to support these results. If asymptomatic members of patients' families could be included in these studies, this could help answer the crucial question of when the neuroinflammatory response starts. As a whole, imaging studies are particularly interesting as they offer insights into the status of brain and spinal cord pathological tissues at spatial and temporal levels. They also bear hope as a tool to detect biomarkers. Nevertheless, imaging studies still hold limitations for studying microglial activation precisely in ALS and FTD patients. As mentioned previously, TSPO is not highly cell-specific (Lavisse et al., 2012), which limits data interpretation (Vivash and O'Brien, 2016). Thus, other radioligands targeting microglia more specifically are currently being developed, and first studies have already shown PET imaging with a tracer targeting the pro-inflammatory phenotype of activated microglia (Janssen et al., 2018; Narayanaswami et al., 2018). The next step is now to develop a tracer for the anti-inflammatory phenotype of microglia. This would allow imaging of the different microglial activation states during disease evolution. Circulating Inflammatory Cytokines and Chemokines in Amyotrophic Lateral Sclerosis and Frontotemporal Dementia The presence of circulating inflammatory cytokines and chemokines in the blood and the cerebrospinal fluid (CSF) has been extensively studied in ALS patients and, to a lesser extent, in FTD patients. Recently, studies showed that the pro-inflammatory and multifunctional cytokine IL-6 was increased in both ALS and FTD patients in comparison to controls (Galimberti et al., 2015; Gibbons et al., 2015; Ngo et al., 2015; Lu et al., 2016; Tortelli et al., 2020). Furthermore, the IL-6 level was suggested to be correlated with disease progression in ALS.
Apart from IL-6, ALS and FTD patients have distinct circulating cytokine and chemokine profiles (see below and Table 1, which lists the publications studying circulating inflammatory molecules in the CSF and blood of ALS or FTD patients). In ALS, a large panel of pro- and anti-inflammatory cytokines and chemokines has been described as deregulated in patients (reviewed in Moreno-Martinez et al., 2019). However, results are not always consistent amongst the different studies, probably because of the heterogeneity of the studied populations. Heterogeneity exists at different levels. First, in most reports the studied population has a mean ALSFRS-R with a large standard deviation, and patients' biofluids are analyzed at one single time point. Thus the inflammation status of the different patients may differ, making the results hard to interpret. Of note, the disease state is most of the time measured with the ALSFRS-R, but this does not necessarily reflect the pathological state at the cellular level. To date, among the most promising biomarkers for ALS are neurofilament heavy and light chains, measurable in CSF and blood and used in many trials (Oeckl et al., 2016; Xu et al., 2017; Benatar et al., 2018; Poesen and Van Damme, 2019). Second, some studies include patients diagnosed with the El Escorial criteria, which categorize the disease as "possible," "probable," or "definite." This classification could make it possible to establish whether there is a correlation between the diagnostic categories and the amount of circulating factors. However, some other studies take all diagnostic cases as one single group of patients and do not take the diagnosis into account, making any correlation impossible. Third, the site of onset of the pathology (spinal vs bulbar) may also be a confounding factor if the two groups are not considered separately. Indeed, progression of the disease is very different between the two forms, and so, probably, are the inflammatory events over time. Fourth, most studies include sporadic cases or compare familial and sporadic cases, which cannot simply be seen as distinct groups anymore. Better-defined patient groups are needed. As we now know the importance of some mutated genes in the immune system, retrospective studies comparing subgroups based on their genotypes could be interesting. Finally, CSF and blood are distinct compartments, as the CSF is part of the central nervous system (CNS) while circulating peripheral molecules are in the blood. Both compartments may contain different types and amounts of cytokines and chemokines depending on the state of the pathology. Few longitudinal studies have been conducted, and no significant differences in cytokine and chemokine secretion were observed between two pathological time points in patients (Saleh et al., 2009; Ehrhart et al., 2015; Prado et al., 2018). As in most cases recruited patients were already in advanced stages of the disease, a hypothesis is that, if inflammation is an early event, the analysis was probably conducted too late. Having the opportunity to do this kind of analysis as early as possible after diagnosis could bring new insights into the appearance of inflammation and its evolution with disease progression. Amongst other factors measured in ALS patients' biofluids, immunoglobulins (IgG), metabolic proteins, adipokines, iron-related proteins, and oxidative stress markers were found deregulated in patients (Mitchell et al., 2010; Ngo et al., 2015; Blasco et al., 2017).
These deregulations reflect metabolic and inflammatory dysfunctions that may be critical actors of pathogenesis. In vitro and in vivo studies demonstrated that IgG of ALS patients induced selective MN death, presumably involving calcium (Yi et al., 2000; Obal et al., 2002; Pullen et al., 2004; Demestre et al., 2005), supporting a direct role of immunological responses in the selective MN loss. In FTD patients, most studies focused on CSF analysis. Analyses revealed altered secretion of inflammatory chemokines and cytokines such as those found in ALS patients, including IP-10 and TNFα (Sjögren et al., 2004), TGFβ, CCL-2 (also called MCP-1) (Galimberti et al., 2015), RANTES (Galimberti et al., 2008), and IL-8 (Galimberti et al., 2006). Some cytokines seem more specific to cognitive aspects of the pathology, as they were found deregulated in Alzheimer's disease (AD) and FTD patients, such as IL-11 (Galimberti et al., 2008), IL-15 (Rentzos et al., 2006b), and IL-12 (Rentzos et al., 2006a). Interestingly, in clinical cases with FTLD-TDP and FTLD-tau (two different FTD forms) pathologies, differences in the secretion of some neuropeptides and chemokines (IL-23 and IL-17) were observed, a signature that could help distinguish the two forms when ante-mortem follow-ups are done (Hu et al., 2010). Regarding blood analysis, no statistical differences in FTD patients' serum cytokines compared to controls were found (Galimberti et al., 2015). On the other hand, lipid assays showed increased triglyceride levels in patients, correlated with body mass index, while HDL cholesterol levels were negatively correlated with this index (Ahmed et al., 2014). More recently, a serum lipidomic study focused more precisely on lipids implicated in three key aspects of FTD pathology: inflammatory processes, mitochondrial dysfunction, and oxidative stress (Phan et al., 2020). Amongst other findings, the data revealed specific increases in lipids implicated in inflammatory responses, LPC (lysophosphatidylcholine) and PAF (platelet-activating factor), which are known as second messengers or immediate-response molecules and act on various immune targets (T lymphocytes, microglia, macrophages, and neutrophils) (Chang et al., 2003; Han et al., 2004; Scholz and Eder, 2017). In contrast, they found a decrease in O-acyl-ω-hydroxy fatty acids (OAHFA), which are known to exert anti-inflammatory effects. The significant inverse correlation between LPC and OAHFA variations suggests increased inflammation in FTD patients. (TABLE 1 | Publications reporting CSF and blood circulating inflammatory molecules in ALS and FTD patients; columns: Compartment, ALS, FTD.) Taken together, these studies show that inflammatory events occur in ALS and FTD patients. What is now needed is a better understanding of the actual start and evolution of these inflammatory events in both pathologies, which would be important for targeted immunomodulating therapies. THE ROLE OF THE IMMUNE SYSTEM IN AMYOTROPHIC LATERAL SCLEROSIS AND FRONTOTEMPORAL DEMENTIA Inflammation is a multifaceted reaction involving different cell types and molecular pathways. Figure 1 shows the different immune cells that can play roles during disease progression in both the CNS and the peripheral nervous system (PNS). Below we describe the different cell types that are thought to play a role in inflammatory mechanisms in ALS and FTD, and how they might impact disease progression. Innate Immunity in Amyotrophic Lateral Sclerosis and Frontotemporal Dementia Innate immunity is the first-line defense of the organism.
It provides an immediate immune response against non-self pathogens and is conserved amongst vertebrates and invertebrates. It encompasses a group of cell types that recognize attacks on the organism and facilitate the clearance of external pathogens. These cell types have specific functions depending on their tissue location and surrounding environment. Table 3 lists published data indicating the proposed implications of the different immune cell types both in animal models of ALS and FTD and in patients. (Legend: the expression of the different targets was indexed through searches on proteinatlas.org. COX-2, cyclooxygenase-2; CSF1R, colony-stimulating factor 1 receptor; IL-1R1, interleukin-1 receptor 1; IL-6R, interleukin-6 receptor; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; NRF2, nuclear factor erythroid-2-related factor 2; PDE, phosphodiesterase; PGE2, prostaglandin E2; PI3K, phosphoinositide 3-kinase; PPARγ, peroxisome proliferator-activated receptor gamma; ROCK, Rho-associated protein kinase; S1P, sphingosine-1-phosphate receptor; TKR, tyrosine-kinase receptor; TLR-4, Toll-like receptor 4.) Phagocytes Amongst phagocytes, different subpopulations exist. Tissue-resident macrophages are heterogeneous populations that can differ by their embryonic origin but also by their tissue location, where they acquire specific functions and distinct profiles. Macrophages are antigen-presenting cells (APC). The main role of APC is to process antigens and present them via the major histocompatibility complex (MHC) at the cell surface. The MHC-antigen complex is recognized by the T cell receptor (TCR), allowing T cell activation. APC also express many costimulatory molecules that participate in T cell activation. APC comprise several immune cell types in different tissues, including DC, macrophages, B cells and Langerhans cells (Kambayashi and Laufer, 2014). Macrophages are less potent than DC but still potent enough to activate neighboring adaptive immune cells. Macrophage activation states are multiple (Xue et al., 2014). Depending on the insult they encounter, they can release different cytokines, chemokines, and other factors to drive inflammation by activating neighboring cells but also by attracting blood immune cells to the site of injury. At the same time, their key phagocytic activities allow them to clear cellular debris and thus decrease local inflammation. Microglia are the CNS-resident macrophages but have a distinct embryonic origin compared to the majority of peripheral macrophages (Ginhoux et al., 2010; Schulz et al., 2012; Kierdorf et al., 2013; Gomez Perdiguero et al., 2015). For decades, the understanding of the exact roles of microglia in the CNS has been a field of intense research. These cells display extended processes allowing them to patrol the whole brain parenchyma within a matter of hours, making them very active cells in the CNS. While microglia have long been studied for their roles during brain development, their homeostatic roles in the adult are less clearly defined. However, during neuronal injury or neurodegeneration, they function as macrophages and appear to be the first line of response to neuronal suffering. Today, accumulating evidence points to important roles of microglia in ALS and FTD. However, for a long time, the lack of specific markers to distinguish peripheral macrophages (able to invade the CNS) from microglia made it difficult to study the two cell populations individually.
Thus, especially for in vivo studies in ALS rodent models, both cell types were often studied as a single population. In 2006, two independent studies demonstrated that decreasing the expression of mutant human SOD1 (hSOD1) only in macrophages/microglia in a mutant hSOD1 ALS mouse model led to slowed disease progression and increased survival (Beers et al., 2006; Boillée et al., 2006). In ALS, the degenerating spinal MN projects its longest part, the axon, into the periphery; thus, the axon would be in contact with reacting macrophages and other peripheral immune cells, while the soma in the CNS would be in contact with microglia. This particularity of the spinal MN led to the idea that, in ALS, directly targeting the peripheral immune system and the macrophages along the peripheral motor nerves might be a promising approach to slow neurodegeneration. This could potentially be an easier therapeutic approach than targeting microglia in the CNS and, by targeting neuroinflammation, could be used for all ALS cases. This idea was applied in a recent article that demonstrated that gene expression patterns during the disease were very different in microglia and macrophages of mutant hSOD1 mice (Chiot et al., 2020), showing that the two cell types can play different roles in the disease. In this study, replacement of peripheral mutant SOD1 macrophages by more neurotrophic macrophages decreased both sciatic nerve macrophage activation and CNS microglial activation (Chiot et al., 2020). This is particularly interesting as it shows that by modulating macrophages at the periphery it is possible to act on CNS inflammation. Interestingly, Chiot et al. (2020) demonstrated that when this replacement was done at a presymptomatic stage, disease onset was delayed but life expectancy was not increased, whereas when the macrophage replacement was done at disease onset, it was able to increase survival of ALS mice. The necessity of precise timing for the macrophage replacement in this ALS model could give insights into the time window that has to be targeted to act on peripheral inflammation in ALS. Whereas most of the studies regarding the roles of phagocytes were done with the mutant SOD1 mouse model, in which phagocytes showed deleterious effects on disease progression, a recent study suggested potential neuroprotective effects of reactive microglia in one TDP-43 mouse model (Spiller et al., 2018). These results show that the involvement of phagocytes is context-dependent and thus has to be studied in several ALS models and in humans with different disease forms to understand their exact roles during the disease course. In FTD and ALS, two of the most commonly mutated genes (GRN, C9orf72) encode proteins that have critical roles in phagocytosis and endocytosis, which are important for microglial functions (Baker et al., 2006; Cruts et al., 2006; Shatunov et al., 2010; DeJesus-Hernandez et al., 2011; Mok et al., 2012). Progranulin acts as an inflammatory modulator, and its expression is significantly up-regulated in microglia in several models of neuronal injury (Moisse et al., 2009; Naphade et al., 2010; Tanaka et al., 2013). In GRN knockout mice, aberrant microglial activation upon stimulation and during aging was reported (Yin et al., 2009; Martens et al., 2012; Lui et al., 2016), leading to increased synaptic pruning or even neuronal loss.
GRN-deficient mice display obsessive-compulsive-like behaviors that appear to involve the TNFα signaling pathway, which is predominant in microglial cells in the CNS (Krabbe et al., 2017). This major immune reaction can be alleviated by restoring progranulin levels: overexpression of progranulin in a mouse model of sciatic nerve injury led to accelerated axonal regrowth, restoration of neuromuscular junctions, and recovery of sensory and motor functions (Altmann et al., 2016). Regarding C9orf72, its deficiency in an animal model led to hyperactivation of the innate immune system, with increased expression of IL-6 and IL-1β in microglia and upregulated inflammatory genes in the spinal cord (O'Rourke et al., 2016). Moreover, several studies reported a defective lysosomal system with prominent accumulations in innate immune cells (Atanasio et al., 2016;Sullivan et al., 2016;Moens et al., 2017). Together these results point to a prominent role of C9orf72 in microglial cells. The phagocytic machinery is essential for the clearance of cell debris and the maintenance of homeostasis. Disturbance of this machinery could lead to aberrant neuroinflammation and thus directly contribute to the disease process in ALS and FTD, rather than being a simple secondary event of neuronal degeneration. Nonetheless, these results need to be interpreted cautiously, as neither knockout model matches the haploinsufficiency found in ALS and FTD patients. Aside from these studies in animal models, most human studies have used circulating blood monocytes, as tissue-resident macrophages and microglial cells are essentially inaccessible in living patients. In ALS patients, different studies have assessed peripheral monocyte populations. An increase of classical and a decrease of non-classical monocytes was observed not only in sALS and fALS patients, in line with findings in mutant hSOD1 mice, but also in pre-symptomatic ALS mutation carriers (Butovsky et al., 2012;Zondler et al., 2016), suggesting that this phenotype is an early event in the pathology. Moreover, circulating monocytes of ALS patients presented functional defects in phagocytosis and migration, and appeared to be skewed toward a more proinflammatory phenotype, with TNFα protein release correlating positively with progression rates and IL-6 protein release correlating positively with disease burden (Zondler et al., 2016;Zhao et al., 2017;Du et al., 2020). Deep RNA sequencing of ALS monocytes revealed unique inflammation-related gene profiles and defects in migration and in the lysosomal pathway (Zondler et al., 2016;Zhao et al., 2017). While the aforementioned studies were carried out on blood monocytes, a recent study observed that ALS monocyte-derived macrophages had increased IL-6 and TNFα secretion levels when activated toward a proinflammatory state, suggesting that blood monocyte-derived macrophages keep at least some of their characteristic in vivo defects in culture (Du et al., 2020). Further studies are now needed to better understand the inflammatory states of ALS macrophages at different time points of the pathology, which could be assessed using monocytes, easily accessible from living patients. In FTD, one study revealed increased monocytes in the CSF of patients, with no change in the proportion of non-classical compared to classical monocytes (Pawlowski et al., 2018). Elevated monocyte levels correlated with structural cerebral defects in regions typically affected in FTD, as assessed by structural MRI (Pawlowski et al., 2018). Further studies would be important to validate these alterations at a larger scale.
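The classical/non-classical distinction used in these monocyte studies is conventionally made by flow cytometry on CD14/CD16 staining (classical: CD14++CD16−; intermediate: CD14++CD16+; non-classical: CD14low CD16++). A minimal sketch of such a gating classification on per-cell intensities follows; the threshold values and the toy input are hypothetical, and real analyses operate on compensated, transformed cytometry data pre-gated on monocytes.

```python
import numpy as np

def classify_monocytes(cd14, cd16, cd14_hi=1000.0, cd16_pos=300.0):
    """Label monocytes as classical / intermediate / non-classical.

    cd14, cd16: per-cell fluorescence intensities (already compensated
    and pre-gated on monocytes). The cutoffs are placeholders; in
    practice they are set per experiment from controls.
    """
    cd14 = np.asarray(cd14, dtype=float)
    cd16 = np.asarray(cd16, dtype=float)
    labels = np.full(cd14.shape, "other", dtype=object)
    labels[(cd14 >= cd14_hi) & (cd16 < cd16_pos)] = "classical"      # CD14++ CD16-
    labels[(cd14 >= cd14_hi) & (cd16 >= cd16_pos)] = "intermediate"  # CD14++ CD16+
    labels[(cd14 < cd14_hi) & (cd16 >= cd16_pos)] = "non-classical"  # CD14lo CD16++
    return labels

# Toy example with invented intensities for six cells.
print(classify_monocytes([2500, 2200, 1800, 400, 350, 150],
                         [100, 120, 600, 900, 800, 50]))
```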
Dendritic Cells
Dendritic cells orchestrate both innate and adaptive immune responses. DC are the most efficient APC at activating T cells into specific lineages because they display more MHC classes and costimulatory molecules compared to other APC (Inaba et al., 1997). In the periphery, immature DC patrol tissues in search of non-self antigens. The encounter with a pathogen or a danger signal triggers their maturation, antigen presentation on MHC-II, and migration to secondary lymphoid organs where they activate CD4 or CD8 T cells. At steady state, DC can present self-antigens to T cells to avoid immune responses against self-proteins and induce tolerance. This process leads to the induction of regulatory T cells (Tregs) that express and secrete anti-inflammatory molecules to reduce inflammation (Banchereau and Steinman, 1998). In post-mortem tissues from sALS and fALS patients, immature and mature DC were predominantly observed in the degenerating corticospinal tracts, with increased levels of mRNAs coding for DC surface markers and for the inflammatory chemokine CCL-2 (Henkel et al., 2004). Interestingly, patients with rapid disease progression had more of these transcripts than patients with slower progression. This suggests an active recruitment of DC by the inflamed CNS of ALS patients. One hypothesis is that, besides the known CCL-2 sources (principally macrophages and microglia), DC could be an additional source of CCL-2, a chemoattractant for monocytes and T cells, and thereby participate in inflammatory cell recruitment. In agreement with this hypothesis, several studies described an increase of CCL-2 in ALS patients' CSF (Baron et al., 2005;Kuhle et al., 2009;Tateishi et al., 2010;Gupta et al., 2011). In the blood of ALS patients as well, a subpopulation of circulating DC showed increased production of CCL-2 in response to LPS (Rusconi et al., 2017). Further analysis of circulating DC in the blood of ALS and FTD patients would be of great interest if the different DC subpopulations could be identified. cDC1 have a strong ability to cross-present antigens to CD8 T cells, while cDC2 are the most efficient DC subset at polarizing naive CD4 T cells (Haniffa et al., 2013). pDC are main actors of antiviral responses through the production of type 1 interferon (IFN-1) (Dzionek et al., 2000). Finally, monocyte-derived DC (MoDC) can arise during inflammation; they were first observed in mice (León et al., 2007) and then in humans under inflammatory physiological and physiopathological conditions (Segura et al., 2013;Tang-Huau et al., 2018).
Mast Cells
Mast cells are long-lived tissue-resident cells implicated in many different inflammatory responses. When they encounter specific antigens, they become activated and release numerous inflammatory mediators (e.g., histamine, cytokines, lysosomal enzymes, ATP, and serine proteases). They are generally located near structures mediating visceral sensory or neuroendocrine functions, or close to the blood-brain barrier (BBB). In the spinal cord under normal conditions, mast cells are present at the dura but not in the cord parenchyma (Michaloudi et al., 2008).
Mast cells are also first-line effectors through which pathogens can affect the gut-brain axis (Budzyński and Kłopocka, 2014;Conte et al., 2020). In recent years, mast cells have received increased attention as early responders to injury (Skaper et al., 2014). Studies have shown that, once activated, they are important mediators of the microglial inflammatory response, of astrocyte activation, and potentially of neuronal degeneration (Skaper et al., 2014;Zhang et al., 2016;Jones et al., 2019). They are also able to disrupt and permeabilize the BBB, allowing toxins and immune cells to penetrate and exacerbating the inflammatory response. In post-mortem spinal cords of ALS patients, increased numbers of COX-2-positive mast cells (COX-2 being a key mediator of inflammation (Chen, 2010)) were detected, whereas they were absent in controls (Graves et al., 2004). Recently, mast cells were described near altered microvascular elements and surrounding MN cell bodies (Kovacs et al., 2021). In the mutant hSOD1 rat model, accumulation of mast cells was observed in ventral roots and spinal cords (Trias et al., 2018). Interestingly, mast cells were also found in the periphery, in particular in muscles, with infiltration and degranulation correlating with paralysis progression (Trias et al., 2017). Mast cells are thought to be recruited along the degenerating nerve by Stem Cell Factor (SCF) secreted by reactive Schwann cells and reactive macrophages (Trias et al., 2019). Very recently, a report showed mast cells interacting with SCF-expressing astrocytes and MNs in the mutant hSOD1 mouse model (Kovacs et al., 2021). Pharmacological inhibition of CSF-1R and c-kit with Masitinib [a tyrosine kinase inhibitor targeting the SCF receptor (c-kit) and the platelet-derived growth factor receptor (PDGFR)] reduced immune cell infiltration and improved neuromuscular junction (NMJ) integrity, suggesting an implication of mast cells in the peripheral axonopathy of ALS (Trias et al., 2016, 2020). Based on these encouraging results suggesting that mast cells participate in inflammatory reactions in ALS both in the CNS and the PNS, clinical trials were launched with Masitinib (ongoing trials from AB Science). Whether mast cells are a driver or an amplifier of ALS pathology remains to be determined. An "early" implication in the pathology is suggested by the fact that mast cell numbers are already increased at the symptomatic phase in mutant hSOD1 rats. Assessing their presence at the pre-symptomatic stage would therefore be of great interest. In FTD patients, nothing is known about mast cell participation, but increased reactivity similar to that observed in AD patients would not be surprising (Harcha et al., 2021).
Natural Killer Cells
Natural killer (NK) cells are classified as innate cytotoxic lymphocytes and are mainly known for their ability to kill virus-infected cells and tumor cells (Abel et al., 2018). A few papers reported an increase of NK cells in ALS patients' blood (Gustafson et al., 2017;Jin et al., 2020). Very recently, Garofalo et al. (2020) described an infiltration of p46-positive NK cells in the spinal cord and motor cortex of patients with sALS. They also showed NK cell recruitment in the CNS of SOD1 G93A mice. CCL-2 was shown to be involved, either directly or through the recruitment of other cells, since CCL-2 neutralization led to a decrease of NK cell infiltration.
Interestingly, these data identified CCL-2 as an early damage signal of neural tissue. Depletion of NK cells in SOD1 G93A mice, and also in TDP43 A315T mice, increased survival and delayed paralysis onset. Functionally, both control and SOD1 G93A NK cells had a cytotoxic effect on SOD1 G93A MNs, but not on control MNs, suggesting a kill-me signal coming from mutant MNs. Regarding the impact on other immune cells, NK-depleted SOD1 G93A mice had a decreased number of microglia in the spinal cord ventral horns, with a less inflammatory profile, and NK depletion also induced an increase of Tregs (see below) in the ventral horns (Garofalo et al., 2020). More studies are now needed to better understand the involvement of NK cells in ALS. In FTD patients, one study showed that NK cell percentages were not modified in blood samples (Busse et al., 2017). Altogether, these data support a critical role of innate immunity in ALS-associated neurodegeneration and, to a lesser extent, in FTD. Phagocytes, mast cells, DC, and NK cells are activated in early disease phases and may thus have positive or negative impacts on neuronal degeneration.
Adaptive Immunity in Amyotrophic Lateral Sclerosis and Frontotemporal Dementia
The adaptive immune system, unlike innate immunity, is highly specific to the particular pathogen that induces it and can provide long-lasting immune protection. This type of immunity is strictly confined to vertebrates, as it arose in evolution less than 500 million years ago (Alberts et al., 2002). Amongst the cells that constitute this second line of immunity, the major actors are T and B cells, which can be subdivided into many subclasses that play different inflammatory roles and serve different purposes during inflammatory events.
T Lymphocytes
The T lymphocyte population is composed of two main subpopulations: CD4 and CD8 T cells. Both are characterized by the expression of CD3 and the T-cell receptor (TCR) at their membrane surface. During T cell activation, the TCR recognizes antigen peptides presented by APC on MHC-II for CD4 T cells and on MHC-I for CD8 T cells. In addition to this TCR/MHC/peptide complex signal, co-stimulatory molecule interactions (the main one being CD28 binding to CD80/CD86) and cytokines are necessary for T cell activation (Kapsenberg, 2003).
CD4 T Cells
CD4 T cells, also called T helper (Th) cells, help to set up an appropriate immune response against the encountered pathogen. To this end, they provide signals to other immune cells that influence their activation and thus guide the immune response according to the pathogen to be targeted. Naive Th cells are activated and polarized by APC, mainly DC, in secondary lymphoid organs. They are classified into several subsets, the main ones being Th1, Th2, Th17, and Treg cells, primarily defined by their patterns of transcription factor expression and their cytokine production (Table 2, which lists each subset's master transcription factor and secreted cytokines). In ALS, Th cells are described as major players in inflammation and disease progression. In the 1990s, CD4 T cell infiltrates were observed in spinal cords of ALS patients in proximity to degenerating areas (Kawamata et al., 1992;Engelhardt et al., 1993). At that time, the different Th cell subsets had not yet all been discovered.
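The canonical subset definitions behind Table 2 are textbook immunology rather than data from this review; encoding them as a small lookup structure, as sketched below, is convenient when annotating results by subset.

```python
# Canonical Th-subset definitions: master transcription factor and
# signature cytokines. Standard immunology knowledge, used here only
# as a lookup when labeling results by subset.
TH_SUBSETS = {
    "Th1":  {"master_tf": "T-bet (TBX21)",      "cytokines": ["IFN-gamma"]},
    "Th2":  {"master_tf": "GATA-3",             "cytokines": ["IL-4", "IL-5", "IL-13"]},
    "Th17": {"master_tf": "ROR-gamma-t (RORC)", "cytokines": ["IL-17A", "IL-17F"]},
    "Treg": {"master_tf": "FoxP3",              "cytokines": ["IL-10", "TGF-beta"]},
}

for name, info in TH_SUBSETS.items():
    print(f"{name}: {info['master_tf']} -> {', '.join(info['cytokines'])}")
```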
More recent studies on patients' blood samples and on some patients' CNS tissues have examined the different Th subpopulations more precisely; among the most deregulated subpopulations, Treg cells have received particular attention. These cells were found to be significantly reduced in ALS patients' blood compared to controls (Mantovani et al., 2009;Henkel et al., 2013;Saresella et al., 2013;Sheean et al., 2018;Jin et al., 2020). In their publication describing significant decreases of FoxP3 mRNAs in circulating leukocytes of ALS patients, Henkel et al. (2013) showed that low FoxP3 mRNA levels correlated with rapid disease progression and poor patient survival. Interestingly, similar results were observed in the SOD1 G93A mouse model, with Treg cell numbers decreasing and those of other Th subsets increasing along disease progression (Beers et al., 2011;Zhao et al., 2012). Recently, Beers et al. (2017) went one step further, focusing on the impact of the disease on Treg functions. Tregs are known to suppress both innate and adaptive immune reactions detrimental to the host; in particular, they can suppress the activation/expansion of neurotoxic T lymphocytes. The authors demonstrated that Tregs isolated from the blood of ALS patients were less effective at suppressing the proliferation of responder T lymphocytes. In addition, Treg cells of rapidly progressing patients (1-2 years disease duration) exhibited an even more reduced suppressive capacity compared to Treg cells of slowly progressing patients (4-6 years disease duration). Thus, Treg cell suppressive capacity was inversely correlated with disease progression speed. All these results, strongly suggesting a direct role of Tregs in ALS, led to therapeutic proposals to induce Treg production in patients (see below). Th1 (IFN-γ-producing) and Th17 (IL-17-producing) CD4 T cells are increasingly considered to play key roles in ALS inflammation. In most studies, these two Th subsets were studied together because of their proinflammatory properties, despite their distinct immunological functions. Th17 cells are already known to be pathogenic in inflammatory diseases such as multiple sclerosis or inflammatory bowel disease (Wu and Wan, 2020). In SOD1 G93A mice, abnormal CD4 T cell activation and abnormal Th17 cell numbers were described in the draining cervical lymph nodes prior to the onset of neurological symptoms (Ni et al., 2016). In ALS patients' blood, both Th1 and Th17 cells were shown to be increased (Saresella et al., 2013;Jin et al., 2020) and IL-17 was measured at higher concentrations (Rentzos et al., 2010). At the level of the CNS, Henkel et al. (2013) showed an increase of Th1 mRNA markers in the spinal cord of ALS patients. Interestingly, they described an increase of T-bet mRNA in patients with both rapid and slow disease progression, but an increase of IFN-γ mRNA only in patients with rapid progression. Very recently, a functional study examined the effect of IL-17A on control or FUS-mutated MNs. The authors showed that IL-17A decreased MN survival and altered the neurite network in a dose-dependent manner, and that treatment with anti-IL-17A and anti-IL-17A-receptor antibodies helped rescue dying MNs. These observations were specific to IL-17A, as IL-17F treatment was not toxic to MNs (Jin et al., 2021). A few studies reported a decrease of Th2 cells or Th2-associated markers in the blood of ALS patients (Henkel et al., 2013;Jin et al., 2020).
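Several of the findings above are correlations between a marker level and the clinical progression rate (e.g., low FoxP3 mRNA correlating with rapid progression). A minimal sketch of that kind of analysis follows; the per-patient values are invented for illustration, and a rank-based (Spearman) test is used because marker levels and progression rates are rarely normally distributed.

```python
from scipy.stats import spearmanr

# Hypothetical per-patient data: relative FoxP3 mRNA level and disease
# progression rate (ALSFRS-R points lost per month). Values are invented,
# shaped to mimic the reported inverse relationship.
foxp3_mrna = [1.8, 1.5, 1.2, 1.0, 0.7, 0.5, 0.4]
progression_rate = [0.3, 0.4, 0.6, 0.7, 1.1, 1.3, 1.6]

rho, p = spearmanr(foxp3_mrna, progression_rate)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # expect rho near -1 here
```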
As for Treg cells, GATA-3 and IL-4 mRNA expression was found to correlate inversely with the disease progression rate (Henkel et al., 2013). In contrast, Saresella et al. (2013) showed an increase of GATA-3-positive CD4 cells in patients' blood and no change in IL-4-positive CD4 cells, while another study described the percentage of IL-13-positive cells within the CD4 T cell population as correlating positively with the disease progression rate and inversely with the ALSFRS-R score (Shi et al., 2007). These results are of particular interest because the two major Th2 cytokines, IL-4 and IL-13, have been shown to increase CCL-2 production by activated primary rat microglia and by the activated human monocytic cell line THP-1 (Szczepanik et al., 2001). Whether Th2 cells or Th2 cytokines have a neuroprotective or a neuroinflammatory role thus seems to depend on the context. Taken together, these data show that the composition of the total T cell population is modified in ALS patients compared to controls, and that the balance between the different Th subsets appears to evolve over disease progression. Interestingly, specific correlations seem to exist between the expression of particular Th-subtype mRNAs and the disease progression rate of ALS patients (Henkel et al., 2013). Such markers could be of great interest as new biomarkers to monitor the progression rate in ALS patients. In FTD patients, while one study described a decrease of total T cell percentages in blood samples (Busse et al., 2017), another described a specific decrease of cytotoxic T lymphocyte antigen-4 (CTLA-4)-positive CD4 T cells and no change in CD28-positive CD4 T cell frequency in patients' blood (Santos et al., 2014). CD28 is a co-stimulatory immune checkpoint that promotes T cell activation, proliferation, and response, whereas CTLA-4 is an immune checkpoint molecule exerting an inhibitory function on T cell proliferation and function; the two molecules compete for the same ligands, CD80/CD86 (Chambers et al., 2001). Hence, this CTLA-4 deficiency could suggest an exacerbated activation of these CD4 T cells in FTD patients. Despite these studies, little is known about the role of CD4 T cells in FTD. Given their rising importance in ALS, it would be worth investigating Th subsets in more detail.
CD8 T Cells
The main function of CD8 T cells, also called cytotoxic T cells (CTL), consists in eliminating cells infected with intracellular pathogens such as viruses, as well as tumor cells. CD8 T cells have been shown to play a determinant role in neurological diseases like multiple sclerosis (Huseby et al., 2012), but studies of their involvement in ALS remain sparse. Similarly to Th cells, CD8 T cell infiltrates were observed in the spinal cord and brain of ALS patients (Kawamata et al., 1992;Engelhardt et al., 1993;Fiala et al., 2010). Contradictory results exist on the percentages of CD8 T cells in ALS patients' blood: some studies showed an increase (Rentzos et al., 2012;Jin et al., 2020) while others suggested a decrease (Mantovani et al., 2009) or no change (Murdock et al., 2017). In SOD1 G93A mice, CD8 T cells were shown to progressively infiltrate the spinal cord (Chiu et al., 2008;Beers et al., 2011;Figueroa-Romero et al., 2019), while Beers et al. (2008) observed CD8 T cell infiltration only at disease end-stage in SOD1 G93A/CD4−/− mice. In 2018, a role of CD8 T cells and MHC-I in disease progression was described.
In SOD1 G93A mice lacking CD8 T cells and expressing little MHC-I, a dual role of MHC-I was shown, with distinct pathogenesis in the CNS and the PNS. In the CNS, without CD8 T cell infiltration and MHC-I-expressing microglia, inflammation was reduced, forelimb paralysis was delayed, and mouse survival was extended. In the PNS, on the contrary, the stability of MN axons was affected, accelerating muscle atrophy (Nardo et al., 2018). This suggested that MHC-I-dependent interactions involving CD8 T cells or microglia are a key factor triggering neuroinflammation, and also that disease progression might be slowed by activating MHC-I signaling in the periphery to protect axon-muscle connectivity. In FTD, CD8 T cells were detected in the cortex of patients with the tau P301L mutation (Laurent et al., 2017). The same observations were made in a mouse model of tauopathy and, interestingly, depletion of peripheral T cells using an anti-CD3 antibody not only abolished CD8 infiltration in the cortex but also restored the cognitive capacity of mutant mice, suggesting a crucial role of these T cells in cognitive impairment (Laurent et al., 2017).
B Lymphocytes
B cells are at the center of adaptive humoral immunity: through antigen-specific immunoglobulin production, they mediate extracellular pathogen elimination. Evidence of B cell implication in ALS is very limited. Auto-antibodies against proteins of spinal cord cells were detected in the serum and CSF of some ALS patients (Niebroj-Dobosz et al., 2004;Puentes et al., 2014). Auto-antibodies have also been detected in SOD1 G93A mice, without any impact on disease progression (Naor et al., 2009). It remains unclear whether auto-antibodies are a consequence of MN death and whether they could participate in MN degeneration and aggravate inflammation. A recent study investigated the impact of regulatory B (Breg) cell adoptive transfer on disease progression in SOD1 G93A mice (Pennati et al., 2018). Similarly to Tregs, Bregs are immunosuppressive B cells involved in immune homeostasis and tolerance maintenance (Rosser and Mauri, 2015;Peng et al., 2018). In mutant mice, Breg adoptive transfer increased the Treg cell percentage in the CNS at 5 months after transfer, but no impact on survival was reported (Pennati et al., 2018). It would be of particular interest to understand why and how Breg cells impact Treg cells. As Treg cells decrease over time in ALS patients, being able to increase their numbers, even indirectly, could be of interest to delay disease progression. In FTD, only one study reported a decrease of B cell percentages in patients' blood (Busse et al., 2017). More investigations are now necessary to understand their role in FTD. Indeed, more globally, current studies of immunity in FTD have focused on dementia, and FTD phenotypes have mainly been compared to AD phenotypes. Considering the common spectrum of ALS and FTD, more comparative studies would help to understand whether these two neurodegenerative diseases share immune deregulations and how inflammation really affects disease progression in FTD. In conclusion, what is better understood today is which types of immune cells could be implicated in the disease, where they could play a role (Figure 1 and Table 3), and what they could secrete to protect or attack neurons or to eliminate debris and dead cells (Table 1).
However, which cells are responsible for a specific secretion, as well as which cells respond to specific factors, remains largely elusive. Direct interactions between neurons and immune cells are still not well understood, as it remains impossible to access human brain and spinal cord tissues in living patients.
CLINICAL TRIALS TO MODULATE THE IMMUNE SYSTEM
The complexity of the whole immune system and the numerous cell types involved in the different inflammatory pathways make it difficult to understand the real impacts of therapeutic drugs on disease phenotypes. However, given the urgent need to find a cure, or at least a treatment to delay disease progression, clinical trials were launched, and since the late 1990s several immune modulatory drugs or cell therapies have been tested in clinical trials for ALS.
Immune Modulatory Drugs
As shown in Table 4 and Figure 2, the large majority of immune modulatory drugs considered in ALS clinical trials are repositioned molecules previously tested in other diseases involving inflammation. Their mechanisms of action are not presented here, as they have been described in detail previously (Khalid et al., 2017). Table 4 lists trials according to their progress. Most of the tested molecules are antagonists of pro-inflammatory pathways reported to be deregulated in ALS patients. While some induced adverse effects, most demonstrated safety and tolerability, but their efficacy has remained limited or negative. To date, most of the proposed molecules have a very large panel of targets and act on signaling pathways shared by many cells. While the two molecules approved for the market (riluzole and edaravone) are claimed to act through neuroprotective functions for MNs, the targets in immune modulating trials are broad, with possible uncontrolled impacts on disease pathology (Table 4). Today, eleven trials are recruiting and/or active, targeting one or several molecules and pathways. Among them, one ongoing phase II trial aims to target T cells with low-dose IL-2 (ld-IL-2); it has already shown encouraging results, with increased Treg cells (Camu et al., 2020). In another phase II trial, RNS60, an anti-inflammatory drug, targets the p-Akt prosurvival pathway and NF-κB (Paganoni et al., 2019). Compared to the ld-IL-2 trial, the latter has six defined cell targets and three possible others, a non-specificity that could be responsible for uncontrolled modulation of several pathways; fortunately, no adverse effects were reported. A phase II/III trial (ibudilast) uses a small molecule already tested in patients with multiple sclerosis; contradictory results were reported, and further studies are now necessary to understand the exact impact of ibudilast on patients (Chen et al., 2020;Babu et al., 2021). Finally, a phase III trial (ravulizumab) uses a monoclonal antibody targeting complement C5 to decrease inflammation; disappointing results were announced in August 2021 and the trial was discontinued because of a lack of efficacy. Several reasons may explain the failure of each trial, and they have to be taken into account in future trials. First, most clinical trials include patients with heterogeneous disease states [in terms of disease form (sporadic and familial), duration, and site of onset]; this is also among the reasons why most studies do not include ALS-FTD patients. Mechanisms involved in pathological progression at different disease stages may vary between patients.
Thus, modulating inflammation in the face of such heterogeneity may erase effects attributable to one subgroup. Edaravone is a good example: no beneficial effect was found in the initial clinical trial, conducted in a heterogeneous ALS population, but post hoc analysis demonstrated beneficial effects in a clinical subset of ALS patients (Okada et al., 2018;Abraham et al., 2019).
Table 3 (reconstructed from the flattened layout): proposed implications of immune cell types in animal models and patients. Macrophages — animal models: slowed disease progression and increased survival when mutant human SOD1 expression was decreased in macrophages/microglia of mutant SOD1 mice (Beers et al., 2006;Boillée et al., 2006); decreased sciatic nerve macrophage activation and CNS microglial activation when peripheral SOD1 G93A macrophages were replaced by more trophic ones in SOD1 G93A mice, and delayed disease onset when the replacement was done at the pre-symptomatic stage (Chiot et al., 2020); hyperactivation of the innate immune system with increased expression of IL-6 and IL-1β in C9orf72-deficient mice (O'Rourke et al., 2016). Microglia — animal models: slowed disease progression and increased survival when mutant human SOD1 expression was decreased in macrophages and microglia of mutant SOD1 mice (Beers et al., 2006;Boillée et al., 2006); patients: increased expression of ionotropic P2X7 in activated microglia in post-mortem spinal cords, serving as a receptor for neuronal-degeneration-induced inflammatory reactions (Yiangou et al., 2006). Dendritic cells (main intermediates between innate and adaptive immunity) — patients: DC observed in corticospinal tracts, with higher numbers of DC transcripts in patients with rapid compared to slower disease progression (Henkel et al., 2004), and decreased DC numbers in blood (Rusconi et al., 2017); FTD: no information. Mast cells (innate immunity) — animal models: accumulation of mast cells in the spinal cord of SOD1 G93A mouse and rat models (Kovacs et al., 2021); patients: mast cells observed in the spinal cord (Graves et al., 2004) and in the gray matter near altered microvascular elements and surrounding motor neuron cell bodies (Kovacs et al., 2021); FTD: no information. NK cells — animal models: NK cell infiltration in the SOD1 G93A mouse CNS, with decreased numbers of microglia and increased Treg cells in the spinal cord ventral horns of NK-depleted mice (Garofalo et al., 2020); patients: NK cells observed in the spinal cord and motor cortex of patients with sporadic ALS (Garofalo et al., 2020); FTD: no change of NK cell percentages in patients' blood (Busse et al., 2017). CD4 T cells (adaptive immunity) — animal models: increased numbers of Th1 and Th17 cells at late disease stages and decreasing Treg cells during disease progression in SOD1 G93A mice (Beers et al., 2011;Zhao et al., 2012); patients: CD4 T cells observed in spinal cords (Kawamata et al., 1992;Engelhardt et al., 1993), with lower FoxP3 (Treg) and GATA-3 (Th2) and higher IFN-γ (Th1) mRNA levels in patients with rapid compared to slow disease progression (Henkel et al., 2013). CD8 T cells — animal models (FTD): abolition of CD8 infiltration in the cortex and restoration of cognitive capacity in the THY-Tau22 mouse model upon peripheral T cell depletion (Laurent et al., 2017); patients: no consensus on an increase or decrease of CD8 T cell numbers in blood (Mantovani et al., 2009;Rentzos et al., 2012;Murdock et al., 2017;Jin et al., 2020); CD8 T cells observed in the cortex of FTD patients with the tau P301L mutation (Laurent et al., 2017). B cells — animal models: no impact of B cell deficiency on disease progression (Naor et al., 2009); increased Treg percentages at a specific time point after Breg adoptive transfer, but no impact on survival (Pennati et al., 2018); patients: detection of autoantibodies against spinal cord proteins in CSF (Niebroj-Dobosz et al., 2006); decreased B cell percentages in blood (Busse et al., 2017).
Second, the inflammatory reaction in patients is suspected to switch from protective to deleterious during disease progression, suggesting that a therapeutic window might exist to ensure treatment efficacy. This parameter remains very difficult to monitor, as there is no validated biomarker able to assess the disease state. Fortunately, recent advances in imaging techniques bring new hope for future trials. For instance, a recruiting phase II clinical trial will evaluate the brain microglial response in ALS patients by TSPO binding, following multiple doses of BLZ945, an antagonist of the CSF1 receptor, using PET with [11C]-PBR28 (NCT04066244). Other closely monitored markers already measured in patients are the concentrations of neurofilament light chain (NF-L) and phosphorylated neurofilament heavy chain (pNF-H) in blood and CSF (Gaiani et al., 2017;Benatar et al., 2018;Huang et al., 2020). A consensus on pNF-H as an early biomarker should make it possible to better follow the effects of any drug on specific markers (Benatar et al., 2020). The future relies on the urgent need to identify specific biomarkers linked to the different ALS forms and to determine when a given molecule can have a positive impact on disease progression in each patient. Finally, as 90% of ALS cases are sporadic, diagnosis occurs when symptoms are already present and inflammatory events are already well advanced. In this situation, being able to follow markers in patients before any symptom appearance could bring very important insights. Chipika et al. (2020) performed a systematic review of existing presymptomatic studies and showed that these studies suffered from small sample sizes and a lack of controls, and that cohorts were rarely followed until symptom manifestation. The objectives of such studies are to enable earlier diagnosis and thereby earlier therapeutic interventions or enrollment in clinical trials with homogeneous sub-groups of patients. Personalized therapies are currently emerging, as exemplified by the antisense oligonucleotide tofersen for SOD1 ALS patients (Miller et al., 2020) and even for presymptomatic SOD1 carriers in a new ongoing trial (NCT04856982).
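Biomarker and stratification work of the kind discussed here usually quantifies progression with the ALSFRS-R slope, often approximated at diagnosis as ΔFS = (48 − ALSFRS-R at diagnosis) / months since symptom onset, 48 being the maximal score. The helper below implements that widely used estimate; the patient values are hypothetical.

```python
def delta_fs(alsfrs_r_at_diagnosis, months_since_onset):
    """Estimated ALSFRS-R decline rate (points/month) at diagnosis.

    48 is the maximal (fully functional) ALSFRS-R score, so the
    numerator is the number of points already lost since onset.
    """
    if months_since_onset <= 0:
        raise ValueError("months_since_onset must be positive")
    return (48 - alsfrs_r_at_diagnosis) / months_since_onset

# Hypothetical patients: (ALSFRS-R at diagnosis, months since onset).
for score, months in [(44, 8), (38, 10), (30, 6)]:
    print(f"score {score}, {months} mo: dFS = {delta_fs(score, months):.2f} pts/month")
```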
Cell Transplantation
This second therapeutic approach involves the transplantation of specific cell types into patients. More than 30 trials are listed on https://clinicaltrials.gov, and Morata-Tarifa et al. (2021) provided a meta-analysis of stem cell therapy in ALS. Among the cell types available for transplantation, mesenchymal stem cells (MSC) have been the most widely tested as autologous transplants, through different techniques and routes of administration (spinal cord, frontal lobe, and intrathecal) (Mazzini et al., 2008;Deda et al., 2009;Kim et al., 2014). MSC are interesting candidates as they are easy to culture in vitro and can be differentiated into bone stroma, cartilage, ligament, and fat depending on the surrounding molecules (Deans and Moseley, 2000;Verboket et al., 2018). Moreover, these cells express cytokines and growth factors that could contribute therapeutically to neuronal protection and act on local inflammation (Djouad et al., 2007;Suzuki et al., 2008). Besides MSC, bone marrow cells (BMC) have also been transplanted into patients (Deda et al., 2009;Ruiz-López and Blanquer, 2016). These cells are a mixture of several cell types, containing lymphocytes, monocytes, and progenitor cells, giving them a strong regenerative potential (Verboket et al., 2018). Preclinical studies in animal models have shown safety and tolerability after transplantation of both cell types, as well as beneficial effects on pathogenic signatures (Kim et al., 2010;Lewis and Suzuki, 2014). (Note from the Table 4/Figure 2 legend: reported drug targets were in some cases completed with theoretical targets, based on target expression on specific cell types indexed at proteinatlas.org, when no information was found in the literature.) In ALS trials, conclusions about these therapies have been difficult to draw, especially because of the small numbers of patients included (Mazzini et al., 2008;Deda et al., 2009;Karussis et al., 2010). The analysis of Morata-Tarifa et al. (2021) showed that the best results were obtained with intrathecal injections of MSC, but with only transient positive effects on clinical progression (ALSFRS-R scores). That said, a major drawback of these transplantations has been that respiratory function was negatively impacted after all interventions. As described in the previous section, a clinical trial administering IL-2 was launched in order to boost Treg production in ALS patients; no adverse effects were reported and an increased Treg number was observed (Camu et al., 2020). Thus, transposing this approach to cell therapy has been considered.
Thanks to protocols allowing ex vivo Treg expansion (Alsuliman et al., 2016), a phase I clinical trial was launched to test the transplantation of ex vivo expanded Tregs. First results suggested safety, without clear conclusions on efficacy, as the study was under-powered and had no placebo or controls. Nevertheless, in the three transplanted patients, Treg transplantation induced an increase in Treg percentages, an improvement of Treg suppressive functions, and a slowing of the decline in the ALSFRS score; unfortunately, these effects disappeared between rounds of Treg cell injections (Thonhoff et al., 2018). Nonetheless, a phase II trial including 12 patients with placebo controls is currently under way, despite protocols that are cumbersome for patients. Such autologous cell transplantation therapies could be very interesting, as they appear adaptable to a large number of ALS patients. However, for all proposed cell types, it seems imperative today to return to preclinical studies to optimize cell type choices and administration routes and to better understand cell distributions and mechanisms of action.
iPSC TO MODEL IMMUNE REACTIVITY IN AMYOTROPHIC LATERAL SCLEROSIS AND FRONTOTEMPORAL DEMENTIA
As described above, immune reactions involving several cell types arise during disease progression in patients with ALS and FTD. As inflammatory responses can be beneficial or deleterious depending on signals in their environment, longitudinal studies would be very informative. Moreover, the specific mechanisms involved in direct or indirect interactions between affected neurons and the different immune cells, which could provide new targets for clinical trials, remain to be deciphered. However, whereas blood sampling may give access to some immune cells of patients, biopsies of the brain and spinal cord regions in which neurons are affected, and in which tissue-resident or infiltrating cells would have to be analyzed, are not possible. Thanks to iPSC technology, described 14 years ago, researchers now have access to human pluripotent cells that can in theory be differentiated into any cell type of the body. "In theory," because we now know that each protocol to differentiate iPSC into a specific cell subtype requires years of technical development. Nevertheless, iPSC offer a unique opportunity to study intrinsic defects in iPS-derived mutant cells and interactions between different cell types in 2D and 3D co-cultures.
iPS-Derived Neurons
Until now, the vast majority of studies have focused on iPS-derived neurons. Many reviews already present the unique capacity of iPS-derived neurons to capture key features of ALS and FTD in MNs and cortical neurons, respectively (Bohl et al., 2016;Lee and Huang, 2017;Hawrot et al., 2020;Lines et al., 2020). At first, protocols only allowed the production of generic neurons with very poor purity. As the technology advanced, many neuronal subtypes could be generated, helping to clarify the dichotomy between the ubiquitous expression of mutant genes in patients' cells and the death of specific neuronal subtypes (Sances et al., 2016). Improvements are still in progress to obtain the most specific neuron subtypes in culture. For MNs, for example, it is now possible to determine in culture whether generated MNs belong to the lateral or medial motor column (LMC or MMC) (Amoroso et al., 2013).
Recently, Mouilleau et al. (2021) deciphered mechanisms governing the expression of HOX genes involved in MN identities along the rostro-caudal axis of the spinal cord. These findings open the way to generating specific cervical-to-lumbar MNs of the LMC or MMC, which may be differentially affected in ALS patients. Interactions between neurons and astrocytes have also been reported in some publications (Lee and Huang, 2017;Lines et al., 2020), with differing results regarding astrocyte toxicity toward MNs depending on the ALS form studied, showing the need for further investigation.
Macrophages and Microglial Cells
Thanks to concomitant advances in the knowledge of macrophage biology, origin, and diversity, protocols to generate macrophages have been progressively improved. For ALS, two macrophage subtypes are of interest: microglial cells, the macrophages of the CNS, and peripheral macrophages located along the MN axon from the ventral root to the NMJ. For FTD, only microglial cells are of interest at first sight. To obtain macrophages/microglia from patients, the first protocols were based on human monocytes isolated from blood and activated with cytokines in vitro. However, macrophages/microglia gradually lose their tissue identity in culture, and it is now known that such macrophage-like cells cannot fully model tissue-resident macrophage populations arising from embryonic precursors (Lavin et al., 2014;Hagemeyer et al., 2016;Lee et al., 2018). Moreover, macrophage activation exists as a spectrum of phenotypes and functional states that is challenging to reproduce in culture (Ginhoux et al., 2016). As an alternative to blood cells and as an unlimited source of patient-specific cells, embryonic stem cells (ESC) and iPSC were tested for their capacity to be differentiated into macrophage-like cells. A great difficulty in developing such iPSC protocols was the capacity to generate tissue-resident macrophages, in particular macrophages of the CNS. Indeed, microglia and peripheral macrophages have different developmental origins and live in very different environments in vivo, suggesting that different protocols must be set up to mimic the diverse macrophage subtypes. Many reports and reviews describe and compare protocols for generating functional macrophage-like cells (Haenseler and Rajendran, 2019;Hasselmann and Blurton-Jones, 2020;Hedegaard et al., 2020;Lyadova et al., 2021). Clearly, iPS-derived macrophages/microglia require specific environmental signals for proper differentiation and maintenance of their identity (Bohlen et al., 2017;Gosselin et al., 2017), suggesting that, for ALS and FTD studies, iPS-derived microglia have to be co-cultured with MNs or cortical neurons, respectively. Phenotypic defects of iPS-derived macrophage/microglia monocultures have been observed in different models (Haenseler and Rajendran, 2019). Zhao et al. (2020) studied iPS-derived macrophages from C9ORF72 (G4C2)n patients and control subjects and compared their immune-suppressive functions; these were similar in ALS macrophages and controls, suggesting that this ALS mutation does not influence this macrophage function. Other reports have described (i) 2D co-cultures of microglia with neurons to mimic their neural environment, (ii) microglia generation in brain or spinal cord organoids, or (iii) xenotransplantation of microglia-like cells (Hasselmann and Blurton-Jones, 2020;Fattorelli et al., 2021;Tan et al., 2021).
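A recurring step in these studies is checking that derived cells actually acquire a microglia-like identity. In transcriptomic data this is often done by scoring cells on a small panel of microglia-enriched genes (e.g., P2RY12, TMEM119, CX3CR1). The sketch below computes a naive mean z-score per cell over such a panel; dedicated pipelines (e.g., scanpy's score_genes) additionally correct against control gene sets, and the toy expression values here are invented.

```python
import numpy as np

# Commonly used microglia-enriched marker genes.
MICROGLIA_PANEL = ["P2RY12", "TMEM119", "CX3CR1"]

def panel_score(expr, genes, panel):
    """Mean z-score over a marker panel, one value per cell.

    expr  : (cells x genes) matrix of normalized expression
    genes : gene names matching expr's columns
    panel : marker genes to score
    """
    expr = np.asarray(expr, dtype=float)
    idx = [genes.index(g) for g in panel if g in genes]
    z = (expr - expr.mean(axis=0)) / (expr.std(axis=0) + 1e-9)  # per-gene z-score
    return z[:, idx].mean(axis=1)

# Toy matrix: 3 cells x 4 genes (values invented; ACTB as a housekeeping gene).
genes = ["P2RY12", "TMEM119", "CX3CR1", "ACTB"]
expr = [[5.0, 4.0, 3.5, 8.0],   # microglia-like cell
        [0.2, 0.1, 0.3, 8.2],   # non-microglial cell
        [4.5, 3.8, 3.0, 7.9]]
print(panel_score(expr, genes, MICROGLIA_PANEL))
```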
Current reports on 2D cultures of neurons with iPS-derived microglia have examined microglial morphology and migration, inflammatory responses, and clearance capacity. Similarly, in 3D cultures and organoids, observations have concerned the migration, activation, proliferation, and physical interactions of microglia with other cells (Haenseler and Rajendran, 2019). Most current studies have focused on adding microglia to organoids and on modeling Alzheimer's or Parkinson's defects, demonstrating that the different iPSC-based models could bring new knowledge to ALS and FTD. Recently, the generation of microglia-like cells in human sensorimotor organoids derived from ALS iPSC was reported (Pereira et al., 2021), proving that these cells can arise and be phenotypically studied in these models.
Dendritic Cells, Mast Cells and Natural Killer Cells
To study DC, it is possible, as for macrophages, to generate primary DC from peripheral blood-derived monocytes and then mature them into distinct DC subsets in vitro (Romani et al., 1994;Sallusto and Lanzavecchia, 1994). In their 2020 review, Ackermann et al. (2020) described the different protocols to generate DC and their subpopulations from iPSC. These DC were shown to be morphologically and functionally similar to their in vivo counterparts. Interestingly, the generation of DC offers the possibility of investigating their interactions with T cells derived from the same patient, as iPS-derived DC were shown to stimulate allogeneic naïve T cells and autologous antigen-specific CD8+ T cells. These new cellular tools will be very helpful considering the limited availability of DC in blood and tissues. Few protocols exist to differentiate iPSC into mast cells (Kovarova et al., 2010;Igarashi et al., 2018;Kauts et al., 2018;Ikuno et al., 2019). The first protocols were long, had low yields, and produced immature cells. Thanks to developmental studies (Gentek et al., 2018;Li et al., 2018), the most recent protocol uses a sequential co-culture system that allows short-term cultures and the efficient production of scalable quantities of functional mature mast cells exhibiting strong innate immune responses (Bian et al., 2021). Like macrophages, mast cells mature in tissues under the influence of the local environment, suggesting that co-cultures with neurons might be the most relevant study model for ALS and FTD. For NK cells, a recent review summarizes the protocols to obtain these cells from human iPSC (Shankar et al., 2020). iPS-derived NK cells were shown to be phenotypically similar to primary NK cells, as well as in their effector functions. CD56 is a marker often used to define human NK cells (murine NK cells are CD56-negative). Two subpopulations (dim and bright) have been identified and play different roles: CD56-bright cells exhibit high cytokine secretion, while CD56-dim cells exhibit high cytotoxicity. As it is unclear whether both of these subpopulations are generated from iPSC, further investigation will be necessary before using these cells for disease modeling.
Adaptive Immune Cells
Whereas protocols to generate cells of the innate immune system were set up quite rapidly, the differentiation of iPSC into cells of the adaptive immune system has proven much more challenging (Montel-Hagen and Crooks, 2019). For B lymphocytes, the first protocols are only now emerging and model only the earliest stages of B lineage development (French et al., 2015;Böiers et al., 2018;Richardson et al., 2021).
For T lymphocytes, whereas the first stages of iPSC differentiation into T progenitors were rapidly established, some cell specificities are difficult to reproduce in vitro to obtain mature T lymphocytes. A particularity of the T-cell lineage is that the first stages of its differentiation occur in the thymus. To reproduce the thymic environment and differentiate iPSC into T progenitors, co-cultures were performed between iPSC and stromal cells expressing a Notch ligand; Notch was shown to be essential to induce T cell differentiation of progenitors, at the expense of B-cell differentiation. Recently, a stromal-cell-free protocol was developed to generate the T cell lineage from blood-derived CD34-positive cells (Nianias and Themeli, 2019;Netsrithong et al., 2020), a strategy that now has to be transposed to iPSC differentiation protocols. Additionally, T lymphocytes express at their surface a repertoire of different T cell receptors (TCRs), which is crucial for antigen-specific responses to the unlimited number of unknown pathogens encountered throughout life. Interestingly, iPS-derived T progenitors were shown to express a broad TCR repertoire (Chang et al., 2014), suggesting that each iPSC clone has its own TCR repertoire. Current limitations of iPSC protocols lie in the maturation of T lymphocytes and their positive selection, which consists in the maturation of CD4-CD8 double-positive lymphocytes into single-positive cells. Whereas iPSC protocols have not clearly demonstrated an ability to mimic this positive selection, a recent improvement was proposed with the generation of 3D artificial thymic organoids, which allowed the generation and maturation of conventional T cells. Bearing in mind the different roles played by T cell subtypes in neurodegenerative diseases, it will now be crucial to generate iPS-derived Th1, Th2, Th17, or Treg cells. Once generated, these cells will be of great interest for ALS and FTD modeling.
Modeling Immune Reactivity With Induced Pluripotent Stem Cells
In ALS and FTD, the original neuron-centered picture was that the degenerating neuron sends out danger signals that alert cells in its environment. Thus, for a long time, the majority of studies focused on analyzing neurons and on how to prevent their degeneration (in patients, in animal models, and in cellular models), without much consideration for other cells. In this context, iPSC-derived neurons made it possible to study different forms of ALS and FTD, including sporadic cases, and to show the progressive appearance of intrinsic defects in patients' neurons. This is a major advantage of iPSC technology, which has revealed different chronologies of neuronal dysfunction from one patient to another (Fujimori et al., 2018), something that cannot be followed in individual patients. Thanks to iPSC modeling approaches, new therapeutic targets have been identified and are currently being tested in patients (Wainger et al., 2014). At the same time, genetic studies have shown that the majority of genes mutated in the different forms of ALS and FTD are expressed ubiquitously, suggesting that all cells in the body can be altered in the pathology. The picture has therefore evolved into an increasingly decentered one, in which the neuron is no longer the central element but one element among others.
Understanding the interactions between cells has therefore become a crucial issue in the study of neurodegenerative diseases and in the search for new therapeutic avenues. Among the cells of interest are the immune cells known to be activated in ALS and FTD. Unfortunately, as with neurons, these cells are difficult to access in patients. Thanks to iPSC-based protocols, it is now possible to study not only the intrinsic defects of these cells but also their interactions with neurons. It becomes possible to study inflammatory responses according to the normal or degenerative state of neurons, with the prospect of understanding how inflammation evolves and could therefore be controlled. These analyses, combined with longitudinal studies in patients, could help identify the therapeutic window in which to intervene in the most beneficial way for patients.
CONCLUSION AND PERSPECTIVES
The main conclusion from decades of studies on patients and animal models is that ALS and FTD are heterogeneous diseases at several levels and have to be seen as a group of different diseases rather than a single one, at least regarding the mechanisms leading to neuronal degeneration. To date, ALS and FTD patients are often still classified according to external clinical signs, mainly because of a lack of early biomarkers that could help to better identify subgroups of patients. What we hope to have shown in this review is the high complexity of intermingled inflammatory events, a complexity that may extend more broadly to all neurodegenerative diseases. Many studies have revealed inflammatory signs in ALS and FTD patients, in animal models, and now also in iPSC models. Nonetheless, we still do not really understand whether inflammation is a driver or a consequence of the pathology. Although some key aspects are starting to be illuminated, questions remain: when exactly is inflammation beneficial or detrimental? Which immune cells are responsible for the inflammatory responses? How do inflammatory events differ between the diverse genetic forms, especially in patients harboring mutations in genes known to be linked to immune functions? A disappointing observation today is that decades of clinical trials with immunomodulatory molecules have failed to prove beneficial effects. Some explanations linked to pre-clinical protocols may account for these failures, such as the lack of power of numerous studies and the use of a single animal model that might not recapitulate all disease forms. Also, in pre-clinical studies drugs are often administered at early or presymptomatic stages, which cannot be targeted in enrolled patients. In this review, we also wanted to draw attention to the tested immune modulatory drugs together with their unstudied potential targets that could impact the disease course. Indeed, most of the tested molecules act on generic targets, making it difficult to monitor the real effects of the therapeutics. Our conclusion is that we still lack diagnostic tools and knowledge regarding the inflammatory events in ALS and FTD, and more investigations are necessary before a treatment can be designed that efficiently modulates these events. The multicellular dimension of the diseases is now widely accepted, knowing that most of the genes involved are ubiquitously expressed; thus, the use of a single drug targeting one pathway may be outdated. More personalized medicine is beginning to appear as a means of integrating all these factors of heterogeneity.
Until recently, ALS and FTD animal models allowed the study of genetic forms of the diseases; a lack of models recapitulating sporadic cases remains. Additionally, the inaccessibility of patients' CNS cells and the rapidity of the neurodegenerative processes make longitudinal studies in patients difficult. In this context, iPSC offer unique opportunities to bypass these limitations. With the development of iPSC differentiation protocols to generate immune cells, researchers now have at their disposal the tools to ask questions about interactions between ALS- or FTD-affected neurons and immune cells in a human context. These models may also allow the study of sequences of events. Of course, these protocols are still new and imperfect: cultures are not always pure, some cell subtypes still cannot be generated, and iPSC are known to retain a more embryonic than adult genomic memory (as opposed to transdifferentiated cells, which do not pass through an embryonic stage during their generation). That said, the literature of the past several years has proven that it is possible to model diseases like ALS and FTD with iPS-derived neurons. In recent years, several drugs have been tested and approved for clinical trials based on preclinical studies of iPS-derived neuronal cells (Fujimori et al., 2018;Okano et al., 2020). With regard to innate and adaptive immune cells derived from iPSC, the first papers showed that the generated cells were functional, if more or less mature, and revealed some intrinsic functional defects, validating the use of these cells to identify further unknown mechanisms. To go one step further, co-cultures are necessary to study interactions with neurons. For several years now, co-cultures, 3D cultures (e.g., microfluidic chips) and 3D brain and spinal organoids (Hor et al., 2018;Tan et al., 2021) have been developed and are available to study interactions between the various immune cells and either MNs or cortical neurons, the cells mainly affected in ALS and FTD, respectively. Even though iPS-derived macrophages/microglia were the first immune cells to be generated, there is as yet no report of iPSC-based ALS or FTD modeling using macrophage/microglia-neuron co-cultures. In their recent study, Pereira et al. (2021) describe the generation of human sensorimotor organoids from ALS patient iPSC; these organoids contain microglia-like cells, but the study focused not on the phenotypes of these cells but on defects at the level of NMJs (Pereira et al., 2021). Another study describes the generation of cerebral organoids from the iPSC of three FTD patients carrying tau mutations (Bowles et al., 2021). Signs of neurodegeneration, fewer excitatory neurons, and elevated levels of inflammation were observed in 6-month-old cerebral organoids, showing that much of the damage seen in FTD could be recreated; moreover, neuron death could be prevented with an experimental drug. Even if microglial cells were not studied directly in these organoids, an inflammatory response was observed, suggesting that this pathological feature could be modeled in such 3D systems. Moreover, it was recently shown that exposure of brain models to serum mimics age-associated BBB breakdown, providing a platform to search for effective treatments for disorders with an age component such as ALS and FTD. As a final word, iPSC technology opens new prospects for better understanding and modeling inflammatory events in the different FTD and ALS disease forms.
Co-cultures are opening a new era of research with more integrated models, which we hope will lead to new ways of performing drug screening and offer adapted therapeutic opportunities for ALS and FTD patients.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
FUNDING
EL was supported by a fellowship from the French Ministry of Research and the doctoral school "Cerveau, Cognition, Comportement" (ED3C). LK received a salary from the Agence Nationale de la Recherche (ANR-AAPG2020 Neurovitas-ALS). This review was supported by Sorbonne Université, the Paris Brain Institute (ICM), the Institut National de la Santé Et de la Recherche Médicale (INSERM), and the Centre National de la Recherche Scientifique (CNRS).
v3-fos-license
2023-02-22T16:17:35.747Z
2023-02-01T00:00:00.000
257068625
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "339554688ea2fe733fdd192be796fc74103c8ca0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41500", "s2fieldsofstudy": [ "Chemistry", "Engineering", "Materials Science" ], "sha1": "aa7ef0b9936f2c3b0b5c21c05f24d3d7a7c07273", "year": 2023 }
pes2o/s2orc
Synthesis of 3D Porous Cu Nanostructures on Ag Thin Film Using Dynamic Hydrogen Bubble Template for Electrochemical Conversion of CO2 to Ethanol
Cu-based nanomaterials have been widely considered to be promising electrocatalysts for the direct conversion of CO2 to high-value hydrocarbons. However, poor selectivity and slow kinetics have hindered the use of Cu-based catalysts for large-scale industrial applications. In this work, we report on a tunable Cu-based synthesis strategy using a dynamic hydrogen bubble template (DHBT) coupled with a sputtered Ag thin film for the electrochemical reduction of CO2 to ethanol. Remarkably, the introduction of Ag at the base of the three-dimensional (3D) Cu nanostructure induced changes in the CO2 reduction reaction (CO2RR) pathway, which resulted in the generation of ethanol with high Faradaic Efficiency (FE). This observation was further investigated through Tafel and electrochemical impedance spectroscopic analyses. The rational design of the electrocatalyst was shown to promote the spillover of formed CO intermediates from the Ag sites to the 3D porous Cu nanostructure for further reduction to C2 products. Finally, challenges in the development of multi-metallic electrocatalysts for the direct catalysis of CO2 to hydrocarbons are elucidated, and future perspectives are highlighted.
Introduction
The rapid increase in atmospheric greenhouse gas concentrations has sparked research into the development of highly active and selective electrocatalysts for the electrochemical conversion of CO2 [1,2]. The operation of a typical CO2 catalyst system involves the electrochemical reduction of dissolved CO2 at the cathode, with a balancing oxidation reaction at the anode, which is often the electrolysis of water to O2 [3]. In aqueous media, oxygen is the only product generated during the anodic reaction (referred to as the OER); however, the cathodic products often arise through multiple intermediate reactions that depend on several key factors related to the catalyst properties (e.g., composition, surface morphology, chemical nature) [4,5]. To date, promising catalysts have been shown to generate C1 products (e.g., CO, HCOO−) with a high level of activity and selectivity, which has further motivated research into the development of catalysts that enable the direct and scalable reduction of CO2 to more valuable fuels (C2 and C2+) [6-8]. Within the transition group elements, Cu is the only known element with the capacity to directly electro-catalyze CO2 into multi-carbon products. Consequently, it has attracted considerable interest within the CO2 reduction reaction (CO2RR) research field over the last decade [7,9]. The reduction of CO2 to C2 and C2+ hydrocarbons via heterogeneous catalysts is heavily dependent on the interactions between the reactants or intermediate products and the active sites of the catalysts (i.e., *CO) [1,4,10]. For reactions to proceed, adequate binding energies must be present to prevent the premature desorption of intermediates while allowing further reduction toward higher-order hydrocarbons. Rossmeisl et al. described such binding-energy requirements, and the tuning of Cu-based porous structures through the use of secondary metals (Au, Ag, Zn, Sn, etc.) was also described by Kottakkat et al. Notably, the simultaneous bimetallic electrodeposition of Ag and Cu ions decreased the overpotential for CO production [28].
For this study, a class of 3D Cu-based porous nanostructures was systematically designed and evaluated for the electrochemical reduction of CO2 in a nearly neutral pH electrolyte, with a specific focus on the time and applied current density required for deposition. Further, the deposition parameters optimized in the first phase of this study were used in conjunction with a sputtered Ag thin film to maximize the utility of the 3D porous structure, where the spillover of CO intermediates from Ag sites could be harvested to target additional reduction at the pore walls of available Cu sites. While the FE of 33% achieved for HCOOH showed enhanced selectivity on the Cu foam catalyst, the Ag-modified catalyst exhibited remarkable selectivity for the generation of ethanol, with an FE of 35% at −1.0 V vs. RHE. The electrochemical kinetics of the Cu/Ag nanostructures were investigated through impedance (EIS) and Tafel analyses to gain further insights into the enhanced selectivity and activity of the novel catalyst.
Electrode and Electrolyte Preparation
A pristine Cu sheet was cut into 1 × 1 cm2 plates, with each plate spot-welded to a pure Cu wire to allow connection to the electrochemical instruments. Subsequently, the as-prepared substrates were coated with a chemically inert two-part epoxy (3M™ Canada) to insulate all exposed surfaces except for the 1 cm2 Cu face. The samples were then thoroughly washed in an acetone bath and sonicated for 20 min, followed by rinsing with isopropyl alcohol and MilliQ water to eliminate any industrial coatings and oil-based agents. Next, the substrates were mechanically polished to a mirror finish using a MicroPolish alumina-impregnated felt pad (0.05 µm, Buehler). Finally, the samples were chemically etched in a 2 M nitric acid solution for 180 s, rinsed thoroughly, and dried under a stream of Ar gas prior to use in the synthesis procedure. To assess the electrochemical performance of the catalysts, a solution of 0.1 M potassium bicarbonate (KHCO3) was freshly prepared before each experiment. The electrolyte solutions were initially purged with Ar gas and then purged continuously with the reactant gas (CO2) until a pH of 6.8 was obtained, indicative of a CO2-saturated 0.1 M KHCO3 electrolyte. To assess the activity of the catalysts in the HER regime, a 0.1 M potassium sulfate (K2SO4) solution was employed as the electrolyte, with the pH balanced to 6.8 using sulfuric acid (H2SO4).
Electrochemical Measurements
Voltammetry and impedance spectroscopy experiments were conducted using a potentiostat (SI 1287A, Solartron) equipped with an impedance analyzer system (Model 1260A, Solartron), configured in parallel. The CorrWare and ZPlot software packages were utilized for data collection and processing. For all experiments, the positions of the counter and reference electrodes remained constant to ensure minimal deviations. EIS analyses were performed in the frequency range between 1.0 MHz and 1.0 mHz, with an amplitude of 10 mV. The ZView software package was employed to fit the Nyquist plots. For all measurements, an Ag/AgCl (3.0 M KCl) electrode (CHI 111, CH Instruments) was utilized as the reference electrode (RE), while a custom-made Ti/Ta2O5-IrO2 electrode with a surface area of 3 × 1 cm2 was used as the counter electrode (CE) for improved stability.
All recorded potentials in this study were converted to the Reversible Hydrogen Electrode (RHE) reference scale according to the following equation:

E(vs. RHE) = E(vs. Ag/AgCl) + 0.0591 V × pH + E°(Ag/AgCl, 3.0 M KCl)

where E°(Ag/AgCl, 3.0 M KCl) = 0.210 V at 25 °C.
Product Characterization
To characterize the generated gaseous products, the cathode chamber of the H-cell was continuously purged in-line to a gas chromatography (GC) system equipped with TCD and FID detectors (Multiple-gas-#5, SRI Instruments), which allowed for the detection of carbonaceous products, as well as hydrogen, in the effluent stream. A customized gas-tight Teflon H-cell was employed for the product analysis. At the conclusion of the experiments, the electrolyte in the cathodic chamber was immediately collected and sealed for further characterization and analysis using a 1H NMR spectrometer (600 MHz Bruker Avance III NMR, Bruker). A 350 µL volume of the catholyte was added to a 350 µL internal reference of 0.05 wt.% 3-(trimethylsilyl)propionic-2,2,3,3-d4 acid sodium salt in D2O. To measure the total amount of the liquid-phase products generated from CO2RR, a chemical oxygen demand (COD) analysis was also conducted using 174-334 accu-TEST standard range (5-150 mg L−1) twist-cap vials, where the same catholyte sample batch used for NMR was introduced to a vial containing a chromic acid solution [29,30]. After the reaction, an assessment was performed using a portable spectrophotometer (HACH-DR 2800) operated at a 420 nm wavelength. In the COD analysis, the formed aqueous hydrocarbon compounds undergo oxidation according to the following:

CaHbOc + (a + b/4 − c/2) O2 → a CO2 + (b/2) H2O

where a, b, and c refer to the stoichiometric ratios of carbon, hydrogen, and oxygen, respectively, in the collected products. For each oxygen molecule, four electrons are transferred following the equation below:

O2 + 4H+ + 4e− → 2H2O

As a result, the total consumed charge (Q_COD) for the carbonaceous products present could then be calculated using the following equation:

Q_COD = 4F × (COD × V) / M(O2)

where F and V represent the Faraday constant and the volume of the solution used, respectively, COD is the measured oxygen demand per unit volume, and M(O2) = 32 g mol−1.
Fabrication of Cu Foam under Different Deposition Times
In a time-controlled deposition study, Cu foams were created from a solution containing Cu2+ ions under an applied constant current, with varying deposition times, in a two-electrode electrochemical system. A solution containing H2SO4 (200 mL, 2 M) and CuSO4·5H2O (200 mL, 0.1 M) was prepared at a 1:1 volumetric ratio to give final concentrations of 50 mM CuSO4 and 1.0 M H2SO4. A customized H-cell with two chambers separated by a glass frit was filled with 25 mL of the solution, followed by 25 min of continuous purging with Ar gas. This ensured that the HER and Cu reduction were the primary reactions in this system under an applied potential. A DC power supply (E3634A, Keysight Technologies) was calibrated with a constant current (CC) program for the different deposition times. Following preparation, the Cu samples were positioned in the cathodic chamber, and the electrochemical deposition proceeded under a constant current for 10, 30, 50, 80, and 100 s (Table S1). Under an applied voltage, hydrogen gas was vigorously generated and desorbed from the surface. The formation of observable bubbles close to the cathode surface provided a soft 3D template for the deposition of Cu.
Fabrication of Cu Foam under Different Deposition Currents
To systematically identify the optimal deposition current for the highest CO2 reduction performance, the total charge transfer of the highest-performing time-controlled sample was retained while the effects of the applied current on the electrochemical performance were assessed.
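As a quick illustration of this constant-charge scheme, the following minimal Python sketch computes the deposition time required at each current so that the total charge stays fixed; the 80 C value is the optimum reported below, and the code itself is an illustrative aid, not part of the original study.

```python
# Illustrative sketch (not from the paper): constant-charge scheduling for the
# current-controlled series, where deposition time is chosen so that I * t
# equals the same total charge for every sample.
TOTAL_CHARGE_C = 80.0  # optimized total charge transfer reported in the text

for current_mA in (200, 400, 600, 800, 1000):
    t_s = TOTAL_CHARGE_C / (current_mA / 1000.0)  # t = Q / I, in seconds
    print(f"{current_mA:5d} mA -> {t_s:6.1f} s")
# Output: 400.0 s, 200.0 s, 133.3 s, 100.0 s, 80.0 s
```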
The highest-performing sample was synthesized based on 80 C of charge transfer (Table S2). As such, 80 C was used to calculate the deposition times at 200, 400, 600, 800, and 1000 mA so as to retain the same total charge transfer under the different currents. The voltages measured while applying the target currents were 4.3 V, 6.7 V, 9.6 V, 10.6 V, and 12.1 V, respectively. The as-prepared Cu foam samples were gently soaked in a MilliQ water bath, followed by drying in a stream of Ar gas at ambient room temperature.
Synthesis of Ag-Modified Cu Foam
To introduce a secondary metal at the lowest level of the 3D porous Cu structure, a 200 nm Ag thin film was deposited onto the pretreated Cu substrate using magnetron sputtering. The coated 1 cm2 sample was then insulated following the same procedure described above. Next, the Cu nanostructures were grown onto the Ag-modified substrate using the optimized deposition parameters derived from the results obtained thus far. The sputtered layer was quantified at a thickness of ~200 nm atop the initial Cu substrate, which ensured adequate structural integrity to prevent delamination during the catalytic reactions.
Surface Characterization
In a typical bottom-up electrochemical deposition (nucleation and growth) system, the thickness of the grown film is closely correlated with the duration of the applied potential, a trend that is expected to be observed under the different deposition lengths used in the synthesis of the Cu nanostructures. This relationship deviates from linearity as a function of the deposition time, as the in operando increase in surface area dynamically reduces the applied current density. As a result, the reconstruction of surficial nanostructures can easily occur, which affects the availability of active sites. Chorkendorff et al. reported an electropolished porous Cu-based structure that had an FE of 14% for C2H4 and 5% for CH4 [31]. Similarly, Palmore et al. described the use of a soft-templated porous Cu, which yielded formate as its main product with an FE of 29% [27]. As such, a comprehensive investigation of the influence of the deposition parameters on the catalytic activity of this catalyst subgroup might explain the range of different products reported in the literature. For our first analysis, nanostructured Cu samples were grown to examine the direct impacts of the electrodeposition time and current on the CO2 reduction performance. SEM images of the mirror-finished Cu substrate are provided for comparison (Figure S1). The as-prepared samples were also imaged using an FE-SEM to characterize the surface nanostructures and investigate the morphological variations due to changes in the studied parameters (i.e., deposition time and current). Considering the time-dependent samples, low-magnification imaging revealed that the surface-level structures were not uniform across the entire sample until 50 s of deposition (Figure S2). Increasing the deposition time to 80 s produced a uniform surface structure across the entire substrate. Notably, it was observed that further increases in the deposition time yielded larger pore diameters, as well as greater pore wall thicknesses. High-magnification SEM images revealed dendritic growth, predominantly for the sub-50 s deposition times. Longer deposition times exhibited a shift toward the anisotropic densification of the initial dendritic branches, which increased the pore wall thickness (Figure S3).
The catalysts underwent vertical scanning interferometry (VSI) to obtain quantitative measurements of the porous structures by profiling their surfaces, which enabled the precise 2D elucidation of their physical properties. In addition, the optical roughness factor was calculated as the root mean squared (RMS) average of the height deviations from the mean surface using the following equation:

R_q = sqrt( (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} ( Z(x_i, y_j) − Z̄ )² )

where M and N are the dimensions of the scanning area in the X and Y pixel positions, for which the Z height values are optically measured, and Z̄ is the mean height. According to Figure S3, the measured film thickness (and, alternatively, the pore depth) demonstrated a linear relationship with deposition time up to 80 s. Additional deposition time (to 100 s) yielded a negligible increase in pore depth, which suggested a diminishing rate of Cu ion deposition. This was due either to insufficient molarity, to the HER dominating as a result of the much larger available surface area, or to a shift toward the densification of dendritic branches rather than layered vertical growth [27]. Additionally, an analysis of the roughness factor obtained through VSI depicted an increasing trend in parallel with the deposition times. This suggested the formation of new layers atop the initial structures, hinting at the densification and prolonged growth of the dendritic nanostructures that comprise the pore walls. Topological maps of the time-controlled samples revealed that the pore radii steadily increased with longer deposition times of up to 80 s; however, further deposition had the effect of reducing them. The SEM images of the samples deposited under different currents (Figure S4) suggested that the microstructured morphologies were dependent on the applied current. Initial observations revealed that the porous structures were highly ordered and repeatable across the surface of the sample. High-magnification SEM showed that lower currents produced nanostructures with rough and stepped edges, while increased currents resulted in rounder and softer corners/edges within the pore walls. Considering the deposition mechanism of a bottom-up growth approach, the applied current dictated both the dynamic rate of template formation and the rate of metal ion deposition, alongside mass-transport limitations. While the deposition rate of metal ions initially reached diffusion limitations, the in operando increase in surface area and the reduction in localized current density stabilized hydrogen bubbles into larger aggregates, leading to larger pores over prolonged deposition times or under higher currents. VSI analysis of the samples revealed an inverse trend in surface roughness and pore depth (Figure S5). As seen in Figure 1, the sample synthesized under a 200 mA deposition current showed the largest R_q and a pore depth of 60 µm, both of which were reduced by increasing the deposition current. Further, it was proposed that higher deposition currents contributed significantly to the HER, thereby potentially sterically impeding the availability of Cu2+ ions at the double layer. A summary of the VSI analysis of the pore profiles can be found in Table S3. As revealed through the roughness and pore depth analysis, the rapid generation and aggregation of hydrogen at the cathode surface contributed to the smoother edges and steps observed under larger applied currents. Among all the samples deposited under different currents, the radii at the bottoms of the pores remained at ~40 µm.
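To make the R_q definition above concrete, here is a minimal numpy sketch that computes the RMS roughness of a height map; the array values are synthetic placeholders, not VSI data from the study.

```python
# Minimal sketch of the RMS roughness (R_q) computation defined above.
import numpy as np

def rms_roughness(z: np.ndarray) -> float:
    """R_q: RMS deviation of the optically measured heights Z over an M x N scan."""
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))

rng = np.random.default_rng(0)
height_map_um = rng.normal(loc=0.0, scale=30.0, size=(512, 512))  # placeholder heights (um)
print(f"R_q = {rms_roughness(height_map_um):.1f} um")
```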
To validate the presence and stability of the Ag thin film throughout the deposition process, energy dispersive spectroscopy (EDS) was employed to obtain an elemental map of the pores. Figure 2a displays the EDS mapping of the f-Cu/Ag/Cu sample, showing the elemental composition of two candidate pores and confirming the Ag sites present at the bottom of the 3D f-Cu porous nanostructure. XRD was utilized to characterize the crystallinity of the as-prepared f-Cu nanostructures. As shown in Figure 2b, major reflections were detected at angles of 29.7°, 36.7°, 42.6°, 61.5°, and 73.7°, which were indexed to the copper(I) oxide planes of (110), (111), (200), (220), and (311), along with reflections at 43.5°, 50.7°, and 74.3°, which were indexed to the metallic Cu planes of (111), (200), and (220), respectively. Data for Cu2O (05-0667), Cu (04-0836), and Ag (04-0783) were obtained from the JCPDS database and simulated for comparison. The XRD fingerprints of all investigated samples were largely identical and exhibited outstanding facets across the various parameters tested. Figure 2 displays the crystallographic spectra of the f-Cu sample, showing numerous surface oxide (111) and (200) facets. As these data were acquired ex situ, oxide formation during transport was unavoidable; thus, the detection of copper oxides was largely attributed to this factor. Furthermore, the XRD pattern of the Ag-modified Cu substrate exhibited peaks for both Cu0 and Ag. Similarly, metallic Cu0 was also found on the as-deposited f-Cu sample extracted from the polycrystalline Cu substrate as a powder.
Due to the high overpotential deposition, less stable facets converged toward more stable facets through reconstruction. Overall, a range of crystalline facets was detected on the as-deposited Cu nanostructures, with a larger number of (111) and (200) facets corresponding to the dominant Cu facets.
ECSA and DLC
The electrochemically active surface area (EASA) may be obtained by measuring the double-layer capacitance (DLC) from cyclic voltammetry performed in a purely capacitive (non-faradaic) voltage region. The impacts of the electrodeposition parameters on the EASA were explored through scan-rate-dependent cyclic voltammetry in a non-faradaic potential window (Figure 3a). The capacitance of the double layer was then approximated by a linear fit of the current density against the scan rate. Subsequently, the capacitance values were normalized against a known smooth-finished surface to correlate the capacitance with quantitative surface roughness values (Figure 3b), in contrast to the geometric analyses performed by SEM and VSI. A summary of all DLC measurements is found in Table S4. The EASA analysis of the time-dependent f-Cu electrodes revealed an increase in the electrochemically active surface area with longer deposition times, which agreed with the geometric trend measured via SEM (Figure S2). A mirror-finished Cu plate was utilized as the roughness reference, with a double-layer capacitance of 6.29 µF cm−2. Relative to this reference, a 10× increase in the EASA was measured for the sample prepared with a 30 s deposition time, followed by a 50× increase when 100 s had elapsed. These measurements aligned with the optical measurements and calculations obtained by VSI, confirming the roughness observed through the VSI analysis (Figure S6).
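A minimal sketch of the capacitance extraction just described, assuming purely capacitive charging currents that scale linearly with scan rate; the current values are illustrative placeholders, while the 6.29 µF cm−2 reference is the mirror-finished Cu value quoted in the text.

```python
# Sketch of the double-layer-capacitance fit: in a non-faradaic window the
# charging current density follows j = C_dl * v, so C_dl is the slope of
# current density versus scan rate v.
import numpy as np

scan_rates_V_s = np.array([0.01, 0.02, 0.05, 0.10, 0.20])   # V s^-1
j_uA_cm2 = np.array([5.2, 10.1, 25.8, 51.0, 102.3])         # uA cm^-2 (placeholder)

C_dl_uF_cm2, _ = np.polyfit(scan_rates_V_s, j_uA_cm2, deg=1)  # slope has units of uF cm^-2
roughness_factor = C_dl_uF_cm2 / 6.29                          # vs. smooth Cu reference
print(f"C_dl ~ {C_dl_uF_cm2:.0f} uF cm^-2, roughness factor ~ {roughness_factor:.0f}x")
```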
A 200 mA deposition current demonstrated the highest capacitance, at 510.6 µF cm−2, and a surface roughness (RMS R_q) of 30.0 µm cm−2, which yielded an 80× increase in the availability of active sites compared with pristine Cu. A further increase in the deposition current lowered the capacitance, which was also confirmed through optical surface measurements using VSI. The topological and EASA studies suggested that the deposition time was the core parameter influencing the availability of surface sites among the fabricated samples. While maintaining the total charge transfer at 80 coulombs, a higher deposition current corresponded to a shorter deposition time and demonstrated a decreasing EASA. As a visual reference, the VSI models of the time-controlled samples were compiled into a brief video that illustrates the bottom-up growth over the various deposition time windows (Figure S7).
Oxide Passivation Layer
The electrodeposition of Cu ions generates a highly reactive Cu surface on which spontaneous oxidation can occur immediately upon exposure to air or disconnection from a bias source. As the reduction of the surface oxide layer during CO2RR can skew efficacy measurements, the catalysts were subjected to a pretreatment protocol to normalize the passivation layer prior to the CO2 reduction tests. In the recent literature, the intentional oxidation of Cu catalysts (often referred to as 'oxide-derived') has demonstrated potential improvements in selectivity toward C2H4 and C2H6. However, some studies have reached conflicting conclusions as to the influence and stability of oxide species at high overpotentials, where it is thought that most surficial oxide-derived species undergo in situ reduction prior to the catalysis of reactants. Huang et al. reported a study in which an initial oxide mass contribution of 10% from Cu2O was reduced to 5% following prolonged CO2 electrolysis [32]. Further work by Dau et al. demonstrated that surface oxide species were highly unstable and mostly reduced to Cu0 prior to the onset potential for CO2 reduction (~0.7 V vs. RHE) [28]. While the presence of surface oxides is unavoidable in reactive samples, the ambiguity of surface oxides between samples must be considered. As such, the prepared samples underwent a second-stage pretreatment cycle to ensure that the surface oxides were normalized between samples. Cyclic voltammetry (20 cycles) in the potential range below the reduction of CO2, at a scan rate of 100 mV s−1, was performed prior to the CO2 reduction experiments. Pretreatment cycling was conducted under the same conditions as CO2RR so that the CO2 reduction experiments could proceed without exchanging electrochemical vessels.
CO2RR Performance
To benchmark the electrocatalytic performance of the synthesized catalysts, linear sweep voltammograms (LSVs) were recorded in a CO2-purged 0.1 M KHCO3 electrolyte with a final electrolyte pH of 6.8. LSVs were also performed in Ar-purged 0.1 M K2SO4 to evaluate the catalytic performance under favorable HER conditions. The current profiles obtained from the two systems were then used to roughly approximate the expected faradaic efficiency (Figures S8-S10). The LSV experiments showed an increase in catalytic activity with longer deposition times of up to 80 s. In contrast, the 100 s deposition sample (f-Cu100s) performed at a lower current density than those with shorter deposition times (Figure 3).
While mass transport limitations may limit high-surface-area catalysts, in this case higher currents were achieved with the 80 s deposition sample, which confirmed higher mass-transport headroom in this system. However, these limitations were further amplified in 3D matrices, where the availability of protons could be restricted by the undesirable diffusion pathway between the double layer and the active internal pore sites. Further, comparisons of the onset potentials for CO2 reduction between the deposition-time-controlled samples revealed that they decreased as a function of the deposition time (Figure S8). While pristine Cu foil showed a CO2 reduction onset of −0.8 V vs. RHE, the 80 s deposition sample had an onset potential of −0.2 V, which differed significantly from the pristine sample. Notably, however, the f-Cu100s sample deviated from this trend by showing a more negative CO2 reduction onset potential than the other samples. An analysis of all the time-controlled samples revealed a strong preference for the 80 s deposition time, as this sample exhibited the highest currents and the lowest CO2 reduction onset potential. Therefore, 80 s was taken as the first optimized input for our systematic approach to assess additional parameters, and the corresponding charge transfer of 80 coulombs was used as the optimized value for the fabrication of the current-controlled f-Cu electrodes. Under the constant applied currents studied in the second stage, the total charge transfer was fixed at this value, which resulted in samples with the following parameters: 200 mA for 400 s, 400 mA for 200 s, 600 mA for 134 s, and 800 mA for 100 s. Figure S9 displays the electrochemical performance of the current-controlled samples in the CO2 reduction setup. Initial observations showed slight differences relative to the time-controlled samples. While the measured HER remained constant across the tested samples, the CO2 reduction performance showed a deviation of ~700 µA when tested at −1.0 V vs. RHE. The current-controlled samples exhibited relatively equivalent catalytic performance (Figure 4). The LSVs obtained under CO2-purged conditions exhibited a noisier current profile at higher applied overpotentials. This behavior might be caused by the generation and liberation of gas products. It is worth noting that poor diffusion of generated products away from the active sites within the 3D porous structure might further impede a stable electrochemical process. Additionally, a shoulder profile was observed near −1.09 V vs. RHE in the current-controlled samples, which was attributed to the in situ blockage of surface sites by generated gaseous products remaining on the catalyst surface.
The 200 mA deposition current sample demonstrated the highest electrochemically active surface area and surface roughness (RMS R_q) among all the tested samples. Therefore, this sample was employed to further assess the performance stability and for further optimization. It is well understood that the nanostructuring of surfaces dramatically increases the EASA, which results in increased current within the same geometric area. In 3D samples such as the electrodeposited f-Cu, the matrix of pores developed through the HER provided a unique architecture for the engineering of bimetallic structures. Specifically, it was concluded that the available area at the bottom of this 3D hierarchical structure could be further exploited to systematically target early-desorbed intermediates such as CO for synergistic re-catalysis at the Cu sites along the cylindrical structure. While the synergistic spillover effect is not yet fully understood, one-pot tandem catalysis systems can be designed to benefit from the selectivity of various transition metals to overcome monometallic catalysis bottlenecks, specifically the premature desorption of CO2 reduction intermediates. With this perspective, the unique incorporation of a secondary metal (Ag) to promote the desorption, re-adsorption, and further reduction of intermediates by the primary metal (Cu) was further investigated. Throughout the analyses and imaging of the as-deposited samples, it was observed that the bottom of the porous structure retained the initial substrate material quite well (Figure 2a).
This revelation inspired us to utilize the substrate region of the 3D nanostructure to incorporate a secondary metal, where potential spillover effects could be enhanced by the 3D structure of the pores. Using magnetron sputtering, a thin film of Ag (~200 nm) was deposited onto the pretreated Cu substrates, where the underutilized surfaces available at the bottom of each pore could potentially serve as CO generation sites. In this arrangement, the CO2 reduction intermediates formed at the Ag sites (*CO) were in proximity to active Cu sites, enabling the further reduction of CO to higher-order hydrocarbons. To the best of our knowledge, this configuration of Ag within 3D porous Cu nanostructures is the first studied case in the literature. As seen in Figure 5, the Ag-modified porous f-Cu electrode demonstrated higher catalytic performance compared to the non-modified Cu samples. Further, the electrochemical activity of the Cu substrate sputtered with Ag was compared against the f-Cu200mA and f-Cu200mA/Ag/Cu samples. This suggested a synergistic effect in the Ag-modified f-Cu200mA sample due to the specific interactions between the sputtered Ag and the fabricated porous f-Cu. Chronoamperometric (CA) experiments were performed under moderate overpotentials prior to the onset of the HER in a CO2-purged 0.1 M KHCO3 electrolyte. The anodic and cathodic chambers of the H-cell were separated using a cation exchange membrane (CEM) to enable the analysis of liquid products. Simultaneously, the cathodic chamber was connected to the GC input line, which allowed the gas-phase products to be characterized in operando. Chronoamperometry experiments proceeded for six hours to assess the stability of the catalyst under sustained activity (Figure 6). Steady-state currents sustained throughout the runs were achieved within 30 s of the experiment. Notably, a gradual decrease in current density was observed at −1.2 V vs. RHE. SEM images of the working sample prior to and following the chronoamperometry experiments were obtained to probe the stability of the surface nanostructures. The nanometric surface structures were shown to be identical and well intact post-operation (Figure S11).
For all chronoamperometry experiments under different overpotentials, the cathodic chamber was continuously purged with CO2, which allowed for the online measurement of gas-phase products via the connected GC system. Subsequently, following the completion of the experiment, the catholyte was immediately stored and sealed for NMR and COD analysis to ensure that no volatile components (i.e., ethanol, acetone) were lost through the sample transfer process (Figure 6b,c). The f-Cu200mA electrode primarily generated HCOOH at 33.1% FE, with small amounts of propanol and ethanol at 2.3% and 6.1% FE, respectively. Hydrogen was the main gaseous product, with minor amounts of CO measured at 2.9% FE. The non-modified porous f-Cu200mA was active for single-carbon products, which are typically categorized as being generated from a different reaction pathway than that of CO production. Tang et al. recounted an in situ Raman spectroscopy study of HCOOH and CO generation at Cu surfaces and concluded that the suppression of one pathway resulted in an increase in the catalytic activity of the other pathway [31]. In an analysis of the gas-phase products of the Ag-modified f-Cu electrode, the sample demonstrated a higher selectivity toward CO versus the non-modified Cu samples, with an FE of 59.5% and with H2 evolution suppressed down to an FE of 2.3% (Figure 6b). Furthermore, an analysis of the catholyte through NMR and COD detected ethanol with a measured FE of 35.2% and formate with an FE of 1%. Additionally, minor concentrations of propanol were measured across the applied potentials, which suggested an independent propanol production pathway enabled by the addition of Ag. For the Ag-modified Cu nanostructures, it was observed that the generated products differed from those of the range of Cu samples under study. At −0.6 V vs. RHE, only small concentrations of CO were quantified via in-line GC. An additional overpotential (−0.8 V vs. RHE) yielded the first measured quantity of ethanol. The highest faradaic efficiency for the Ag-modified sample was observed at −1.0 V vs. RHE for CO and ethanol, while a further increase in overpotential diminished the products and generated significant quantities of hydrogen. This shift to the HER at higher overpotentials was also reported by Ager et al., where the studied oxide-derived Cu sample demonstrated its highest performance at −1.0 V vs. RHE, and imposing an added 100 mV of overpotential reduced the efficiency from 54.5% to 35% [33]. As a result, that study asserted that the severe depletion of accessible CO2 at the active sites led to the domination of the HER at higher overpotentials. COD was performed on the same catholyte sample utilized for the 1H NMR analysis to validate and quantify generated liquid products that might be difficult to deconvolute from the NMR analysis (Figure S12). Both the COD and NMR analyses of the catholyte sample yielded the same total FE in generated liquid-phase products. An increment of 4% in the FE of total liquid products was obtained at −1.2 V vs. RHE via COD. Furthermore, small concentrations of propanol (PrOH) were resolved through the NMR analysis but were below the quantification limit. Consequently, the 4% deviance observed in the COD FE was attributed to the presence of propanol, which has been reported for Cu-based CO2 catalysis in the literature [5]. At the time of this study, there were limited reports of Cu-based catalysts that directly produced ethanol in near-neutral electrolytes. Our results herein suggest that this unique product pathway was enabled by the incorporation of Ag within a 3D porous Cu foam to amplify the spillover effect of the important CO intermediate.
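As a hedged sketch of the faradaic-efficiency arithmetic behind numbers like those above (FE = z·F·n/Q_total), the following Python snippet uses standard CO2RR electron counts; all numerical inputs are placeholders rather than measurements from this study.

```python
# Illustrative FE calculation for CO2RR products, plus the COD charge check.
F = 96485.0  # Faraday constant, C mol^-1

# Electrons per molecule when the product is formed from CO2 (or from H+ for H2):
Z = {"CO": 2, "HCOOH": 2, "C2H5OH": 12, "C3H7OH": 18, "H2": 2}

def faradaic_efficiency(product: str, n_mol: float, q_total_C: float) -> float:
    """Fraction (%) of the passed charge that went into the given product."""
    return 100.0 * Z[product] * F * n_mol / q_total_C

def q_cod(cod_g_per_L: float, volume_L: float) -> float:
    """Charge equivalent of a COD reading, assuming 4 e- per O2 (M_O2 = 32 g/mol)."""
    return 4.0 * F * cod_g_per_L * volume_L / 32.0

q_total = 120.0  # C, hypothetical charge passed during chronoamperometry
print(f"FE(EtOH) = {faradaic_efficiency('C2H5OH', 3.5e-5, q_total):.1f} %")
print(f"Q_COD    = {q_cod(0.08, 0.025):.2f} C")
```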
The extremely large EASA, leading to the rapid consumption of H+ cations, suggested a rapid localized increase in pH, where pH-dependent reaction pathways were uniquely enabled. A recent study by Goddard et al. concluded that CO2 reduction at pH 1 involved kinetically suppressed multi-carbon production pathways, whereas at neutral pH a typical intermediate was identified that shared both single- and multi-carbon pathways with identical favorability [34]. Their work also concluded that enhanced selectivity for multi-carbon products was observed at pH 12 due to kinetically blocked C1 pathways. In our case, the rapid consumption of H+ ions may have led to the same effect, where C2+ pathways were favored due to an increase in the local pH at the cathode surface. While the effect of the local pH on product selectivity is known to be critical, the addition of Ag within the pores would not impose a pH change locally or in bulk. As a consequence, a more plausible hypothesis for the change in product selectivity toward higher-carbon C2 products via the incorporation of Ag within f-Cu might be attributed to the stabilization of CO intermediates at the Ag-Cu sites. Recent computational DFT studies have demonstrated that the C1 and C2 product pathways were enabled via the stabilization of CO intermediates on an Au surface [35]. Since Au and Ag share a similar CO2RR pathway, it is reasonable to assume that the incorporation of Ag at the f-Cu 3D porous wells resulted in the recapture or shuffling of Ag-liberated *CO intermediates, thus facilitating further reduction to form EtOH and PrOH. To gain additional insights into the various reduction pathways observed in this system, electrochemical impedance spectroscopy (EIS) and Tafel plot analysis were performed to assess the adsorption and rate-limiting mechanisms of the catalytic process. The EIS experiments were carried out at potentials of −0.57 V and −0.61 V vs. RHE. For comparison, the Nyquist plots of the f-Cu200mA electrodes with and without the Ag modification are displayed in Figure 7. For clarification, the high-frequency portion of the EIS spectra is enlarged and presented as an inset in Figure 7. The equivalent circuit used for fitting the EIS data is also included as an inset in Figure 7, where R_s refers to the solution resistance, and R_CT1 and R_CT2 denote the charge transfer resistances associated with the two semicircles of the Nyquist plot, corresponding to the pre-adsorption of the CO2 species and the subsequent electrochemical reduction. The fitting results are listed in Table 1, showing that both R_CT1 and R_CT2 decreased with the increase in the cathodic potential from −0.57 to −0.61 V and that R_CT1 was significantly decreased with the Ag modification. It could be deduced that the incorporation of Ag enhanced the pre-adsorption kinetics of CO2, thus providing additional carbon-substrate-linked species whereby carbon-carbon (C-C) linking would be promoted via reactant or intermediate shuffling or spillover. Specifically, it can be hypothesized that the formation and availability of additional electroactive species locally at the Cu-Ag sites could be the key factor in promoting C-C linking, thereby favoring the further reduction of CO2 (and its intermediates) toward C1 and C2+ products.
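The following is an illustrative impedance model for the equivalent circuit described above: a solution resistance R_s in series with two charge-transfer arcs. The text does not specify the parallel elements, so constant-phase elements (a common choice) are assumed here, and all parameter values are placeholders, not the fitted results of Table 1.

```python
# Two-arc equivalent-circuit impedance, sketched for the circuit in the text.
import numpy as np

def z_cpe(omega, Q, n):
    """Impedance of a constant-phase element (assumed parallel element)."""
    return 1.0 / (Q * (1j * omega) ** n)

def z_model(omega, Rs, Rct1, Q1, n1, Rct2, Q2, n2):
    arc1 = 1.0 / (1.0 / Rct1 + 1.0 / z_cpe(omega, Q1, n1))
    arc2 = 1.0 / (1.0 / Rct2 + 1.0 / z_cpe(omega, Q2, n2))
    return Rs + arc1 + arc2

freq_Hz = np.logspace(6, -3, 200)  # 1.0 MHz down to 1.0 mHz, as in the text
Z = z_model(2 * np.pi * freq_Hz, Rs=10.0, Rct1=50.0, Q1=1e-5, n1=0.9,
            Rct2=400.0, Q2=1e-4, n2=0.85)
# A Nyquist plot of Z.real versus -Z.imag reproduces the two-semicircle shape.
```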
Tafel plots of the samples were generated to compare the Tafel slopes of the two samples (Figure 8). The Tafel analysis demonstrated that the Ag-modified Cu nanostructures exhibited a Tafel slope of 50.42 mV dec−1, appreciably smaller than that of the unmodified Cu nanostructures, at 120.55 mV dec−1. Generally, smaller Tafel slopes indicate faster reaction kinetics, which are often correlated with higher catalytic activities. In the recent literature, Tafel slopes are commonly compared against rate expressions to deduce the reaction pathways. Xu et al. [36] recently suggested rate expressions and their related Tafel slopes: an initial activation step of CO2 at 118 mV dec−1, followed by the subsequent reaction of CO2 to COOH at 59 mV dec−1, and a final reaction step to CO at 39 mV dec−1. Analyzing the Tafel slopes measured in our study, the data suggested that the initial activation step was the rate-determining step that bottlenecked the reaction process. The calculated Tafel slope of 120.55 mV dec−1 was within the range of error of the literature value of 118 mV dec−1, which suggested that the rate-limiting step was the initial activation of CO2. In the case of the Ag-modified f-Cu, the measured Tafel slope of 50.42 mV dec−1 resided between the suggested second and third steps from the literature. While it is challenging to pinpoint the rate-limiting step, the lower Tafel slope upon the incorporation of Ag strongly hinted at synergistic interactions between the Ag at the bottoms of the pores and the Cu pore walls.
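A minimal sketch of how a Tafel slope like those above is extracted: fit the linear region of potential versus log10(|j|), so the slope comes out in mV per decade. The data points are placeholders, not values digitised from Figure 8.

```python
# Tafel-slope extraction by linear fit of E versus log10(|j|).
import numpy as np

E_mV = np.array([-500.0, -550.0, -600.0, -650.0, -700.0])  # potential, mV
j_mA_cm2 = np.array([0.08, 0.20, 0.52, 1.30, 3.30])        # current density, mA cm^-2

slope_mV_dec, _ = np.polyfit(np.log10(j_mA_cm2), E_mV, deg=1)
print(f"Tafel slope ~ {abs(slope_mV_dec):.0f} mV dec^-1")
```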
Conclusions
Nanostructured 3D porous Cu foams were systematically synthesized and analyzed to determine the optimal deposition parameters for a porous Cu catalyst. Further tuning through the novel introduction of Ag within the Cu foam pores was shown to enhance the selectivity and activity of the composite catalyst toward the generation of higher-order C2+ products. Electrochemical and characterization analyses via XRD, SEM, and VSI unveiled a 3D porous framework with an extensive electrochemically active surface area. This work reports on a series of Cu-based nanostructured foams that exhibited enhanced catalytic activities, product selectivities, and onset reduction potentials. Analyses of the gas- and liquid-phase products yielded from the chronoamperometry experiments detected C1 products at an applied potential of −0.8 V vs. RHE, where HCOO− was the main product at 33% FE. The enhanced selectivity and onset potential were studied via EIS and Tafel analyses. While it has been reported that C2 and C2+ reaction pathways are kinetically favorable at pH > 7, the suppression of CH4 formation and the preference toward HCOO− measured through the product analysis suggested an inadequate supply of the CO intermediate for C-C coupling. As such, the targeted inclusion of Ag within the cylindrical framework of the Cu foam was proposed to enhance the catalytic performance of the catalyst via the enhanced spillover of CO at the Ag active sites. The product selectivity of the Ag-modified Cu foam exhibited a unique preference toward EtOH, with the highest achieved faradaic efficiency of 35% at −1.0 V vs. RHE. To further understand the increase in selectivity and performance, EIS and Tafel experiments were performed, which suggested a shift in reaction pathways, as well as a lower charge transfer resistance, when Ag was incorporated. The results of this study demonstrate strong potential for the utilization of porous Cu nanostructures deposited on Ag thin films as a highly selective and tunable electrocatalyst for scalable applications in the electrochemical reduction of CO2.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/nano13040778/s1. Figure S1: SEM images of the mirror-finished Cu substrate; Figure S6: Plot of current density versus scan rate for DLC derivation of current-controlled samples; Figure S7: Stitched 3D models of as-deposited time-controlled samples to visualize the impacts of time on growth structures; Figure S8: (left) LSV plots of f-Cu samples synthesized under time-controlled deposition.
(right) Approximated FEs from differences in Ar (red) and CO2 (grey)-saturated electrolytes; Figure S9: (left) LSV plots of f-Cu samples synthesized under current-controlled deposition; (right) approximated FEs from differences in Ar (red) and CO2 (grey)-saturated electrolytes; Figure S10: LSV plots of the f-Cu200mA/Ag sample tested under CO2 (red) and Ar (grey) conditions; Figure S11: SEM images of the f-Cu200mA/Cu sample (a,b) before and (c,d) after the chronoamperometry stress test; Figure S12: Comparison of detected liquid products of the f-Cu200mA/Ag sample measured through NMR and COD techniques; Table S1: Parameters for the synthesis of the tested time-controlled f-Cu samples, with measured current densities and corresponding efficiencies; Table S2: Parameters for the synthesis of the tested current-controlled f-Cu samples, with measured current densities and corresponding efficiencies; Table S3: Summary of the VSI analysis of the pore profiles of the deposited samples; Table S4: Summary of all double-layer capacitance values measured through steady-state cyclic voltammograms. Roughness factors are normalized against a mirror-finished smooth Cu plate with a 1 × 1 cm2 geometric surface area.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
v3-fos-license
2019-04-03T13:08:03.555Z
2018-10-05T00:00:00.000
91824104
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ajol.info/index.php/tjpr/article/download/178170/167533", "pdf_hash": "9cf7e75b23541a17533e798aeffad2955ba504bc", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41501", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "44c0c2198146388f6fbdbfa18ba0878620745350", "year": 2018 }
pes2o/s2orc
Antioxidant activity and hepatoprotective effect of Cichorium intybus (Kasni) seed extract against carbon tetrachloride-induced liver toxicity in rats
Purpose: To assess the antioxidant and hepatoprotective activity of the aqueous-methanol extract of Cichorium intybus (C. intybus) seeds against carbon tetrachloride (CCl4)-induced liver toxicity in albino Wistar rats. Method: The seed extract of C. intybus was prepared in aqueous methanol (20:80) via a Soxhlet solvent extraction process. CCl4 (0.8 mL/kg) was administered to induce hepatic damage in Wistar rats. The seed extract (100, 250, and 500 mg/kg doses) and a 25 mg/kg dose of silymarin (as the standard drug) were administered orally to separate groups of albino Wistar rats for 14 days. Blood samples from the rats were analyzed for biochemical markers of hepatic injury. Tissue samples were subjected to histopathological studies and analyzed for liver antioxidants. Results: The biochemical markers revealed that the rats treated with the extract (500 mg/kg dose) showed maximum elevations of catalase (48.90 μmole of H2O2 consumed/min/mg protein), glutathione peroxidase (22.1 mg GSH consumed/min/mg protein), superoxide dismutase (14.2 units/min/mg protein), and reduced glutathione (GSH, 18.1 μmole/mg protein). Serum biochemical parameters, including serum glutamate oxaloacetate transaminase (SGOT), serum glutamate pyruvate transaminase (SGPT), alkaline phosphatase (ALKP), and direct bilirubin, were significantly (p < 0.01) increased in the CCl4-treated groups. Oral administration of the different doses of C. intybus seed extract significantly (p < 0.01) protected the hepatic cells from impairment. The biochemical markers and hematological parameters were also normal in the extract-treated rats, as in the standard (silymarin) and control groups. Conclusion: The results show that C. intybus is potentially a good natural source of hepatoprotective and antioxidant agents.
INTRODUCTION
Liver disorders (jaundice, fatty liver, cirrhosis) commonly affect human health worldwide. Approximately 70-75% of the world's population depends on herbal medicines for curing diseases because they are cost-effective, less toxic, and easily available [1]. Major metabolic activities of the body take place in the liver [2]. Hepatic disorders usually develop during the removal of toxic and harmful chemicals by the liver [3]. Plants and their various parts (stems, roots, leaves, flowers, and fruits) are well known for the treatment of hepatic disorders [4,5]. Only approximately 1-2% of plant species have been explored properly [6]. C. intybus (Family: Asteraceae), commonly known as chicory, is a small perennial herb that is usually bushy in nature [7]. The flowers of C. intybus are usually blue, light purple, or lavender, although white or pink flowers have also been reported, albeit very rarely [8]. The plant is usually found growing wild along roadsides in Europe. C. intybus goes by different names when grown for its leaves, such as leaf chicory, endive, or French endive [7]. Chicory has been used in conventional medicines throughout the world for hundreds of years [8]. In Iran, the parts of C. intybus other than the roots are mostly used to purify blood [9]. Extracts of chicory seeds have also been used in Ayurvedic medicines for the treatment of hepatic disorders [9,10].
C. intybus is also listed as a domestic plant, usually grown for food, animal fodder, and traditional medicines [11]. Several reports are available on the chemical composition of C. intybus seeds, but little attention has been given to their hepatoprotective effects [11]. Therefore, we designed this research to evaluate the hepatoprotective and antioxidant potential of C. intybus seeds against CCl4 (carbon tetrachloride)-induced hepatic damage in rats.
Collection of plant material
The seeds of C. intybus were collected from Lahore (Punjab, Pakistan) in March 2017. Prof. Sohail Sheikh, Department of Botany, Govt. M.A.O College, Lahore, Pakistan, identified the plant seeds. A voucher specimen (no. GC-HERB-570) was submitted to the herbarium of Govt. M.A.O College, Lahore, Pakistan for future reference.
Sample and extract preparation
First, the plant seeds were washed to remove dust and then shade dried at 25-30 °C [12]. The seeds were pulverized with a commercial blender (FTA-788, West Point, Germany) and sieved (150 mesh sieve, 0.065 mm) [13]. The resulting seed powder was stored in an airtight container. The powder was placed in a Soxhlet extractor for hot extraction using water and methanol (20:80) as solvent [14]. The Soxhlet extraction was run continuously at 70-75 °C for 6 h. The aqueous-methanol seed extract was concentrated by removing the excess solvent (water and methanol) with a rotary evaporator. The crude extract was stored in an airtight bottle in a refrigerator for future use [15].
Animals
Male Wistar albino rats weighing 150-180 g were used in the experimental work. They were acquired from the Department of Zoology, GC University Lahore, Pakistan. The rats were housed for seven days before the pharmacological experiments in the chemistry lab of the University of Management and Technology, Lahore. They were kept on a 12 h light/dark cycle under appropriate climatic conditions: a temperature of 23 ± 2 °C and humidity of 40-45%. The animals were divided into groups of six rats each, provided a standard diet and water, and acclimatized to their surroundings for almost 7 days before the start of the experiment. For the animal experiments, approval (reg. no. KMCRET/MS/03/2017) was obtained from the Institutional Animal Ethics Committee (IAEC), and the studies followed international guidelines [16].
Determination of acute oral toxicity
An acute oral toxicity study was carried out following the OECD guideline for the testing of chemicals (Test No. 423: acute oral toxicity-acute toxic class method). Five rats (n = 5) were used for this study [17]. The animals were observed continuously for 24 h and allowed access to water, but not food. The rats were administered the aqueous-methanol seed extract orally at dose levels of 100, 250, and 500 mg/kg body weight and kept under examination for 24 h continuously. The treated animals were observed for the first 2 h for morbidity and for the next 24 h for mortality. If mortality was observed in 2 out of 3 treated animals, the administered dose was recognized as toxic. If mortality was observed in one animal, the same dose was repeated to confirm the toxic dose. If mortality was detected again, the procedure was repeated at lower doses.
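The stepwise decision rule of the acute-toxic-class method lends itself to a short worked sketch. The following R function is purely illustrative of the logic described above; the function name and its string outputs are hypothetical, not OECD reference code:

    # Illustrative sketch of the stepwise acute-toxic-class decision rule
    # described above; hypothetical helper, not OECD reference code.
    classify_dose <- function(deaths) {
      if (deaths >= 2) {
        "toxic: repeat the procedure at a lower dose"
      } else if (deaths == 1) {
        "ambiguous: repeat the same dose to confirm toxicity"
      } else {
        "tolerated: dose accepted as non-toxic"
      }
    }
    classify_dose(0)  # e.g. the 500 mg/kg group, in which no mortality occurred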
Hepatoprotective property of C. intybus
Six groups of albino Wistar rats (Groups I-VI) were formed, each comprising six healthy animals of similar weight. Hepatotoxicity was induced by administering CCl4 at 0.8 mL/kg, according to the liver-damage model reported in [18], in all groups (II-VI) except Group I. Group I served as the control, receiving only a normal diet and no CCl4, while Group II served as the negative control (CCl4 only). Group III served as the positive control and received silymarin (standard drug) at 25 mg/kg, while Groups IV, V, and VI received the plant's aqueous-methanol extract at 100, 250, and 500 mg/kg respectively. Group I rats were given only a normal diet for 14 days. Group III animals received 14 doses of silymarin at 24 h intervals, and 14 doses of the test extract were likewise given to Groups IV, V, and VI at 24 h intervals. All administered doses of silymarin and extract were given orally with the aid of gastric tubes. After the 15th day, blood samples were collected from all animals by piercing the retro-orbital plexus under mild ethereal anesthesia. Liver tissues of all treated animals were then acquired by sacrificing them. Estimation of liver antioxidants and histopathological studies were performed on the tissue samples, and the blood samples were investigated for biochemical markers of hepatic injury.
Serum preparation from blood
Blood was collected from the treated animals. Samples for biochemical examination were collected in plain sample bottles, while whole blood for the hematogram was collected in bottles containing the anticoagulant ethylene diamine tetra-acetic acid (EDTA) [14]. Serum was separated by centrifugation at 700×g for 20 min and examined for biochemical parameters; before further analysis, the serum was stored at -80 °C. Erythrocytes, leucocytes, and platelets were determined with an automatic hematology analyzer (Sysmex F-800, Japan). Biochemical parameters, including serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), alkaline phosphatase (ALP), lipid profiles such as total cholesterol (TC) and triglycerides (TG), bilirubin, and creatinine, were estimated using kits from Span Diagnostics Limited, Pakistan.
Liver homogenate preparation
The hepatic tissues were homogenized in 0.2 M phosphate buffer or 0.2 M Tris buffer (pH 7.1) and centrifuged at 3,000×g for 15 min. Liver enzymatic and non-enzymatic antioxidants were measured using the supernatant.
Histopathological studies
Surviving, active animals from each group were selected for histopathological studies. Each selected animal was dissected and the liver removed by the standard procedure. The removed liver was preserved in 10% formalin for 72 h, washed with distilled water and then with alcohol and xylene, and embedded in paraffin wax. Small sections were cut and stained with haematoxylin and eosin for histopathological investigation [14]. Liver samples were then sent to UVAS Laboratories, Pakistan, for histopathological studies. The liver sections were studied under a fluorescence microscope, and histopathological changes in their structure were observed. Photographs of the liver sections were also taken through the fluorescence microscope to support the histopathological findings; as a high-resolution fluorescence microscope was used, the photographs clearly showed the changes in hepatocytes due to necrosis and inflammation.
Ferric thiocyanate (FTC) assay
The antioxidant activity of the plant extract, in terms of its inhibitory effect on linoleic acid peroxidation, was assessed by the thiocyanate procedure [23][24][25][26].
Statistical analysis
All data obtained from this study are expressed as mean ± standard deviation (SD). ANOVA was applied to estimate the variance between the different groups. Significance was taken as p < 0.045.
Acute toxicity
The aqueous-methanol seed extract of C. intybus did not cause any mortality up to a dose of 500 mg/kg. When the extract was administered at different doses (100, 250, and 500 mg/kg), no impairment developed in the livers of the tested animals, and no mortality was observed in any experimental group of rats.
Effect of C. intybus seed extract on hematological parameters
The influence of the plant extract at three dose levels (100, 250, and 500 mg/kg) on hematological parameters is presented in Table 1. The results indicated that CCl4-induced hepatic injury reduced the numbers of leucocytes (total WBC count, polymorphs, lymphocytes), erythrocyte indices (RBC count, hemoglobin, hematocrit, MCV, MCH, MCHC, RDW), and platelets in the tested animals. Administration of the aqueous-methanol seed extract of C. intybus (100, 250, and 500 mg/kg) restored the hematological parameters (total WBC count 6950 ± 0.01 cells/cmm, RBC count 7.6 ± 0.04 million/cmm, and platelets 430 ± 0.05 thousand/cmm).
Effect of C. intybus seed extract on serum biochemical markers
Table 2 displays the effect of the plant extract (100, 250, 500 mg/kg) on serum biochemical markers in CCl4-intoxicated rats. Hepatic impairment elevated the levels of liver enzymes such as ALP (alkaline phosphatase), SGPT (serum glutamic pyruvic transaminase), and SGOT (serum glutamic oxaloacetic transaminase). Treatment with the plant extract at 100 mg/kg displayed the least activity, while treatment at 250 mg/kg lowered liver-enzyme levels to an extent comparable to the standard silymarin. The maximum hepatoprotective activity, in terms of lowering liver-enzyme levels, was observed with the aqueous-methanol seed extract of C. intybus at 500 mg/kg, exceeding that of standard silymarin (25 mg/kg), as shown in Table 2. The aqueous-methanol seed extract of C. intybus lowered the liver markers SGPT (41.21 ± 0.94 U/l), SGOT (39.29 ± 0.18 U/l), and ALP (109.24 ± 0.89 U/l).
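As a minimal sketch of the group comparison described under Statistical analysis, a one-way ANOVA with a post-hoc test can be run in R as follows; the data frame and its values are hypothetical placeholders rather than the study's raw measurements:

    # Hypothetical SGPT values for the six treatment groups (n = 6 each);
    # numbers are placeholders, not the study's data.
    set.seed(42)
    dat <- data.frame(
      group = rep(c("control", "CCl4", "silymarin", "ext100", "ext250", "ext500"),
                  each = 6),
      SGPT  = c(rnorm(6, 40, 2), rnorm(6, 120, 8), rnorm(6, 55, 4),
                rnorm(6, 90, 6), rnorm(6, 65, 5), rnorm(6, 45, 3))
    )
    fit <- aov(SGPT ~ group, data = dat)  # one-way ANOVA across groups
    summary(fit)                          # overall F-test (cf. p < 0.01 in the text)
    TukeyHSD(fit)                         # pairwise group comparisons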
Effect of aqueous-methanol seed extract of C. intybus on in vivo antioxidant activity
The in vivo antioxidant results showed that administration of CCl4 (0.8 mL/kg) to the Wistar rats increased the level of lipid peroxidation and decreased the levels of enzymatic antioxidants (Table 3). The aqueous-methanol seed extract of C. intybus (500 mg/kg) increased total protein (9.95 µg/10 mg of liver tissue) and the antioxidants SOD (14.2 units/min/mg protein), CAT (48.90 µmole of H2O2 consumed/min/mg protein), GPx (22.1 mg GSH consumed/min/mg protein), and GSH (18.1 µmole of GSH/mg protein).
Histopathological observations
The hepatoprotective effect of the plant extract against CCl4-induced hepatic injury was confirmed by histopathological investigation. The hepatocyte impairment produced by CCl4 administration in rats is shown in Figure 1a. The silymarin-treated rat liver exhibited portal tract inflammation with lymphocytes and premature fibrosis of the perivenular region (Figure 1d), whereas the aqueous-methanol seed extract of C. intybus shielded the hepatocytes by averting oxidation in the liver cells (Figure 1b and c). The histopathological studies clearly showed that increasing the extract dose (100 to 250 and then 500 mg/kg) prevented the liver damage caused by CCl4.
DPPH radical scavenging activity
The highest IC50 value (80 ± 0.22 μg/mL) was observed at the 60 μg/mL concentration, whereas the lowest IC50 value (41 ± 0.33 μg/mL) was obtained at 1000 μg/mL (Figure 2). The other concentrations of plant extract (125, 250, and 500 μg/mL) gave IC50 values of 74 ± 0.53, 66 ± 0.64, and 51 ± 0.12 μg/mL respectively in scavenging the DPPH free radical. DPPH radical scavenging increased with increasing concentration of the aqueous-methanol plant extract. At all concentrations from 60 μg/mL to 1000 μg/mL, the aqueous-methanol seed extract of C. intybus showed significant DPPH radical scavenging, comparable to standard BHT (IC50 = 84 ± 0.54, 76 ± 0.81, 65 ± 0.49, 55 ± 0.39, and 43 ± 0.51 μg/mL at the same concentration levels). This demonstrates that the aqueous-methanol seed extract of C. intybus holds considerable antioxidant potential in scavenging DPPH free radicals.
Antioxidant activity via inhibition of linoleic acid oxidation
The antioxidant capacity of different concentrations of plant extract was estimated from the percentage inhibition of linoleic acid peroxidation (Figure 3). The inhibition was 33 ± 0.11, 45 ± 0.28, 57 ± 0.35, 65 ± 0.37, and 73 ± 0.42% at extract concentrations of 60, 125, 250, 500, and 1000 μg/mL, respectively. The highest inhibition of linoleic acid peroxidation was 73 ± 0.42% (for 1000 μg/mL of extract), while the lowest (33 ± 0.11%) was recorded for the 60 μg/mL concentration. Linoleic acid peroxidation was significantly inhibited by the aqueous-methanol seed extract of C. intybus at all the levels tested, and all the results for percentage inhibition of linoleic acid were superior to those of standard BHT, which showed 30 ± 0.27, 41 ± 0.38, 53 ± 0.45, 60 ± 0.29, and 69 ± 0.11% inhibition at the corresponding concentrations (60, 125, 250, 500, and 1000 μg/mL) respectively (Figure 3).
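Both antioxidant read-outs reduce to simple arithmetic: in the FTC assay, percent inhibition of linoleic acid peroxidation is conventionally computed as (1 − A_sample/A_control) × 100, and an IC50 can be interpolated from a scavenging-versus-concentration curve. A hedged R sketch, with all absorbance and scavenging values hypothetical:

    # FTC assay: percent inhibition from absorbances (hypothetical values).
    pct_inhibition <- function(a_sample, a_control) (1 - a_sample / a_control) * 100
    pct_inhibition(a_sample = 0.27, a_control = 1.00)  # -> 73 %

    # DPPH: interpolate the concentration giving 50 % scavenging on a
    # log10(concentration) scale; the scavenging percentages are placeholders.
    conc <- c(60, 125, 250, 500, 1000)  # extract concentrations (ug/mL)
    scav <- c(35, 42, 49, 56, 64)       # hypothetical % scavenging
    ic50 <- 10^approx(x = scav, y = log10(conc), xout = 50)$y
    ic50                                # interpolated IC50 in ug/mL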
Therefore, from the present study it is concluded that the aqueous-methanol extract exhibits remarkable antioxidant and hepatoprotective activity. It can also be proposed that alcohol-soluble compounds, which act as active ingredients, may be present in the seeds of C. intybus Linn. These alcohol-soluble compounds are polyphenolics or flavonoids, which may protect the liver against damage caused by free radicals. Future research is needed to evaluate the practical usefulness of C. intybus by separating and isolating natural polyphenolics or flavonoids through different techniques. The different concentrations of aqueous-methanol seed extract of C. intybus were also examined for antioxidant capacity by total antioxidant assays (inhibition of linoleic acid peroxidation and DPPH radical scavenging). All concentrations of the extract gave antioxidant results comparable to standard BHT; moreover, for inhibition of linoleic acid peroxidation, all concentrations of the extract gave significantly better results than standard BHT. The results at different concentrations also indicate that the antioxidant activity of the aqueous-methanol seed extract of C. intybus was concentration-dependent [9]. The plant extract showed enhanced antioxidant properties with increasing concentration. The presence of flavonoids and phenolics in seeds of C. intybus has been previously reported.
Figure 1: Histopathological examination of CCl4-treated and plant-extract-treated Wistar rats. (a) CCl4-treated rat liver showing portal tract inflammation with lymphocytes; (b) protective effect of the aqueous-methanol seed extract of C. intybus (250 mg/kg) in the liver of the tested rats; (c) protective effect of the aqueous-methanol seed extract of C. intybus (500 mg/kg) in the liver of the tested rats.
Figure 2: DPPH radical scavenging activity of the seed extract of C. intybus.
Figure 3: Antioxidant activity of the seed extract of C. intybus by inhibition of linoleic acid peroxidation.
DISCUSSION
Poisonous chemicals, toxic as well as non-toxic drugs, and viral infections can severely damage the liver, causing hepatocellular diseases [1]. Fatty liver, necrosis, and cirrhosis are the most significant pathological features of CCl4-induced hepatotoxicity [7]. The free radical species responsible for this hepatotoxicity is formed by the free radical mechanism of CCl4 [8]: the trichloromethyl radical (•CCl3) is produced from CCl4 by the action of the enzyme cytochrome P450. This enzyme ordinarily transforms toxic drugs, compounds, and chemicals within the endoplasmic reticulum, an interconnected network of tubular membranes [9]. CCl4-induced hepatotoxicity in normal rats prominently increased the levels of the serum biochemical markers [10]. The animals treated with the aqueous-methanol extract of C. intybus exhibited a remarkable decrease in all the serum parameters elevated by CCl4.
Silymarin exhibited the same results as the test extracts. Flavonoids and phenolic compounds are responsible for the antioxidant and hepatoprotective activities of plants [14]. These natural products found in plant and seed extracts could be responsible for the hepatoprotective effects [10].
Table 1: Effect of aqueous-methanol seed extract of C. intybus on hematological parameters in CCl4-induced toxicity in rats.
Table 2: Effect of aqueous-methanol seed extract of C. intybus on biochemical parameters in CCl4-induced toxicity in rats.
Table 3: Effect of aqueous-methanol seed extract of C. intybus on in vivo antioxidant activity in CCl4-intoxicated rats.
v3-fos-license
2020-12-01T14:13:56.989Z
2020-11-26T00:00:00.000
227231516
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/06/25/2020.11.25.396069.full.pdf", "pdf_hash": "8f841825984baf7c8c4853e55db198693962b148", "pdf_src": "BioRxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41502", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "8f841825984baf7c8c4853e55db198693962b148", "year": 2020 }
pes2o/s2orc
Environment-driven reprogramming of gamete DNA methylation occurs during maturation and influences offspring fitness in salmon An epigenetic basis for transgenerational plasticity is widely theorized, but convincing empirical support is limited by taxa-specific differences in the presence and role of epigenetic mechanisms. In teleost fishes, DNA methylation does not undergo extensive reprogramming and has been linked with environmentally-induced intergenerational effects, but solely in the context of early-life environmental differences. Using whole genome bisulfite sequencing, we demonstrate that differential methylation of sperm occurs in response to captivity during maturation for Atlantic Salmon (Salmo salar), a species of major economic and conservation significance. We show that adult captive exposure further induces differential methylation in an F1 generation that is associated with fitness-related phenotypic differences. Gene targets of differential methylation are consistent with those reported for salmonid fishes experiencing early-life hatchery rearing, as well as with targets of selection in domesticated species. Our results support a mechanism of transgenerational plasticity mediated by intergenerational inheritance of DNA methylation acquired late in life for salmon. Introduction: The inheritance of environmentally-induced epigenetic variation (e.g. DNA methylation, chromatin structure, small RNAs) has been proposed as a mechanism facilitating transgenerational plasticity (Richards 2006; Bell and Hellmann 2019). As a chemical modification of nucleotide bases, DNA methylation has a clear mechanism for multigenerational transfer; however, our current understanding of its role in intergenerational (i.e. parent-offspring) or transgenerational (i.e. multi-generational) epigenetic inheritance is hindered by a lack of data from a wider diversity of organisms, and thus evidence remains limited and controversial (Heard and Martienssen 2014; Horsthemke 2018; Skvortsova et al. 2018). In contrast to traditional animal models that either lack methylation (i.e. worms and flies) or undergo extensive methylation reprogramming during development (i.e. mammals), a fish model, specifically zebrafish (Danio rerio), does not exhibit global erasure and reprogramming of methylation (Jiang et al. 2013; Potok et al. 2013; Ortega-Recalde et al. 2019; Skvortsova et al. 2019), suggesting a greater potential for DNA methylation-mediated epigenetic inheritance in this species and possibly teleost fishes more generally. Environmentally-induced DNA methylation variation in teleost fishes has been reported as a result of differences in early-rearing environments (Le Luyer et al. 2017; Artemov et al. 2017; Metzger and Schulte 2017; Gavery et al. 2018). These signals stably persist until adulthood (Metzger and Schulte 2017; Leitwein et al. 2020), occur in germ cells (Gavery et al. 2018; Rodriguez Barreto et al. 2019), and a growing body of literature demonstrates that intergenerational transmission occurs (Ryu et al. 2018; Rodriguez Barreto et al. 2019; Berbel-Filho et al. 2020; Heckwolf et al. 2020), supporting an overall mechanism of DNA methylation-mediated intergenerational epigenetic inheritance for this clade. The timing for germ-line incorporation of environmentally-induced epigenetic variation has historically been believed to be limited to early developmental stages as a result of the separation of the germ-line and soma, the so-called "Weismann Barrier" (Monaghan and Metcalfe 2019).
This barrier is more permeable than previously thought (Eaton et al. 2015), but current evidence is limited to small RNA-mediated epigenetic effects (Sciamanna et al. 2019; Duempelmann et al. 2020) that lack a clear mechanism for multigenerational inheritance. The potential for intergenerational transmission of late-life-acquired, environmentally-induced DNA methylation variation has not yet been documented, and thus the potential for multigenerational environmental effects resulting from differences during gamete maturation remains unclear. Salmonid fish hatcheries provide relevant systems in which to test hypotheses regarding intergenerational epigenetic inheritance. Hatcheries have been used for decades to enhance, supplement, or recover salmonid fish populations (Naish et al. 2007), but have negative consequences for the fitness of hatchery-reared fish (Araki et al. 2008; Christie et al. 2014; O'Sullivan et al. 2020), presumably due to domestication effects. Numerous studies have failed to demonstrate significant genetic differences between hatchery-origin and natural-origin (wild) salmon (Christie et al. 2016; Le Luyer et al. 2017; Gavery et al. 2018) despite documented pronounced differences in gene expression (Christie et al. 2016). In contrast, DNA methylation divergence has been reported between hatchery-origin and wild salmon for several species (Le Luyer et al. 2017; Gavery et al. 2018, 2019; Rodriguez Barreto et al. 2019). Evidence for intergenerational effects has recently emerged (Rodriguez Barreto et al. 2019), although solely in the context of early-life hatchery exposure. Alternative rearing techniques, including juvenile (smolt) to adult supplementation (hereafter SAS; as per Fraser 2016) and live-gene-banking, where fish are only exposed to hatchery environments later in life during the onset of maturation rather than from the onset of embryonic development, are increasingly being applied to conserve and recover the most critically endangered salmon populations (O'Reilly and Doyle 2007; Stark et al. 2014). While these approaches show promise for the demographic recovery of some populations (Berejikian and Van Doornik 2018), fitness-related differences between SAS and wild salmon have been documented (Berejikian et al. 2001, 2005). We currently lack knowledge of the potential for epigenetically-mediated intergenerational effects that may result in heritable declines in fitness in these contexts, and these systems also provide opportunities to test the potential for intergenerational transmission of late-life, environmentally-induced DNA methylation variation. Here we test the hypothesis that environmental differences experienced by adults during maturation alter the DNA methylation of gametes. We further test the hypothesis that adult rearing environments influence offspring methylation patterns and that these differences influence offspring phenotypes. Results: To identify the potential for intergenerational transmission of environmentally-induced variation in DNA methylation acquired during gamete maturation, we used whole-genome bisulfite sequencing to characterize genome-wide DNA methylation variation in six adult smolt-to-adult supplementation (SAS) and six adult natural-origin (wild) Atlantic Salmon (Salmo salar) from a smolt-to-adult supplementation program in the Miramichi River in New Brunswick, Canada (Figure 1).
Here, SAS individuals originated from a collection of wild juveniles (predominantly age 2 and 3) captured during their migration to salt water (Figure 1A-B) and reared for two years in land-based freshwater tanks (Figure 1C). We first compared DNA methylation in sperm cells from maturing male SAS fish to wild male salmon from overlapping cohorts that had spent one year (i.e. grilse) to two years (i.e. multi-sea-winter salmon) in the marine environment (Figure 1D) and were returning to reproduce in freshwater (Figure 1E). We then created pure-type crosses of SAS and wild adults and reared the offspring in a common environment. We characterized growth-related phenotypic differences in 10-month-old juveniles as well as DNA methylation patterns in their livers, an organ known for its important role in regulating growth and metabolism (Trefts et al. 2017) as well as its relative homogeneity of cell types that could otherwise confound methylation analyses (Jaffe and Irizarry 2014), to determine the presence of inherited DNA methylation patterns and their potential influence on proxies of fitness.
Figure 1: Smolt-to-adult supplementation in the Northwest Miramichi River. Natural-origin (wild) juvenile salmon (A) are captured during their migration to the ocean (B) and reared in captivity until adulthood (SAS) at the Miramichi Salmon Conservation Centre (C). Wild salmon continue their marine migration, spending 1-3 years feeding in the Labrador Sea (D) before returning to the Miramichi River to spawn. Wild salmon from the same cohort as SAS salmon were captured in various headwater pool habitats during their return migration (E).
We identified differential methylation for individual CpGs between wild and SAS adults using beta-binomial models (Feng et al. 2014). There were 4,998 differentially methylated cytosines (DMCs; p-value < 0.001) identified between adult SAS and wild salmon sperm, which were grouped into 284 differentially methylated regions (DMRs; Figure 2A). Regions ranged in size from 51 to 2229 bp, contained between four and 34 CpGs, and comprised 90.2% of DMCs with false discovery rate (FDR) < 0.05. The magnitude of methylation difference between SAS and wild salmon for the identified DMRs averaged 39% (range: 8% to 70%). DMRs in SAS fish were twice as frequently hypo-methylated relative to wild fish as hyper-methylated (68%: 193/284 hypo-DMRs; 32%: 91/284 hyper-DMRs; binomial p-value < 0.001; a sketch of this test appears below). DMRs overlapped 237 genes or their cis-regulatory contexts (within 5,000 bp of gene features; Table S1). Gene ontology enrichment analysis revealed that DMRs in sperm were associated with genes significantly enriched (p-value < 0.05) for a variety of functions in signal transduction pathways, brain development, tissue differentiation, muscle development and contraction, and chromatin silencing (Table S2).
Figure 2: Differentially methylated regions (DMRs) between SAS (yellow) and wild (blue) salmon. Methylation percentage for each region in each individual (cells of the heatmaps) is expressed as a fraction from un-methylated = 0 (white) to completely methylated = 1 (red). In adult sperm tissues, (A) 284 DMRs were identified (193 hypo-methylated in SAS and 91 hyper-methylated in SAS). In juvenile liver tissues, (B) 346 DMRs were identified (215 hypo-methylated in SAS and 131 hyper-methylated in SAS). Two DMRs (C) were found in common between the two tissues and exhibited similar patterns of differential methylation in both adults and juveniles.
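The reported two-fold bias toward hypo-methylation corresponds to an exact binomial test of the hypo:hyper DMR counts against an even 50:50 expectation, which a one-line R check reproduces:

    # Exact binomial test of DMR direction bias in SAS sperm:
    # 193 hypo-methylated out of 284 total DMRs versus an even split.
    binom.test(x = 193, n = 284, p = 0.5)  # p-value << 0.001, as reported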
Differential methylation in juvenile liver
To quantify the presence of intergenerational effects in juvenile Atlantic Salmon whose parents matured in the hatchery, we generated pure-type crosses of SAS and wild salmon and reared them in a common environment until age 10 months. F1 SAS juveniles tended to be longer (SAS: 65.2 ± 6.6 mm, wild: 63.2 ± 6.6 mm; mean ± SD; Figure 3A) and heavier (SAS: 3.42 ± 1.0 g, wild: 3.14 ± 1.0 g; mean ± SD; Figure 3B) than F1 wild juveniles, but these differences were dwarfed by among-family variation and are not statistically significant when controlling for the small number of families investigated (length: F1,8 = 0.19, p = 0.68; weight: F1,8 = 0.13, p = 0.72). While F1 wild fish were smaller on average, they had similar condition factors (SAS: 1.20 ± 0.06, wild: 1.21 ± 0.06; mean ± SD; F1,8 = 2.43, p = 0.16; Figure 3C).
Figure 3: Phenotypes of juveniles from five families of SAS (yellows; within-family N: 17-37) and five families of wild (blues; N = 198; within-family N: 21-53) salmon reared in a common environment. While SAS juveniles were on average longer (A; SAS: 65.2 ± 6.6 mm, wild: 63.2 ± 6.6 mm) and heavier (B; SAS: 3.42 ± 1.0 g, wild: 3.14 ± 1.0 g) than wild juveniles when reared in a common hatchery environment, wild juveniles had similar condition (C; SAS: 1.20 ± 0.06, wild: 1.21 ± 0.06). All measurements are mean ± standard deviation. Controlling for among-family variance using a linear model with a random effect for family rendered none of the comparisons statistically significant.
To investigate the potential for inherited methylation patterns in juveniles, we profiled DNA methylation at over 23.1 million CpG sites in liver tissues of F1 wild and SAS juveniles. We chose two individuals from each of four pure-type families for each treatment group (N = 16). In contrast to sperm, the more metabolically active liver tissues exhibited average methylation levels of approximately 80%. Juvenile liver tissue also exhibited a bimodal distribution of methylation in which ~5% of CpGs were unmethylated (<5%), 76% of CpGs had methylation >80%, and a larger fraction of sites had intermediate methylation levels (19% of liver CpGs vs. 4% of sperm CpGs with methylation fractions between 5-80%; Figure S2). Differentially methylated CpGs (p-value < 0.001; N = 5654) between wild and SAS juvenile offspring were organized into 346 DMRs that ranged in size from 51 to 2131 bp, contained 4 to 40 CpGs, and covered 98% of DMCs with FDR < 0.05 (Figure 2B). As in sperm cells, hypo-methylation was almost twice as common in SAS juvenile liver tissues as hyper-methylation (62%: 215/346 hypo-DMRs; 38%: 131/346 hyper-DMRs; binomial p-value < 0.001), and the average magnitude of methylation differences between SAS and wild offspring was comparable (mean: 30%; range: 5-52%). DMRs in juvenile liver tissues overlapped 274 genes or their cis-regulatory contexts. Over-represented biological functions of these genes reflected nervous system development and regulation, muscle development and contraction, signal transduction pathways, and immune system processes (Table S4). We found overlap of DMRs between the two tissues and life stages (2/622 total DMRs; Figure 2C) that was greater than expected by chance (1000 permutations; p < 0.001). The shared regions exhibited the same direction of differential methylation in both tissues (Figure S3A-B), and hierarchical clustering of these regions by individual largely recapitulated the SAS vs. wild groupings (Figure 2C).
These regions are in proximity (<20 kb) to genes involved in immune response, tissue differentiation and organ development, and G protein-coupled receptor signaling (Figure S3A-B). Five additional genes were overlapped by DMRs in both tissues, but the DMRs in each tissue were not located at the same sites. Of these, metabotropic glutamate receptor 4 (GRM4) had regions that were in close proximity (<5,000 bp): the DMRs near GRM4 were hypo-methylated in SAS sperm and hyper-methylated in SAS liver tissue (Figure S3C). To investigate whether methylation influences juvenile phenotypic variation (i.e. size and condition factor), we used a network-based approach (Langfelder and Horvath 2008) to identify modules of correlated methylation signatures across the juvenile liver tissue samples and tested for module associations with juvenile phenotypes. To construct the network, we first binned methylation in non-overlapping 100 bp windows and selected the windows (i.e. regions) with among-individual variances greater than 0.05 (N = 59,803 regions). Approximately scale-free networks of correlated methylation regions were constructed using the approach implemented in the R package WGCNA. We identified 124 modules that included a total of 4,179 regions. Eighteen modules exhibited significant correlations with at least one phenotype (p < 0.05; Figure S4). Modules were enriched for a variety of signalling pathways and developmental processes relevant to the correlated phenotypes (e.g. growth factor signalling, skeletal muscle development; Table S5). The strongest association existed between the 'navajowhite1' module and condition factor (r = 0.69, p = 0.003). The regions in this module were in proximity (5 kb) to genes enriched for mesoderm development and regulation of transcription (Tables S5 and S6). In particular, the gene encoding insulin-like growth factor I (IGF-1), a hormone produced in the liver and a key regulator of growth in muscle and skeletal tissues (Ohlsson et al. 2009), was associated with this module. Elevated methylation of IGF-1 in juvenile livers, with an expected reduction in IGF-1 expression, was associated with reduced weight and length, but better condition, in juvenile salmon (Figure S4).
Figure 4: … condition factor, r = -0.53, p = 0.03). All modules exhibited a significant correlation between module membership (x-axes; MM = absolute value of the correlation of region methylation with the main axis of module variation) and gene significance (y-axes; GS = absolute value of the correlation of region methylation with the phenotype). DMR-overlapping regions in these modules were more highly correlated with the main axis of variation of the modules than non-DMR-overlapping regions (A - purple: t35.0 = 2.9, p = 0.007; B - yellow4: t26.0 = 3.0, p = 0.006; C - darkorange: t24.4 = 2.6, p = 0.01). DMR-overlapping regions were also more centrally located and highly connected in module networks than non-DMR-overlapping regions (D-F).
Three modules exhibited significant overlap with DMRs identified between SAS and wild juvenile fish (permutation tests; p < 0.001). DMR-overlapping regions in these three modules had consistently higher module centrality than non-DMR-overlapping regions (Figure 4; purple: t35.0 = 2.9, p = 0.007; yellow4: t26.0 = 3.0, p = 0.006; darkorange: t24.4 = 2.6, p = 0.01).
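The resampling procedure behind these module-DMR overlap p-values (see Methods) can be sketched in a few lines of R; all objects below are hypothetical placeholders standing in for the real binned-region annotations:

    # Minimal permutation test: is the number of module regions overlapping
    # DMRs larger than expected for a random draw of the same module size?
    set.seed(1)
    n_regions     <- 59803                  # regions entering the network
    region_is_dmr <- sample(c(TRUE, FALSE), n_regions, replace = TRUE,
                            prob = c(0.01, 0.99))   # placeholder DMR flags
    module_idx    <- sample(n_regions, 40)          # placeholder 40-region module

    observed <- sum(region_is_dmr[module_idx])
    null     <- replicate(1000,
                          sum(region_is_dmr[sample(n_regions, length(module_idx))]))
    p_value  <- mean(null >= observed)      # one-sided permutation p-value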
Overall, these results indicate that 1) juvenile phenotypes are influenced by methylation variation among individuals and 2) hatchery-induced differential methylation affects key loci that are central to certain modules influencing offspring phenotypes. To account for the possibility of selection causing genetic differences between SAS and wild salmon and explaining the observed phenotypic differences, we also genotyped individuals for 974,219 single nucleotide polymorphisms (excluding CpG-context C/T and A/G SNPs to avoid confounding methylation variation with allelic variation) using the aligned bisulfite sequencing reads and performed outlier tests. Wild salmon are expected to have experienced selection during their time in the marine environment (smolt to adult), where mortality can range from 65 to 99% (Chaput 2012). In contrast, SAS salmon experienced relaxed selection during this time in the hatchery, with a mortality rate from smolt to adult of approximately 40%. Overall, we failed to find support for a genome-wide average FST larger than zero (AMOVA: 1000 permutations, p = 0.77). Using outlier detection algorithms, we did not detect significant shifts in allele frequencies between SAS and wild salmon using BayeScan or a polygenic framework (RDA; R2 = 0, p = 0.71), and we identified only two outlier SNPs with FDR < 0.01 in OutFLANK (Figure S5; Table S7).
Maturation environment influences gamete methylation
Our results demonstrate that environmental variation (i.e. growth and maturation in a natural vs. hatchery setting) experienced after approximately two years of common rearing in their natural riverine environment alters both DNA methylation in salmon sperm and DNA methylation of hatchery-produced progeny with SAS parents. The maturation-environment effect we demonstrated strongly suggests, as others have reported (Sciamanna et al. 2019), that the Weismann Barrier is permeable and that information perceived by the soma can be incorporated into the germ-line via epigenetic mechanisms. To our knowledge, however, this is the first time environmental variation experienced later in life has been directly demonstrated to influence gamete DNA methylation in animals. Several lines of evidence support a mechanism of environmentally-mediated DNA methylation remodelling during gamete maturation. First, environmental differences in early life are known to influence gamete methylation (Gavery et al. 2018; Rodriguez Barreto et al. 2019). Second, multiple copies of DNA methyltransferase 3 (DNMT3), the methyltransferase responsible for the addition of new DNA methylation, have been retained following successive genome-duplication events (i.e. ohnologs) in teleost and salmonid fishes (Liu et al. 2020). In Rainbow Trout (Oncorhynchus mykiss), certain DNMT3 ohnologs are expressed in spermatozoa during late spermatogenesis (i.e. a few weeks before spawning; Liu et al. 2020), thus providing a mechanism by which salmonids may alter DNA methylation in their gametes until days or weeks before spawning. Third, teleost fishes do not appear to experience genome-wide reprogramming of paternal methylation patterns following fertilization (Jiang et al. 2013; Potok et al. 2013) or differentiation of gonadal tissue (Ortega-Recalde et al. 2019; Skvortsova et al. 2019). Thus, adult salmon may be able to transmit heritable information to their offspring about the physical or biological environments they experience immediately prior to spawning.
As such, our results are consistent with the hypothesis that epigenetic mechanisms can facilitate transgenerational plasticity (Bell and Hellmann 2019). Transgenerational plasticity is theorized to evolve when environmental variability is sufficiently stable or predictable that adults can transmit relevant information about the environment to their offspring (McNamara et al. 2016). If only early-life epigenetic signals were capable of being transmitted intergenerationally, transgenerational plasticity might not have been expected to evolve for salmon, whose lives generally span both temporally and spatially diverse environments and whose life histories involve limited parental care (Thorstad et al. 2010). Our demonstration of the potential for adults to transmit environmental information acquired later in life to their offspring suggests transgenerational plasticity in salmon may be an important factor contributing to life history variation and adaptive responses to environmental change.
Origins of hatchery-induced differential methylation
There is an unresolved question of whether epigenetic differences arising as a result of hatchery exposure occur due to deterministic processes (i.e. adaptive responses potentially arising from existing molecular machineries as a result of past selection) or stochastic processes (i.e. random environmental perturbations of wildtype methylation patterns). Several lines of evidence suggest the patterns we observed originate at least partly from deterministic processes. First, if methylation changes were completely stochastic we would expect no bias in the direction of methylation changes, but we observed that differential methylation in SAS fish was strongly biased (about twofold) toward hypo-methylation in both sperm and juvenile livers. Second, despite the fact that our juveniles and adults originated from different cohorts and that juvenile livers will have undergone tissue-specific methylation reprogramming from the state observed in sperm, we detected more DMRs in common between the datasets than expected by chance. Finally, the regions in phenotypically correlated methylation networks that overlap SAS vs. wild DMRs are significantly biased toward being centrally located regions in the co-methylated networks, indicating these are not random associations. More generally, prior research has established parallel signatures of DNA methylation divergence in response to early-life hatchery rearing in two populations of Coho Salmon (Oncorhynchus kisutch; Le Luyer et al. 2017), and hatchery-reared Rainbow Trout exhibit a significant proportion (approximately 20%) of hatchery-origin DMRs that are shared between red blood cells and sperm (Gavery et al. 2018). There are similarities (discussed in detail below) in the biological functions impacted by the epigenetic signatures of response to early-life hatchery rearing for Rainbow Trout (Gavery et al. 2018), Coho Salmon (Le Luyer et al. 2017), and Atlantic Salmon (Rodriguez Barreto et al. 2019). These epigenetic signatures of early-life hatchery rearing are also broadly similar to epigenetic signatures of domestication observed in recently domesticated European Sea Bass (Dicentrarchus labrax; Anastasiadi and Piferrer 2019). Were the epigenetic effects induced by hatchery environments truly random, it would be very unlikely to detect particular genes or pathways across multiple studies.
Altogether, the compiled evidence supports the hypothesis that there is a certain degree of conservation in the DNA methylation changes in response to captive rearing across a broader taxonomy of teleost fishes. We identified seven genes associated with SAS versus wild differential methylation that have been reported in previous studies of hatchery-induced differential methylation in salmonids. Phosphatidylinositol 3-kinase regulatory subunit alpha (PIK3R1) was differentially methylated in SAS sperm cells and has previously been identified as differentially methylated in the sperm of Atlantic Salmon reared in a hatchery from birth (Rodriguez Barreto et al. 2019). This gene is an important regulator at the center of many growth factor and hormone signalling pathways and mediates numerous cellular processes including cell metabolism, cell growth, cell movement, and cell survival (Cantley 2002). PIK3R1 has also previously been identified as a potential target of selection in domesticated salmon (Liu et al. 2017). Three additional genes reported in previous studies that we report as differentially methylated in sperm cells all have roles in the nervous system. NRG2 is a growth factor that influences the growth and differentiation of various types of cells including epithelial and neuronal cells (Britsch 2007), PCDHGC5 is a cell adhesion protein with a critical role in mediating cell-cell connections in brain and other neural tissues (Wang et al. 2002), and STXBP5L influences neurotransmitter release with an important role in motor function (Geerts et al. 2015). The remaining three genes we detected in common with other studies were all differentially methylated in juvenile liver tissue. BCR is required for proper differentiation of keratinocytes and plays an important role in epidermal tissue development (Dubash et al. 2013); it has previously been identified as differentially methylated in hatchery-origin Coho Salmon white muscle (Le Luyer et al. 2017). CTNNA2 is involved in regulating cell adhesion and cytoskeleton organization during neuron differentiation (Schaffer et al. 2018) and was shown to be differentially methylated in both red blood cells and sperm of hatchery-origin Rainbow Trout (Gavery et al. 2018). ARHGAP32 is a GTPase-activating protein that is also involved in neuron differentiation (Nakamura et al. 2002) and has previously been shown to be differentially methylated in sperm of Atlantic Salmon (Rodriguez Barreto et al. 2019). In general, patterns of DNA methylation across studies implicate regulation of cell differentiation and developmental processes, with a particular enrichment of genes involved in neuron differentiation. Moreover, several intriguing genes we identified here have functionally similar relatives reported in other studies. We detected differential methylation of galanin receptor 1 (GALR1) between SAS and wild salmon sperm. Galanin is a neurotransmitter that is bound by three receptors (GALR1-3) and plays a modulating role in diverse functions including feeding behaviour, energy metabolism, osmoregulation, and reproduction, as well as depression-related pathologies in humans (Mechenthaler 2008). The related galanin receptor 2 (GALR2) has previously been linked with differential methylation in hatchery-origin Rainbow Trout sperm (Gavery et al. 2018).
Given the diverse functions of galanin, it is difficult to speculate on the specific cause or consequences of its differential methylation in the context of hatchery rearing, as regulation of food intake, environmental or crowding stress, osmoregulation, and neurological development are all plausible phenotypes implicated in domestication effects in salmon. We identified several glutamate receptors as being differentially methylated either in sperm (GRIK5, GRM4) or liver (GRM4, GRID2, GRM3). Glutamate receptors are well-known targets of selection in many domestic animals (O'Rourke and Boeckx 2020) and have specifically been identified as being differentially methylated between wild and domestic European Sea Bass (Anastasiadi and Piferrer 2019). Glutamatergic signalling is an important excitatory driver of the hypothalamic-pituitary-adrenal (HPA) axis that, among other functions, mediates organismal stress responses and aggression, and it has been hypothesized that selection on these pathways underlies "tameness" in domesticated animals and attenuated stress responses under crowded conditions (O'Rourke and Boeckx 2020). Furthermore, differential methylation of glutamate receptors has been reported in response to prenatal stress in cows (Littlejohn et al. 2018) and is associated with adverse behavioural phenotypes in preterm human infants (Everson et al. 2019). This raises the hypothesis that adult SAS fish could transmit information about the crowding they experienced prior to spawning to their offspring in order to prime them for a highly competitive environment upon hatching (Christie et al. 2012).
Consequences of hatchery-induced methylation for offspring phenotypes
We identified several correlated methylation profiles that were associated with offspring phenotypes. Conceptually, this analysis identifies pathways or biological functions that are co-regulated by methylation. Of the 18 methylation modules that were correlated with juvenile phenotypes, several occurred in proximity to genes enriched for signalling pathways (i.e. muscle growth and differentiation, skeletal development, neural development, and immune system processes) directly relevant to the phenotypes being studied. In particular, the methylation module in juvenile livers with the strongest phenotypic correlation (i.e. navajowhite1) contained regions overlapping IGF-1. IGF-1 is a hormone produced and released from the liver in response to growth hormone (GH) signalling that plays a key endocrine role mediating growth and differentiation of muscle and skeletal tissues, and therefore body size (Ohlsson et al. 2009). Body size and condition are traits closely linked with juvenile salmonid survival and fitness (Quinn and Peterson 1996; Einum and Fleming 2000). In addition, the GH/IGF-1 axis has been implicated in acclimation to saltwater (McCormick 2001), which is a major selective barrier for anadromous salmonids and has been identified as a deficiency of hatchery-reared fish (Shrimpton et al. 1994). Our results demonstrate that differences in the DNA methylation state of this important growth-regulating gene exert influence on salmon growth and developmental trajectories that are likely to have real consequences for individual fitness. Hatchery-induced differential methylation appears to directly influence both the fitness-related traits we quantified here and likely other more complex behavioural traits with fitness consequences at later stages of development than we have studied.
Hatchery-induced DMRs overlapped regions centrally located in co-methylated modules that were associated with biological functions involved in brain neuron differentiation, as well as the glutamate receptor GRM4 (i.e. module yellow4), suggesting that hatchery-environment-induced, methylation-mediated behavioural changes (e.g. attenuated stress response to crowding; O'Rourke and Boeckx 2020) have consequences for the growth trajectories of offspring. We have also identified modules (i.e. purple) and differential methylation of genes not included in methylation modules that lie downstream of genes in important phenotype-associated modules (i.e. the IGF-1 signalling pathway). Phosphatidylinositol-mediated signaling (i.e. module purple), and PIK3R1 in particular, plays a key role in modulating the response to IGF-1 stimulation (Hakuno and Takahashi 2018), and thus hatchery-induced differential methylation appears to influence or modulate growth trajectories at a later step in that signalling cascade, which may at least in part explain the subtle differences we observed in phenotypes between SAS and wild juveniles. Collectively, our results suggest that hatchery-associated effects are indeed mediated through DNA methylation, with direct consequences for aspects of fish phenotypes and ultimately their fitness.
Comparison of hatchery rearing approaches
Despite detecting DNA methylation differences between SAS and wild fish, our work also reveals some fundamental differences between the effects of early-rearing and later-life hatchery exposure for salmon. SAS fish (both adult males and their progeny reared in a common environment) exhibited more hypo-methylation relative to wild fish, in contrast to previous work that has demonstrated predominantly hyper-methylation of fish produced and reared in a hatchery from the egg stage (Le Luyer et al. 2017; Gavery et al. 2018; Rodriguez Barreto et al. 2019). Reduced representation bisulfite sequencing (RRBS) in Coho Salmon and Rainbow Trout reported differential methylation at between 0.03% and 0.1% of analyzed sites or regions. Using similar criteria, our results showed differential methylation affecting an order of magnitude fewer CpG sites (0.004%). This suggests that the potential for DNA methylation-mediated domestication effects caused by later-life hatchery exposure may not be as severe as those observed for salmon that experience early-life hatchery rearing. RRBS approaches are believed to preferentially target gene-regulatory-relevant regions of the genome (e.g. Anastasiadi and Piferrer 2019), and thus, because of the difference in techniques between studies (RRBS vs. whole-genome bisulfite sequencing), our comparisons may be biased. However, the genomic distribution and density of regulatory-relevant CpGs in non-mammalian vertebrates fundamentally differs from that of mammals (Long et al. 2013). Bioinformatic interrogation of our data indicates that RRBS applied to our study would have assayed approximately 7 million CpGs and detected only 10% of the observed DMCs, implying the above comparison is reasonably unbiased. Furthermore, it suggests that in salmon, and possibly fishes more generally, RRBS approaches fail to capture a significant proportion of biologically relevant methylation differences. Like previous studies, we have demonstrated a lack of genome-wide differentiation between hatchery-reared and wild fish (Christie et al. 2016; Le Luyer et al. 2017; Gavery et al. 2018).
This result suggests that the selective regime imposed by the hatchery environment over one generation was not strong enough to cause widespread differentiation. In turn, it also suggests that, despite the high levels of mortality during the marine phase of salmon's lives (65-99%; Chaput 2012), selection in the marine environment may not be important enough to cause widespread, temporally consistent changes in allele frequencies between wild and SAS salmon. Previous work in Atlantic Salmon has reported consistent allele frequency changes over the marine migration period for only one of two populations studied, indicating that patterns of differentiation due to the marine environment are spatially and temporally variable (Bourret et al. 2014). It is difficult to know whether the two outliers we detected result from selection in the hatchery or marine environments. In spite of this, our results clearly implicate a stronger role for epigenetic factors, and not differences in genetic variation, in hatchery-related phenotypic divergence. We have demonstrated the potential for domestication effects to be propagated to the offspring of salmon who experience hatchery environments during maturation via the intergenerational transmission of DNA methylation. Our experiments on juvenile fish were conducted in a laboratory setting, and thus whether these effects are also detectable and have fitness consequences in the true context of an SAS program, where SAS individuals are released and reproduce naturally in the wild, remains unknown. Genotype-by-environment interactions are pervasive in salmonids (Vandersteen et al. 2019), and so epigenotype-by-environment interactions may be as well. As such, there is an urgent need to evaluate the interaction between SAS and wild rearing on offspring DNA methylation and development in a natural environment. Other sources of epigenetic information (i.e. small RNAs) are also well known to mediate intergenerational effects (Sciamanna et al. 2019) and may well mediate the phenotypic differences between individuals we have observed. On the other hand, multiple epigenetic mechanisms often function together to effect phenotypic changes (Cavalli and Heard 2019), and future work unravelling the mechanistic basis of hatchery-induced phenotypic effects will need to clarify the potential contributions of other epigenetic mechanisms, their relative importance, and the degree to which these effects are reversible following the cessation of the environmental exposure. Only then will the evolutionary consequences of environmentally-induced epigenetic variation in these systems be globally understood.
Methods:
Atlantic Salmon (Salmo salar L.) juveniles (smolts) (Figure 1A) were collected using a rotary screw trap from May to June 2015 from the mainstem of the Northwest Miramichi River, New Brunswick, Canada, near the mouth of Trout Brook (Figure 1B). They were transported to the Miramichi Salmon Conservation Centre (MSCC, South Esk, NB; Figure 1C), where they were held in tanks with natural photoperiod until 2017 (i.e. smolt-to-adult supplementation or SAS). SAS fish were initially fed frozen shrimp and gradually weaned onto standard hatchery pellet food over the course of a couple of months.
Natural-origin (wild) adult Atlantic Salmon returning to spawn in the Miramichi River were collected by seining in September of 2017 and 2018, as part of the regular brood-stock collection program, by Miramichi Salmon Association staff from selected pool habitats in the upper reaches of a major branch (the Little Southwest Miramichi River) of the Northwest Miramichi River (Figure 1E). Adult fish were transferred to MSCC and held in tanks for up to three weeks. In 2017, both SAS and wild adult fish were haphazardly netted from holding tanks, gametes were collected, and eggs were artificially fertilized to create pure-type breeding crosses (SAS x SAS and wild x wild; N = 8 crosses each). Fertilized eggs were incubated in flow-through troughs until first feeding, after which offspring from multiple families were mixed, transferred to 3 m x 3 m square tanks, and fed ad libitum on hatchery pellet food until they reached a size of approximately 1.5 g. Juvenile offspring were netted haphazardly from the tanks and euthanized in an overdose solution of eugenol (Sigma-Aldrich Canada, Oakville, ON); weight and length measurements were taken, and liver tissue was dissected and preserved in RNAlater. Juvenile samples were later genotyped with a panel of 188 SNPs (KASP SNP assays; LGC Biosciences, Beverley, MA, USA) and assigned to their family of origin using COLONY v2.0.6.5 (Jones and Wang 2010). In 2018, sperm samples were collected from eight wild and eight SAS males in 2 mL tubes and stored on ice for 2-6 hours. Sperm (250 μL) was centrifuged at 7000 rpm for 10 min, the supernatant discarded, and the isolated sperm cells preserved in 1.5 mL of RNAlater. To characterize C-T polymorphisms that could bias methylation estimates, we combined equal proportions of DNA from all individuals (N = 36) and sequenced them as a pool in one lane of an Illumina HiSeqX. Raw sequencing reads were trimmed using fastp v0.19.9 and then aligned to the Atlantic Salmon genome with bwa mem (Li 2013). Duplicate reads were removed using MarkDuplicates, and overlapping 3' ends of paired reads were clipped using the clipOverlap function of BamUtil v1.0.14 (Jun et al. 2015). We called SNPs using a frequency-based approach in freebayes v1.3.1 (Garrison and Marth 2012) that required variant sites to be covered by a minimum of 10 reads and to have a minimum of two reads supporting the alternate allele. We retained both C-T and A-G (i.e. C-T on the minus strand) polymorphisms and removed these sites from the methylation results using bedtools v2.26.0 (Quinlan and Hall 2010). We quantified differential methylation at CpG sites covered by at least one read in all samples, where we additionally required a minimum of five reads and a maximum of 20 reads (approximately the 99.9th percentile) for at least 12 of 16 juveniles or eight of 12 adults. The sequencing performance for four adult samples (i.e. two HiSeqX lanes) representing two SAS and two wild fish was poor, so these samples were excluded to reduce biases in methylation estimates due to low coverage. The minimum coverage filter ensured that differences in methylation were not due to spurious differences in coverage between groups, and the maximum coverage filter removed highly repetitive regions where confidence in mapping accuracy is low. Differential methylation of CpG cytosines (DMCs) was determined using beta-binomial models implemented in the DSS v2.32.0 package (Feng et al. 2014) in the R statistical environment v3.6.1 (R Core Team 2019). Methylation levels were first smoothed using a window size of 500 bp, and models were fit with group-specific dispersion estimates as implemented in DSS. False discovery rates were calculated according to Benjamini and Hochberg (1995). Differentially methylated regions (DMRs) were determined based on attributes of DMCs, where regions were required to be a minimum of 50 bp long, have > 3 CpGs, and have greater than 50% of their CpG sites with a p-value < 0.001.
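A condensed sketch of this DSS workflow follows; the per-sample objects (sas1, ..., wild6) are hypothetical data frames with the chr/pos/N/X columns that makeBSseqData expects, and the thresholds mirror the description above:

    library(DSS)

    # sas1..sas6 / wild1..wild6: hypothetical per-sample tables with columns
    # chr, pos, N (total reads) and X (methylated reads) for each CpG.
    BSobj <- makeBSseqData(list(sas1, sas2, sas3, sas4, sas5, sas6,
                                wild1, wild2, wild3, wild4, wild5, wild6),
                           c(paste0("SAS", 1:6), paste0("W", 1:6)))

    # Beta-binomial test with 500 bp smoothing; equal.disp = FALSE keeps
    # group-specific dispersion estimates, as described in the text.
    dml <- DMLtest(BSobj,
                   group1 = paste0("SAS", 1:6), group2 = paste0("W", 1:6),
                   smoothing = TRUE, smoothing.span = 500, equal.disp = FALSE)

    dmc <- callDML(dml, p.threshold = 0.001)   # differentially methylated CpGs
    dmr <- callDMR(dml, p.threshold = 0.001,   # regions >= 50 bp with > 3 CpGs
                   minlen = 50, minCG = 4,     # and > 50 % of CpGs significant
                   pct.sig = 0.5)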
Methylation levels were first smoothed using a window size of 500 bp, and models were fit with group-specific dispersion estimates as implemented in DSS. False discovery rates were calculated according to Benjamini and Hochberg (1995). Differentially methylated regions (DMRs) were determined based on attributes of DMCs: regions were required to be a minimum of 50 bp long, contain more than 3 CpGs, and have more than 50% of their CpG sites with a p-value < 0.001. Due to the large number of small contigs in the Atlantic Salmon genome (i.e., >230,000; ICSASG v2; NCBI RefSeq: GCF_000233375.1; Lien et al. 2016), we restricted our analyses to the 29 full-length chromosomes and contigs larger than 10 kb in length (>96% of the un-gapped length of the genome). To determine the potential functional consequences of methylation differences, we used bedtools v2.26.0 (Quinlan and Hall 2010) to identify gene features associated with DMRs. NCBI RefSeq gene annotation information for the salmon genome was retrieved, and genes were associated with differential methylation if any DMRs were located within 5000 bp of a coding region, consistent with previous work in salmonids (Le Luyer et al. 2017). Gene ontology information for annotated genes was obtained from the Ssal.RefSeq.db v1.3 (https://gitlab.com/cigene/R/Ssa.RefSeq.db; accessed: June 22, 2020) R package. We tested for enrichment of biological functions among genes associated with DMRs using Fisher's Exact Tests and the 'weight01' algorithm (Alexa et al. 2006) as implemented in the TopGO v2.38.1 package (Alexa and Rahnenfuhrer 2019). We used a network-based approach to investigate associations of correlated methylation signatures with juvenile phenotypes. We first summarized methylation in non-overlapping 100 bp windows across the genome. Windows required a minimum of three CpG sites, and we retained only windows with among-sample variances greater than 0.05 to reduce the computational burden when constructing the network (N = 59,803 windows). We calculated connectivity between all pairs of regions using the biweight midcorrelation raised to the power of 18 to approximate a scale-free network, as implemented in the WGCNA package in R (Langfelder and Horvath 2008). Modules of correlated methylation signatures were inferred using hierarchical clustering of the topological overlap dissimilarity matrix and a dynamic tree-cutting algorithm. The modules were constructed using a block-wise approach with a maximum of 30,000 regions allowed in each block, and all blocks were then merged to form the final modules. The association of methylation modules with phenotypes was assessed by correlating module eigenvector scores (the first axis of a principal component analysis conducted on all the regions within each module) with phenotypic values for each individual using the biweight midcorrelation. Module-trait correlations with p-values < 0.05 were retained for further analysis. For each significant module-trait correlation, we assessed whether any regions within the module overlapped with previously identified DMRs found between SAS and wild fish. We assessed the statistical significance of these associations using a resampling procedure that compared the number of DMR overlaps with those based on random draws of regions for each module. We tested for differences in average module membership (region correlation with the module eigenregion) and gene significance (region correlation with phenotype) between DMR-overlapping and non-DMR-overlapping regions using t-tests in R.
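To make the two core analysis steps concrete, the following is a minimal R sketch of DMC/DMR calling with DSS and module-trait correlation with WGCNA. The object names (cov_wild, cov_sas, meth_windows, traits) are hypothetical placeholders for the inputs described above, the parameter values mirror those stated in the text, and this is a sketch under those assumptions rather than the authors' actual script.

```r
library(DSS)     # beta-binomial models for bisulfite sequencing data
library(WGCNA)   # weighted correlation network analysis

## --- DMC/DMR calling with DSS ------------------------------------------
## cov_wild and cov_sas are hypothetical lists of per-sample data frames
## with columns chr, pos, N (total reads), X (methylated reads), the input
## format expected by makeBSseqData().
bs <- makeBSseqData(c(cov_wild, cov_sas),
                    sampleNames = c(paste0("wild", 1:8), paste0("sas", 1:8)))

## Smoothed test (500 bp span) with group-specific dispersion estimates.
dml <- DMLtest(bs,
               group1 = paste0("wild", 1:8),
               group2 = paste0("sas", 1:8),
               smoothing = TRUE, smoothing.span = 500,
               equal.disp = FALSE)

## DMRs: at least 50 bp long, more than 3 CpGs, and more than 50% of CpG
## sites with p < 0.001, matching the criteria stated in the text.
dmrs <- callDMR(dml, p.threshold = 0.001,
                minlen = 50, minCG = 3, pct.sig = 0.5)

## --- Module-trait correlations with WGCNA ------------------------------
## meth_windows: hypothetical samples x windows matrix of methylation in
## 100 bp windows (among-sample variance > 0.05);
## traits: hypothetical samples x phenotype matrix (e.g., weight, length).
net <- blockwiseModules(meth_windows,
                        power = 18,          # approximate scale-free topology
                        corType = "bicor",   # biweight midcorrelation
                        maxBlockSize = 30000)

## Module eigenvectors (first PC per module) correlated with phenotypes;
## module-trait pairs with p < 0.05 are retained for further analysis.
MEs  <- moduleEigengenes(meth_windows, colors = net$colors)$eigengenes
mt   <- bicorAndPvalue(as.matrix(MEs), as.matrix(traits))
keep <- which(mt$p < 0.05, arr.ind = TRUE)
```

The biweight midcorrelation is a sensible default here because it is robust to the occasional outlier sample, which matters when methylation windows are correlated across only 16 individuals.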
To test for genetic divergence between SAS and wild fish, we first used BisSNP v1.0.0 (Liu et al. 2012) to call single nucleotide polymorphisms from the aligned bisulfite sequencing reads. For this analysis we retained all 16 adult samples and one individual per full-sib family from the juvenile dataset (N = 8). Juvenile individuals were chosen to maximize the number of successfully called genotypes. We required a minimum depth of coverage of 8X to call an individual genotype, and we retained only SNPs with a successful genotyping rate of 80% (N = 24 individuals). We further removed SNPs with a minor allele frequency below 5% (minimum of 2 alternate alleles). We used AMOVA, implemented in pegas v0.13 (Paradis 2010), to test for genome-wide differentiation. We used BayeScan v2.1 (Foll and Gaggiotti 2008) with a liberal prior odds setting of 10, as well as OutFLANK v0.2 (Whitlock and Lotterhos 2015), to identify potential outliers between SAS and wild groups. For the OutFLANK analysis we first built the genome-wide null distribution of divergence based on a linkage-pruned set of 12,221 SNPs (obtained with '--indep 50 5 1' using PLINK v0.19) and tested for outliers in the whole dataset based on this distribution. We also employed a polygenic framework to test for subtle correlated changes across many alleles using redundancy analysis (Forester et al. 2016) implemented in the vegan v2.5-6 R package (Oksanen et al. 2019).

Supplemental material

Supplemental Figure S1: Distribution of methylation proportion (# methylated bases / read coverage) for 16.4 million CpG sites from adult sperm samples.
Supplemental Figure S2: Distribution of methylation proportion (# methylated bases / read coverage) for 23.1 million CpG sites from juvenile liver samples.
Supplemental Figure S3: Differentially methylated regions (DMRs) between SAS (yellow) and wild (blue) salmon that overlapped (A-B) or targeted the same gene (C) between sperm (solid lines) and juvenile liver tissues (dashed lines). Grey boxes highlight the extent of DMRs, and the lower tracks indicate annotated genes.
Supplemental Figure S4: Heatmap of methylation-phenotype correlations for all modules (represented by color names on the y-axis) with at least one significant correlation.
Supplemental Figure S5: Distribution of genetic divergence (F_ST) between SAS and wild salmon for 974,219 single nucleotide polymorphisms.
Supplemental Table S1: Genes associated with differentially methylated regions in sperm.
Supplemental Table S2: Gene ontology biological process terms associated with differentially methylated regions in sperm.
Supplemental Table S3: Genes associated with differentially methylated regions in juvenile liver.
Supplemental Table S4: Gene ontology biological process terms associated with differentially methylated regions in juvenile liver.
Supplemental Table S5: Gene ontology biological process terms associated with methylated regions included in co-methylated modules.
Supplemental Table S6: Genes associated with methylated regions included in co-methylated modules.
Supplemental Table S7: Genomic location of two outlier SNPs detected at an FDR < 0.05 between SAS and wild fish using OutFLANK.
Supplemental Table S8: Statistics of sequencing effort and coverage of whole-genome bisulfite sequencing for all samples.
Supplemental File S1: Phenotypic data for lab-reared juveniles.
Juvenile Wood from Pinus patula Schltdl & Cham for Multilaminated Panel Production

Wood scarcity, attacks by primates and insects, and fungal damage in forest plantations make the introduction of new species necessary. Given this, it is important to understand the potential uses of wood in the production chain. Pinus patula Schltdl & Cham presents good adaptation to Brazilian conditions and is a candidate for wood supply. Its juvenile wood density, however, is lower than that of other pine species. This study aimed to evaluate the properties of veneer panels produced with twelve-year-old P. patula wood compared with panels produced with P. taeda wood of the same age, which is commonly used for panel production. Panels were bonded with urea-formaldehyde and phenol-formaldehyde adhesives, using veneers applied in two types of plywood panel. The P. patula panels showed lower strength, stiffness, and density when bonded with urea-formaldehyde, and higher strength, density, and stiffness when bonded with phenol-formaldehyde, in comparison with P. taeda. P. patula panels can be used for multilayer panel production.

INTRODUCTION

In Brazil, the plywood sector has an established production capacity of over 4 million cubic meters, with 74% of the total being exported. Approximately 70% of the plywood panels are produced with pine wood, such as Pinus taeda L. and Pinus elliottii Engelm. The wood of these trees is in high demand, making the consideration of other species for panel production necessary (Iwakiri et al., 2012). Sapajus nigritus damages P. taeda plants in Paraná and Santa Catarina states, leading to reduced wood production (Liebsch et al., 2018). There is no efficient control method for S. nigritus, such that alternative plant species may be able to mitigate this problem. Pinus patula Schltdl & Cham is native to the Mexican mountains, in the Sierra Madre Oriental, and is one of the most exploited species in that country (Sánchez-González, 2008; Van Zonneveld et al., 2009). The species grows up to 12.6 m3/ha/year by twelve years of age (Santiago-García et al., 2015). In Brazil, P. patula is planted in southeast Minas Gerais, northeast São Paulo, west Santa Catarina, and the mountains of Rio Grande do Sul, with wood productivity higher than P. taeda (Aguiar et al., 2014). There is no record of S. nigritus damage to Pinus patula trees. In Brazil, pine plantations have undergone a reduction in forest rotation, with two thinnings, the first at eight years and the second between 12 and 13 years, and clear cutting between 19 and 20 years (Folmann et al., 2014). Thus, it is important to evaluate the possibilities of using wood from younger trees, such as the 12-year-old trees used in this study. P. elliottii was found to show a variation of wood density in the radial direction that allowed its wood to be categorized, by means of this characteristic, as either juvenile or adult. This variation was similar to that found by other researchers for P. patula and P. taeda, showing that for these species juvenile wood occurred up to the 5th growth ring and adult wood after the 14th ring in relation to the pith (Palermo et al., 2013). Thus, the wood used in this study was considered juvenile and transition wood.
According to Vieira et al. (2012), the consolidation of the plywood industry occurred in 1965, with one of the driving factors being the development of urea-formaldehyde and phenol-formaldehyde resins, which gave plywood panels more efficient bonding and provided humidity resistance between the veneers. This development allowed for their use in external environments, or in internal environments with high humidity. The construction principles used in plywood panel manufacture aim to balance the physical-mechanical variation of the veneers' adjacent layers, arranged in the longitudinal and perpendicular directions to the panel plane. Balancing of plywood panels can be achieved with an even or odd number of veneers, but for that, the layers of their structural composition should always be in odd numbers (Ross, 2010). Different structural compositions consist in the addition or arrangement of the veneers in alternation or in the same direction relative to the surface veneer, reinforcing the area that suffers greater structural demands under bending. Thus, other construction compositions, such as laminated veneer lumber (LVL), could be used for plywood panels. LVL panels are parallel-veneer panels used as a structural component in buildings, especially in countries where there is a tradition of using wood in construction systems (Müller et al., 2015). According to Wilson & Dancer (2005), in southeastern regions of the U.S., LVL panels are produced with Pinus elliottii or Pinus taeda wood, hot-pressed with phenol-formaldehyde adhesive. The veneers are generally 2.54 or 3.18 mm thick, and LVL can vary in thickness and width but is most commonly produced 4.45 cm thick and 121.9 cm wide, with lengths of 18.29 m. After being cut into narrower dimensions, they are currently used as an alternative to structural timber for headers and beams and as flanges in "I" composite beams. Although LVL panels have specific production conditions for structural applications, the type of arrangement used in these parallel-veneer panels could also be used in the plywood panel industry for specific purposes. An example would be small pieces that are supported only on their longest sides, thus requiring greater bending strength in that direction. Further, for indoor use, they can be glued with other types of non-waterproof adhesives. Aydin et al. (2004) used urea-formaldehyde and PVA to bond LVL panels made from Eucalyptus camaldulensis and Fagus orientalis wood, while Melo & Del Menezzi (2014) produced and evaluated the properties of LVL panels made from Schizolobium amazonicum and PVAc adhesive. Multilaminated panel production, using perpendicular or parallel veneer arrangements, could be a viable alternative for P. patula wood use, given the good results obtained with P. taeda plywood (Iwakiri et al., 2012; Müller et al., 2015). This study aimed to evaluate the quality of P. patula multilayered wood panels produced with urea-formaldehyde and phenol-formaldehyde adhesives, with veneers arranged perpendicular and parallel on the panels.

MATERIALS AND METHODS

Biological material

Eighteen trees each were harvested for P. patula and P. taeda (the reference species), with diameters at breast height (DBH) ranging from 17 to 33 cm. Nine trees were used for panel production and nine for wood basic density evaluation. Twelve-year-old trees from both species were harvested in General Carneiro, Paraná state, Brazil (26º25'39" S; 51º18'56" W), 1300 meters above sea level.
Data from a weather station located 5 km from the study area indicated an average annual precipitation of 1776 mm, the occurrence of frost between May and September, and a temperate climate according to the Köppen classification.

Wood basic density

The wood basic density was determined by the ratio between dry mass and saturated volume, according to NBR-11941. Discs were removed from the base (0.10 m), at DBH (1.3 m), and at 25, 50, and 70% of the total tree height.

Lamination, drying and sorting

The trees used for lamination were cut into 2.75 m long logs, and those with a diameter at the thin end equal to or greater than 17 cm were laminated. The number of rolled logs from each tree was recorded, and the relative height to which each tree could be laminated was calculated. Veneers of 600 x 600 mm and 2.2 mm thickness were produced from the P. patula and P. taeda trees, and their quality was evaluated according to NBR ISO 2426-1 and 2426-3 and ABIMCI technical parameters (ABIMCI, 2002). Class A and B veneers were used for the surface covers, and Classes C+, C, and D were used for the core of the multilayered panels, according to the ABIMCI classifications.

Panel production

The panels of each species were produced with two adhesive types and seven veneers arranged according to Figure 1. The adapted parallel arrangement (referred to in this study as Parallel Veneer Panel, PVP) was devised to facilitate the glue line shear strength test between this veneer and the adjacent one, allowing the evaluation of this mechanical property using the standard plywood test. Since the plywood glue line shear strength test is performed on veneers that are arranged perpendicular to each other, this adaptation allowed the same test to be used on the innermost glue line for both panel arrangements, making it possible to compare this property between the panels produced with the two types of arrangements. On the other hand, the central veneer of the panel in the perpendicular arrangement had minimal influence on the panel bending strength and elasticity modulus (perpendicular vs. parallel direction), because its moment of inertia is minimal. Three panels per treatment were produced with P. patula and P. taeda, applying two types of adhesives (urea-formaldehyde, UF, and phenol-formaldehyde, PF) and two veneer arrangements, totaling eight panel types (Table 1). Seven-layer plywood panels of 600 x 600 mm were pressed in a heated hydraulic laboratory press. For the UF and PF adhesives, the glue spread was 160 g/m2 based on wet mass, with the adhesive properties and glue compositions complying with the technical parameters (Table 2). Ammonium sulfate was also used as a catalyst at a 2% ratio in the urea-formaldehyde glue composition. Pressing was carried out at 12 kgf/cm2 for 15 minutes, at 110°C for urea-formaldehyde and 140°C for phenol-formaldehyde. The resulting panels had a 13.5 mm nominal thickness and were conditioned in a climatic chamber according to NBR-9489 recommendations. The properties of the multilaminated panels were evaluated based on the plywood ABNT standards for both types of panels produced. Using the same standards for both veneer compositions made their comparison possible and fulfilled the study objectives. Additionally, prior studies involving LVL panels were carried out and evaluated by means of the physical and mechanical properties traditionally used for plywood panels (Tenorio et al., 2011; Guimarães et al., 2015; Mendoza et al., 2017).
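As a rough consistency check on these figures: seven veneers of 2.2 mm sum to 15.4 mm, so the 13.5 mm nominal panel thickness corresponds to a compaction of approximately (15.4 - 13.5)/15.4 ≈ 12%, in line with the roughly 13% reduction relative to the summed veneer thicknesses reported in the discussion below.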
Statistical analysis

Variance homogeneity (Bartlett's test at 95% significance) and normality (Shapiro-Wilk test at 95% significance) tests were performed. The means per panel type were then compared by analysis of variance and Tukey's test at 95% significance.

RESULTS

Wood basic density

The wood basic density varied from 320 to 400 kg/m3 and from 300 to 350 kg/m3 for P. taeda and P. patula, respectively, depending on trunk height (Figure 2).

Panel physical properties

The panel density ranged from 426 to 542 kg/m3; equilibrium moisture from 10.38 to 11.05%; thickness from 12.90 to 13.88 mm; water absorption from 95 to 101%; thickness swelling from 6.37 to 9.53%; and swelling plus recovery from 2.30 to 3.94% (Table 3).

Panel mechanical properties

The mechanical property values of the different types of panels produced varied as a function of species, adhesive, and composition (Table 4).

DISCUSSION

Wood basic density

The base of P. patula and P. taeda presented higher basic density than the top. Cells with larger lumens and thinner walls are produced during initial cambial activity (at the top), resulting in low basic density. This tendency is reversed with cambium maturation, generating cells with thicker walls, which increases the basic density. The wood produced during the mature cambium stage presents better quality for panel production (Vidaurre et al., 2011). ABIMCI (2002) recommends that pine plywood panels have an apparent density of 517 kg/m3 and a maximum moisture content of 11%. Considering an average wood shrinkage of 0.53% per percentage point of moisture content for both pine species used, and a panel compaction of 10%, an ideal basic wood density of approximately 445 kg/m3 is estimated. Therefore, the range of basic density variation for both species was considered low for panel production. The disadvantage of this is greater veneer compaction during pressing. On the other hand, soft woods can facilitate the lamination process (Almeida et al., 2014).

Panel physical properties

The apparent density of the P. patula and P. taeda panels bonded with phenol-formaldehyde was lower than the 517 kg/m3 required by the quality standards. However, all panels showed equilibrium moisture content and nominal thickness variation below the maximum limits of 11% and +/-5%, respectively (ABIMCI, 2002). The average panel thickness was 13% less than the sum of the veneer thicknesses used, showing that veneer compaction occurred. Costa & Del Menezzi (2017) verified that different densification strategies increased the mechanical properties of plywood produced with paricá wood of density similar to P. patula and P. taeda. There are no normative standards for the dimensional stability properties; however, thickness swelling values for five tropical pine species bonded with urea-formaldehyde for plywood production ranged from 5.06 to 7.09%, and swelling plus recovery from 1.68 to 2.89% (Iwakiri et al., 2001), lower than those in this study. Pine plywood may present water absorption close to 60% (Almeida et al., 2013; Silva et al., 2012; Campos et al., 2009), lower than that observed in this study. This difference is due to the low wood density of both species used and the veneer thickness of 2.2 mm. The adhesive type did not influence water absorption. Panel swelling occurs due to the release of internal stresses generated during the pressing process (Iwakiri et al., 2001). The lower wood density of both species results in greater compaction.
The panels were formed with seven 2.2 mm veneers, but their final thickness ranged from 12.90 to 13.88 mm, generating internal stresses. These stresses are released during water absorption, after which the panel does not return to its original state. This explains the thickness swelling and swelling plus recovery values observed in this study.

Panel mechanical properties

The plywood panels presented averages for modulus of rupture (MOR) and modulus of elasticity (MOE), both parallel and perpendicular, above the minima of 25.79/18.04 MPa and 4735/2220 MPa, respectively, stipulated by ABIMCI (2002). The glue line shear strength values under wet conditions were also higher than the 0.88 MPa shear strength and 20% wood failure required by ABIMCI (2002). Plywood panels produced with five tropical pine species, between 20 and 25 years old, showed parallel MOR and MOE values ranging from 58 to 102 MPa and from 6,300 to 13,714 MPa, respectively (Iwakiri et al., 2012), higher than those of this study. This difference occurred due to the use of juvenile wood, since the trees in this study were 12 years old. P. elliottii wood shows juvenile wood up to the 7th growth ring and mature wood beyond the 20th growth ring, and a similar trend occurs for P. taeda and P. patula (Palermo et al., 2013). Thus, all the 12-year-old trees used presented either juvenile or transition wood, of lower density and, consequently, lower mechanical strength. There are no minimum requirements for the PVP panel properties evaluated in this study. However, a MOR of 74.49 MPa, a MOE of 5338.95 MPa, and a glue line shear strength of 2.67 MPa were found for PVP panels of Pinus oocarpa with 663 kg/m3 basic density (Lima et al., 2013). Values of 43.6 MPa, 3944 MPa, 1.81 MPa, and 460 kg/m3 for parallel MOR, parallel MOE, glue line shear strength, and density were also reported for panels produced with P. taeda veneers on the surface and Schizolobium amazonicum (paricá) in the core (Iwakiri et al., 2010). A relationship between panel density and mechanical strength can be observed by comparing these values with those obtained in this study. The parallel and perpendicular MOE and MOR values confirm the balancing effect in plywood panels. In contrast, the adapted PVP panels showed different strength and stiffness values in the perpendicular and parallel directions. Pinus taeda panels showed higher mechanical strength than P. patula panels when urea-formaldehyde adhesive was used, with this trend reversing when phenol-formaldehyde was applied. This was attributed to the density variations of the panels: the P. patula panels presented lower density with urea-formaldehyde and higher density with phenol-formaldehyde (Table 3). Therefore, there is a direct relationship between panel density and resistance.

CONCLUSIONS

All panels have properties compatible with those required by ABIMCI, except for apparent density. The adapted PVP panels presented differences in mechanical properties between the axial and tangential directions, and the comparison of parallel and perpendicular MOE and MOR values confirms the balancing effect in plywood panels. When bonded with urea-formaldehyde, Pinus patula panels showed lower strength, stiffness, and density than P. taeda panels. The reverse trend occurred in panels glued with phenol-formaldehyde.
Panels made from 12-year-old Pinus patula wood showed adequate physical and mechanical properties, independent of adhesive and veneer arrangement, demonstrating the potential of this species for panel production.
A unique combination of natural fatty acids from Hermetia illucens fly larvae fat effectively combats virulence factors and biofilms of MDR hypervirulent mucoviscus Klebsiella pneumoniae strains by increasing Lewis acid–base/van der Waals interactions in bacterial wall membranes

Introduction: Hypervirulent Klebsiella pneumoniae (hvKp) and carbapenem-resistant K. pneumoniae (CR-Kp) are rapidly emerging as opportunistic pathogens with a global impact, leading to a significant increase in mortality rates among clinical patients. Anti-virulence strategies that target bacterial behavior, such as adhesion and biofilm formation, have been proposed as alternatives to biocidal antibiotic treatments to reduce the rapid emergence of bacterial resistance. The main objective of this study was to examine the efficacy of a fatty acid-enriched extract (AWME3), derived from the fat of Black Soldier Fly larvae (Hermetia illucens), in fighting biofilms of multi-drug resistant (MDR) and highly virulent Klebsiella pneumoniae (hvKp) pathogens, and to investigate the potential mechanisms underlying this effect.

Methods: Crystal violet (CV) and ethidium bromide (EtBr) assays show how AWME3 affects the formation of mixed and mature biofilms by the KP ATCC BAA-2473, KPi1627, and KPM9 strains. AWME3 showed exceptional efficacy in combating the hypermucoviscosity (HMV) virulence factor of the KPi1627 and KPM9 strains when tested using the string assay. The rudimentary motility of the MDR KPM9 and KP ATCC BAA-2473 strains was assessed through swimming, swarming, and twitching assays. The cell wall membrane disturbances induced by AWME3 were detected by light and scanning electron microscopy and further validated by an increase in bacterial cell wall permeability and in the Lewis acid-base/van der Waals characteristics of the K. pneumoniae strains, tested by the MATS (microbial adhesion to solvents) method.

Results: After exposure to 0.5 MIC (0.125 mg/ml) of AWME3, a significant reduction in the rudimentary motility of the MDR KPM9 and KP ATCC BAA-2473 strains was observed: the treated bacterial strains exhibited motility between 4.23 ± 0.25 and 4.47 ± 0.25 mm, while the non-treated control groups showed significantly higher motility, ranging from 8.5 ± 0.5 to 10.5 ± 0.5 mm.

Conclusion: This study demonstrates the exceptional capability of the natural AWME3 extract, enriched with a unique combination of fatty acids, to effectively eliminate the biofilms formed by highly drug-resistant and highly virulent K. pneumoniae (hvKp) pathogens. Our results highlight the opportunity to control and minimize the rapid emergence of bacterial resistance through AWME3 treatment of biofilm-associated infections caused by hvKp and CR-Kp pathogens.
Introduction

Misuse and overuse of antibiotics have led to the emergence of drug-resistant bacteria, which is a major threat to global health. In fact, the WHO has declared that antimicrobial resistance (AMR) is one of the top global public health and development threats. Multidrug resistance (MDR) has increased all over the world, threatening public health. Several recent investigations have reported the emergence of multidrug-resistant bacterial pathogens from different origins, increasing the need for new potent and safe alternatives to antibiotics. In addition, the routine application of antimicrobial susceptibility testing identifies the antibiotic of choice and enables the screening of emerging MDR (Algammal et al., 2019; Kareem et al., 2021; Elbehiry et al., 2022; Shafiq et al., 2022; Algammal et al., 2022a, b). AMR bacteria caused around 1.27 million deaths globally in 2019 and contributed to 4.95 million deaths (Murray et al., 2022). By 2050, more people are expected to die from antibiotic-resistant infections than from cancer (O'Neill, 2014). We urgently need to find effective alternatives to deal with this crisis. More research is now being done on alternative ways to fight bacteria, with approaches at different stages of development, including the use of novel antibiotics, phage therapy, antimicrobial peptides, nanoparticles, and anti-virulence agents, which are considered promising options for overcoming bacterial resistance (Alaoui Mdarhri et al., 2022). One of these approaches is the anti-virulence approach, which promises to provide novel antimicrobial therapies predicted to be superior to conventional antibiotics (Allen et al., 2014; Totsika, 2017). Anti-virulence strategies that target bacterial behavior, such as adhesion and biofilm formation, are anticipated to apply minimal selective pressure; they aim to reduce virulence and are less likely to induce drug resistance. These strategies mainly focus on neutralizing virulence factors and attenuating bacterial infection without directly killing or eliminating the bacteria. Accordingly, there is less selective pressure on bacterial survival and thus a lower likelihood of inducing drug resistance (Dickey et al., 2017; Maura et al., 2016). There is a variety of virulence factors in hvKp, including virulence genes, virulence plasmids, capsular polysaccharide, siderophores, lipopolysaccharide, and fimbriae, which play a crucial role in bacterial infection and resistance (Liao et al., 2024).
Klebsiella pneumoniae is an important opportunistic human pathogen commonly involved in hospital-acquired infections (Lam et al., 2018; Piepenbrock et al., 2020). The hvKp strains are particularly virulent, causing invasive and metastatic infections even in young and healthy individuals. Moreover, hvKp is easily transmitted, leading to infections at multiple sites such as the thorax, abdomen, central nervous system, eyes, and genitourinary tract (Sellick and Russo, 2018). Most alarmingly, CR-Kp has emerged and caused severe and fatal infections in healthcare settings (Tang et al., 2020). K. pneumoniae causes several infections via horizontal gene or plasmid transfer (Wyres and Holt, 2018).

Most microorganisms in a biofilm grow slowly, exhibit downregulated virulence, and are distributed heterogeneously. Biofilms are harder to kill with antibiotics than individual cells. They can also avoid removal by the immune system (Jefferson, 2004; Hu et al., 2012; Jennings et al., 2015). Biofilm-related infections cover a range of conditions, from infections related to medical devices, like prosthetic joints, to infections affecting native tissues, like chronic osteomyelitis and cystic fibrosis.

Biofilms are intricate communities of microorganisms surrounded by an extracellular matrix made up of proteins, extracellular DNA (eDNA), lipids, and exopolysaccharides (Jennings et al., 2015). Growing in a biofilm has many benefits for bacteria, including remaining in a confined environment as long as conditions are favorable. In biofilms, bacteria make up less than 10% of the dry mass, while the matrix can make up over 90%. This matrix is composed of various types of biopolymers collectively known as extracellular polymeric substances (EPS). The EPS, produced by the organisms themselves, enables bacterial cells to live in proximity and interact. This behavior is significantly different from that of their planktonic counterparts (Hu et al., 2012).

The hvKp strains can form biofilms and persist inside them, thereby enhancing virulence and the invasive capacity of infection through colonization of the respiratory, gastrointestinal, and urinary tracts. During biofilm formation, exopolysaccharides are produced by bacterial cells, forming a matrix around the cells to protect them from harsh environmental conditions and exposure to bioactive agents. The current treatment for these infections involves removing the infected medical device and cleaning the affected tissue with antibiotics. However, treating these infections remains difficult. Promising research is being done on new anti-biofilm agents like quorum-sensing inhibitors, biofilm matrix-degrading enzymes, and antimicrobial peptides. These potential candidates hold the key to overcoming the hurdles posed by biofilm-related infections.

Recently, we demonstrated that fatty acid (FA)-enriched fractions of Hermetia illucens (HI) (Black Soldier Fly) larvae oil possess bactericidal activity against hypervirulent mucoviscous K. pneumoniae strains, actual phytopathogens, and multi-drug resistant (MDR) pathogenic fish bacteria (Marusich et al., 2020; Mohamed et al., 2021, 2022). In particular, the third acidic water-methanol extract (AWME3) demonstrated an exceptional ability to eliminate MDR and XDR K. pneumoniae strains at low doses.
Hermetia illucens (HI) is a remarkable insect species because its larvae can produce FAs through biosynthesis pathways rather than relying solely on bioaccumulation from their diet. This makes them highly promising compared with other insects. The larvae are full of natural substances that can kill bacteria and could be used to treat serious infections caused by antibiotic-resistant bacteria. HI larvae contain 15%-49% fat, providing a rich lipid source (Li et al., 2016).

In the present study, we further explore the anti-biofilm and anti-virulence properties of AWME3 against biofilms formed by K. pneumoniae strains isolated from Russian hospitals between 2011 and 2016, including the mucoviscous KPM9, the hypermucoviscous KPi1627, and the standard non-mucoid NDM-1 carbapenemase-resistant KP ATCC BAA-2473 strains. We also investigate how AWME3 activity affects bacterial membrane permeability and the Lewis acid-base/van der Waals properties. These changes represent the mechanistic key to understanding AWME3's superior sub-MIC activity against virulence factors, such as mucoviscosity and rudimentary motility, and against the different types of biofilms formed by the three tested K. pneumoniae strains.

Materials and methods

Chemicals and media

The chemicals used in this study, including acetic acid (CH3COOH), ethanol (C2H5OH), hexane (C6H14), chloroform (CHCl3), ethyl acetate (C4H8O2), and toluene (C7H8), were purchased from Thermo Fisher Scientific, Waltham, USA. Crystal violet, propidium iodide, ethidium bromide, glutaraldehyde, and phosphate buffered saline (PBS) were purchased from Sigma-Aldrich, St. Louis, USA. Methanol (CH3OH) and hydrochloric acid (HCl), purchased from Sigma-Aldrich, St. Louis, USA, and Milli-Q H2O were mixed at the intended ratio for the extraction procedure. Luria-Bertani (LB) broth and Mueller-Hinton (MH) broth (Sigma-Aldrich, St. Louis, USA) were used to culture bacteria in liquid media. LB and MH agar (Sigma-Aldrich, St. Louis, USA) were used to culture bacteria on solid media. Tryptone soy agar (Oxoid, Basingstoke, Hampshire, United Kingdom) was used to determine the twitching motility of the bacteria. Peptone, tryptone, and NaCl were purchased from Sigma-Aldrich, St. Louis, USA, while yeast extract was purchased from Difco, USA; these were used to prepare the culture media for validating bacterial motility.

Bacterial strains and growth conditions

The environmental isolate K. pneumoniae KPM9 and the clinical isolate K. pneumoniae KPi1627 were obtained from the State Collection of Pathogenic Microorganisms and Cell Cultures (SCPM, Obolensk, Russia). The K. pneumoniae ATCC BAA-2473 laboratory strain was purchased from ATCC (American Type Culture Collection, United States). All tested bacterial strains were identified according to Lev et al. (2018). The K. pneumoniae KPi1627 strain was isolated from a clinical sample (trachea) at Moscow Infectious Hospital No. 1 in 2014, while the K. pneumoniae KPM9 strain was isolated from the environment (fresh water) in the Krasnodar Region of Russia in 2011 (Lev et al., 2018) and collected at the Burdenko Neurosurgery Institution.

The identification and detection of the bacterial strains were confirmed using a Vitek-2 Compact instrument with a VITEK® 2 Gram-negative (GN) ID card (SKU number 21341; BioMérieux, Paris, France) and a MALDI-TOF Biotyper (Bruker Daltonics, Bremen, Germany) instrument, which is capable of distinguishing among Klebsiella oxytoca, K. pneumoniae subsp. ozaenae, K. pneumoniae subsp. pneumoniae, K. pneumoniae subsp. rhinoscleromatis, and K. variicola.
After that, the identified K. pneumoniae strains were stored in 15% glycerol and kept at −80°C. A single colony from each strain was inoculated into 10 ml of LB broth and incubated overnight at 37°C with shaking at 210 rpm. The overnight culture was adjusted to half of the McFarland standard (1 × 10^8 CFU/ml) for use in biofilm assays under static conditions.

Extraction method

The acidic water-methanol extract (AWME3) was isolated from live Black Soldier Fly (H. illucens) larvae, 15 days old, brownish in color, wheat-fed, and provided by the NordTechSad, LLC company (Arkhangelsk, Russia). The H. illucens larvae fat was extracted according to Mohamed et al. (2022). Briefly, 3 g of larvae fat was subjected to sequential extraction using water (Milli-Q quality), methanol (99.9%, HPLC grade), and hydrochloric acid (37%) at a ratio of 90:9:1, v/v/v. AWME3 was selected for the experiments in this study because of its highest activity among the extracts against Aeromonas sp. (Mohamed et al., 2021).

Membrane permeability of K. pneumoniae strains

The impact of AWME3 on the membrane permeability of all K. pneumoniae strains was determined using the crystal violet (CV) uptake assay, in which the stain passes through the cell membrane (Hobby et al., 2019). After incubation in LB medium at 37°C overnight, K. pneumoniae strains were harvested and washed three times with PBS, pH 7.4. The pellets were resuspended in PBS and mixed with AWME3 at concentrations from 1/4 to 2 MIC for 4 h. Bacterial cells were then incubated with 1% CV in the dark for 15 min. After centrifugation, the absorbance of the supernatant was determined by measuring the OD at 570 nm using a CLARIOstar® Plus multimodal plate reader (BMG Labtech, Ortenberg, Germany). The absorbance of the CV solution was considered 100%. The crystal violet uptake was calculated using the following formula: % uptake = (OD of the sample)/(OD of the crystal violet solution) × 100.

Swimming motility

All three K. pneumoniae strains were seeded on LB agar (Sigma-Aldrich, St. Louis, USA) and incubated at 37°C for 24 h. Then, one colony of each isolate was inoculated, in the presence and absence (control) of 1/2 MIC AWME3, onto the surface of swimming agar plates containing 1.0% tryptone (Oxoid, Basingstoke, Hampshire, UK), 0.5% sodium chloride (Sigma-Aldrich, St. Louis, USA), and 0.3% agar (Difco, USA), previously equilibrated to room temperature. Plates were incubated without inversion for 24 h at 30°C (Saeki et al., 2021).

Twitching motility

All K. pneumoniae strains were seeded on LB agar (Sigma-Aldrich, St. Louis, USA) and incubated at 37°C for 24 h. Then, one colony of each isolate was inoculated, in the presence and absence (control) of 1/2 MIC AWME3, into the bottom of twitching agar plates containing 1.0% tryptone (Oxoid, Basingstoke, Hampshire, UK), 0.5% yeast extract (Oxoid, UK), 1.0% sodium chloride (Sigma, USA), and 1.0% agar (BD Difco, New Jersey, USA). Plates were inverted and incubated at 37°C for 24 h. Subsequently, the agar was carefully removed, and the motility zone was measured to the nearest millimeter after staining with 2% crystal violet (Sigma-Aldrich, St. Louis, USA) for 2 h (Hu et al., 2012). As a negative control, each strain was inoculated on tryptone soy agar (BD Difco, New Jersey, USA) under the same conditions.
Minimal inhibitory biofilm concentration test

The MIBC assay was conducted via the microdilution assay described by Cepas et al. (2019) with some modifications. For all K. pneumoniae strains, 100 µl of twofold dilutions of AWME3 in LB broth (1-, 0.5-, 0.25-, 0.125-, 0.063-, 0.032-, 0.016-, and 0.008-mg/ml concentrations) was inoculated with 100 µl of bacterial suspension at 1 × 10^6 CFU/ml and incubated for 24 h at 37°C without shaking. A negative control (culture medium without inoculum) and a positive control (culture medium with inoculum) were included in each 96-well plate. All plates were covered with adhesive film to avoid evaporation. After incubation, the unattached cells were carefully removed, and the wells were washed twice with PBS and dried at 60°C for 20 min. Attached biofilms were stained with 125 µl of 1% (v/v) CV and incubated for 10 min at room temperature. Afterward, the CV was completely removed, the wells were washed with PBS, and the plates were dried at 65°C for 60 min. The plates were rinsed with d.H2O and dried, followed by the addition of 125 µl of 30% acetic acid to dissolve the biofilm-bound dye. Optical density was measured at 570 nm (OD570) using a CLARIOstar® Plus multimodal plate reader (BMG Labtech, Ortenberg, Germany). The MIBC was defined as the lowest concentration of AWME3 that resulted in a threefold decrease in OD570 in comparison with the positive growth-control value (bacteria only) (a minimal sketch of this readout appears after the MEBC subsection below). Additionally, the MIBC of AWME3 and of the positive control (doxycycline, Dox) was determined in 96-well plates and identified as the lowest concentration that inhibited biofilm growth of the K. pneumoniae strains.

Minimum eradication biofilm concentration test

The minimum eradication biofilm concentration (MEBC) was determined according to Balkrishna et al. (2021) with minor changes. The MEBC was determined based on the MIBC test, where different concentrations of AWME3, in the range 0.008-1.0 mg/ml, were inoculated with a fixed 1 × 10^6 CFU/ml concentration of each K. pneumoniae strain in sterile LB broth and incubated at 37°C for 24 h. Subsequently, an aliquot of 30 µl from each of the MIBC, 2 MIBC, and 4 MIBC wells was scraped and spread on sterile MH agar plates, which were then incubated for 48 h at 37°C. The lowest concentration of AWME3 that prevented bacterial growth was identified as the MEBC. Likewise, doxycycline was used as a positive control, and its MEBC was determined in the same manner.
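A minimal R sketch of the MIBC readout described above, assuming a hypothetical matrix of OD570 values (one row per AWME3 concentration, one column per replicate well) and applying the threefold-decrease criterion stated in the text; the numbers are placeholders, not measured data.

```r
## Hypothetical OD570 readings after CV staining and dissolution:
## rows = AWME3 concentrations (mg/ml), columns = replicate wells.
conc <- c(1, 0.5, 0.25, 0.125, 0.063, 0.032, 0.016, 0.008)
od   <- matrix(c(0.10, 0.12, 0.11,   0.30, 0.28, 0.31,
                 0.65, 0.70, 0.68,   0.85, 0.80, 0.86,
                 0.95, 0.98, 0.96,   1.00, 1.02, 0.99,
                 1.03, 1.05, 1.01,   1.04, 1.06, 1.02),
               nrow = 8, byrow = TRUE,
               dimnames = list(conc, paste0("rep", 1:3)))
od_control <- 1.05   # mean OD570 of the bacteria-only growth control

## MIBC: lowest concentration whose mean OD570 is at least threefold
## below the positive growth control.
mean_od   <- rowMeans(od)
inhibited <- mean_od <= od_control / 3
mibc <- if (any(inhibited)) min(conc[inhibited]) else NA
mibc   # 0.5 mg/ml with these placeholder values
```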
Testing of AWME3 action against mature biofilms

The biofilm disruption assay was performed in 96-well polystyrene plates (TPP, Trasadingen, Switzerland) following the published procedure (Chmielewska et al., 2020; Wijesinghe et al., 2021) with minor modifications. The bacteria were grown in microtiter plates for 72 h at 37°C to form mature biofilms in the wells. The medium was then discarded gently, and the wells were washed with PBS buffer to remove loosely adhered cells. Freshly prepared LB broth was added to each well, and AWME3 was then added to final concentrations of 0.25, 0.5, 1.0, and 2.0 mg/ml. The plates were incubated for a further 24 h under static conditions at 37°C. The wells were washed with sterile PBS to remove planktonic cells, followed by staining with 0.1% CV solution in water for 30 min. The stain was removed, the wells were gently washed with d.H2O and dried at 60°C for 60 min, and the remaining biofilm-bound dye was dissolved using 30% acetic acid. The OD570 was recorded using a CLARIOstar microplate reader, and the percent biofilm disruption was calculated with respect to the control group (a minimal sketch of this calculation appears after the microscopy subsections below). The MEBC of AWME3 and doxycycline was determined by counting the formed colonies (CFU). Briefly, the adhered treated biofilms were completely scraped and serially diluted in PBS. Of the prepared dilutions, 30 µl was spread separately on MH agar medium and incubated at 37°C for 48 h; the MEBC was identified as the lowest concentration of AWME3 or doxycycline able to eliminate bacterial biofilm growth.

Fluorescence microscopy

Biofilm architecture, in the absence and presence of the AWME3 antimicrobial, was evaluated using the fluorescence microscopy protocol of Sateriale et al. (2020), described in the Supplementary Methods for the light microscopy examination, with some changes. After fixing the wells with 90% ethanol for 15 min and drying completely at 30°C, the biofilms were stained with 1 mM propidium iodide (PI) for 15 min at room temperature. The excess dye was washed off with d.H2O. Finally, the biofilms were observed with a fluorescence microscope (Life Technologies, Bothell, WA, USA) equipped with a digital camera. Digital images were acquired using a ×4 objective at PI excitation/emission wavelengths of 543/617 nm. All obtained images were analyzed using Fiji ImageJ software (National Institutes of Health, Bethesda, USA) to obtain the mean fluorescence intensities from the digital fluorescent images of the biofilms.

Scanning electron microscopy

Treated and untreated biofilms of the K. pneumoniae ATCC BAA-2473 strain were examined using a scanning electron microscope (SEM) according to Ceruso et al. (2020) with minor modifications. Briefly, the K. pneumoniae cells were cultured and grown on 1-cm2 cover glasses (Thermo Fisher Scientific, Waltham, USA) in a six-well plate (TPP, Switzerland) for 6 h as mentioned above. Next, AWME3 at concentrations of 0.0, 0.125, 0.250, and 0.500 mg/ml was added to the formed biofilms in the six-well microtiter plate, which was then incubated for 24 h at 37°C without shaking. The planktonic cells were then removed by washing three times with PBS, pH 7.4. All biofilms adhered to the surface of the glass coverslips were fixed with 2.5% glutaraldehyde, pH 7.2, overnight at 4°C, washed three times in the rinsing buffer (PBS) at 4°C for 15 min, and then dehydrated in a graded ethanol series of 30%, 50%, 70%, 80%, 90%, and 95%. All dehydrated samples were visualized under SEM (TESCAN, Kohoutovice, Czech Republic).
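A minimal R sketch of the percent-disruption calculation referenced in the mature-biofilm assay above, under the assumption that disruption is expressed as the reduction in biofilm-bound CV signal relative to the untreated control; od_ctrl and od_treated hold hypothetical OD570 readings.

```r
## Hypothetical OD570 readings of dissolved biofilm-bound CV.
od_ctrl    <- c(1.05, 1.10, 0.98)                  # untreated mature biofilm
od_treated <- list(`0.25` = c(0.80, 0.76, 0.83),   # mg/ml AWME3
                   `0.5`  = c(0.55, 0.60, 0.58),
                   `1.0`  = c(0.21, 0.25, 0.19),
                   `2.0`  = c(0.07, 0.05, 0.06))

## Percent disruption relative to the control group.
pct_disruption <- sapply(od_treated, function(x)
  100 * (mean(od_ctrl) - mean(x)) / mean(od_ctrl))
round(pct_disruption, 1)
```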
Statistical analysis

Statistical analyses were conducted, and graphs were generated, using GraphPad Prism 7 (GraphPad Software Inc., San Diego, CA, United States). All experiments were performed in triplicate, and statistical significance was validated by one-way ANOVA with Dunnett's multiple comparison test and by two-way ANOVA with Dunnett's, Tukey's, and Sidak's corrections; the statistical significance level was p < 0.05 (a minimal R sketch of a comparable testing scheme is shown below, after the planktonic-versus-biofilm comparison).

Results

Our study focused on exploring the anti-biofilm and anti-virulence properties of AWME3 against biofilms formed by K. pneumoniae strains isolated from Russian hospitals between 2011 and 2016. These included the mucoviscous KPM9, the hypermucoviscous KPi1627, and the standard non-mucoid NDM-1 carbapenemase-resistant KP ATCC BAA-2473 strains.

3.1 Phenotypic characteristics of the tested K. pneumoniae strains

The K. pneumoniae KPi1627 and K. pneumoniae KPM9 strains demonstrated multidrug-resistant phenotypes towards more than three different classes of antibiotics (Magiorakos et al., 2012), while K. pneumoniae ATCC BAA-2473 was classified as extensively drug resistant (XDR) (Mohamed et al., 2022). In addition, K. pneumoniae KPi1627 and K. pneumoniae KPM9 displayed high hypermucoviscosity in the string assay, while K. pneumoniae ATCC BAA-2473 was negative in the same test.

AWME3 impact on MDR K. pneumoniae strains grown under biofilm vs. planktonic bacterial mode

To enhance the diagnosis, treatment, and prevention of infections, it is crucial to differentiate between acute infections caused by the independent growth of individual microorganisms (so-called "planktonic" growth) and biofilm infections, which involve clusters of microbial cells (Mirzaei et al., 2024). The minimum inhibitory concentration (MIC) is the lowest concentration of an antibiotic that stops visible bacterial growth. The minimum bactericidal concentration (MBC) is the lowest concentration needed to kill the bacteria. Diagnostic laboratories use MICs primarily to confirm the presence of resistance. MIC and MBC are determined on planktonic cells, while the minimum biofilm eradication concentration (MEBC) indicates the lowest antibiotic concentration needed to eliminate the biofilm.

In a previous study, we determined the MIC and MBC of the AWME3 extract against planktonic bacterial cells of the three K. pneumoniae isolates KPi1627, KPM9, and KP ATCC BAA-2473 (Mohamed et al., 2022). The MIC and MBC were recorded as 250 µg/ml under planktonic growth conditions. In the present study, we identified the minimum inhibitory biofilm concentration (MIBC) and the MEBC for the same strains as 500 µg/ml when they were tested under static conditions after adhering to polystyrene microtiter plates with exceptional resilience (Table 1). Doxycycline (Dox), used as the positive control, displayed remarkable MIC values of 6.25, 3.12, and 12.5 µg/ml against K. pneumoniae KPi1627, K. pneumoniae KPM9, and K. pneumoniae ATCC BAA-2473, respectively. At the same time, Dox exhibited MEBC values exceeding 50 µg/ml for all three strains.

These results demonstrate a significant reduction in sensitivity to AWME3 treatment for all tested hvKp strains when grown in biofilm mode compared with planktonic growth. Moreover, the AWME3 extract demonstrated bactericidal activity comparable to the standard antibiotic (Dox) against all biofilms formed by the K. pneumoniae strains (Table 1).
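As a cross-check on the comparison scheme described under "Statistical analysis" above, here is a minimal R sketch of a one-way ANOVA with Dunnett's multiple comparison against the untreated control. The data frame is hypothetical, and GraphPad Prism remains the tool actually used in the study; this simply illustrates the same testing logic.

```r
library(multcomp)  # provides Dunnett's multiple comparison via glht()

## Hypothetical triplicate OD570 biofilm readings for a control and
## three AWME3 concentrations (mg/ml).
d <- data.frame(
  group = factor(rep(c("control", "awme3_0.25", "awme3_0.5", "awme3_1.0"),
                     each = 3)),
  od570 = c(1.08, 1.02, 1.11,   0.81, 0.78, 0.84,
            0.55, 0.60, 0.57,   0.20, 0.24, 0.22)
)
d$group <- relevel(d$group, ref = "control")  # compare every dose to control

fit <- aov(od570 ~ group, data = d)
summary(fit)                                          # one-way ANOVA
summary(glht(fit, linfct = mcp(group = "Dunnett")))   # Dunnett vs. control
```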
AWME3 impact on preformed biofilms of MDR K. pneumoniae strains

Biofilms are bacterial colonies formed in multiple layers on biotic or abiotic material; they facilitate bacterial survival and persistence under harmful conditions and contribute to virulence during infection. We performed the biofilm formation assay using LB broth medium in 96-well microtiter plates. The biofilms grown for 24 h were assessed by staining with crystal violet (CV). Crystal violet, an aniline dye, is the initial stain employed in the Gram staining process. When cells are exposed to a 95% ethanol or acetone solution, they produce a vivid purple color after interacting with the crystal violet pigment. The intensity of CV staining in K. pneumoniae, a Gram-negative bacterium, reveals the extent of the thin peptidoglycan layer covered by lipopolysaccharides, lipoproteins, proteins, and DNA, which forms a biofilm on the plastic surface.

AWME3 effect on mixed biofilms of MDR K. pneumoniae strains

The tested opportunistic MDR K. pneumoniae pathogens frequently form mixed biofilms that can lead to nosocomial infections in healthy individuals. We wanted to test how well AWME3 works against the resistant strains and whether it can remove the strong biofilms created by them. The mixed biofilm established by equal volumes (1:1:1) of the three different strains was the strongest (p = 0.003) compared with the biofilms formed by each K. pneumoniae strain alone (Supplementary Figure S1). On the other hand, KP ATCC BAA-2473 had the lowest (p = 0.0005) biofilm formation capacity compared with the mixed biofilm (Supplementary Figure S1). The results depicted in Supplementary Figure S2 provide compelling evidence of the dose-related eradication of the mixed biofilms formed by the three K. pneumoniae strains. Notably, when exposed to a concentration of 2 MIBC (1,000 µg/ml) of AWME3, the mixed biofilms consisting of KPi1627, KPM9, and KP ATCC BAA-2473 were completely eradicated (p < 0.0001) (see Supplementary Figure S2). Concentrations of 0.25 MIBC (125 µg/ml), 0.5 MIBC (250 µg/ml), MIBC (500 µg/ml), and 2 MIBC (1,000 µg/ml) exhibited a remarkable (p < 0.0001) inhibition of the mixed biofilms, resulting in 76.3%, 80.01%, 88.2%, and 98.56% reductions, respectively. In addition, the AWME3 extract effectively reduced the mixed biofilms of the hvKp strains by 28.3% and 33.97% at the low concentrations of 1/16 MIBC (62.5 µg/ml) and 1/32 MIBC (31.25 µg/ml), respectively (Supplementary Figure S2), with significant reductions (p = 0.010, p = 0.038). Our results conclusively demonstrate that the AWME3 extract derived from HI larvae fat possesses remarkable antimicrobial properties against both single and mixed biofilms formed by the MDR K. pneumoniae strains KPi1627, KPM9, and KP ATCC BAA-2473.

AWME3 disrupts mature biofilms established by MDR K. pneumoniae strains

Considering the dynamic growth of biofilms and their greater tolerance to antibiotics, we first followed biofilm growth for 72 h and then subjected the biofilms to antibiotic challenge. We used light, fluorescence, and scanning electron microscopy as direct microscopic methods to gather clear information about the effect of AWME3 on the treated biofilms.
Through light microscopy, we investigated the effect of AWME3 at 0.5 MIBC (0.25 mg/ml), MIBC (0.5 mg/ml), and 2 MIBC (1 mg/ml) on mature biofilms formed on glass coverslips, using the CV assay. In the untreated control group, a dense and intricately woven mat of biofilm emerged, resembling a heavy knit fabric. These biofilms displayed multiple clearly intact layers covering the uneven surfaces, and clusters of cells gave rise to darkened areas. The EPS creates a vast, intertwined network that serves as a matrix for connected threads, effectively shielding the biofilm from a variety of hazardous conditions (Supplementary Figure S3A). AWME3 exposure at a sub-MIC concentration (0.25 mg/ml) led to a remarkable decrease in cell count, the formation of fragile mats, degradation of clusters, and a noticeable absence of cell aggregation (Supplementary Figure S3B). The highest concentration of AWME3, 2 MIBC (1.0 mg/ml), successfully prevented the formation of mature biofilm by K. pneumoniae KP ATCC BAA-2473, leading to the complete absence of cell clusters or aggregates. Furthermore, numerous vacant spaces were observed, and bacterial cells were minimal (Supplementary Figure S3D). Notably, there was no bacterial growth after culturing 30 µl of the scraped biofilm on MH agar plates incubated for 24 h, in contrast to the standard antibiotic (Dox), which could not inhibit the mature biofilms at 4 mg/ml (data not shown).

Mature biofilms were also quantified using the CV staining assay in 96-well microplates. The graph in Figure 2 clearly illustrates that, at concentrations of 1,000 and 2,000 µg/ml of AWME3, the biofilms created by the K. pneumoniae KPi1627, KPM9, and KP ATCC BAA-2473 strains were impressively and significantly disrupted (*p < 0.01-****p < 0.0001). The mature biofilms formed by the K. pneumoniae ATCC BAA-2473 strain exhibited the highest susceptibility to AWME3, with a biofilm reduction of 63.1% when treated with 0.25 mg/ml (Figure 2C).

The propidium iodide (PI) staining method is extensively utilized and endorsed in biofilm research, giving valuable information regarding eDNA release and degradation that affects biofilm maturation. PI, which can only traverse compromised bacterial membranes, is regarded as an indicator of membrane integrity. Staining based on membrane-impermeable DNA-binding stains like PI is occasionally used even when specifically studying eDNA (Mann et al., 2009).
The ability of AWME3 to disrupt biofilms formed by the tested K. pneumoniae strains was further investigated by fluorescence microscopy of biofilms stained with propidium iodide (PI) (Figure 3). Images were obtained using a 543/617 nm excitation/emission filter set. As shown in the control images, the tested bacterial strain formed very dense biofilms on glass coverslips: dense clumps of cells were visualized, and the bacteria were heavily colonized and adherent in multiple layers. Exposure to increasing concentrations of AWME3 resulted in a significant (p < 0.0001) decrease in adherent bacterial cells, which appeared scattered (Figures 3B-E). After treatment with 0.5 MIBC (0.25 mg/ml), reduced mats, threads, and clumps were observed, with dispersed cells and noticeable gaps between them (Figure 3B). AWME3 at MIBC (0.5 mg/ml) significantly decreased the overall bacterial cell count and staining intensity (Figure 3C). The strongest impact of AWME3 was observed at 2 MIBC (1.0 mg/ml) (Figure 3D), with an almost 10-fold decrease (p < 0.0001) in fluorescence intensity compared with the control group (Figure 3E). These results demonstrate that AWME3 effectively breaks down the pre-existing deposits of extracellular nucleic acids (eNAs) found in mature K. pneumoniae biofilms, with a level of disruption proportional to the AWME3 concentration.

Disrupted biofilm visualized by scanning electron microscopy

SEM analysis was performed to visually assess biofilm disruption after AWME3 treatment. In untreated biofilms established by K. pneumoniae KP ATCC BAA-2473, cells appeared aggregated and accumulated in multiple layers (Figures 4A-C), and no morphological alterations were detected: the untreated cells were smooth bacilli with intact cell walls (Figure 4C). AWME3 treatment caused a significant decrease in the number of adherent bacteria compared with the control. SEM revealed rough surfaces, wrinkled cell walls, and visible pores in the AWME3-treated bacterial cells, as indicated by the blue arrows (Figures 4D-F). Lysed cells, cell wall debris, and ghost cells were obvious when biofilms were treated with 2 MIBC (1.0 mg/ml) of AWME3 (blue arrows and red circle in Figure 4F). The biofilm was reduced to a monolayer of adherent cells, and even single cells were detected (Figures 4D-F). These findings suggest that AWME3 is a potent anti-biofilm agent that can disrupt mature biofilms.

Figure 3 (caption): Concentration-related effect of AWME3 on the deposition of extracellular nucleic acids (eNAs) in K. pneumoniae biofilms. Representative fluorescence microscopy images of (A) an untreated control biofilm established by the KP ATCC BAA-2473 strain and the same bacteria treated with (B) 0.5 MIBC (0.25 mg/ml), (C) MIBC (0.5 mg/ml), and (D) 2 MIBC (1.0 mg/ml) of AWME3. PI staining was used to stain the eNAs of the biofilm. (E) Relative fluorescence intensity of biofilm structures of the KP ATCC BAA-2473 strain, reported in arbitrary units (au) obtained by quantifying the digital images with Fiji ImageJ software. Data are expressed as mean ± STD; ****p < 0.0001 vs. the control group.

Thus, three independent microscopy techniques all showed that AWME3 disrupts, in a dose-related manner, mature biofilms composed of peptidoglycan layers covered by lipopolysaccharides, lipoproteins, proteins, and eNAs, produced by various MDR hvKp strains on the plastic surface.
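The roughly 10-fold intensity decrease reported in Figure 3E comes from quantifying digital images (the authors used Fiji ImageJ). Below is a minimal Python sketch of the same idea; the file names are hypothetical, and the plain mean-intensity readout with no background correction or thresholding is our simplifying assumption, not the study's exact procedure.

```python
import numpy as np
from PIL import Image

def mean_intensity(path: str) -> float:
    """Mean pixel intensity of a grayscale-converted fluorescence image."""
    return float(np.asarray(Image.open(path).convert("L"), dtype=float).mean())

# Hypothetical file names for the PI-stained biofilm panels.
control = mean_intensity("biofilm_control.tif")
treated = mean_intensity("biofilm_2MIBC.tif")

print(f"fold decrease vs. control: {control / treated:.1f}x")
```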
Hypervirulent K. pneumoniae loses its mucoviscosity in the presence of AWME3

The hypermucoviscous (HMV) phenotype is one of the key virulence factors of K. pneumoniae and is associated with serious infections such as liver abscesses, pneumonia, and bloodstream infections. The HMV phenotype is characterized by the ability to form a thick, sticky biofilm, which contributes to pathogenicity (Kawai, 2006). Understanding this phenotype is therefore important for developing effective treatment strategies against K. pneumoniae infections.

We conducted an autoaggregation analysis to test the hypothesis that autoaggregation plays a role in biofilm formation; specifically, we investigated how AWME3 affects the autoaggregative behavior of mucoid hypervirulent K. pneumoniae strains. The autoaggregation experiment was conducted at room temperature for 24 h, and the results are presented in Supplementary Table S1 and Supplementary Figure S4A. The data indicate that 0.5 MIC of AWME3 (0.125 mg/ml) had no effect on the autoaggregation of the KPi1627, KPM9, and KP ATCC BAA-2473 strains (p = 0.995, p = 0.971, and p = 0.945, respectively) (Supplementary Figure S4A). Furthermore, the turbidity of the supernatant from the treated cells, measured after centrifugation, did not differ from that of the control group (Supplementary Figure S4B). Conversely, the same AWME3 concentration resulted in the formation of loose pellets, in contrast to the dense pellets formed by all untreated K. pneumoniae strains (Supplementary Figure S4C).

For a better understanding of the impact of AWME3 on the virulence of hvKp strains, we conducted a straightforward string test, scoring strings longer than 5 mm as positive. The KPi1627 and KPM9 strains tested positive, indicating their virulent nature, whereas the KP ATCC BAA-2473 strain tested negative, suggesting a lack of hypermucoviscosity. Of note, the untreated KPi1627 strain showed the strongest HMV phenotype among all tested bacteria, with a string 51.7 ± 3.5 mm in length; in comparison, the KPM9 and KP ATCC BAA-2473 strains had string lengths of 31 ± 3.63 mm and 3.81 ± 1 mm, respectively (Supplementary Table S2, Supplementary Figure S5). Notably, when all string-positive isolates were exposed to 0.5 MIC (0.125 mg/ml) of AWME3, they became negative in the string test, highlighting the high efficacy of AWME3 in combating one of the key virulence factors of K. pneumoniae strains with the HMV phenotype.

Effect of AWME3 on rudimentary motility of K. pneumoniae strains

K. pneumoniae has long been recognized as a major cause of urinary tract infections (UTIs), further emphasizing its significance beyond its association with pneumonia (Paczosa and Mecsas, 2016). The expression of fimbriae is crucial for the successful colonization of the urinary epithelium by K. pneumoniae: these structures facilitate attachment to urothelial cells and play a pivotal role in promoting bacterial adhesion to abiotic surfaces, such as urinary catheters (Stahlhut et al., 2012). Bacterial motility is a critical factor in the successful colonization of both living and non-living surfaces. Remarkably, the hyperfimbriated phenotype appears to grant mutant strains a form of rudimentary motility. Although K. pneumoniae was previously considered non-motile, rudimentary (limited) motility in this bacterium is now well documented (Carabarin-Lima et al., 2016; Érika et al., 2021), representing another phenotype associated with virulence in K. pneumoniae infections.
Therefore, we assessed the impact of a sub-MIC (0.125 mg/ml) of AWME3 on the three types of rudimentary motility (swimming, swarming, and twitching) of the K. pneumoniae isolates. This analysis showed that the AWME3 extract has a significant effect on the rudimentary motility of K. pneumoniae strains KPi1627, KPM9, and KP ATCC BAA-2473 (Supplementary Table S3). In particular, the AWME3 extract significantly reduced the swimming motility of the KPM9 and KP ATCC BAA-2473 strains (p = 0.0025 and p < 0.0001, respectively); however, no notable effect on KPi1627 swimming motility was observed at 0.5 MIC (0.125 mg/ml). The swarming motility of the MDR KPM9 and KP ATCC BAA-2473 strains was significantly reduced (p = 0.007 and p = 0.003) after exposure to 0.5 MIC (0.125 mg/ml) of AWME3. The sub-MIC (0.125 mg/ml) treatment significantly (p < 0.0001) reduced the twitching motility zone diameters of all K. pneumoniae strains (Supplementary Figure S6; Figure 5C), decreasing them by approximately 50%: treated strains exhibited twitching zones of 4.23 ± 0.25 to 4.47 ± 0.25 mm, whereas the untreated controls ranged from 8.5 ± 0.5 to 10.5 ± 0.5 mm (Supplementary Table S3, Supplementary Figure S6; Figure 5C).

3.9 Suggested mechanism of AWME3 action against K. pneumoniae strains grown in planktonic mode

3.9.1 AWME3 impact on the permeability of bacterial cell membranes

To verify whether the changes AWME3 causes in bacterial membranes affect their permeability, we conducted crystal violet (CV) and ethidium bromide (EtBr) uptake tests. Unlike CV, EtBr accumulates in bacterial cells either when membrane permeability increases or when efflux pumps are inhibited. Before the CV uptake test, the bacteria were exposed to AWME3 at concentrations ranging from 0.0625 to 0.5 mg/ml for 4 h; for the EtBr uptake test, exposure was extended to 8 h at concentrations of 0.25 to 0.5 mg/ml. We measured the amount of EtBr that entered the cells rather than the amount removed through the membrane, quantifying and analyzing the EtBr fluorescence signal after 15 min. CV readily penetrates and traverses only damaged cell membranes (Li et al., 2013).
We found that the K. pneumoniae strains treated with AWME3 exhibited varying levels of CV uptake (Figure 6). Of all the strains tested, KP ATCC BAA-2473 displayed remarkable susceptibility to AWME3: CV uptake increased substantially (p < 0.0001) to 40.4%, 70.05%, and 71.43% at 0.5 MIC, MIC, and 2 MIC of AWME3, respectively, compared with 1.15% in untreated bacteria (Figure 6). When the highly virulent MDR KPi1627 and KPM9 strains were treated with the low dose of 0.5 MIC AWME3 (0.125 mg/ml), their membrane permeability did not change much: KPi1627 recorded 6.05% and KPM9 4.13%, compared with 9.6% and 10.7% in their respective control groups. At the MIC concentration of 0.25 mg/ml, however, the permeability of the cell membranes of KPi1627 and KPM9 cells increased greatly (p < 0.0001), reaching 41.48% and 36.83%, respectively, and the highest concentration of 2 MIC (0.5 mg/ml) also caused significant (p < 0.0001) membrane permeabilization of 67.7% and 67.9% for the same cells (Figure 6). Thus, we showed that AWME3 permeabilizes or disrupts the bacterial cell membrane, allowing EtBr to enter the cytoplasm. Treatment with various concentrations of AWME3 for 8 h resulted in lower EtBr emission intensity than in the control, indicating significant EtBr uptake (Supplementary Figure S7A); this result supports our findings on CV uptake. EtBr uptake values for the KPi1627, KPM9, and KP ATCC BAA-2473 strains changed subtly at 0.5 MIC (0.125 mg/ml) and more markedly at MIC (0.25 mg/ml) of AWME3 (Supplementary Figure S7B): at the MIC, the increases were 60.56%, 55.98%, and 56.37%, respectively. Notably, the values for the hvKp strains were significantly higher, reaching 80.43%, 77.2%, and 76.88% at 2 MIC (0.5 mg/ml) of AWME3 (Supplementary Figure S7B). The KP ATCC BAA-2473 strain exhibited the highest susceptibility to AWME3 of all the strains. These results clearly demonstrate that AWME3 effectively increases the permeability of the cell membranes of all K. pneumoniae strains in a dose-related manner, as confirmed by both the CV and EtBr uptake assays.

3.9.2 AWME3 impact on the cell wall Lewis acid-base (electron-acceptor/electron-donor) characteristics of K. pneumoniae strains

Biofilm formation studies have shown that electrostatic and van der Waals interactions play an important role in cell adhesion, alongside the growing importance of Lewis acid-base interactions (Vernhet and Bellon-Fontaine, 1995). Microbial adhesion to solvents (MATS) is a simple and reliable method for gathering information about the van der Waals and Lewis acid-base (electron-acceptor/electron-donor) characteristics that influence bacterial cell adhesion. It was inspired by the MATH (microbial adhesion to hydrocarbons) method, first described 40 years ago and still the gold standard for bacterial cell surface hydrophobicity (Rosenberg, 1984).
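The MATS readout is typically computed as the percentage of cells lost from the aqueous phase after mixing the suspension with each solvent; the sketch below follows that common formulation, which is our assumption rather than a protocol detail from this study, with hypothetical OD readings. The >50% n-alkane rule for calling a surface hydrophobic is the one cited later in the text (Bellon-Fontaine et al., 1996).

```python
def mats_affinity_pct(a0: float, a_aqueous: float) -> float:
    """Percent adhesion to a solvent: the fraction of cells lost from
    the aqueous phase after mixing the suspension with the solvent."""
    return 100 * (1 - a_aqueous / a0)

# Hypothetical OD readings before (a0) and after mixing; illustrative only.
a0 = 0.60
aqueous_od = {"chloroform": 0.38, "hexane": 0.59,
              "ethyl acetate": 0.55, "toluene": 0.48}

for solvent, a in aqueous_od.items():
    print(f"{solvent}: {mats_affinity_pct(a0, a):.1f}% adhesion")

# Hydrophobicity call is based on adhesion to the n-alkane (hexane).
hexane_pct = mats_affinity_pct(a0, aqueous_od["hexane"])
print("hydrophobic" if hexane_pct > 50 else "hydrophilic")
```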
In this study, we demonstrated that the tested bacterial strains showed different degrees of microbial adhesion to solvents (hydrophobicity). Supplementary Table S4 highlights the higher rates of cell adhesion to chloroform, an acidic solvent, compared with both ethyl acetate, a strongly basic solvent, and toluene. No clumping or lysis of cells was observed by phase-contrast microscopy. All untreated strains showed the highest affinity for the acidic solvent and a low affinity for the basic solvents. Adhesion levels to the n-alkane (hexane) were uniformly low across all bacteria, with the KPM9 strain showing an adhesion of 6.62 ± 5.95%, while the KPi1627 and KP ATCC BAA-2473 strains displayed noticeably lower adhesion levels of 2.42 ± 3.25% and 1.09 ± 2.7%, respectively (Supplementary Table S4). These results show that, without treatment, the cell surfaces of all strains had a slightly more electron-donating (basic) than electron-accepting (acidic) character.

In contrast, treatment with 0.5 MIC (0.125 mg/ml) of AWME3 resulted in a significant (p < 0.001) increase in adhesion to chloroform (the acidic solvent): from 36.31 ± 2.84% and 35.99 ± 1.37% up to 64.9 ± 6.26% and 44.28 ± 1.7% for the KPi1627 and KP ATCC BAA-2473 strains, respectively (Supplementary Table S4; Figure 7A). Moreover, both strains exhibited a significant increase in their ability to adhere to the apolar n-alkane (hexane), with adhesion reaching 10.6 ± 2.9% and 24.72 ± 4.75% for KPi1627 and KP ATCC BAA-2473, respectively (Supplementary Table S4). The KPM9 strain kept its affinity to the acidic, basic, and apolar n-alkane (hexane) solvents unchanged after treatment; however, there was a small increase in its affinity to toluene, from 19.42 ± 4.4% to 37.55 ± 2.86% (Supplementary Table S4; Figure 7D). Thus, AWME3 treatment increased the electron-donating properties of the cell surfaces of all strains except KP ATCC BAA-2473, which had slightly higher electron-accepting characteristics after treatment (Supplementary Table S4; Figure 7C). The substantial 4- and 20-fold rises in microbial adhesion to the n-alkane for the KPi1627 and KP ATCC BAA-2473 strains, respectively, clearly demonstrate a significant enhancement of the van der Waals properties of the bacterial cell membranes after AWME3 treatment.

Figure 6 (caption): Effect of AWME3 on CV uptake through the cell membranes of different K. pneumoniae strains. All tested bacterial strains were treated for 4 h with various concentrations of AWME3: 0.25 MIC (0.0625 mg/ml), 0.5 MIC (0.125 mg/ml), MIC (0.250 mg/ml), and 2 MIC (0.5 mg/ml). The action on membrane permeability was calculated after measuring the absorbance of the crystal violet dye. All data are expressed as mean ± SD of three independent experiments; statistical analysis was performed using two-way ANOVA and Dunnett's multiple comparisons test (****p = 0.0001).

Discussion

K. pneumoniae is a critical pathogen responsible for a variety of infections in the hospital environment, particularly nosocomial infections in intensive care units. Our previous research revealed that the clinical isolate K. pneumoniae KPi1627 and the environmental isolate K. pneumoniae KPM9 exhibited resistance to colistin but remained susceptible to the quinolone group.
Conversely, the standard NDM-1 K. pneumoniae ATCC BAA-2473 strain was resistant to the quinolone group but sensitive to colistin. Moreover, all three strains demonstrated sensitivity to doxycycline. The AWME3 extract, obtained from HI larvae fat, not only inhibits but also eliminates all tested K. pneumoniae strains grown in planktonic mode, acting at a minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of 250 µg/ml (Mohamed et al., 2022).

Eradicating biofilms has proven to be a challenging task because biofilms exhibit heightened resistance to microbiocides and antibiotics compared with planktonic cells (Panebianco et al., 2021). Most clinical isolates of K. pneumoniae contain two types of fimbrial adhesins, type 1 and type 3 fimbriae, which are important for K. pneumoniae pathogenicity and biofilm formation (Schroll et al., 2010). In addition, all tested bacteria were MDR pathogens (Lev et al., 2018; Chmielewska et al., 2020); when we compared the strains with respect to their antibiotic resistance, all were found to form strong biofilms and to remain resistant, in line with several other studies (Nirwati et al., 2019; Ochońska et al., 2021; Oleksy-Wawrzyniak et al., 2022). Although the standard antibiotic doxycycline demonstrated strong bacteriostatic efficacy (MIBC > 50 µg/ml) against all tested bacterial strains, it was unable to break down mature biofilms even at 4 mg/ml. On the contrary, the AWME3 extract at a concentration of 1 mg/ml exhibited significant antimicrobial properties against mixed and mature biofilms (see Supplementary Figure S6; Figures 1, 2) formed by various multidrug-resistant strains of K. pneumoniae, including KPi1627, KPM9, and KP ATCC BAA-2473.

Figure 7 (caption): The influence of a sub-MIC of AWME3 on the hydrophobicity of the tested bacterial strains. The effect of 0.5 MIC (0.125 mg/ml) on the hydrophobicity of KPi1627, KPM9, and KP ATCC BAA-2473 toward different solvents: (A) chloroform, (B) hexane, (C) ethyl acetate, and (D) toluene. Data are mean values ± STD (n = 3), analyzed by two-way ANOVA followed by Tukey's multiple comparisons test; p-values ranged between **p = 0.009 and ****p < 0.0001.

During biofilm formation, eDNA mediates bacterial attachment to surfaces (Whitchurch et al., 2002), and it also plays a major role in mature biofilms. The importance of eDNA in biofilm formation is demonstrated by the fact that DNase I inhibits biofilm formation or detaches existing biofilms of several Gram-positive and Gram-negative bacterial species (Okshevsky and Meyer, 2013). We used three different microscopy techniques in our study, and all of them consistently showed that AWME3 disrupts mature biofilms, as depicted in Figure 2.

For the first time, we have demonstrated the effectiveness of AWME3 in combating two key virulence factors of K. pneumoniae strains. Most hvKp strains possess a thick, hypermucoid capsule and therefore produce mucoid colonies that give a positive string test; the hypermucoviscous phenotype is thus mostly associated with hypervirulence (Shon et al., 2013).
Lev et al. (2018) reported that KPM9 and KPi1627 have capsular types K20 and K2, respectively. The capsule is a crucial virulence factor that enhances the resistance of hvKp to various antibacterial agents and its ability to form a biofilm; the biofilm, in turn, grants the bacterium resistance to antibiotics and protects it during periods of starvation stress (Wu et al., 2011; Zheng et al., 2018). The failure of earlier generations of antibiotics to eradicate hvKp capsules or reduce their virulence has weakened the last-resort antibiotics, emphasizing the urgent need for safe and selective therapeutic agents aimed at preventing and treating resistant bacterial strains (Ling et al., 2015). In the present study, we successfully demonstrated the remarkable effectiveness of AWME3, used at a sub-MIC concentration of 0.125 mg/ml, in eliminating hypermucoviscosity (Supplementary Table S2, Supplementary Figure S5). Hypermucoviscosity is a crucial virulence factor of K. pneumoniae, and our findings highlight the potential of AWME3 in addressing this problem.

For a considerable period, K. pneumoniae was perceived as a non-motile Gram-negative rod. This perspective was challenged when Carabarin-Lima and León-Izurieta (Carabarin-Lima et al., 2016) demonstrated the presence of polar flagella in K. pneumoniae isolated from a patient with neonatal sepsis and described a flagella-driven swimming-like motility phenotype in these clinical isolates. The hyperfimbriated phenotype represents a rudimentary form of motility and serves as another virulence factor of K. pneumoniae. Moreover, the K. pneumoniae genome contains the flk gene, which encodes a regulator of flagellar biosynthesis; the rudimentary movement observed in the mutant strain results from the production of type 1-like fimbriae or from the KpfR regulator enhancing the expression of flagellar genes. Sharma et al. (2019) conducted a comprehensive comparative proteomics and systems biology study investigating the correlation between decreases in motility-related proteins (such as flagella, fimbriae, and pili) and biofilm formation, a correlation that may contribute to the development of drug resistance. Our findings reveal that, even at a sub-MIC concentration (0.125 mg/ml), AWME3 effectively suppresses the twitching motility generated by all tested bacterial strains (Figure 5) (Carabarin-Lima et al., 2016; Érika et al., 2021). In this regard, our data are consistent with previous studies showing significant reductions in the swimming and swarming motility of various human pathogens treated with natural product extracts (Song et al., 2019; Balkrishna et al., 2021; Vishwakarma et al., 2022).
Long-chain free fatty acids (FFAs) have the potential to neutralize the virulence factors of bacterial pathogens (Borreby et al., 2023). Indeed, various FAs modulate virulence factors, regulating the motility, fimbriae, hyphae, and biofilm formation of diverse microorganisms. For instance, oleic acid, a component of AWME3, inhibited swarming motility and pyocyanin production in Pseudomonas aeruginosa (Singh et al., 2013). Several publications have reported that single or combined fatty acids disrupt and eradicate biofilms formed by MDR pathogenic bacterial strains (Singh et al., 2013; Eder et al., 2017; Hobby et al., 2019). Our AWME3 extract shows higher activity than an extract of Withania somnifera seeds that contains a large amount of fatty acids (Balkrishna et al., 2021). AWME3 was also more potent than various essential oils used to disrupt New Delhi metallo-β-lactamase-1-producing uropathogenic K. pneumoniae strains (Kwiatkowski et al., 2022), and it was superior to mechanically processed oils from Hermetia illucens larvae and Bombyx mori pupae in killing bacterial cells (Saviane et al., 2021). Previous studies have likewise reported that natural products, in particular saturated fatty acids (SFAs) and polyunsaturated fatty acids (PUFAs), can disrupt and inhibit mature biofilms established by K. pneumoniae strains (Jiang et al., 2020; Kumar et al., 2020; Galdiero et al., 2021; Saifi et al., 2024).

The SEM analysis revealed wrinkled cell walls and visible pores in the AWME3-treated bacterial cells (Figure 4), confirming our previous report (Mohamed et al., 2022) that AWME3 likely targets the bacterial cell wall and membranes. The results of the CV (Figure 6) and EtBr uptake (Supplementary Figure S7A) assays clearly demonstrate that these membrane disturbances are accompanied by a dose-related increase in the permeability of the cell membranes of all K. pneumoniae strains. The observed alterations in cell morphology and viability are consistent with other studies showing that fatty acids and their glycerides inhibit and eliminate single or mixed biofilms formed by various microorganisms by causing leakage through the cell wall and cell membrane; these substances also impair the electron transport chain, block enzymes, and cause deficiencies in nutrient uptake (Baker et al., 2018; Hobby et al., 2019; Kumar et al., 2020; Galdiero et al., 2021). The impact of exogenous fatty acids (linoleic acid, γ-linolenic acid, α-linolenic acid, arachidonic acid, eicosapentaenoic acid, dihomo-γ-linolenic acid, and docosahexaenoic acid) on K. pneumoniae was explored by Hobby et al. (2019), in a study that also involved treating K. pneumoniae with antimicrobial peptides (AMPs). Contrary to our current study, that work found that supplementing the medium with fatty acids significantly increased the growth of K. pneumoniae. However, these exogenous FAs also caused structural changes in the bacterial membranes, raising the intriguing possibility that such modifications could enhance membrane permeability, which aligns with our current data.
To understand the mechanistic pathways of the anti-virulence and anti-biofilm activity of AWME3, we investigated the Lewis acid-base (electron-acceptor/electron-donor) characteristics and the van der Waals interactions of the bacterial cell wall. By examining the surface properties of the microorganisms, we aim to better understand how to effectively reduce or prevent their adhesion. For this purpose, we used the standard microbial adhesion to solvents (MATS) technique, a simple and reliable method for gathering information about the acid-base properties of microbial cells. Our data show that, when untreated, the cell surfaces of all strains displayed a slightly more electron-donating (basic) than electron-accepting (acidic) character. When treated with 0.5 MIC (0.125 mg/ml) of AWME3, however, the electron-donating properties of the cell surfaces were significantly enhanced in all strains except KP ATCC BAA-2473; interestingly, KP ATCC BAA-2473 exhibited a slightly higher electron-accepting character after treatment (Supplementary Table S4, Figure 7). The same treatment significantly enhanced the van der Waals interactions of the bacterial cell membranes, as indicated by the substantial increase in microbial adhesion to the n-alkane for the KPi1627 and KP ATCC BAA-2473 strains.

The hydrophobicity of the microbial cell surface is an important factor in the adhesion phenomenon (Rosenberg et al., 1991; Achinas et al., 2019), and there is a strong correlation between biofilm formation and cell surface hydrophobicity. The hydrophobic/hydrophilic nature of a surface is determined by the percentage of cells that attach to n-alkanes: the surface is considered relatively hydrophobic when this percentage exceeds 50% and relatively hydrophilic when it is lower than 50% (Bellon-Fontaine et al., 1996). Hence, it is likely that treatment with AWME3 reduces the hydrophilic nature of the cell wall and membranes of K. pneumoniae.

From a mechanistic standpoint, AWME3 appears to behave differently from group 2 capsule polysaccharide (G2cps), another promising candidate for combating virulence and biofilm formation (Bernal-Bayard et al., 2023). Klebsiella's CPS has dual effects during biofilm formation, helping with initial adhesion and maturation while repelling competitors. It has been proposed that CPS alters the physical properties of abiotic surfaces by increasing their hydrophobicity (Dos Santos Goncalves et al., 2014). The anti-biofilm activity of G2cps is due to changes in ionic charge and Lewis base properties induced by the CPS polysaccharides in the membranes of Escherichia coli cells (Travier et al., 2013). In contrast to AWME3, which significantly increases the Lewis base properties of K. pneumoniae membranes (Figure 7), G2cps reduced the affinity of E. coli to chloroform by 35%, indicating that contact with G2cps strongly reduces bacterial Lewis base properties. It is worth noting that E. coli mutants with partial resistance to G2cps, when exposed to G2cps, displayed higher Lewis base properties than G2cps-susceptible wild-type E. coli cells.
This suggests that treating K. pneumoniae cells with AWME3 may change their response to CPS by increasing the Lewis base properties of the bacterial cell wall and membranes. It is noteworthy that, among its other FAs, AWME3 contains cis-2-decenoic acid and cis-9-octadecenoic acid, which have been reported to be particularly effective against biofilms formed by methicillin-resistant Staphylococcus aureus (MRSA) (Mirani et al., 2017). After exposure to these fatty acids, established biofilms were dispersed, and the surviving cells could not regain their biofilm lifestyle. Wild-type MRSA strains can produce fatty acid-modifying enzyme (FAME), which inactivates the bactericidal activity of fatty acids by esterifying them to cholesterol. Biofilm dwellers, however, are not metabolically active and are incapable of synthesizing FAME, rendering them susceptible to the anti-biofilm properties of cis-2-decenoic acid and cis-9-octadecenoic acid (Mirani et al., 2017). Hence, biofilm-forming bacteria that produce little FAME are more likely to be vulnerable to natural anti-virulence agents like AWME3.

Conclusion

In conclusion, the unique combination of natural FAs in our AWME3 extract, rather than any individual FA, appears to be responsible for effectively combating biofilms and two key virulence traits in the tested MDR hvKp strains of K. pneumoniae. Unlike other proposed anti-virulence methods and agents, AWME3 not only possesses bactericidal properties but also effectively reduces the hydrophilic quality of the cell wall and membranes of K. pneumoniae. This makes it a reliable anti-biofilm agent against both mucoid and non-mucoid hvKp strains, and potentially against other multidrug-resistant (MDR) bacterial pathogens. This discovery will help to identify new candidates, like AWME3, that can be used as anti-virulence agents with a reduced risk of resistance development. Such agents have the potential to effectively treat multidrug-resistant nosocomial bacterial infections as well as biofilm-forming oral bacteria.

Figure 1 (caption): AWME3 effect against preformed biofilms of K. pneumoniae strains (A) KPi1627, (B) KPM9, and (C) KP ATCC BAA-2473. The experiments were performed in triplicate with independent cultures, and statistical significance was examined by one-way ANOVA with Dunnett's multiple comparison test. Results are indicated as means ± STDs; asterisks indicate statistical significance (**p = 0.001, ****p < 0.0001).

Figure 2 (caption): Effect of AWME3 on mature biofilms established by K. pneumoniae strains (A) KPi1627, (B) KPM9, and (C) KP ATCC BAA-2473. The remaining biofilm mass was stained using CV and quantified at 570 nm. Results are the average of three independent experiments ± STDs. Statistical significance was calculated using one-way ANOVA with Dunnett's multiple comparison test; asterisks indicate statistical significance (****p < 0.0001).

Table 1 (caption): Antibacterial activity of AWME3 against K. pneumoniae strains grown under biofilm vs. planktonic bacterial mode.
The implications of using maternity care deserts to measure progress in access to obstetric care: a mixed-integer optimization analysis

Abstract

Background: Lack of access to risk-appropriate maternity services, particularly for rural residents, is thought to be a leading contributor to disparities in maternal morbidity and mortality. There are several existing measures of access to obstetric care in the literature and popular media. In this study, we explored how current measures of obstetric access inform the number and location of additional obstetric care facilities required to improve access.

Methods: We formulated two facility location optimization models to determine the number of new facilities required to minimize the number of reproductive-aged women who lack access to obstetric care. We define regions with a lack of access as either maternity care deserts, designated by the March of Dimes to be counties with no obstetric care facility or obstetric providers, or regions further than 50 miles from critical care obstetric (CCO) services. We gathered information on hospitals with obstetric services from Georgia Department of Public Health public reports and estimated the female reproductive-age population by census block group using the American Community Survey.

Results: Out of the 1,910,308 reproductive-aged women who live in Georgia, 104,158 (5.5%) live in maternity care deserts, 150,563 (7.9%) live further than 50 miles from CCO services, and 38,202 (2.0%) live in both a maternity care desert and further than 50 miles from CCO services. Our optimization analysis suggests that at least 56 new obstetric care facilities (a 67% increase) would be required to eliminate maternity care deserts in Georgia. However, the expansion of 8 facilities would ensure that all women in Georgia live within 50 miles of CCO services.

Conclusions: Current measures of access to obstetric care may not be sufficient for evaluating access and planning action toward improvements. In a state like Georgia, with a large number of small counties, eliminating maternity care deserts would require a prohibitively large number of new obstetric care facilities. This work suggests that additional measures and tools are needed to estimate the number and type of obstetric care facilities that best match practical resources to meet obstetric care needs.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12913-024-11135-4.

Background

The maternal mortality rate in the United States (U.S.), 32.9 deaths per 100,000 live births as of 2021, is the highest among developed countries and has increased by 89% since 2018 [1,2]. There is evidence that upwards of 80% of maternal deaths in the U.S. are preventable [3]. Among the factors contributing to the maternal mortality crisis in the U.S. are a lack of access to risk-appropriate care and an undersupply of maternal healthcare providers [2].
Rural access to obstetric services has been declining in recent years. Over half of rural counties did not have a facility offering obstetric services in 2014, and this number grew by 2.7% from 2014 to 2018 [4]. Administrators cite financial concerns, shortages of obstetric professionals, and low volume as reasons for closing their obstetric units [5,6]. Lack of access to obstetric services is associated with adverse maternal outcomes, adverse neonatal outcomes, and prenatal stress [7][8][9][10][11]. Recent findings suggest that lack of access and disparities in geographic access will persist unless facility-level infrastructure is expanded [12]. However, geographic access to obstetric care is measured in several ways, which causes uncertainty about how to optimally invest in infrastructure to expand access. One common measure of access in the academic literature and news media is the maternity care desert, as defined by the March of Dimes [13,14]. The March of Dimes categorizes counties with a lack of access to care (no hospital or birth center offering obstetric care and no obstetric providers) as maternity care deserts. As of 2022, more than 2.2 million reproductive-aged women in the U.S. live in maternity care deserts [15]. Studies have shown that pregnant women who live in maternity care deserts have higher rates of infant and maternal mortality [16,17]. However, the maternity care desert measure does not necessarily reflect distance to care, because counties differ in size and some pregnant women within a county may live close to an obstetric facility in a neighboring county. Other studies have measured geographic access as driving time to the nearest facility offering obstetric services at different levels of care [12,18] and as distance to the nearest facility offering critical care obstetric (CCO) services [19,20], key measures for quantifying potential access.

In contrast to these existing studies, which measure current levels of access, we considered the implications of using these metrics as key performance indicators for tracking improvements in access to obstetric care. In particular, we asked: what is required for states to reduce the number of women who lack access to obstetric care, as defined by two different access-to-care measures? To answer this question, we considered the implications of expanding access to care through facility expansions by drawing on mathematical optimization. Optimization is a mathematical science widely used to identify the ideal solution while considering the complex interactions and constraints within a system [21]. One specific optimization modeling framework, facility location modeling, has often been used to evaluate the ideal placement of healthcare facilities to ensure proper coverage of a patient population [22][23][24]; a comprehensive review of healthcare facility location modeling is provided by Ahmadi-Javid et al. [25].

In this article, we characterized access to obstetric care using existing access measures and evaluated these measures by determining how many facilities would be needed to provide a sufficient level of access according to each. We focused on the state of Georgia because Georgia has one of the highest rates of maternal mortality in the U.S., almost twice as high as the national rate [26].
As of 2019, more than 75% of Georgia's 159 counties had no hospital or birth center offering obstetric care [15]. Georgia does have a set of Regional Perinatal Centers whose mission is to coordinate access to optimal and risk-appropriate maternal and infant care [27]. Georgia is also taking multiple initiatives to improve obstetric outcomes, including extending Medicaid coverage, introducing quality improvement initiatives, verifying levels of maternal care in Georgia hospitals, and expanding home visiting in rural counties [28].

First, we characterized regions in Georgia that lack access to obstetric care using two measures common in the literature: (1) the March of Dimes maternity care desert measure [15] and (2) regions further than 50 miles from the closest facility that provides CCO services. Upon defining each region as lacking access or not, we reported the total number of reproductive-aged women who lack access to obstetric care according to each measure. Finally, we analyzed how many facilities would be needed in the state of Georgia to reduce the number of reproductive-aged women who lack access to obstetric care by 50% and by 100%.

The goal of this study is to characterize regions defined as lacking access to obstetric care based on two existing measures of access, and to determine the facility interventions required to improve access according to these measures. We hypothesized that obstetric facility expansion policies focused on reducing maternity care deserts alone are impractical and could have negative consequences, and that policies focused on reducing distance to CCO services alone are not aligned with risk-appropriate care for the majority of pregnancies, revealing the need for new measures of geographic access to high-quality, risk-appropriate care that can be used as targets for policy intervention.

Methods

Data sources

First, we collected data to infer the geographic distribution of obstetric healthcare facilities and providers, as well as the geographic distribution of subpopulations and communities that would demand obstetric services. The data sources used are described below.

Location of facilities providing obstetric care

We included obstetric facilities in Georgia that are classified as birth centers or as Perinatal Care Level 1, 2, or 3 hospitals according to public records from Georgia's Department of Public Health from 2017 [27]. The address of each obstetric facility was verified by the study team by cross-referencing with Google Maps, and the latitude and longitude of each obstetric facility were located using Python's geopy package [29].

Location of demand for obstetric care

To estimate the demand for obstetric care access, we used data from the American Community Survey (ACS), which provides population estimates by age and sex group. We used the 2017 ACS 5-year estimates of the population of reproductive-aged women in each census block group, which we assumed to be proportional to the demand for obstetric care in each block group. We used 5-year estimates because they are the most reliable and are collected for all small geographies, including census block groups. To estimate the location of this demand, we used the latitude and longitude of the center of population of each census block group as reported by the U.S. Census Bureau in 2010, consistent with the census block groups used for the 2017 population estimates [30].
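The facility geocoding step mentioned above can be sketched with geopy. The choice of the Nominatim geocoder and the example address below are our illustrative assumptions, not details from the study.

```python
from geopy.geocoders import Nominatim

# Illustrative only: resolve one obstetric facility address to coordinates.
geolocator = Nominatim(user_agent="obstetric-access-study")
location = geolocator.geocode("1364 Clifton Rd NE, Atlanta, GA 30322")
if location is not None:
    print(location.latitude, location.longitude)
```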
Distance to obstetric care

We calculated the distance between each obstetric facility and each obstetric care demand point as the Great Circle distance [29] in miles between the coordinates of the facility and the census block group center of population. Great Circle distance is the direct distance between two points accounting for the curvature of the earth and is commonly used to estimate access to healthcare [31,32].

Measures of obstetric access

We then determined which census block groups lack access to obstetric care according to the measures outlined below.

Maternity care desert

We considered the March of Dimes definition of a maternity care desert, i.e., a county that has zero hospitals or birth centers offering obstetric services and zero obstetric providers [15]. Because maternity care deserts are defined at the county level while the distance measure is defined at the census block group level, we deemed any census block group in a maternity care desert county to be a maternity care desert census block group. Our study team validated the Georgia maternity care deserts derived from our data against the March of Dimes maternity care deserts dashboard and found them consistent [33].

Distance to critical care obstetric (CCO) hospital

We evaluated the distance from the center of population of each census block group to its nearest facility offering CCO services. In line with previous studies [20], we characterized hospitals as offering CCO services if they are designated as Perinatal Care Level 3 obstetric hospitals. We refer to birth centers and Level 1 and 2 obstetric hospitals collectively as "lower-level" hospitals; these provide basic and specialty obstetric care but do not provide CCO services. We referred to public reporting from Georgia's Department of Public Health to characterize each hospital's level of care [27]. We then evaluated whether each census block group population center is within the pre-specified distance threshold of 50 miles. A 50-mile threshold is commonly used because it approximates the farthest distance most people appear willing to travel for specialized medical care, and because it approximates the widely accepted "Golden Hour." The "Golden Hour" stems from trauma care, where critically injured patients are thought to have better outcomes if they receive definitive care within an hour of their injuries [34]. This 50-mile threshold has been commonly used to estimate access to obstetric care [19,20], although it has not been validated for obstetric care [35,36].

Evaluation metrics

Using the measures above, we characterized each census block group as either having or lacking access to obstetric care.

Characterization of lack of access to obstetric care

First, we characterized the number of census block groups that lacked access to obstetric care according to the different measures of access (i.e., maternity care desert, >50 miles from CCO services, and both). Additionally, we characterized the demographics of the populations within the census block groups that lacked access according to each measure.

Other measures of access to obstetric care

We characterized the distribution of distance to the closest obstetric facility for the different measures of access to obstetric care. We further characterized distance to care by level of care, calculating the distance to the closest facility offering Level 1, 2, and 3 care.
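The distance measure and the 50-mile CCO classification just described can be sketched directly with geopy's Great Circle implementation. All coordinates below are hypothetical placeholders, not the study's data.

```python
from geopy.distance import great_circle

# Hypothetical (lat, lon) coordinates; values are illustrative only.
cco_facilities = [(33.749, -84.388), (32.461, -84.988), (31.150, -81.491)]
block_groups = {"BG-1": (31.578, -84.156), "BG-2": (33.470, -82.075)}

for name, center in block_groups.items():
    # Distance to the nearest CCO facility, in miles.
    nearest = min(great_circle(center, f).miles for f in cco_facilities)
    status = "lacks access" if nearest > 50 else "has access"
    print(f"{name}: {nearest:.1f} mi to nearest CCO facility -> {status}")
```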
Evaluating the need for facility expansion to improve access

We considered how many new facilities would hypothetically be needed to reduce the number of reproductive-aged women who lack access to obstetric care by 50% and 100%. To do so, we used a mathematical optimization model drawing from the facility location literature (see Appendix). This optimization model determined the optimal placement of new obstetric facilities to minimize the number of reproductive-aged women living in deserts. The model unrealistically assumed that we could readily build obstetric facilities anywhere we wanted; we revisit this assumption in the discussion.

We considered both measures of access to obstetric care in our optimization models. First, we investigated the number of new obstetric facilities that would hypothetically be required to reduce the number of women in maternity care deserts by a given percentage. To do so, we formulated a mathematical optimization model that minimizes the total number of reproductive-aged women who live in maternity care deserts after introducing at most X new obstetric hospitals; the model returns the optimal locations of these X new facilities. Here, X is a parameter that we varied to analyze the change in the number of reproductive-aged women living in maternity care deserts as more facilities are introduced. We also investigated the number of existing lower-level obstetric facilities that would need to be upgraded to provide CCO services to reduce the number of women living further than 50 miles from a CCO facility by a given percentage. For this, we formulated a second optimization model that minimizes the total number of reproductive-aged women living further than 50 miles from CCO services by optimally choosing at most X existing lower-level obstetric hospitals to upgrade to provide CCO services.
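The authors' formulations live in their Appendix; as a rough structural sketch, not their actual model, the first question (open at most X new facilities to minimize the desert population) can be written as a small mixed-integer program. Everything below (county names, populations, candidate sites, and coverage sets) is hypothetical.

```python
import pulp

# Hypothetical data: desert counties, their reproductive-aged populations,
# and which counties each candidate site would serve.
counties = ["A", "B", "C"]
pop = {"A": 4200, "B": 1500, "C": 900}
covers = {"site1": ["A"], "site2": ["B", "C"], "site3": ["C"]}
X = 1  # maximum number of new obstetric facilities to open

prob = pulp.LpProblem("reduce_deserts", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", list(covers), cat="Binary")
uncovered = pulp.LpVariable.dicts("uncovered", counties, cat="Binary")

# Objective: minimize the population still living in maternity care deserts.
prob += pulp.lpSum(pop[c] * uncovered[c] for c in counties)

# A county remains uncovered unless at least one opened site serves it.
for c in counties:
    prob += uncovered[c] + pulp.lpSum(
        open_site[s] for s in covers if c in covers[s]) >= 1

# Budget: open at most X new facilities.
prob += pulp.lpSum(open_site.values()) <= X

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open:", [s for s in covers if open_site[s].value() == 1])
print("women still in deserts:", int(pulp.value(prob.objective)))
```

The second model (choosing which lower-level hospitals to upgrade to CCO status) has the same structure, with candidate sites restricted to existing hospitals and coverage defined by the 50-mile Great Circle radius.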
Results

Characterization of lack of access to obstetric care

Figure 1 shows the regions that lack access to obstetric care according to the two access measures. In Georgia, 83 hospitals offer obstetric services. 56 counties are deemed maternity care deserts, containing a combined 524 census block groups; in comparison, 650 census block groups from 53 counties are further than 50 miles from CCO services.

Table 1 shows that out of the 1,910,308 reproductive-aged women who live in Georgia, 104,158 (5.5%) live in maternity care deserts, 150,563 (7.9%) live more than 50 miles from CCO services, and 38,202 (2.0%) live in both maternity care deserts and more than 50 miles from CCO services.

In Georgia, 14.8% of people do not have insurance and 14.9% of people have Medicaid. These proportions are higher for people who live in regions characterized as maternity care deserts (16.9%, 21.1%), regions >50 miles from CCO services (17.2%, 20.4%), and regions designated as both (18.4%, 22.8%). Also, 16.9% of people in Georgia have an income below the federal poverty line; this proportion is higher in regions characterized as maternity care deserts (23.7%), regions >50 miles from CCO services (23.4%), and regions designated as both (25.1%).

Other measures of access to obstetric care

Table 2 shows the number of reproductive-aged women who live within the specified distance from obstetric services for each level of care. Of the 104,158 reproductive-aged women who live in maternity care deserts, 63% are within 50 miles of CCO services, 97% are within 50 miles of Level 2 care, and 100% are within 50 miles of an obstetric care facility. Of the 150,563 reproductive-aged women who live >50 miles from CCO services, 98% are within 50 miles of Level 2 care, 100% are within 50 miles of an obstetric care facility, and 75% do not live in a maternity care desert. Of the 1,806,150 reproductive-aged women who do not live in maternity care deserts, 93% are within 50 miles of CCO services. Similarly, of the 1,759,745 women who are within 50 miles of CCO services, 96% live in a county with an obstetric care facility.

Responsiveness to interventions

Figure 2 shows the results of our optimization analysis. To hypothetically reduce the number of reproductive-aged women living in maternity care deserts by at least 50%, 16 new obstetric hospitals would be required in counties that are currently maternity care deserts. This would be a 19% increase over the current 83 facilities offering obstetric services and would reduce the number of reproductive-aged women living in maternity care deserts from 104,158 to 51,477. To eliminate maternity care deserts in Georgia, 56 new obstetric hospitals would be required (a 67% increase in obstetric facilities; one facility for each county that is currently a maternity care desert).

Our optimization analysis shows that reducing the number of reproductive-aged women living further than 50 miles from CCO services by at least 50% (from 150,563 to 57,338) would require upgrading 2 obstetric facilities to offer CCO services. To eliminate all census block groups that are >50 miles from CCO services, a minimum of 8 facilities would need to be upgraded.

Figure 3 shows how many facilities are needed to reduce the number of affected reproductive-aged women to a specified level. The number of reproductive-aged women living in maternity care deserts does not decrease significantly with each added obstetric unit. In contrast, a small number of CCO upgrades dramatically reduces the number of reproductive-aged women living further than 50 miles from CCO services.

Discussion

Access to care is an important dimension to consider in the context of the maternal health crisis in the U.S. Our study analyzed the implications of using existing measures of access to obstetric care as key performance indicators to evaluate and track improvements in access.

In this paper, we analyzed two current measures of obstetric access, including the popular maternity care deserts measure. Maternity care deserts are counties in which there are no obstetric providers or obstetric care facilities. This measure has been widely used in both academic literature and popular media, and it has drawn widespread attention to the lack of access to obstetric care in the U.S.
Consistent with the March of Dimes report, we found that 5.5% of reproductive-aged women in Georgia live in the 56 counties designated as maternity care deserts, more than the national average of 3.5% [15]. We found that 7.9% of reproductive-aged women live further than 50 miles from CCO services, less than the 10.2% reported by a study using 2015 data [20]; this difference may be due to differences in distance metrics or in the procedures for identifying the locations and levels of obstetric hospitals. We additionally found that 2.0% of reproductive-aged women live in regions that are both maternity care deserts and further than 50 miles from CCO services.

In our analysis, we considered the hypothetical implications of using current access measures to inform facility expansions, with the goal of evaluating these measures without concern for costs or workforce barriers. Our optimization model showed that eliminating maternity care deserts in Georgia would require at least 56 new obstetric hospitals, increasing the number of obstetric hospitals in Georgia by 67%, from 83 to 139. In contrast, ensuring that all reproductive-aged women in Georgia live within 50 miles of CCO services would require upgrading at least 8 existing lower-level hospitals to provide CCO services. Thus, these different measures of access imply very different strategies to expand access and very different estimates of how many obstetric facilities of different levels are needed in a geographic region.

Our findings suggest that additional tools are needed to estimate how many facilities of each level of care are needed and can be sustained in a geographic region. Ideally, the number of facilities, their level-of-care designations, and their coordination should promote optimal pregnancy outcomes. Access to obstetric care has been identified as an important opportunity to improve maternal outcomes and disparities, as rural residence has been associated with a greater probability of severe maternal morbidity and mortality [10], and maternity care deserts are associated with higher rates of preterm birth, infant mortality, low birth weight, and maternal mortality [16,17,37].

However, the maternity care desert measure is inherently dependent on the number and size of counties in a state and fails to account for actual distance to healthcare facilities. Counties were determined by territories and states without standardization, resulting in high variability in the number and size of counties across states [38]. For example, Georgia has the second most counties of any state (159), behind only Texas (254), although Georgia is the 8th most populated state in the U.S. and the 24th largest by area. Thus, this measure may encourage a large number of obstetric units in Georgia simply because Georgia has a large number of counties, despite the fact that 82% of reproductive-aged women who live in maternity care deserts in Georgia live within 25 miles of an obstetric hospital.
Fig. 3 The number of obstetric care facilities needed to reduce the number of reproductive-aged (RA) women who lack access to obstetric care according to two measures of access

Considering these measures of access alone to inform facility expansion could lead to unintended negative consequences. We showed that it would require a 67% increase in the number of obstetric hospitals to ensure that no reproductive-aged women live in maternity care deserts in Georgia. Even if economic forces allowed for so many obstetric facilities, a maternal healthcare system with that many facilities could have unintended negative consequences due to the dilution of volume across many low-volume rural hospitals, which are known to be associated with poor pregnancy outcomes [39][40][41][42]. Moreover, staffing this many units would likely be very expensive and challenging given the existing obstetric workforce shortages in Georgia [43].

While distance to CCO services could be a useful measure of access, this measure alone considers neither whether there are other nearby facilities offering potentially sufficient lower levels of obstetric care nor the coordination between lower-level and CCO facilities. Additionally, the threshold of 50 miles to CCO services has not been validated in obstetrics [35,36], nor does it account for transportation factors that influence actual driving time. Thus, there are a variety of limitations in using existing measures of access alone to inform the number of facilities needed in a geographic region. Our findings motivate the need for nuanced measures of access to obstetric care that are capable of evaluating and planning action toward reducing lack of access, and for new approaches to estimate the optimal number of facilities of different levels of care that are necessary and sustainable within a geographic region. Future work may consider other measures of access or access expansion interventions that incorporate home visits, telemedicine, and transportation programs.
Our study is not without limitations. We used facility and population data from 2017 because the most recent publicly available data on obstetric facilities were published by the Georgia Department of Public Health in 2017. Because of the age of our data, some obstetric hospitals may have closed, opened, or merged since 2017. The Georgia Hospital Association reports that 13 hospitals in Georgia have closed since 2013 (as of November 2022) [44]. The only obstetric hospital that closed was Wellstar Atlanta Medical Center, which closed in November 2022; this hospital was 1 mile from the Atlanta region's Regional Perinatal Center, which provides CCO services. Moreover, we found that our models' determination of maternity care deserts was consistent with the March of Dimes maternity care deserts dashboard [33]. We expect that, even with some facility closures or expansions of obstetric services at existing hospitals, our conclusion that the maternity care deserts measure is not a practical performance indicator of improvements in access to obstetric care still holds. Also, we did not account for geographical barriers or traffic when calculating distance from the centroid of a census block group in computing whether the group is further than 50 miles from CCO services, and we did not account for measurement errors in the ACS. We did not account for other important barriers to access, such as transportation disadvantage and insurance coverage, or for out-of-state hospitals offering obstetric services that could provide care to pregnant people in Georgia. Finally, our analysis only considered potential access. Future work may investigate the impact of facility expansion on realized access to care, especially considering that some patients prefer to bypass local hospitals to receive care elsewhere [45,46].

Conclusion

Our findings suggest that the current measures of obstetric access, while useful for capturing certain dimensions of the maternal healthcare system, may not be useful for estimating the optimal number, designations, and coordination of obstetric care facilities within a geographic region. Specifically, while maternity care deserts are associated with increased rates of maternal mortality [16], this measure is not a practical performance indicator of improvements in access to obstetric care. Thus, there is a need for tools that can track improvements and inform the appropriate number of obstetric care facilities needed in a geographic region to improve access to high-quality, risk-appropriate care and, ultimately, obstetric outcomes. In addition, future work may examine how to optimally balance the cost and outcomes of expanding care, considering the trade-offs between increased access and loss of quality due to volume dilution and staffing issues, and incorporating alternative access expansion strategies such as home visits, telemedicine, and transportation programs.

Funding

This work was supported by the National Science Foundation under grant number DGE-2039655 (Meredith); any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This research was also supported by the Harold R. and Mary Anne Nash endowment to the Georgia Tech H. Milton Stewart School of Industrial and Systems Engineering.
Fig. 1 Current state of lack of access to obstetric care in Georgia. The shaded regions represent census block groups that are (A) maternity care deserts, (B) >50 miles from critical care obstetric (CCO) services, or (C) both maternity care deserts and >50 miles from CCO services

Table 1 The characteristics of all people who live in Georgia by obstetric access, and the ages of reproductive-aged females by obstetric access

Table 2 The number and proportion of reproductive-aged women by obstetric access who live within the specified distance threshold of each level of obstetric care
Western classical music development: a statistical analysis of composers similarity, differentiation and evolution

This paper proposes a statistical analysis that captures similarities and differences between classical music composers, with the eventual aim of understanding why particular composers 'sound' different even if their 'lineages' (influences networks) are similar, or why they 'sound' alike if their 'lineages' are different. In order to do this, we use statistical methods and measures of association or similarity (based on the presence/absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards a more ambitious goal of developing an evolutionary model of Western classical music.

Introduction

This paper has two objectives. First, the paper contributes to the music information retrieval literature by establishing similarities between classical music composers.[1] That two composers, or their music, 'sound alike' or 'sound different' is inherently a subjective statement, made by a listener, which depends on many factors, including the degree of familiarity with classical music per se.[2] This paper addresses the subjectivity issue using well-established similarity indices (e.g., the centralised cosine similarity measure) based on measurable criteria. Even if no audio file is used in the analysis, 'sounding alike' is used in this paper as a proxy (or shortcut) with the specific meaning that the music of two composers is similar in ecological/musical characteristics and/or personal musical influences (as defined below). Uncovering what makes two composers similar, in a systematic way, has important economic implications for (1) the music information retrieval business, (2) a deeper insight into musical product definition and the choice offered to music consumers and purchasers, and (3) our understanding of innovation in the creative industry. This leads to the second objective of the paper, which is to propose a statistical framework that could identify transitional figures, innovators and followers in the development of Western classical music.

Western classical music evolved gradually, branching out over time and throwing off many new styles. This overall development is not due to simple creative genius alone, but to the influence of past masters and genres, as constrained or facilitated by the cultural conditions of time and place. Figure 1 conveys this development and proposes a (narrow) historical timeline for music periods (e.g., Medieval, Renaissance, Baroque, Classical, Romantic and Modern/twentieth century) and some composers belonging to these periods.[3] Along vertical lines are composers who have developed and perfected (or pushed to the limit) the musical style of their period. Other composers (not necessarily shown) may gravitate around them, extending the volume of music production in an essentially imitative style. Along the diagonal line are some 'transitional' and/or 'innovative' composers whose works (or at least some of them) have been assessed by musicologists as contributing to a transition from one period to another.[4] As claimed by Gatherer (1997), "a dialectical approach to music evolution would seek to identify the internal stylistic tensions and contradictions (in terms of thesis and antithesis) which give rise to new musical forms (synthesis)."
Franz Brendel (1811-68), a doctor of philosophy, was the first self-consciously Hegelian historian of music and, according to Taruskin and Gibbs (2013), henceforth T&G (2013), his great achievement was to write the nineteenth century's most widely disseminated history of music.[5] Brendel casts his narrative in terms of successive emancipations of composers and the art of music (emancipation from the sacred, emancipation from words, etc.). For T&G (2013), through this Hegelian approach, "many people have believed that the history of music has a purpose and that the primary obligation of musicians is not to meet the needs of their immediate audience, but, rather, to help fulfill that purpose, namely, the furthering of the evolutionary progress of the art. This means that one is morally bound to serve the impersonal aims of history, an idea that has been one of the most powerful motivating forces and one of the most demanding criteria of value in the history of music. (...). With this development came the related views that the future of the arts was visible to a select few and that the opinion of others did not matter." This Hegelian perspective claims to show why things changed. This makes it fundamentally different from Darwin's theory of biological evolution based on random mutation: change or evolution in the Hegelian approach is viewed as having a purpose, which turns a random process into a law. For Gatherer (1997), "a Darwinian alternative to dialectics, which in its most reductionist form is known as memetics, seeks to interpret the evolution of music by examining the adaptiveness of its various component parts in the selective environment of culture."

The diagonal in Fig. 1 (and the identified composers along the diagonal) could represent a somewhat lengthy process of music 'speciation', so to speak (in analogy to evolutionary biology).[6] Darwinian biological models have been applied to many aspects of cultural evolution (see Linquist 2010 for one good survey), but not so much to music (see, however, Gatherer 1997; Jan 2007). An evolutionary approach to classical music could perhaps be narrated along the following lines. Music transmission is analogous to genetic transmission in that it can give rise to a form of evolution by selection. By planting a fertile 'meme' in another composer's mind, the initial composer manipulates his brain, turning it into a vehicle for the meme's propagation.[7] Imitation of compositions is how musical memes replicate. However, the inherited music style adapts to local ecological and social conditions by a process of musical mutation/variation and differential fitness that is akin to natural selection.[8]

Footnote 4 (continued): ... Hasse, the dramma giocoso/opera buffa of Galuppi and Pergolesi, and the opera reforms of Gluck and Jommelli). The case for putting de Muris and de Vitry on the diagonal is perhaps worth mentioning. They were both mathematicians and musicians. According to T&G (2013), their treatises and the debates they sparked, and their notational breakthroughs and innovations, had enormous repercussions. "So decisive were their contributions that this theoretical tradition has lent its name to an entire era": Ars Nova (which is also the title of the treatise of de Vitry).

Footnote 5: The title of the book can be translated as "History of Music in Italy, Germany and France: From the Earliest Christian Times to the Present". See: https://archive.org/stream/geschichtedermus01bren#page/n5/mode/2up.
Footnote 6: This kind of approach is conceptually not much different from understandings of biology and phylogeny through the study of genetics, whereby one identifies the lineages and the way a population splits over time into new species.

Footnote 7: Defining the 'meme' replicator as a unit of cultural transmission or a unit of imitation, Dawkins (1976) suggests that "[j]ust as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation."

Just as not all genes that can replicate do so successfully, so some music memes are more successful in the meme-pool than others, leading to a process of 'musical' (instead of natural) selection: a non-random survival of random musical mutations. In other terms, musical memes are passed on in an altered form, through musical mutation and speciation, branching out over time into many new and diverse styles.

This suggested, the present paper does not go deeply into any 'pseudo-scientific' metanarrative for Western classical music evolution. Rather, and more modestly, it proposes a statistical analysis that captures similarities and differences between classical music composers. The eventual aim is to increase our understanding of why particular composers 'sound' different even if their 'lineages' (or personal influences networks) are similar, thereby contributing to an understanding of evolution in Western classical music. Musicologists and music historians have described and classified composers, the styles and the periods in which they lived. They have discussed the relationships and influences networks of composers, the evolution of music styles, and who they see as transitional figures, innovators, or followers. See, for example, the History of Western Music by T&G (2013), A History of Opera by Abbate and Parker (2012), Grout and Williams (2002), and many others. Typically, these authors use descriptive narratives and analyses of music manuscripts. The objective of this paper is to complement these approaches by proposing a statistical analysis that captures similarity across pairs of composers by means of pairwise comparison of the presence/absence of traits such as personal musical influences and musical/ecological characteristics. To this end, we use an approach that is based on (but different from) the earlier contributions by Smith and Georges (2014, 2015), using methods that have been developed in biosystematics, scientometrics, and bibliographic coupling.

The rest of the paper is as follows. The first section describes the data (influences network and ecological characteristics) and the methodology used in Smith and Georges (2014, 2015). The second section shows how the interaction of personal musical influences and ecological characteristics can provide a typology that could, in theory, lead to some evolutionary model of Western classical music. The third section introduces the centralised cosine measure as a statistical measure of similarity between composers.[9] The fourth section discusses some statistical results, and the last section concludes.

Smith and Georges (2014, 2015) used data collected in 'The Classical Music Navigator' (Smith 2000; hereafter referred to as the CMN).[10] One important part of the CMN is the presentation of composers' personal musical influences. Each of the 500 composers in the database is associated with a list of composers who have had a documented influence on a subject composer.
Smith and Georges (2014) provide the following example in Fig. 2, which represents the network of influences on three composers: J. Haydn, W. A. Mozart, and Schubert, three Austrian composers, born in 1732, 1756, and 1792, respectively, who are typically associated with the 'Classical' period of Western classical music, with Schubert also being a transitional composer between the Classical and Romantic periods. Casual listening to J. Haydn, W. A. Mozart, and Schubert suggests similarities across them, although to a majority of listeners J. Haydn and W. A. Mozart would probably sound 'closer' than J. Haydn and Schubert, or W. A. Mozart and Schubert. To overcome the subjectivity issue noted in the Introduction, Smith and Georges (2014) infer similarities among composers by assuming that if two composers share many of the same personal musical influences, their music will likely have some similarities. On the other hand, if two composers have been influenced by very distinct sets of composers, then their music is likely to have little similarity. Observe in Fig. 2 that these three subject composers share in common two particular influences: Handel and Gluck. There are no further common influences between Schubert and Haydn, but two additional common influences between Schubert and W. A. Mozart (M. Haydn and J. S. Bach) and five additional common influences between Haydn and Mozart. According to the assumption of Smith and Georges (2014), then, the larger number of common personal influences between J. Haydn and W. A. Mozart would cause (or even explain) the higher similarity between the music of these two composers than between Schubert and Mozart, let alone Schubert and J. Haydn. The third section confirms this with a methodology that generates similarity scores between any pair of composers, by means of pairwise comparison of the presence/absence of personal musical influences, using the centralised cosine similarity measure.[11]

A second collection of data in the CMN associates each of the 500 composers with characteristics such as time period, geographical location, school association, instrumentation emphases, etc. (Smith 2000), for convenience denoted 'ecological' categories. Smith and Georges (2015) have extracted 298 such ecological categories from the CMN (see their paper for a complete list). Thus, each composer is associated with a list of ecological categories, and the authors infer a statistical association between pairs of composers by assuming that if two composers share many ecological categories, then their musical 'ecological niches' are very similar, so that, in this sense, they may be considered similar. Figure 3 pursues the previous example for composers J. Haydn, W. A. Mozart, and Schubert and illustrates their musical ecological niches.[12] We see that Mozart and J. Haydn share a larger number of ecological characteristics than, say, J. Haydn and Schubert. The contention is that this would cause a stronger similarity in the music of W. A. Mozart and J. Haydn than in the music of Schubert and J. Haydn. As before, it is also possible to compute similarity scores between any pair of composers by means of pairwise comparison of the presence/absence of ecological categories, and this will be implemented in the third section using the centralised cosine similarity measure. By introducing ecological characteristics, the basic objective in Smith and Georges (2015) was to explore the robustness of their earlier (2014) similarity results based on personal musical influences.
They further propose a final list combining the ecological and influences network databases to assess similarities, arguing that this should produce a general improvement in the similarity rankings.

Data and background information on composers' similarity

This new paper, however, proposes a different approach. First, a new measure of similarity, equipped with a statistical significance test, the 'centralised cosine measure', is used instead of the binomial index of dispersion used in Smith and Georges (2014, 2015). The centralised cosine measure is based on earlier literature in scientometrics and bibliographic coupling. Second, instead of merely combining personal musical influences and ecological characteristics together (to produce an improvement in similarity rankings), as proposed in [...], this paper points out that some additional information can be gained when the two sets of similarity indices are compared, especially when they provide conflicting information. This leads to interesting questions, such as why particular composers sound different (e.g., composed in different ecological niches) even if they have been influenced by the same personal musical influences, and why they sound similar (e.g., composed in similar ecological niches) even in the absence of a common set of personal musical influences. The next section therefore develops a typology that highlights conflicting or reinforcing results, based on the influences network and ecological characteristics approaches, in a framework somewhat reminiscent of a biological evolutionary model.

Table 1 Ecological characteristics associated with J. Haydn, W. A. Mozart, and Schubert. Source: assembled from raw data collected in 'The Classical Music Navigator' (Smith 2000), and reorganised

Music evolution: a typology based on influences networks and ecological data

Personal musical influences lead to a sort of lineage among composers. If two composers have been musically influenced by, roughly, the same list of composers, they share the same "cultural gene" pool. In this case I refer to them as 'Most Similarly-Influenced Composers'. Because of their common personal musical influences, we might expect these composers to develop a roughly similar style of music and eventually to 'sound' similar. However, if they do not, this should lead to hypotheses as to why a pair of composers might have very similar personal influences and yet produce very different music. Therefore, we need a second set of data to help categorise the musical style of each composer: the ecological characteristics of music referred to in the previous section. I refer to a pair of composers sharing a large set of common ecological characteristics (and thus having very similar ecological niches) as 'Most Ecologically-Related Composers'. Table 2 illustrates the interaction between these two dimensions. If most similarly-influenced composers (on the basis of individual musical influences) are also most ecologically-related composers (on the basis of ecological data), then those composers are most similar: they share a very similar set of personal musical influences and a very similar set of ecological characteristics, that is, very similar ecological niches. In terms of Fig. 1, these composers are likely to be grouped into one of the vertical lines of the 'tree'. At the other extreme, we have most dissimilar composers.
In Fig. 1, these could be composers belonging to non-connected vertical lines representing very distinct musical periods and styles. But there are two other, perhaps more interesting, cases.

First, why do composers produce music that 'sounds' different if they have the same lineage/personal musical influences? As mentioned in the Introduction, some composers may have developed a different music style through a process of 'musical' selection and 'speciation' whereby an inherited musical style adapts to local and social conditions through mutation/variation and differential fitness/competition that is akin to natural selection. If a subject composer is very similar to a series of other (contemporary) composers in terms of personal musical influences but at the same time mostly ecologically unrelated to them, then the music of this composer is likely to 'sound' different, to have evolved. In Table 2 this is represented as 'music speciation and evolution'. In Fig. 1, this would be represented by composers along the diagonal line (e.g., Gluck, Debussy, Schoenberg, etc.).

Table 2 Typology based on influences networks and ecological data. The four cells are: most similar composers; convergent evolution (a); adaptation, i.e., music speciation and evolution (b); and most dissimilar composers. Notes: (a) pairs of composers sounding alike despite a lack of common lineage; (b) pairs of composers sounding different despite a common lineage. Figure 6a-p in the fourth section will provide a visual representation of the table for any 'subject' composer with respect to all other 499 composers of the CMN.

The second interesting case is why particular composers 'sound' alike if their lineage is different. Two composers, although perhaps geographically distant, may have composed music that sounds alike because they belong to very similar musical ecological niches that lead to selection pressures to adapt and develop similar-sounding forms, despite having a very different lineage, in a process that could be called musical 'convergent evolution'. See Table 2. In biology, one can identify convergent evolution wherein species that live in similar but geographically distant habitats will experience similar selection pressures from their environment, causing them to evolve similar adaptations, or converge, coming to look and behave very much alike even when originating from very different lineages.[13] However, this possibility seems less likely in the case of Western classical music, because the time frame is rather short and the spatial frame is small, so that 'convergence' may only play a rather minor role in the overall process of musical evolution. A simpler interpretation is that a composer, having few documented personal musical influences in common with another contemporary composer, and therefore being perhaps (although not necessarily) isolated in the network of composers, has nevertheless composed in an ecological niche reminiscent of the musical style of the other composer, producing music that sounds similar. By being imitators or followers, and perhaps not central to the musical scene, these composers contributed less to the evolution of the sound of Western classical music.

The centralised cosine measure as an index of association/similarity

This section describes the methodology used in this article to assess the relationship (association/similarity) between pairs of composers. The discussion is couched in terms of personal musical influences, but the methodology related to ecological categories is analogous. I first describe how I have conceptually organised the CMN database. This description draws on earlier articles by Smith and Georges (2014, 2015) and [...].
Suppose the set C of all 500 composers (n = 500) who are included in the CMN. For any pair of composers (i, j), with $i, j \in C$ (among the $n \times n$ possible pairs), we are interested in capturing whether a composer $k \in C$ had a reported influence on both i and j, on i but not j, on j but not i, or on neither i nor j. Running this across all composers k for each pair (i, j), we eventually obtain the set $I_i$ of all personal influences on composer i, and the set $I_j$ of all personal influences on composer j. Also, for any pair (i, j), $I_i \cap I_j = CI_{i,j}$ is the set of composers k that have influenced both i and j; $I_i - (I_i \cap I_j) = I_{i,-j}$ is the set of composers k that have influenced i but not j; $I_j - (I_i \cap I_j) = I_{j,-i}$ is the set of composers k that have influenced j but not i; and $DI_{i,j} = I_{i,-j} \cup I_{j,-i}$ is the set of composers k that have influenced either i or j but not both. From this we can produce a count table, given in Table 3, for any pair (i, j), that sums the elements (the number of composers) in each of the four sets $CI_{i,j}$, $I_{i,-j}$, $I_{j,-i}$, and $C - CI_{i,j} - DI_{i,j}$, and from which similarity indices for all pairs of composers (i, j) can be computed on the basis of well-known formulas.[14]

Footnote 13: A common illustration of this convergent evolution is the parallel evolution taking place in Australian marsupials versus placental mammals elsewhere.

Footnote 14: Dozens of measures of association have been studied in the biosystematics literature, such as the first and second Kulczynski coefficients (1927), the Jaccard coefficient (1901), the Dice coefficient (1945), the Simpson coefficient (1943), the binary distance coefficient (Sneath 1968), the binomial index of dispersion $\chi^2$ statistic (Potthoff and Whittinghill 1966), and Salton's measure (1987) or its equivalent, the cosine similarity measure discussed in the scientometrics and bibliographic coupling literature (Sen and Gan 1983; Glänzel and Czerwon 1996).
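To make the construction of this count table concrete, a minimal Python sketch follows; the composer labels and influence sets are hypothetical placeholders, not data from the CMN.

```python
# Minimal sketch of the Table 3 counts for one pair (i, j).
# The influence sets below are hypothetical placeholders, not CMN data.
composers = {"A", "B", "C", "D", "E", "F"}   # the set C, so n = 6 here
influences = {
    "A": {"C", "D", "E"},   # composers with a reported influence on A
    "B": {"C", "E", "F"},   # composers with a reported influence on B
}

def count_table(i, j, influences, composers):
    """Return (a, b, c, d): influenced both / only i / only j / neither."""
    I_i, I_j = influences[i], influences[j]
    a = len(I_i & I_j)              # |CI_ij|: influenced both i and j
    b = len(I_i - I_j)              # |I_i,-j|: influenced i but not j
    c = len(I_j - I_i)              # |I_j,-i|: influenced j but not i
    d = len(composers) - a - b - c  # influenced neither i nor j
    return a, b, c, d

print(count_table("A", "B", influences, composers))  # (2, 1, 1, 2)
```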
In what follows I focus on the 'centralised' cosine measure, in part because (unlike many other indices) this measure can be used to judge the statistical significance of the association between two composers.[15] Although the centralised cosine formula is based on the concepts underlying Table 3, it is not a straightforward application, and it therefore requires a slightly more structured presentation in order to establish a connection with the table. Here, the discussion follows closely [...].

The ordinary (non-centralised) cosine similarity measure (also known as Salton's measure) is a statistic familiar to bibliometrics and scientometrics. The idea was mathematically formalised by Sen and Gan (1983) and later extended by Glänzel and Czerwon (1996), who also applied the methodology. As applied to the CMN database, consider each composer i as an $n \times 1$ vector in the space of all n composers in the database. If a composer k among the n composers was an influence on i, then the kth component of the vector corresponding to composer i is set equal to 1; otherwise it is set equal to 0. Therefore, with respect to all composers in the database, each composer i is represented by a Boolean vector of 0's and 1's. The cosine similarity measure for a pair of composers (i, j), each represented by their own Boolean vectors $B_i$ and $B_j$, can then be computed as

$$\cos(B_i, B_j) = \frac{\sum_{k=1}^{n} B_{k,i} B_{k,j}}{\sqrt{\sum_{k=1}^{n} B_{k,i}^2}\,\sqrt{\sum_{k=1}^{n} B_{k,j}^2}},$$

where the subscript k in $B_{k,i}$ indicates the kth component (of value 1 or 0) of vector $B_i$. Thus, in essence, the cosine of the angle between the two vectors $B_i$ and $B_j$ gives a measure of association/similarity. The cosine similarity index ranges between 1 and 0, where 1 indicates that two composers are exactly identical and 0 indicates complete opposition. A value somewhere in the middle of the 0-1 range indicates degrees of independence of two composers.

As discussed in [...], when all the vectors are Boolean vectors, the null distribution of the cosine similarity under the assumption of independence between two composers is unknown and has a nonzero mean; in order to derive a statistical test for the cosine measure, a centralised cosine measure was proposed (Giller 2012). The centralised cosine measure is the cosine measure computed on the centralised vectors, that is, with respect to the mean (average) vectors. Defining $\bar{B}_i = (1/n)\sum_{k=1}^{n} B_{k,i}$ and $\bar{B}_j = (1/n)\sum_{k=1}^{n} B_{k,j}$, the centralised cosine measure is

$$CSC_{i,j} = \frac{\sum_{k=1}^{n} (B_{k,i} - \bar{B}_i)(B_{k,j} - \bar{B}_j)}{\sqrt{\sum_{k=1}^{n} (B_{k,i} - \bar{B}_i)^2}\,\sqrt{\sum_{k=1}^{n} (B_{k,j} - \bar{B}_j)^2}}.$$

In order to establish a connection between this formula and the elements in Table 3, I now use a result in [...], who proved that the centralised cosine measure can be computed as

$$CSC_{i,j} = \frac{ad - bc}{\sqrt{(a+b)(c+d)(a+c)(b+d)}},$$

where a, b, c, d are the counts of composers in the sets $CI_{i,j}$, $I_{i,-j}$, $I_{j,-i}$, and $C - CI_{i,j} - DI_{i,j}$ described above and reported in Table 3. It can be shown that values of the centralised cosine measure range from -1.0 to 1.0. A value of 1.0 indicates that two composers are identical. A value of -1.0 indicates that two composers are complete opposites. A value of 0 shows that two composers are independent (unassociated). A nonzero value of the centralised cosine measure might be due to randomness or to actual association between composers. Unlike in the case of the ordinary cosine measure, there is a proper statistical significance test. Under the assumption that the size of the database n is large enough, the distribution of the centralised cosine measure (under the assumption of independence) is approximately normal, with mean 0 and variance 1/n. Therefore, the distribution of the centralised cosine measure can be converted into a standard normal distribution using the Z-score/statistic

$$Z = \mathrm{ABS}\left(CSC \cdot \sqrt{n}\right),$$

where ABS is the absolute value and n is the size of the database at hand, that is, n = 500 for the personal musical influences database and n = 298 for the ecological categories database.[16]

Using the centralised cosine measure, Table 4 ranks composers in order of greater similarity to Debussy, on the basis of personal musical influences. The index identifies Ravel as the composer most similar to Debussy. The centralised cosine measure for Debussy and Ravel is 0.587. The corresponding Z-statistic is 13.119, which is greater than the critical value of 1.96 at a 5% significance level under the standard normal distribution. We can then reject the null hypothesis of no association between Debussy and Ravel.[17]

As said above, when CSC takes a value of 0, this means that the two composers under consideration are 'independent' (unassociated). So, a negative value for CSC suggests that the composers are negatively associated. But what is the exact meaning of this? Recall that the centralised cosine measure is based on Boolean vectors. The Boolean vector for Debussy, $B_{Debussy}$, is a (500 × 1) vector of components $B_{k,Debussy}$, each equal to '1' or '0' depending on whether or not a composer $k \in C$ had a reported musical influence on Debussy.

Footnote 15: The "Appendix" provides a comparison between the centralised cosine measure and another well-known measure, the binomial index of dispersion, that was used in Smith and Georges (2014, 2015).

Footnote 16: Note that the square root of 1 is ±1, which is why we take the absolute value, ABS.
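As a numerical sanity check on this algebra, the sketch below computes the centralised cosine both directly on centred Boolean vectors and from the a, b, c, d counts, and then the Z-statistic; the vectors are made-up illustrations, and only the final line uses the Debussy-Ravel value of 0.587 reported in Table 4.

```python
import math

def csc_from_counts(a, b, c, d):
    """Centralised cosine from the Table 3 counts (the ad - bc form)."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

def csc_from_vectors(x, y):
    """Centralised cosine computed directly on Boolean vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (math.sqrt(sum((xi - mx) ** 2 for xi in x))
           * math.sqrt(sum((yi - my) ** 2 for yi in y)))
    return num / den

# Illustrative Boolean vectors (n = 8), not real CMN data.
x = [1, 1, 1, 0, 0, 0, 0, 0]
y = [1, 1, 0, 1, 0, 0, 0, 0]
a = sum(1 for xi, yi in zip(x, y) if xi and yi)   # a = 2
b = sum(x) - a                                    # b = 1
c = sum(y) - a                                    # c = 1
d = len(x) - a - b - c                            # d = 4
assert abs(csc_from_counts(a, b, c, d) - csc_from_vectors(x, y)) < 1e-12

# Z-statistic: with n = 500 and CSC = 0.587 (Debussy-Ravel in Table 4),
# |CSC| * sqrt(n) is about 13.1 > 1.96, i.e., a significant association.
print(abs(0.587) * math.sqrt(500))
```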
Footnote 17: Table 4 indicates that we can reject the null hypothesis of no association between Debussy and the first 181 composers of the table (until and including Monk in Table 4) at a 5% significance level. Table 4 also shows results for the binomial index of dispersion discussed in the "Appendix". Note that, as shown in the last column of the table, the binomial statistic for Debussy and Ravel is 172.1. Using the $\chi^2$ distribution, the critical value at a 5% significance level is 3.84. (For significance levels of 1 or 10%, the critical values are 6.63 and 2.70, respectively.) Because 172.1 > 3.84, we reject the null hypothesis of no association between Debussy and Ravel in favour of the alternative that these two composers are statistically significantly associated (in agreement with the conclusion drawn from the CSC index). Observe that we can reject the null hypothesis of no association with Debussy for the first 181 composers of the table: this is the same cutoff point for both the $\chi^2$ test (binomial index) and the Z-statistic (CSC index).

The Boolean vector for Carter follows an analogous definition. If the sets of personal musical influences on Debussy and Carter are such that $B_{k,Carter}$ is more often 1 (or 0) when $B_{k,Debussy}$ is 0 (or 1), then CSC will take a negative value, and this suggests that Carter may have (deliberately or not) rejected composers that had a musical influence on Debussy while being influenced by others that had no reported musical influence on Debussy. This property of the centralised cosine measure provides a more sensitive measure of 'similarity' than the binomial index described in the "Appendix" (and previously used by Smith and Georges 2014, 2015), as it also tracks composers who (consciously or not) attempted to 'differentiate' themselves from others.[18]

For all 500 subject composers, two tables of similarity indices have been generated: one on the basis of the personal musical influences database (as done in the example for Debussy), and one based on the 298 ecological characteristics database. The large number of indices computed ($2 \times 500 \times 500$) forces us to report average results for subsets of composers and specific results for a few composers only. Before doing this in the next section, observe Figs. 4 and 5. Figure 4 gives the ten most similar composers to J. Haydn, Mozart, and Schubert, on the basis of personal musical influences, using the centralised cosine similarity measure developed in this section. (In Figs. 4 and 5, the number in front of a composer's name gives his ranking, in terms of importance, as defined in the CMN; this is the primary ranking discussed in the next section.) Observe the differences between Figs. 2 and 4. Figure 2 provides the composers who had a reported influence on these three subject composers. The assumption in the first section was that the larger number of common personal influences between W. A. Mozart and J. Haydn would cause (or even explain) the higher similarity of styles between these two composers than between Mozart and Schubert, let alone J. Haydn and Schubert.
First, when comparing similarities on the basis of personal musical influences and ecological data there are only three common names in the two lists of the 10 most similar composers to J. Haydn (i.e., Mozart, Beethoven, Boccherini), three common names in the lists for Schubert (i.e., Rossini, Mendelssohn, Bruckner), and five common names in the two lists related to Mozart (J. Haydn, JC Bach, Salieri, Schubert, Beethoven). This is not surprising because personal musical influences and ecological data provide two different perspectives on the concept of similarity. Second, observe that most composers similar to Mozart and to Haydn are, in both lists, Classical period composers. However, many composers similar to Schubert on the basis of ecological characteristics are Romantic period composers (R. Schumann, C. Franck, Grieg, Fauré, Mahler-all composers born (2) The number on the edge linking any pair of composers gives the centralised cosine similarity index (on the basis of personal musical influences) between the two composers. Note that the width of the edge also proxies the degree of similarity quite after Schubert). Yet, the similarity list based on personal musical influences (lineage) suggests that Schubert is strongly associated to older composers of the Classical period (e.g., Reicha, Salieri, Carulli, Méhul, and Rossini). This confirms the insight of the previous section-Exploiting the conflicting results generated by the two databases is a useful approach to detect transitional-period composers such as Schubert, whose lineage is still anchored in the Classical period while his musical ecological niche pulls him towards the Romantic period. 20 This explains to some extent music 'speciation' and evolution-a large number of Schubert's compositions 'sound' different from the music of Mozart and Haydn, even if Schubert's influences network (lineage) remains anchored in the Classical period. This also suggests that a presentation analogous to Table 2 could help us detect music speciation and evolution. This is explored further in the following section. (2) The number on the edge linking any pair of composers gives the centralised cosine similarity index (on the basis of ecological characteristics) between the two composers. Note that the width of the edge also proxies the degree of similarity 20 Figures 4 and 5 also give the birth date of each composer (in front of the name) so that we can compute the sum of the age differentials between all composers similar to Schubert and Schubert himself. The sum is -49 years in the influences network case, and ?161 years in the ecological database case. This clearly indicates that while the personal influences network associates Schubert with composers relatively older than him, the ecological database associates him with much younger composers. The same calculations for Mozart provide sums of age differentials of -40 years and -145 years, respectively, demonstrating that the ecological niche of Mozart was rather backward-looking. For Haydn, we get ?148 years and ?43 years, respectively. Although Haydn's musical ecological niche is clearly forward-looking (as he is typically associated with innovations in Symphonic and String Quartets compositions), Haydn is also forward-looking with respect to his influences network. From this perspective, his ecological niche is in concordance with his influences network, as in the case of Mozart. This is not the case for Schubert. 
Selected statistical results and discussion

Built from the perspective of a 'subject' composer, Fig. 6a-p plot vectors (dots) representing other composers located relative to the 'subject' composer according to their similarity in terms of personal musical influences (X-axis) and ecological characteristics (Y-axis). For purposes of clarification, we will refer to these 'other' composers (the dots in Fig. 6a-p) as 'object' composers, in the sense that they are compared to one unique 'subject' composer. For example, in Fig. 6a Beethoven is the 'subject' of the analysis and Brahms, Dvořák, etc. are 'object' composers located (with dots) relative to Beethoven. Furthermore, 'object' composers are grouped into four categories according to an age relationship with the 'subject' composer: 1. composers dead 0-25 years before the birth of the 'subject' composer; 2. older contemporary composers; 3. younger contemporary composers; and 4. composers born 0-25 years after the death of the 'subject' composer. See Fig. 6a, d, g, h, respectively, for 'subject' composer Beethoven.

Fig. 6 A few selected 'subject' composers. Notes: (1) Each dot in these figures is a vector that represents an 'object' composer, located relative to the 'subject' composer of the figure, according to the values of two similarity indices based on (1) personal musical influences (lineage) on the X-axis and (2) musical ecological niches on the Y-axis. The axes do not cross at the origin but at the critical values delimiting statistically significant similarity index values (above) versus independence/dissimilarity (below). (2) The number in front of a composer's name is a ranking which reflects the importance of this particular composer. This is the primary ranking established in 'The Classical Music Navigator' (Smith 2000), also discussed in the main text of this section.

Note that the two axes in all panels of Fig. 6 have been drawn at their critical significance values at the 5% level. Given the Z-statistic defined in the previous section, it is at its critical value when $Z = \mathrm{ABS}(CSC \cdot \sqrt{n}) = 1.96$. The value of n is 500 in the case of the influences network database, and 298 in the ecological characteristics database. Thus, the critical values are $CSC_c = \pm 1.96/\sqrt{500} = \pm 0.0877$ and $CSC_c = \pm 1.96/\sqrt{298} = \pm 0.1135$, respectively. The four quadrants delimited by the two positive critical values correspond to the four cells in Table 2. Thus, the word 'high' in Table 2 is now assumed to represent a statistically significant positive association between 'object' and 'subject' composers, and the word 'low', no statistically significant association.[21] In some panels of Fig. 6, we can also see vertical and horizontal spikes of dots at the origin (zero). These dots represent independence (along one of the two criteria).
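A short sketch of these critical values and of the quadrant assignment used in Fig. 6 follows (the four cases are spelled out just below; the sample CSC pair at the end is made up):

```python
import math

Z_CRIT = 1.96                                    # 5% two-sided critical value
CSC_CRIT_INFLUENCES = Z_CRIT / math.sqrt(500)    # ~0.0877 (influences database)
CSC_CRIT_ECOLOGICAL = Z_CRIT / math.sqrt(298)    # ~0.1135 (ecological database)

def quadrant(csc_influences, csc_ecological):
    """Assign an 'object' composer to one of the four Fig. 6 quadrants."""
    high_lineage = csc_influences > CSC_CRIT_INFLUENCES  # significant lineage
    high_niche = csc_ecological > CSC_CRIT_ECOLOGICAL    # significant niche
    if high_lineage and high_niche:
        return "North-East (most similar)"
    if high_lineage:
        return "South-East (speciation/evolution)"
    if high_niche:
        return "North-West (convergent evolution)"
    return "South-West (most dissimilar)"

print(quadrant(0.20, 0.05))   # hypothetical pair -> South-East
```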
Observe therefore four cases. (1) 'Object' composers who score high on both indices are located in the North-East quadrant and are considered to be very similar to the 'subject' composer. (2) 'Object' composers who score low on both indices are located in the South-West quadrant. Their association with the 'subject' composer is statistically insignificant on both criteria, and they are considered to be most dissimilar to the 'subject' composer. (3) 'Object' composers who score high on the personal influence index but low on the ecological index with respect to the 'subject' composer are located in the South-East quadrant. Their ecological niches are different from that of the 'subject' composer, even if they share a common lineage of personal musical influences. As we argued before, this may be a sign of music speciation and evolution. (4) 'Object' composers who score low on the personal influence index but high on the ecological index with respect to the 'subject' composer are located in the North-West quadrant. Despite no or little common personal lineage with the 'subject' composer, they have developed a somewhat similar sound by composing in musical niches that share many ecological characteristics. Using evolutionary biology terminology, this could be a sign of 'convergent evolution'.

Of course, a high positive value for a similarity index reveals a significant association between a pair of composers, but does not imply causality. Still, by grouping composers on the basis of an age relationship with the 'subject' composer, we can somehow identify the antecedent, or 'causality in similarity'. For example, if an 'object' composer was located in the South-East quadrant but died before the birth of the 'subject' composer, then music speciation/evolution should be attributed to the 'subject' composer: the latter distanced himself from the former by composing in a different musical/ecological niche. However, under the same South-East location, music evolution/speciation should be attributed to the 'object' composer if he was born after the death of the 'subject' composer. Extending this reasoning to the case of contemporary composers (both alive at one point in time) is of course ambiguous. A much younger contemporary composer is likely to be the one imitating or differentiating himself from the older composer, but some degree of cross-imitation must be expected from composers of similar ages.

Figure 6a-p apply this graphical approach to a few composers such as Gluck, Beethoven, Wagner, Debussy, and Schoenberg, and I discuss their specifics later in this section. As one cannot make general statements about the evolution of Western classical music, a gigantic undertaking of a major art form, based on an analysis of just five 'subject' composers, I first start by establishing some general observations. Tables 5a-c present statistics covering individual composers, some subsets, and all of the 500 composers included in the database.

(Table caption fragment: Smith (2000) secondary ranking of the Top 20 most 'influential' composers.)

Table 5a is essentially equivalent to Fig. 6. For example, Table 5a is divided into four panels corresponding to the four age relationships between 'subject' and 'object' composers. Table 5a also gives the density in each quadrant (North-East, South-East, North-West and South-West); that is, it computes, with respect to a 'subject' composer, the frequencies of occurrence of 'object' composers located in each quadrant. Table 5a reports results for five specific 'subject' composers (Monteverdi, Gluck, Beethoven, Debussy, and Schoenberg).[22] However, this computation was done for all 500 'subject' composers, and Table 5b reports average results over all 500 'subject' composers. The first column in the first panel of Table 5b ('subject' composer vs. composers dead 0-25 years before the birth of the 'subject' composer) gives the mean (and standard deviation in brackets) of these frequencies computed over all 500 'subject' composers: 4, 11, 25 and 60% for, respectively, the North-East, South-East, North-West, and South-West quadrants. The results illustrate that, on average, composers strongly differentiate themselves from recently dead composers.
Sixty percent of them compose in a different ecological niche (from the one associated with dead composers) and have no significant similarity on the basis of personal musical influences (South-West quadrant). Only 4% of them are statistically similar to those dead composers with respect to ecological niche and personal influences (North-East quadrant).[23] Finally, observe the much higher density in the North-East quadrants and lower density in the South-West quadrants in the first column of panels 2 and 3, where 'subject' composers are compared to either older or younger contemporaries, respectively. This suggests an overall larger tendency for cross-imitation between pairs of contemporaries (higher similarity in personal musical influences and ecological niches).

We pursue the analysis by considering subsets of 'subject' composers regrouped into rankings, such as most 'important' composers.[24] We also group them by periods, such as all 48 Renaissance composers included in the Classical Music Navigator (CMN) database, all 50 Baroque composers, all 57 Classical, all 146 Romantic, and all 195 Modern composers.[25] Of the three rankings used here, the first one is the 'primary' ranking of the Top-100 composers computed by Smith (2000) in the CMN, from which Top-20 and Top-50 rankings are also derived (referenced in Table 5b as TOP 20S, TOP 50S and TOP 100S).[26] The second one is Smith's 'secondary' ranking of most 'influential' composers, based on the list and the (primary) ranking of those composers who were influenced by the composer under study (TOP 20iS in Table 5b).[27] The third one (TOP 20M in Table 5b) is the Top-20 ranking proposed by Murray (2003).[28]

Footnote 22: Using Beethoven as an example of how Table 5a is constructed on the basis of Fig. 6, observe that panels 3 and 4 in Table 5a show that 33% of Beethoven's younger contemporary composers are located in the North-East quadrant of Fig. 6h, while only 6% of composers born 0-25 years after Beethoven's death are located in the North-East quadrant of Fig. 6a. In Fig. 6h, we see indeed that 18 'object' composers are in the North-East quadrant out of a total of 54 composers included in the graph. Setting $y_1 = 18$ and $n_1 = 54$, we get $p_1 = 18/54 = 0.33$, as reported in Table 5a. For Fig. 6a, $y_2 = 2$, $n_2 = 33$, and $p_2 = 0.06$. We can test whether the difference in the two proportions $p_1$ and $p_2$ is statistically significant, that is, $H_0: p_1 = p_2$ versus $H_A: p_1 \neq p_2$. We need to compute
$$Z^* = \frac{p_1 - p_2}{\sqrt{p^*(1-p^*)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}, \qquad \text{where } p^* = \frac{y_1+y_2}{n_1+n_2}.$$
In our example, $Z^* = 2.94 > 1.96$. Hence we reject the null hypothesis that the two proportions are the same, in favour of the alternative that their difference is statistically significant at the 5% level. Alternatively, in terms of P value, $\Pr(Z > 2.94) = 0.0016 < 0.05$, and we again reject the null hypothesis.

Footnote 23: This is also confirmed when looking at the first column in panel 4 of Table 5b (representing 'subject' composers vs. composers born 0-25 years after the death of the 'subject' composers). Only 4% of the composers are strongly similar to the (dead) 'subject' composers, while 61% are statistically independent across both personal and ecological categories.

Footnote 24: Composers who belong to the 'canon' of classical music are not necessarily ranked, but scientists can count the number of lines or pages devoted to them in major music encyclopedias, the number of recordings available, etc., and then turn scores into a ranking. Although the rankings per se (and the underlying aggregation methodology) are controversial and often discredited by musicologists, the collection of names in these lists, rather than the ranking itself, may provide useful information.

Footnote 25: I left out the group of pre-Renaissance composers. Also, whether a composer falls into a specific period is based on the categories given in the CMN.
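The pooled two-proportion test of Footnote 22 can be sketched as follows, recovering the Z* of roughly 2.9 reported there for the Beethoven example:

```python
import math

def two_proportion_z(y1, n1, y2, n2):
    """Pooled two-proportion Z-test of H0: p1 = p2 against HA: p1 != p2."""
    p1, p2 = y1 / n1, y2 / n2
    p_pooled = (y1 + y2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Footnote 22 example: 18 of Beethoven's 54 younger contemporaries in the
# North-East quadrant (Fig. 6h) versus 2 of 33 later-born composers (Fig. 6a).
z = two_proportion_z(18, 54, 2, 33)
print(round(z, 2), abs(z) > 1.96)   # ~2.93 (2.94 in the text), significant at 5%
```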
In the following, I only discuss results for Top-20 composers according to the primary ranking of Smith (TOP 20S), because the other rankings give roughly similar results. Hence the results are robust and do not depend on the method underlying the construction of these rankings.

Compare the first and second columns in panel 2 of Table 5b (columns ALL and TOP 20S) and think of the mean across all 500 'subject' composers (first column) as the result pertaining to an 'average' subject composer. We therefore see that Top-20 'subject' composers have (on average) denser North-East and South-East quadrants than the average 'subject' composer (0.46 > 0.28 and 0.31 > 0.19). This suggests that the creative process of Top-20 composers (even more so than for an average composer) is not due to genius alone but is based on personal musical influences, in particular a strongly similar lineage (or network of personal influences) with older contemporaries. This recalls the much-quoted expression attributed to Isaac Newton: "if I have seen further, it is by standing on the shoulders of giants."

Concentrating more specifically on the South-East quadrant, we observe that it is denser for Top-20 'subject' composers than for the average 'subject' composer (31 vs. 19%). According to our typology in Table 2, this suggests that major composers, while also sharing personal musical influences with older contemporaries, contributed more than the 'average' composer to music evolution by composing in a different (i.e., new) musical ecological niche, which, in turn, made them sound 'different' from the average composer. On the other hand, the North-West quadrant for the average 'subject' composer is denser than the one corresponding to Top-20 'subject' composers (25 vs. 8%). This means, first, that the 'average' composer has a personal lineage distinct from that of older contemporary composers, suggesting that the 'average' composer is somewhat isolated, or perhaps less well connected (than Top-20 composers) to the network of key influences. Secondly, this means that the 'average' composer is more likely to share the musical ecological niche of older contemporaries, eventually producing music that sounds somewhat similar (convergent evolution), and as such contributing less to the evolution of Western classical music.

Although panels 1 and 3 of Table 5b can generally be interpreted along similar lines as panel 2, panel 4 brings an interesting twist. Imitation or differentiation, in panel 4, must be attributed to the 'object' composer, as the 'subject' composer is dead. Panel 4 therefore means that 'object' composers are more likely to differentiate themselves (or at least be independent) from Top-20 composers than from an 'average' composer (64 vs. 61%).[29] This perhaps reflects the idea that new generations try to differentiate themselves in particular from top (dead) composers, for fear of being categorised as 'epigones' by music historians and eventually forgotten by the public.[30]

One problem with our focus on Top-20 'subject' composers is that they are not necessarily 'innovators' or 'transitional' composers (i.e., composers located on the diagonal in Fig. 1, as identified by musicologists).
For example, few musicologists would consider J. S. Bach or W. A. Mozart, two major composers, to be genuine innovators. An alternative strategy is therefore to compare innovators and/or transitional figures with the composers of the music period from which they progressively diverged: for example, comparing Monteverdi to all Renaissance composers, Gluck to all Baroque composers, Beethoven to all Classical composers, and Debussy and Schoenberg to all Romantic composers. We therefore propose to compare statistical results for specific 'innovators' in Table 5a with results for the 'average' subject composer of a specific period in Table 5c. Focusing on panel 2 in both tables, we see that the South-East quadrant for specific 'innovators' in Table 5a is denser than the quadrant corresponding to the 'average' composer of the period from which they progressively diverged. In the case of Beethoven, 42% of his older contemporaries fall in the South-East quadrant, while the corresponding number is just 16% for the 'average' Classical composer. This not only means that Beethoven was better connected (than the 'average' Classical composer) to older contemporaries in terms of personal musical influences (i.e., 'standing on the shoulders of giants'), but also that he was progressively composing in a different musical ecological niche (than that of the 'average' Classical composer), leading to a change of sound in classical music and opening the way to the Romantic period. This is also true for Monteverdi (11%) versus the 'average' Renaissance composer (2%) and Gluck (26%) versus the 'average' Baroque composer (6%). It is, however, just marginally true for Debussy (vs. the 'average' Romantic composer), and not true for Schoenberg (20 vs. 25% for the 'average' Romantic composer).[31]

One difficulty is, of course, the concept of an 'average' Romantic composer who would be representative of a rather long period, itself divided into very distinct sub-periods (early, middle and late Romantic), each having its own 'innovators' or transitional composers. Besides, it is also informative to recall that Schoenberg felt that his early music would prove his understanding of and respect for tradition.[32] This perhaps explains our results in panel 2 of Table 5a (or in Fig. 6m), which characterise Schoenberg as building on the Romantic tradition (very dense North-East quadrant, 54%) instead of being exclusively characterised as an innovator.

Footnote 29: On a cautious note, the difference between the two proportions is not statistically significant according to the methodology presented in Footnote 22. Hence we should avoid extracting too much musicological information from this fact.

Footnote 30: For example, much of the traditional symphonic writing fell out of fashion after Beethoven's Ninth Symphony (1824). That Joachim Raff (1822-1882), a prolific and very well-known traditional symphonist of his time (but born just 5 years before the death of Beethoven), tends to get little attention in music history books shows the ease with which the historian's attention is captured by novelty.

Footnote 31: We can test the difference in proportions using the methodology in Footnote 22. The differences are statistically significant at 5% for Monteverdi, Gluck, and Beethoven versus their corresponding 'average' composers of the period from which they progressively diverged. For Debussy and Schoenberg versus the 'average' Romantic composer, however, we cannot reject the null hypothesis of no difference in proportions.
After these general observations, I now pursue a few specific results related to Fig. 6a-p for 'subject' composers Gluck, Beethoven, Wagner, Debussy, and Schoenberg (of whom Gluck, Beethoven, Debussy and Schoenberg are viewed by musicologists as innovators and 'transitional' composers, and are therefore positioned on the diagonal in Fig. 1). The objective is to demonstrate that our results, based on a statistical methodology, confirm many facts well known to musicologists.

First, observe again that after the death of the 'subject' composer, there is a strong tendency for newer generations of composers to seek different personal lineages and/or musical ecological niches (i.e., the North-East quadrants of Fig. 6a-c have a very low density of dots relative to other quadrants, in particular the South-West quadrant). Figure 6c shows that twentieth-century composers Xenakis, Berio, Reich and Glass, who were born 0-25 years after the death of Debussy, are quite different from him on both criteria. See also Fig. 6a for Beethoven and Fig. 6b for Wagner. Although this confirms the general result observed previously, it is worth emphasizing that this is a differentiation away from 'subject' composers (such as Beethoven, Wagner, or Debussy) who are known to have had direct influences on younger contemporary composers. Hence, a strong process of music evolution and differentiation operates over time, across new generations. Of course, there are exceptions. A composer such as Brahms, born after the death of Beethoven, appears in the North-East quadrant of Fig. 6a, suggesting strong similarities with Beethoven. And, as is well known, Brahms' First Symphony (from 1876) has often been compared to the Ninth Symphony of Beethoven (1824).[33]

Music evolution and differentiation can also be viewed from the other side, when observing the graphs of 'object' composers who were dead before the birth of the 'subject' composer. We observe in Fig. 6d-f a low density of 'object' composers in the North-East quadrant, which suggests that the 'subject' composer (respectively, Beethoven, Wagner, and Debussy) distanced himself from past generations of composers in terms of musical ecological niche and/or personal lineage.

Second, results are quite different from those reported above when considering 'contemporary' composers (Fig. 6g-p). In this case, we typically observe a large density of dots in the North-East quadrants, suggesting a process of imitation. For example, we see the common personal lineage and ecological niches of Beethoven with older contemporaries such as J. Haydn and W. A. Mozart (Fig. 6g). Then it is the turn of younger contemporaries, such as Hummel, Schubert, and Mendelssohn, to 'imitate' Beethoven to some extent (Fig. 6h). We see the extent to which Wagner's music is both a product of his time and a music that has been imitated, with a large density in the North-East quadrant for older contemporaries (Berlioz, Meyerbeer, Glinka, Nicolai in Fig. 6i) and younger contemporaries (Gounod, Borodin, Bizet, Massenet in Fig. 6j). We see a strong similarity of Debussy with some of his older contemporaries in Fig. 6k (Franck, Fauré, Chabrier, Chausson), and we see Ravel and Roussel subsequently embracing Debussy's impressionism (Fig. 6l).

Footnote 32: As evidenced by a letter from 1923 to conductor Werner Reinhart, reproduced and translated in Stein (1975) in his selected writings of Schoenberg, the composer wrote: "I do not attach so much importance to being a musical bogeyman as to being a natural continuer of properly-understood good old tradition!"
see Ravel and Roussel subsequently embracing Debussy's impressionism (Fig. 6l). We see the middle and late Romantic heritage of Schoenberg (e.g., Brahms, R. Strauss, Mahler, Reger) in Fig. 6m, and then we see Berg and Webern developing the innovative dodecaphonic (or twelve-tone) method of composition of Schoenberg (Fig. 6n). We finally see in the North-East quadrant of Fig. 6p that Gluck and Jommelli (both born in 1714) are very similar. Gluck's reforms of the opera will be discussed shortly. However, note that Jommelli is also known for his reforms of the Italian opera, so much so that he has been called the 'Italian Gluck' (Grout and Williams 2002).

Third, relationships among contemporaries are not just limited to a process of imitation; we also see a process of differentiation and evolution among them, as the South-West and South-East quadrants are also densely populated in Fig. 6g-p. According to our typology, Gluck's music is different from that of earlier Baroque contemporaries such as A. Scarlatti and, later on, Handel (South-West quadrant of Fig. 6o). Indeed, Gluck's reforms of the opera of the mid-eighteenth century were a reaction to the excesses of 'pre-reform' Baroque opera seria (and the virtuosic display of the da capo aria) of composers such as A. Scarlatti and followers. 34 He abolished vocal virtuosic excess for its own sake, so that the music would serve the needs of the drama, linguistic elements took precedence over purely musical considerations, and realism was privileged over fantasy or irrationality. Gluck's operas, despite all his reforms, also follow the conventions of the older French Tragédie Lyrique, including the use of librettos in the French language, which tends to explain the common lineage with Rameau and other French Baroque composers located in the South-East part of Fig. 6o, despite the obvious evolution from their music. 35 Continuing with other composers who changed the sound of music, we see that Liszt, a younger contemporary of Beethoven, developed a music different from that of Beethoven despite having a similar lineage (Fig. 6h, South-East quadrant). 36 Much of the symphonic writing (the 'traditional', 'non-programmatic', 'multi-movement' symphony) fell out of fashion after Beethoven's Ninth Symphony (1824). From that point onwards, the last symphony of Schubert, and those of Mendelssohn and Schumann, however magnificent they are, could only be regarded as the works of epigones. And symphonies composed yet later on, in the 1850s and 1860s, by conservative composers such as Anton Rubinstein, Carl Reinecke, Max Bruch, or Joachim Raff have not successfully survived in the repertoire.

Footnote 32: As evidenced by a letter from 1923 to conductor Werner Reinhart, reproduced and translated in Stein (1975) in his selected writings of Schoenberg, the composer wrote: "I do not attach so much importance to being a musical bogeyman as to being a natural continuer of properly-understood good old tradition!"

Footnote 33: As T&G (2013) recount, the pianist and conductor Hans von Bülow hailed it as the 'Tenth Symphony' and then proclaimed a new holy trinity of classical music, 'Bach, Beethoven and Brahms', that has lived on, ever since, in the catchphrase 'the three B's'.

Footnote 34: As T&G (2013) explain, A. Scarlatti, a culminating figure of Baroque opera at the turn of the 18th century, laid the foundation of opera seria (serious opera) and the da capo aria, which includes a last section that is essentially unwritten but becomes an opportunity for the singer to do free-form spontaneous embellishment and improvisation, ensuring a virtuoso display and the kind of spectacular performance on which public opera has always thrived. With time, most great singers (of which the well-known Farinelli) carried around a portfolio aria that could be inserted whenever they sang, even if irrelevant to the context. Although the operas of Handel are cast in the same mold as other opera seria, he typically gave performers less room to manoeuvre. This led to a decline in interest from performers and the public and forced Handel out of opera and into English oratorios.

Footnote 35: Incidentally, the relative position of Rameau and Pergolesi in this graph recalls the so-called 'Querelle des bouffons' (the War of the Buffoons), which in the 1750s (a long generation before the French Revolution) foreshadowed not only musical change but also political and social change. As T&G (2013) explain, Jean-Jacques Rousseau and Diderot ridiculed the high-minded French Tragédie Lyrique and Pastorale Héroique in the style of Lully, Rameau, Leclair (Scylla et Glaucus), and Mondonville (Titon et l'Aurore), which was performed by the royal musical establishment. Furthermore, Rousseau argued that the French language was not suitable for operas. Instead, Rousseau glorified the 'modern' style of Italian opera buffa and intermezzos, including the most popular at the time, La serva padrona, by Pergolesi, brought to Paris in 1752.

Footnote 36: Liszt's teacher, Carl Czerny, was himself the pupil of Beethoven (among others). Czerny's compositional style and teaching often mimicked Beethoven, and much of Franz Liszt's early learning can be said to have come from Beethoven himself (Mao 2012).
We see perhaps 'convergent evolution' in Fig. 6k (North-West quadrant) for U.S.-born Samuel Barber, a (much) younger contemporary of Debussy who, despite some American feel to his music, was rather isolated over there, and composed in an ecological niche (concertos, symphonies, opera) that was much closer to that of the late-Romantic European composers than the ecological niche (including jazzy elements and film music) of U.S. composers of his time such as A. Copland or L. Bernstein. We also see Gershwin, whose composition An American in Paris reflects the journey that he had consciously taken as a composer. As cited in Hyland (2003), Gershwin declared with respect to this composition: "The opening part will be developed in typical French style, in the manner of Debussy and Les Six, though the tunes are original". 40 And despite all the jazzy elements of his music, his piano Concerto in F was criticised for being too closely related to the work of Debussy. Despite Gluck's opera reforms mentioned earlier, and his separate network of influences (including his partisans opposed to the famous poet and librettist Metastasio and his circle of opera seria composers using dazzling artifices), he was part of the transition between the Baroque and Classical periods, sharing the ecological niche of many composers (e.g., Fasch, Hasse and Pergolesi in Fig. 6o and Piccinni in Fig. 6p) who were also contributing to mid-eighteenth-century stylistic changes, suggesting a convergent evolution. 41

Footnote 40: Les Six is a name given in 1920 to a group of six French composers who worked in Paris, of which Milhaud, Poulenc and Honegger are the most well known.

Footnote 41: The famous rivalry between 'Piccinnists' and 'Gluckists', a 'querelle' in Paris opposing the dramatic versus musical values of two operas composed on the same subject, Iphigénie en Tauride, seems to suggest a stronger differentiation than the one implied by the North-West location of Piccinni versus Gluck. However, for T&G (2013), "[b]oth were equally, though differently, a sign of the intellectual, philosophical, and social changes that were taking place over the course of the eighteenth century. (…) The two composers, privately on friendly terms, were more allies than rivals," suggesting indeed a convergent evolution despite their respective advocates and their conflicting networks of influences that led them to a collision course.

Conclusion and future research

This paper uses two databases, the personal influences and the ecological categories databases extracted from the CMN, to test, statistically, for similarity between pairs of composers, using the centralised cosine similarity index. Each of these two databases permits the capture of one aspect of similarity across pairs of composers. As such, this is a contribution to music information retrieval research.
However, this paper goes one step further by using the two similarity rankings conjointly in order to generate a typology of cases that permits the exploration of music imitation and differentiation, music 'speciation' and 'convergent evolution'. That the results in the fourth section corroborate many facts well known to musicologists is indicative of a sound database and methodology. This said, although there is scope for a true evolutionary model of Western classical music, including the construction of a phylogenetic tree, there are also challenges. In biological systematics, one is typically given some group of species (from within a large genus) and data on some number of their adaptive traits (plus external knowledge on which traits are viewed as more primitive than the others). Then, various algorithms have been developed to produce a family tree having the most likely chance of accurately reflecting speciation patterns over time. But in that instance, there is the useful simplification that each species comes from only one other, whereas in the Western classical music context, the 'events' (particular composers) are the product of multiple influences.

Hence, at this stage, it is best to see our work as preliminary background. First, it will take some time to sort through the numerous results obtained with the methodology introduced in this paper. Second, there is a need to improve this framework using a finer analysis, one that would introduce specific sub-periods (early, middle, and late Romantic periods, subdivisions of the twentieth century, etc.) and that would consider additional age categories among contemporaries (not just older vs. younger contemporary composers). Third, current results and their limitations are also driven by the information available in the CMN. One limitation is that the CMN data suffer from some spottiness, as many of the less significant composers on the list of 500 remain incompletely studied or commented upon. Musicological research on composers is an ongoing effort, and newly discovered influences from (and on) lesser composers must progressively be included in the CMN. A large-scale literature review should reduce this problem and would permit improvement of our narrative of Western classical music evolution based on statistical analysis and methods developed in biosystematics, scientometrics and bibliometrics.

…very useful comments; I also thank Frédéric Sigouin for his excellent assistance with the database. The paper benefited from initial discussions with Charles H. Smith and comments on an earlier draft. Most of the data used in this project was provided by Charles H. Smith in the preparation of two earlier co-authored articles.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix: BID versus CSC rankings: the BID index as a quadratic function of the CSC index

The binomial index of dispersion (or similarity) used in Smith and Georges (2014, 2015) can be computed for any pair (i, j) as

$$BID_{i,j} = \frac{n\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)},$$

where a, b, c, d, and n are the counts of composers in each of the five sets $CI_{i,j}$, $I_{i,-j}$, $I_{j,-i}$, $C - CI_{i,j} - DI_{i,j}$, and $C$ (see Table 3). Table 3 permits computation of the frequency of joint presence, the frequency of joint absence, and the frequency of mismatches. When two composers are independent (lack of association), the proportion or frequency of joint influences (a/n) is equivalent to the product of the proportions (a + b)/n and (a + c)/n (that is, the proportion of composers in the database that have influenced i and the proportion of composers that have influenced j). If the observed frequency is greater than the one expected under independence, then the two composers are said to be positively associated. Under the condition that all expected frequencies in the presence/absence table (which is computed assuming independence of composers) are at least five and the sample size is sufficiently large, BID is asymptotically $\chi^2$ distributed with one degree of freedom. The $\chi^2$ test of independence can then be used to assess whether there is a statistically significant association between two composers. A concrete example is given in Table 4 for composer Debussy.

One intriguing point in Table 4 is that the rankings produced by the binomial index and the centralised cosine measure are exactly the same for a large portion of the table but then start to dissociate with composer Carter (identified at the 480th position in the first column of Table 4 and at the 241st position in the fifth column). This result has, however, a simple explanation: there is a quadratic relationship between BID and CSC, and CSC can take negative values. Observing Eqs. (3) and (5), it is clear that the relationship between CSC and BID is quadratic:

$$BID_{i,j} = n \cdot CSC_{i,j}^2. \qquad (6)$$

Given Eq. (6) and because CSC can take negative values, BID is not a monotonic function of CSC. Hence, the order (or ranking) between CSC and BID is not preserved over the full set of values for CSC. 42 It is clear, for example, that a value of CSC = +x for one pair of composers and CSC = −x for another pair will generate the same value BID = nx² for both pairs. Hence, the ranking of both pairs of composers will be the same using the BID index but quite different with the CSC index. This is shown in Fig. 7 using some data given in Table 4. The CSC index for Carter and Debussy is −0.038, while the CSC for Hindemith and Debussy is +0.039. The CSC values for the two pairs of composers are clearly different (one is positive and the other negative, while their absolute values are roughly the same), and the rankings for Carter and Hindemith with respect to Debussy on the basis of CSC will therefore be quite distinct, even if these CSC values generate (roughly) the same positive value for BID (+0.7) according to Eq. (6), implying a (roughly) similar ranking under the BID.
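To make the quadratic relationship concrete, here is a minimal sketch in Python. It assumes the centralised cosine index takes the phi-coefficient form consistent with Eq. (6) and the chi-square limit described above; the 2x2 counts are invented for illustration, not taken from Table 4.

```python
# Minimal sketch of the BID/CSC relationship on a 2x2 presence/absence table.
# Assumes CSC is the phi-coefficient form of the centralised cosine index,
# so that BID = n * CSC^2 as in Eq. (6). All counts below are hypothetical.
from math import sqrt

def csc(a, b, c, d):
    """Centralised cosine similarity (phi coefficient) for a 2x2 table."""
    num = a * d - b * c
    den = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

def bid(a, b, c, d):
    """Binomial index of dispersion: n * CSC^2, asymptotically chi-square(1)."""
    n = a + b + c + d
    return n * csc(a, b, c, d) ** 2

# Two hypothetical pairs with CSC values of similar magnitude, opposite sign:
pair_pos = (4, 28, 28, 440)    # positively associated pair (CSC ~ +0.065)
pair_neg = (0, 32, 32, 436)    # negatively associated pair (CSC ~ -0.068)
print(csc(*pair_pos), csc(*pair_neg))  # opposite signs, similar magnitude
print(bid(*pair_pos), bid(*pair_neg))  # similar BID despite distinct CSC ranks
```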
Footnote 43: Finally, that the rankings produced by BID and CSC are exactly the same for a large portion of Table 4 is due to the CSC values not being symmetrically distributed.
Development and Application of a Semiquantitative Scoring Method for Ultrastructural Assessment of Acute Stress in Pancreatic Islets

Background. Pancreas and islet transplantation outcomes are negatively impacted by injury to the endocrine cells from acute stress during donor death, organ procurement, processing, and transplant procedures. Here, we report a novel electron microscopy scoring system, the Newcastle Pancreas Endocrine Stress Score (NPESS).

Methods. NPESS was adapted and expanded from our previously validated method for scoring pancreatic exocrine acinar cells, yielding a 4-point scale (0–3) classifying ultrastructural pathology in endocrine cell nuclei, mitochondria, endoplasmic reticulum, cytoplasmic vacuolization, and secretory granule depletion, with a maximum additive score of 15. We applied NPESS in a cohort of deceased organ donors after brainstem (DBD) and circulatory (DCD) death with a wide range of cold ischemic times (3.6–35.9 h), including 3 donors with type 1 and 3 with type 2 diabetes, to assess islets in situ (n = 30) in addition to pancreata (n = 3) pre- and postislet isolation.

Results. In DBD pancreata, NPESS correlated with cold ischemic time (head: r = 0.55; P = 0.02) and mirrored exocrine score (r = 0.48; P = 0.01). When stratified by endocrine phenotype, cells with granules of heterogeneous morphology had higher scores than α, β, and δ cells (P < 0.0001). Cells of mixed endocrine-exocrine morphology were observed in association with increased NPESS (P = 0.02). Islet isolation was associated with improved NPESS (in situ: 8.39 ± 0.77 [mean ± SD]; postisolation: 5.44 ± 0.31; P = 0.04).

Conclusions. NPESS provides a robust method for semiquantitative scoring of subcellular ultrastructural changes in human pancreatic endocrine cells in situ and following islet isolation, with utility for unbiased evaluation of acute stress in organ transplantation research.

INTRODUCTION

Insulin-dependent type 1 diabetes (T1D) is characterized by loss of endocrine β cells within the islets of Langerhans in the pancreas associated with autoimmunity. 1 Transplantation of the vascularized whole pancreas or isolated islets can restore glycemic control in suitable recipients, [2][3][4] although β-cell loss and failure to attain or maintain insulin independence are frequent occurrences following islet transplantation. 5 Donor organs are subjected to multiple stresses during the peritransplant process, including stress associated with the following: death, trauma, intensive care management, organ procurement, and processing, which can adversely impact upon recipient outcomes. [6][7][8][9][10] In particular, increased cold ischemia time (CIT) is associated with poorer isolated-islet yield and reduced graft survival. 11,12 Various interventions and technologies have been developed to reduce or reverse the injuries to donor organs in the transplantation process, such as preservation solution additives, normothermic and hypothermic machine perfusion, and persufflation. [13][14][15] Donor risk factors impacting graft success have been integrated into predictive scores for organ allocation 16; however, we have previously shown that light microscopy analyses alone are insufficient to resolve the full extent of subcellular acute stress in the pancreas. 17
A method for the quantitative electron microscopy (EM) examination of pancreatic endocrine cell stress would provide an invaluable addition to the currently available techniques and aid in evaluating innovations to improve transplantation outcomes. We have recently developed and validated the Newcastle Pancreatic Acinar Stress Score (NPASS), a novel scoring method for ultrastructural assessment of the acinar compartment of the pancreas. 17 Here, we sought to extend this method to the endocrine pancreas and, in parallel, perform detailed subcellular characterization of islet cells across a range of donor organs and in isolated islets.

Donor Organ Procurement and Biopsy Collection

Research was performed with written donor-relative consent in compliance with the UK Human Tissue Act of 2004 under specific ethical approvals by the UK Human Research Authority (05/MRE09/48 and 16NE0230). A primary cohort of 30 deceased organ donors was selected to cover a broad spectrum of donor demographics and included 3 donors with T1D and 3 with type 2 diabetes (T2D) (Table 1). The cohort comprised organs with an intentionally wide range of CIT, from 3.6 to 35.9 h, and, in donation after circulatory death (DCD) donors, a range of warm ischemia time (WIT) from 9 to 103 min (Table 1), to facilitate the observation of the full spectrum of tissue changes. Tissue sampling was undertaken within the Quality in Organ Donation (QUOD) MRC-Expand program using established protocols. 17 Analyses of isolated islets in comparison to the preisolation biopsy from the head of the pancreas were performed in an additional cohort of 3 donors with no history of diabetes (Table 2). Donor pancreata were procured by the UK National Organ Retrieval Service utilizing standardized procedures. Pancreas dissection was performed in a standardized manner in a 4°C cold room, as previously described, 17 and tissue was sampled from each of 8 anatomic regions of the pancreas (P1-P8), with P1 corresponding to the head/uncinate process and subsequent blocks moving incrementally toward the tail region (P8). Samples were stained with dithizone (Merck; Gillingham, United Kingdom) to distinguish islets within tissue, microdissected into 1 to 2 mm³ islet-rich biopsies, and fixed in 2% glutaraldehyde in 0.1 M sodium cacodylate buffer (Agar Scientific; London, United Kingdom). Optimizations confirmed that dithizone staining had no impact on downstream islet imaging.

Islet Isolation

Islet isolations were performed using a modified Ricordi method. [18][19][20] Briefly, the pancreas was perfused with collagenase and neutral protease (PELOBiotech; Munich, Germany), with islet dissociation carried out in a Ricordi chamber (Biorep; Miami, FL), followed by density gradient centrifugation in a COBE 2991 processor (Terumo; Shibuya, Japan). A single tissue biopsy was taken from the head of the pancreas upon cannulation of the main pancreatic duct before commencing enzyme perfusion. Following isolation, islets were cultured in CMRL (Corning Life Sciences; Tewksbury, MA) supplemented with 0.5% human serum albumin (BioIVT; London, United Kingdom), Hepes (Merck; Gillingham, United Kingdom), L-glutamine, and penicillin/streptomycin (Fisher Scientific; Loughborough, United Kingdom). Islets were sampled 1 to 2 h after being placed in culture (D0) and 12 to 18 h postisolation (D1).

Transmission EM

Glutaraldehyde-fixed tissue specimens were postfixed with osmium tetroxide, dehydrated in acetone, and embedded in epoxy resin (TAAB; Aldermaston, United Kingdom).
Seventy-nanometer ultrathin sections were obtained using the Ultracut E (Reichert; Vienna, Austria), mounted on pioloform-coated Cu-grids, and poststained with uranyl acetate and lead citrate. Images were acquired on the Hitachi HT7800 120 kV transmission electron microscope (Hitachi High-Technologies; Abingdon, United Kingdom). One section from the head (P1) and 1 from the tail (P8) of the pancreas were imaged for each donor. The whole section was scanned for the presence of endocrine tissue and, when present, endocrine cells were assessed at random up to a maximum of 25 cells. If fewer than 25 endocrine cells were present, a minimum of 10 cells was set as the cutoff for inclusion in the cohort. If more than 1 islet was present, cells were selected from all islets. A single image of each cell was captured at 4000 to 6000× magnification depending on cell size. This method was also applied to isolated-islet imaging. A single operator experienced in the development and application of NPASS 17 scored all samples. The NPASS criteria use a scale from 0 to 3 to assess acute stress in acinar cells, with 0 representing normal appearance and 3 representing the most severe damage. Four subcellular compartments were assessed: nuclear chromatin condensation, mitochondrial swelling, endoplasmic reticulum (ER) dilation, and vacuolization. 17 Initial evaluation of the severity of the ultrastructural changes in the endocrine cells was performed in alignment with these criteria. This revealed that a similar breadth of ultrastructural pathological changes was present in the endocrine cell organelles, and, therefore, these 4 categories were retained for the semiquantitative assessment of pancreatic islet cells (Table 3).

Statistics

Statistical analyses were carried out using Prism version 8 (GraphPad Inc; San Diego, CA). Two-tailed Spearman's r was used to test correlations. Differences between means in the Quality in Organ Donation cohort were calculated with 2-tailed paired/unpaired Student's t tests or 1-way ANOVA with Tukey's multiple comparisons test. Repeated measures 1-way ANOVA with Tukey's multiple comparisons test was used in the isolation cohort. Fisher's exact test was performed to assess cell-type proportions in different donor groups. Data are reported as mean (± SD). Statistical significance was defined as P < 0.05.

Evaluation of Endocrine Ultrastructure

Nuclei were similar in appearance to those in exocrine cells, with chromatin evenly distributed in healthy cells (Figure 1A) and chromatin clumping apparent following acute stress (Figure 1E). Mitochondria were abundant and small in size (0.075–0.6 µm²) relative to those in acinar cells (0.2–0.9 µm²). Healthy mitochondria had an elongated ovoid shape (Figure 1B); under acute stress, this altered to a rounded, more electron-lucent appearance, with swelling and destruction of mitochondrial cristae (Figure 1F). The ER was recognized by parallel electron-dense lines in close proximity, generally surrounding a more electron-lucent interior and joined at the ends to form cisternae. The ER was often only sparsely visible in endocrine cells, but in the majority of cases ER cisternae could still be evaluated (Figure 1C). Ribosomes were occasionally visible at higher magnification. When heavily dilated, the ER appeared as irregularly shaped areas of low electron density (Figure 1G).
Vacuoles appeared in cells as circular areas of low electron density and could be distinguished from ER dilation by their rounded shape and lack of ribosomes (Figure 1H). Endocrine granules were present throughout the interior of the cell and at the cell membranes (Figure 1). Endocrine cell type was identified by granule morphology: α cells had uniformly circular, electron-dense granules (Figure 1I); β cells could be identified by the characteristic halo surrounding smaller electron-dense granules, which were heterogeneous in shape (Figure 1J); δ-cell granules were similar in size to glucagon granules but with a more variable electron-lucent density (Figure 1K). Only a single morphological PP cell was observed in the whole cohort: the PP granules were rounded and electron-dense but approximately half the size (100–200 nm) of α-cell glucagon granules. Loss of endocrine granules, particularly at the cell membrane, was frequently observed (Figure 1L-O). The extent of loss varied from mild, with a minority of the cell area affected, to severe, in which only a small number of scattered vesicles remained. This observation led to the addition of a further scoring category of endocrine vesicle depletion, defined according to the percentage of vesicle loss from the cell membrane (Table 3). This modified scoring system for endocrine cells of the pancreas will subsequently be referred to as the Newcastle Pancreas Endocrine Stress Score (NPESS).

NPESS Scores in a Cohort of Deceased Organ Donors

Analysis of pancreatic head and tail regions in a cohort of 30 deceased donors (Table 1) showed mitochondrial swelling in all donors, with ubiquitously high scores (mean, 2.73; range, 2.1–3.0), even with a CIT as short as 4 h. In contrast, nuclear scores tended to be low (mean, 0.79; range, 0.16–2.25). The widest range of scores (0.3–2.75) was seen in the ER. Evidence of at least mild endocrine vesicle depletion was ubiquitous, with a mean score of 2.17 (range, 1.32–2.83). Total NPESS scores were comparable between the head and tail of pancreata in both donation after brainstem death (DBD) and DCD donors (Figure 2). Head versus tail subscales were also comparable, despite statistically significant differences in the nucleus and ER scores in DCDs, which were likely due to a type I error in view of the smaller sample size compared with DBDs. Across the wide range of donors within the cohort, no differences in mean scores were seen between DBD and DCD (Figure 2). The cohort included 3 DBD donors with T1D and 3 with T2D. Their total NPESS scores were comparable with those of donors without diabetes (Figure 2). There was a significant correlation between CIT and overall NPESS in the head of the pancreas but not in the pancreatic tail (Figure 3). Correlation between CIT and individual NPESS subscale parameters in the head of the pancreas only reached statistical significance for the vacuolization score (Figure 3A). Subscale NPESS parameters within the tail of the pancreas did not correlate with CIT (Figure 3B). Although the numbers of donors with T1D and T2D were too small for statistical comparison testing, all were DBD donors with a relatively short CIT and did not appear to be outliers from the overall cohort for any of the NPESS parameter scores (Figure 3). There was no association of any score with WIT in DCD donors (data not shown).

Comparison of NPESS With NPASS in a Cohort of Deceased Organ Donors

In parallel with NPESS quantification, NPASS scoring (according to published methods) was performed on exocrine tissue within each section.
Mirroring NPESS scores, and as previously published, 17 NPASS mitochondrial scores were ubiquitously high, with nuclear scores tending to be low and ER scores having the widest variation between donors. Plots of NPASS versus NPESS demonstrated comparable organelle stress in acinar and endocrine cells within biopsies obtained from the head and tail of pancreata in DBD donors (Figure 4A). Again, deceased donors with known diabetes were not outliers, indicative of comparable subcellular organelle ultrastructural morphology in both endocrine and acinar cells. There were no significant correlations between endocrine and acinar stress scores in the smaller number of DCD donors evaluated (Figure 4B).

EM Identification and Analysis of Individual Endocrine Phenotypes

In the donors without diabetes, proportions of α, β, and δ cells (identified by the morphological appearance of granules) were broadly in line with previously reported frequencies (28% α, 50% β, 3% δ) (Figure 5A, left panel). 22 No β cells were observed in any of the T1D donors, and α cells predominated over δ cells (78% versus 18%). Proportions of α cells and β cells were comparable in T2D donors (43% α, 36% β, 4% δ), and the β-cell to α-cell ratio was significantly lower in T2D compared with donors without diabetes (P = 0.005). Occasionally, cells containing granules that resembled those of more than 1 cell type, typically α and β, were observed (16% of cells). There was no significant difference in the prevalence of these heterogeneous cells in donors with/without diabetes. In severe cases of vesicle depletion, the lack of endocrine vesicles prevented identification of the cell type (4%, classified as unknown/other). No significant differences in cell-type proportions were detected when the head versus tail of pancreata or DBD versus DCD were compared across the whole cohort (Figure 5A, right panel). When NPESS scores for individual cells across the whole cohort, including donors with and without diabetes, were stratified by endocrine cell type, cells defined as heterogeneous, with more than 1 type of endocrine granule, had significantly higher subscale and total stress scores, with the exception of the mitochondrial score, which was consistently high in all cell phenotypes (Figure 5B). This persisted when donors with diabetes were omitted (Figure S1, SDC, http://links.lww.com/TXD/A397). Significantly lower vesicle depletion and vacuolization, leading to lower overall NPESS, were seen in δ cells relative to β cells. When donors with diabetes were omitted, only the difference in vesicle depletion scores remained significant. NPESS scores in α and β cells were comparable, both including and excluding donors with diabetes.

Intermediate Endocrine-Exocrine Cells

A small number of cells exhibited features of both endocrine and acinar morphology (Figure S2, SDC, http://links.lww.com/TXD/A397). These were identified by the presence of zymogen granules in addition to endocrine granules. Zymogen granules are similar in electron density to glucagon granules but are larger in size, with a diameter of 300 to 900 nm versus 200 to 400 nm. These mixed-phenotype "intermediate cells" were identified in 9 of 30 donors (30%) but were infrequent, with only 1 to 3 cells counted in each donor. Intermediate cells were distributed equally across both the head and tail of the pancreas, generally located toward the islet periphery or as isolated endocrine cells adjacent to acinar cells, and included α cells, β cells, and cells of ambiguous phenotype.
One intermediate cell was present in D1 free islets from donor I-2. Demographic analysis of the donors with intermediate cells in the islets revealed no significant association with diabetes status, and although trends toward a greater age and lower body mass index were observed, these were not statistically significant. No further demographic associations were apparent. Donors with intermediate cells had significantly increased NPESS scores (P = 0.02) (Figure S2, SDC, http://links.lww.com/TXD/A397). No association was observed with NPASS score (data not shown).

Impact of Islet Isolation on NPESS

Islets from 3 additional donor pancreata (2 DCD; 1 DBD) were fixed and analyzed both 1 to 2 h postisolation and following 12 to 18 h in culture, in comparison with tissue biopsies collected before enzyme perfusion. Total NPESS scores were significantly lower in isolated islets than intact pancreata, with significant improvements in appearances of all organelles but unresolved endocrine vesicle depletion (Figure 6).

FIGURE 6. Isolated islets have a reduced stress phenotype when compared with matched preisolation islets in situ. Scores for individual organelles and total scores of preisolation tissue and islets at day 0 and day 1 in culture from 3 donor organs are shown. D0 islets were sampled 1 to 2 h after being placed in culture; D1 islets were sampled 12 to 18 h after being placed in culture. *P < 0.05, **P < 0.01. ER, endoplasmic reticulum.

DISCUSSION

Building on a recently validated method for standardized EM evaluation of human pancreatic acinar cells, we have developed a robust semiquantitative scoring system for systematic evaluation of subcellular organelles in human pancreatic endocrine cells within the intact organ and isolated islets. Application within a cohort of deceased donor whole pancreata showed comparable scores between DBD and DCD donors and similar endocrine and acinar stress scores in pancreatic head and tail biopsies. P1 endocrine stress scores in donors with or without diabetes correlated with CIT and were comparable to NPASS. NPESS has additional potential for subcellular morphological characterization of isolated islets. Ultrastructural alterations including nuclear chromatin condensation, mitochondrial swelling, ER dilation, and vacuolization have previously been reported in islet cells undergoing ischemic stress. [23][24][25][26] Loss of endocrine vesicles has been demonstrated by EM in human islets following exposure to acinar cell proteases 27 and in rat pancreas in a study of cyclosporine A-induced injury. 28 More recently, quantitative scoring of EM changes has been used to assess cellular injury in a rodent model of streptozotocin-induced diabetes. 29 Unbiased assessment of pancreatic endocrine cell ultrastructural appearance in deceased donor organs within the current study has demonstrated the range of morphological appearances in the nucleus, mitochondria, ER, cytoplasmic vacuolization, and secretory granules, enabling development of a novel semiquantitative scoring system comprising subscales for each of these parameters and a total score (NPESS). The utility of NPESS to quantify ultrastructural changes across a broad spectrum of organ donors, including a wide CIT and WIT range, has been confirmed, showing comparable scores in DBD and DCD pancreata. Inclusion of a small number of donors with T1D and T2D showed that these were not outliers, suggesting that NPESS is more an indicator of acute subcellular stress than chronic cellular dysfunction.
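As an illustration of how the additive score behaves in practice, the short sketch below aggregates per-cell subscale scores into donor-level NPESS means and correlates them with CIT. The data frame columns, donor labels, and values are invented for demonstration, and the study itself used GraphPad Prism, so this is a minimal sketch rather than the authors' analysis code.

```python
# Minimal sketch of NPESS aggregation and a donor-level CIT correlation.
# Each subscale is scored 0-3 per cell; the additive total has a maximum of 15.
# All column names, donor labels, and values below are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

SUBSCALES = ["nucleus", "mitochondria", "er", "vacuolization", "vesicle_depletion"]

cells = pd.DataFrame({
    "donor":             ["D1", "D1", "D2", "D2", "D3", "D3"],
    "nucleus":           [0, 1, 1, 2, 1, 1],
    "mitochondria":      [3, 2, 3, 3, 3, 3],
    "er":                [1, 1, 2, 2, 3, 2],
    "vacuolization":     [0, 1, 1, 1, 2, 2],
    "vesicle_depletion": [1, 2, 2, 2, 3, 3],
})
cells["npess_total"] = cells[SUBSCALES].sum(axis=1)  # additive score, max 15

# Donor-level mean of per-cell totals (the protocol scores >= 10 cells/donor)
donor_means = cells.groupby("donor")["npess_total"].mean()

# Two-tailed Spearman correlation with cold ischemia time, as in the paper
cit_hours = pd.Series({"D1": 4.2, "D2": 12.5, "D3": 30.1})  # hypothetical CITs
rho, p = spearmanr(donor_means, cit_hours[donor_means.index])
print(f"Spearman r = {rho:.2f}, P = {p:.3f}")
```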
Mitochondrial scores were high in all donors, including both DBD and DCD donors (all of which were optimally retrieved following clinical standard operating procedures). 30 In contrast, nuclear scores were low and ER scores were most variable between organs. We have previously reported this organelle-specific pattern in pancreatic acinar cells. 17 In addition, endocrine secretory vesicle depletion was evident in all donors. NPESS and NPASS scores, including subscales, were closely matched in the current study within individual organs and head/tail tissue blocks. These findings support common pathways through which ischemia affects acinar and endocrine cells and a comparable susceptibility of these different pancreatic cell phenotypes. CIT is a negative predictor of islet isolation outcomes and 90-d vascularized pancreas graft survival. 12,31 Pancreatic head, but not tail, NPESS correlated with CIT in this analysis. Differences in correlations between the 2 pancreatic regions analyzed may reflect the anatomical differences in the pancreas, with the head deriving from the ventral bud during pancreas development, whereas the body and tail derive from the dorsal bud 32; however, this study was designed for methodological development and validation and was not powered to confirm associations; therefore, correlations were not absolute. Given the complexity of factors impacting upon donor organs, extended studies on larger cohorts will be necessary for sufficient power to clarify the effect of single variables and validate these hypotheses. Identification of individual endocrine phenotypes was undertaken by morphological analysis. Although the absence of confirmation of cell type by immunogold-hormone labeling was a potential weakness of the current study, an a priori decision was made to maximize ultrastructural preservation through osmium tetroxide fixation despite its negative impact on tissue protein antigenicity. 33,34 Accuracy of identification was supported by the absence of detected β cells in donors with T1D and the increased α-cell to β-cell ratio in T2D. Although reliance upon aerobic versus anaerobic glycolysis 35 and vulnerability to oxidative stress with reduced antioxidant enzyme expression 36 in β cells in comparison with α cells in vitro have been reported, in situ NPESS scores were comparable in α and β cells. Although preliminary, the current data suggest that δ cells may be more resistant to ultrastructural damage associated with organ donation. Whether this reflects their distinct polygonal morphology with neuron-like cytoplasmic projections 22,37 requires further elucidation. Bihormonal cells expressing both insulin and glucagon have been described, particularly in association with T2D and T1D. [38][39][40][41] Although there was no association with known diabetes in the current study, we observed a trend toward higher α-cell to β-cell ratios in donors with a high percentage of cells with heterogeneous appearance (r = 0.36; P = 0.067). Furthermore, these cells appeared sensitive to the stress associated with organ donation, having the highest NPESS scores of all cell types. Without antibody-staining confirmation of bihormone expression, it cannot be ruled out that the heterogeneous appearance of the granules may be due to the impact of acute stress on normal granule morphology or the loss of defined cell membranes in severely degraded cells, impeding cell distinction. Intermediate endocrine-exocrine cells have been previously reported in both the exocrine and endocrine pancreata. 42,43
One study identified cells containing both zymogen-like and insulin-like granules in T2D, noting close proximity to macrophages and mast cells. 44 Another found a greater prevalence of intermediate cells in T1D and autoantibody-positive donors, in addition to evidence of ER dilation and mitochondrial damage in these cells. 45 Intermediate cells were identified in donors with and without diabetes in the current study, and no immune cell infiltration was observed in the vicinity of these cells. The presence of intermediate cells was associated with a higher NPESS score within the endocrine compartment, suggesting the possibility that this phenotype may be induced by the stress associated with organ donation or by transdifferentiation resulting from chronic stress, which may increase susceptibility to acute stress. The process of islet isolation is associated with additional enzymatic, chemical, and mechanical stress leading to the loss of basement membrane, the induction of mitogen-activated protein kinase and nuclear factor κ-B stress signaling pathways, and poly ADP-ribose polymerase activation of apoptotic and necrotic pathways. [46][47][48] Pilot data demonstrating the utility of NPESS in assessing isolated islets, however, showed consistently reduced NPESS scores in comparison with islets within the donor pancreas before isolation. Despite the negative impact of increased CIT on islet isolation outcomes, viable islets suitable for transplantation can still be obtained from high-CIT organs. 31,49 All 3 isolations in this study had a similar pattern of "recovery" of the islets postisolation, from donors with a CIT range of 5.6 to 13.1 h. The natural selection for healthier cells imposed by the isolation process may be an important factor accounting for this effect, although this may also demonstrate repair of ischemic damage following restoration of oxygenation. The difference in the mitochondria is particularly striking, with scores of 1.28 to 1.96 in the isolated islets substantially lower than those seen in all whole pancreata within this study. During the ischemic period of ischemia-reperfusion injury, hypoxia and reduced ATP result in electrolyte imbalance due to failure of ATPase pumps, causing cell and mitochondrial swelling. Following restoration of oxygen, generation of ROS in the mitochondria induces oxidative stress that can result in cell death. 50 The reduced mitochondrial swelling observed in isolated islets, with an absence of signs of injury in other organelles, may indicate recovery in culture with the removal of damaged mitochondria via mitophagy. 51,52 It is established that restoration of oxygen results in ischemia-reperfusion injury driven by rapid conversion of accumulated succinate to fumarate. This process may lead to a number of changes that could resolve over the 1-d postisolation recovery period, including apoptosis of critically damaged cells and autophagic removal of damaged organelles. [50][51][52] It should be noted that these are pilot data from a small number of islet isolations, and additional studies on free islets are needed to strengthen these observations. In particular, functional studies including oxygen consumption rate, glucose-stimulated insulin secretion, and analyses of apoptotic and necrotic pathways may shed further light on the cellular mechanisms underlying the NPESS, and we plan to follow up with these analyses in further work.
Additionally, retrospective NPESS scoring on pretransplant islet preparations would enable the study of correlations with clinical outcomes. Impacts of islet culture on islet cells, both deleterious and beneficial, have been described, with upregulation of proinflammatory and stress-induced genes 53,54 and islet attrition, yet improved morphology and viability. 55 Determination of the optimal temperature and duration for islet culture before transplantation is an area of ongoing research, 56 and the NPESS may provide an additional tool for researchers in evaluating and optimizing islet culture. NPESS could be implemented in the evaluation of agents that may impact islet stress and function, such as free fatty acids or nicotinamide. 57,58 NPESS also has utility for in vivo transplantation studies. The original NPASS system was developed as a tool to assess (sub)cellular stress impacting the acinar cells of the pancreas. Here, we demonstrate that an adapted version of this method can be applied to endocrine cells in conjunction with the exocrine score for comprehensive assessment of pancreata, or as a stand-alone tool for specific investigation of islet stress both in situ and in isolated islets. The NPESS has utility for retrospective analysis of donor tissue/islets following clinical transplantation and for prospective preclinical studies. Application in experimental models will facilitate a deeper understanding of how islet cells are impacted by ischemia and by interventions to mitigate this, ultimately enhancing clinical outcomes.
The Archontic Holmes: Understanding adaptations of Arthur Conan Doyle's Sherlock Holmes stories in the context of Jacques Derrida's "Archive"

Suzanne R. Black, University of Edinburgh

A consideration of Sir Arthur Conan Doyle's Sherlock Holmes detective stories and their subsequent adaptations reveals a complex web of interdependency, which is in keeping with Jacques Derrida's concept of the archive, and can be extended to describe the functions and relations of all texts, not just those that claim explicit inter-relations.

When seeking examples of repetition in literature, adaptations (specifically fanfiction) would seem to offer easy sites of comparison between texts demarcated as "sources" and subordinate texts created via the repetition of key features. However, rather than describing a definitive hierarchy between sources and subordinate texts, a consideration of the fanfiction surrounding one particular source reveals a complex web of interdependency, one that can be extended to describe the functions and relations of all texts, not just those that claim explicit inter-relations. Sir Arthur Conan Doyle's series of Sherlock Holmes (SH) detective stories and their subsequent adaptations (including, but not limited to, works of criticism, screen adaptations, unofficial sequels in novel form, and fanfiction) provide a wealth of data in this vein. Between 1887 and 1927, Doyle wrote four novels and fifty-six short stories featuring the detective Holmes and Dr. John Watson. For clarity, these sixty Doyle-penned texts will be referred to collectively as the SH Canon, and the sum of the SH Canon and all its adaptations as the SH Archive, for reasons that will become apparent. The SH fandom (from "fanatic domain"), as a microcosmic example of multiple intertexts, provides evidence of how sources and adaptations interact, and a model for the intertextual nature of all literary production. Jacques Derrida's concept of the archive will be used as a theoretical basis for the examination of the SH Archive, though concepts from theorists as diverse as Gilles Deleuze, Mikhail Bakhtin and Roland Barthes are required to fully explicate the complexity of adaptations. Abigail Derecho, Daria Pimenova, Christopher Marlow and Barbara Johnson's applications of Derrida's theories will be used to analyse various adaptations of SH, with examples drawn from television adaptations, traditionally published texts and online fantexts.
Fanfiction has as many definitions as the scholars who engage with it have agendas, but can be broadly described as unauthorised, amateur texts written in response to a popular media text or series of texts. This definition takes in such various texts as the Homeric myths, folktales, the Jane Austen continuation Old Friends and New Fancies, written by Sybil Brinton and published in 1914, and Fifty Shades of Grey (which was originally written as an x-rated fanfiction of Stephenie Meyer's Twilight novels). Henry Jenkins has been influential in defining transmedia fiction and participatory culture, which he explores in Textual Poachers: Television Fans & Participatory Culture (1992) and on his blog Confessions of an Aca-Fan. He describes fanfiction as "an unauthorized expansion of these media franchises into new directions which reflect the reader's desire to 'fill in the gaps' they have discovered in commercially produced material" (Confessions), and thus positions fanfiction as an intertextual and supplementary activity. In this context, televisual or literary adaptations of SH all fall under this definition of fanfiction: for example, the series of 1940s SH films starring Basil Rathbone, which pitted him against the Nazi threat; Nicholas Meyer's 1974 interpretation The Seven Per Cent Solution, which altered the roles of key characters and introduced Sigmund Freud into the mix for a more psychoanalytical take on Holmes' adventures; and the 2010 BBC television series, Sherlock, created by Stephen Moffat and Mark Gatiss, which updated the action to contemporary London.

Fanfiction is a flourishing genre that has existed online since the inception of the Internet (and previously enjoyed a healthy life in print media), 1 with the Sherlock fandom one of the most prolific. Fanfiction.net, the largest online database of fanfiction, hosts over 21,000 works relating to Sherlock. Archive of Our Own, a new online archive for fanworks (texts, art, and audio) launched in 2009, lists over 19,000 works in "Sherlock Holmes and Related Fandoms", which is its fifth most populated category and is growing rapidly. 2 This conflation by Archive of Our Own of the Canon and adaptations of the Canon into one category hints at the intertextual interdependence of the Canon and its adaptations. SH fanfiction exists in response to the SH Canon, Sherlock, Guy Ritchie's 2009 film, Sherlock Holmes, and the 2011 sequel, Sherlock Holmes: A Game of Shadows, starring Robert Downey Jr, and every possible combination of adaptations. Some authors make a feature of this intertextuality by having, for example, Watson from Sherlock meet Holmes from the 1984 Granada TV series, as in "Dream Kissing Sherlock... or Not?" by Random_Nexus. While many transformative works alter the focus to explore platonic, romantic or erotic relationships between the characters (a tendency that has generated the bulk of enquiries, scholarly and otherwise, into fanfiction as a phenomenon), 3 the structure of fanfictions and the codes inscribed into them are equally illuminating.
Analysing the SH Archive as a whole raises difficulties of approach. The Canon could be said to describe a textual boundary containing a series of related narratives by the same author. However, when it comes to discussing an adaptation it would seem remiss not to make recourse to the source of the adaptation. So, for example, while it can be argued that Doyle's The Hound of the Baskervilles forms a single, bounded text, the Sherlock episode "The Hounds of Baskerville" would invite a definition that encompasses Doyle's story. When attempting to describe the text, the artificiality of imposing boundaries must be acknowledged and, particularly in a discussion of adaptations, textual relations must be taken into consideration. A theoretical approach, drawing heavily on the deconstructionist work of Derrida, is outlined below and followed by two case studies, each examining the relationships between texts in the SH Archive.

The most appropriate theoretical paradigm for the inherent interdependency of adaptation is Derrida's concept of the archive. Abigail Derecho takes Derrida's definition of the "archontic text" and assesses its suitability to her attempt to describe fanfiction as being derivative and relying on a conscious and explicit relation to its intertexts (65). She describes an archontic text as having four defining features: open-endedness, continually shifting boundaries, a drive to expand and a hierarchical structure (64). The first three criteria are easily satisfied. When looking at the example of the SH Archive, it is self-evident that subsequent adaptations may always be added and that their addition will alter the content, and therefore the boundaries, of the archive. The inevitability of future texts referencing or reminding a reader of an entry in the SH Archive is proof of its drive to expand.

Derecho points out that "an archontic text's archive is not identical to the text but is a virtual construct surrounding the text, including it and all texts related to it" (65). As Derrida writes, "one will never be able to objectivize it with no remainder" (68). Thus, the archive does not stand alone, but is organised around the delimiting gaze of a reader who imposes boundaries on intertextual material that would otherwise form an archive of all existing texts. For the purposes of this article, any text referring to the SH Canon or any SH adaptation is considered to be part of the SH Archive.
Derecho's fourth feature, concerning hierarchy, is less easily determined, and an examination of Daria Pimenova's criticism of Derecho on this matter can help to clarify the situation. To orient an archive around the SH Canon is to grant the Canon an originary power, since the Canon can be conceived of as existing separately from its adaptations but not vice versa. Pimenova, who subscribes to such a hierarchical model, describes Derecho as presenting an inconsistent argument. Pimenova emphasises the derivative aspect of fanfiction (45) and describes it as being cumulative yet preserving the source text as origin (51). She asserts that this is in contrast to Derecho's archontic model which, she believes, destroys or replaces the original by granting equivalence to all its entries (51), since Derecho claims that "all texts that build on a previously existing text are not lesser than the source text, and they do not violate the boundaries of the source text" (64-65). But while Derecho describes an equivalence, by differentiating a "source" text from those that "build" on it, she tacitly admits their non-equivalence: one entry in the archive has been singled out as being a necessary condition for the other(s), even if only chronologically. Though Derecho makes a claim for the equivalence of source and adaptation, the language she uses exposes a reliance on hierarchy and a model similar to the one Pimenova proposes.

Derecho's apparent contradiction exposes the complex structure of the archive. In Derrida's description he does imply that interacting with and therefore adding to an archive reinforces the entries already in the archive but, in contrast to Derecho, he is referring to the archive as a whole rather than a single locatable source. For him, the structure is therefore not hierarchical but decentred, and works to disseminate its authority: "By incorporating the knowledge deployed in reference to it, the archive augments itself, engrosses itself, it gains in auctoritas" (68). This contradicts Derecho's claim that the boundary of the source text will not be violated; for Derrida, any addition to the archive recontextualises the preceding entries. Within the SH Archive, the Canon is therefore altered by each adaptation, with the result that each entry in the Archive exists in a web of complex, mutually constitutive relations with other entries. The Canon does not exist in any accessible, originary, pre-archive state; it has been transformed by and can only be accessed via its adaptations.
Critical, as well as fictional, texts become part of the archives they reference and must be acknowledged in any theoretical approach. Following this logic, it is not possible to appeal to an originary Derrida, for example, as being the source of his theories. Rather, Derrida's work is approached with respect for the supplementary interpretations provided by Derecho, Pimenova and others. This article will not appeal to Derrida as holding the authority over his own meaning, but will instead utilise the Derridean Archive, in which each entry is equivalent and modifies other entries in a complex intertextual relationship. The texts within the SH Archive seem to be explained most accurately by Pimenova's (mis)reading.

A consideration of the reception of Mark Gatiss' televisual adaptation of Doyle's novel The Hound of the Baskervilles (HOTB) provides an example of textual folding. HOTB has been frequently adapted, to the extent that many people have at least a passing knowledge of it. Hence the title of Gatiss' Sherlock episode "The Hounds of Baskerville" can be assumed to raise certain preconceived notions in the mind of the viewer. In Doyle's original story, the prevalent fog ("a dense, white fog... low but thick and well defined... looked like a great shimmering ice-field", Doyle 289) adds a sense of concealment and danger that has become so familiar to the viewer in subsequent adaptations that its presence has become expected. In Sherlock, Gatiss is able to cleverly retool it into a clue concealed in plain sight. Rather than a natural weather phenomenon concealing and revealing Doyle's large dog with phosphorous painted on its jaws, it becomes, in Gatiss' hands, a hallucinogenic gas causing the characters to envision a monstrous beast. In this way Gatiss "folds" (manipulates, recontextualises) aspects of Doyle's story, relying on the viewer's shared understanding of genre conventions to provide those familiar with HOTB with a new mystery. He plays with the fact that the viewer belongs to a different cultural background from Doyle's contemporaneous readers, one that, crucially, includes knowledge of previous SH adaptations. He also utilises the fact that linguistic norms have shifted in the intervening century by making much of the fact that "hound" would now be an anachronistic way of referring to a stray dog. Thus, hinted at by the title of the episode, there are two "hounds", the memory of Doyle's slavering beast and the new interpretation of a bio-chemical military research group with the acronym H.O.U.N.D., which the viewer has to read in parallel and constantly negotiate.
When considered together, The Hound of the Baskervilles and "The Hounds of Baskerville" reject the concept of a source and derivative adaptation. Though each can be approached individually, the meaning and experience of each is enhanced by knowledge of the other. This is most obvious in the case of the adaptation, where knowledge of Doyle's plot grants the viewer multiple interpretations of the action, but it also works in the other direction, so that after viewing Sherlock the viewers bring with them an awareness of alternatives which can recontextualise aspects of Doyle's story. As Derrida claims, each additional entry results in the archive gaining in auctoritas, and so, when consumed in tandem, the two hound texts are cumulatively enhanced. Thus, these two archive entries circumvent chronology to achieve an a-historical equivalence in which the notion of a "source" is lost. Such nonlinearity is challenged when considered from the perspective of a single reader for whom the chronology of the reading process must be taken into account and by whom each text encountered retroactively affects previously read texts and anticipatorily affects the future reading of texts.

Recourse to an individual, circumstantial chronology (i.e. the idiosyncratic path a reader can take through an archive) evokes a series of shifting power interplays and could point to an endorsement of the source/adaptation model with the source being determined chronologically. However, it highlights the arbitrary designation of an entry in the archive as source or adaptation and also exposes the limitations of both positing an idiosyncratic reader and measuring only two texts in relation to each other.

This pattern of constant reinscribing evokes Gilles Deleuze's Difference and Repetition, which Derecho also uses to understand the relationship between two texts in an archive. Deleuze writes that commentaries have "a double existence and a corresponding ideal: the pure repetition of the former text and the present text in one another" (Deleuze xx, qtd in Derecho 73, emphasis in original) and Derecho uses this to claim that reading fanfiction is akin to simultaneously reading two texts which reciprocally affect each other (73). Limiting the scope of the investigation to two texts gives an incomplete picture, as an archive comprises many of these overlapping double existences so that it invites the simultaneous reading of all its entries. Mikhail Bakhtin's heteroglossia better helps to explain the interplay of multiple texts, voices and registers at inter- and intra-textual levels. Bakhtin describes the situation as a dialogism created by heteroglossia in which "everything means, is understood, as part of a greater whole - there is constant interaction between meanings, all of which have the potential of conditioning others. Which will affect the other, how it will do so and in what degree is what is actually settled at the moment of the utterance" (426). The context of a single entry in an archive is formed by multiple texts and discourses, and the reader's understanding of that entry will rely upon the entry's interaction with its context and the reader's familiarity with the context. This can be seen clearly in "Second Verse, Same as the First" by Tyleet, a complex work of fanfiction that takes as its sources the Canon, Ritchie's films and Sherlock. As with many works of fanfiction, the plot concerns two characters realising the nature of their feelings for each other, in this case Holmes and Watson. It contains two alternating narratives.
One is set in the Victorian era and is typical of fanfiction which aims to respond to the Canon, though the author prompts the reader that it can also be read as responding to Ritchie's period-set films: "The canon stuff has a distinctly 2009 flavor, but it could be read either way, I think". It uses the conventional appellations of "Holmes" and "Watson", follows Victorian social codes, and attempts to echo Doyle's writing style. The second responds to the present-day setting of Sherlock and follows more closely the dialogue style and conventions established by the television programme. It refers to the characters as "Sherlock" and "John" and accordingly assumes the appearances of the actors who play them (Benedict Cumberbatch and Martin Freeman). The reader is free to imagine the Victorian incarnations of Holmes and Watson as resembling the BBC actors, the Ritchie cast or any other incarnation. There are also direct quotations from Doyle. The text interacts not only with Doyle and adaptations of Doyle, but also with the entire bodies of both Victorian-set Holmes fanfiction and Sherlock fanfiction. The subject of the story is treated, in each strand, according to the style of the source text, but its suitability as subject matter belongs to the realm of contemporary fanfiction, since Holmes and Watson realising their romantic feelings for each other is not a feature of any of the canonical SH texts. Indeed, in the contemporary strand, the action is given meaning and impetus by Tyleet relying upon the reader's knowledge of the popular Sherlock fanfiction conceit that Sherlock identifies as asexual. In this way, "Second Verse..." dramatises the existence of multiple voices - heteroglossia - and the way in which the reader must be familiar with them to fully detect all the potential meanings of the text.

This intertextuality is not necessarily specific to texts that define themselves explicitly as fanfiction or adaptations, since any text of criticism or commentary enters into the archive of the text it is addressing. Roland Barthes sums up the inherent quotational and non-originary nature of all texts in "The Death of the Author" when he describes the text as "a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture" (1468). If, then, all texts relate to all other texts, there is potentially only one archive and it contains every instance of writing. To avoid such unhelpful totalising, divisions must be (knowingly) imposed, such as placing an artificial boundary around what has been called the SH Archive, even though it is acknowledged that the Canon has no special originary power. The tension between the need to delimit a text for practical reasons and the realisation that any limit is artificially imposed gives rise to many of the features that shape fanfiction and extend to all texts.

Louisa Stein and Kristina Busse discuss this problem when they describe the tension between limits and freedom as being essential to the creation and understanding of fanfiction, defined by them as "pleasure play within limits" (195). They describe a model of source text and fantext (as the accumulation of fanworks related to a specific source) in which limits are imposed by the framework of the source text (195), the fantext (198) and the discourses that police the fantext, for example, regarding genre, length, subject matter and explicit content (200). Against this is the freedom that arises from the motivating impulse of adaptation: to deviate from the source.
They explain how each new adaptation can achieve canonical status within an interpretive community (a tactic used to avoid the problems associated with addressing the idiosyncrasy of individual readers) and beget further adaptations in a linear chain that constantly reinforces the indeterminability of a source text (198). However, they do acknowledge the inaccuracy of such a linear progression composed of cumulative binary relationships, but stop short of endorsing the more accurate model of a branching network: "By definition, fan fiction is in intertextual communication with the source text; however, in practice it also engages with a host of other texts, be they clearly stated requests, shared interpretive characterizations, or even particular instantiations of the universes that the fan writer chooses to expand upon" (199-200).

Stein and Busse's insistence on retaining the integrity of a source text, while acknowledging that no one or two texts can be divorced from the intertext or the dialogism the reader participates in, demonstrates the oppositional forces at work in defining the boundaries of a text. Derrida addresses the imposition of a limiting interpretive "frame" on a text, and cites it as a necessary condition of interpretation (and hence adaptation). Following Derrida's archontic approach, Barbara Johnson's adaptation of Derrida's reading of Jacques Lacan's reading of Edgar Allan Poe's "The Purloined Letter" replicates Derrida's argument for the necessity of framing a text while also demonstrating the effects of the process of successive interpretations. Johnson explains how Derrida deliberately offers a misreading of Lacan to demonstrate how frames limit meaning but how without the imposition of an interpretive frame the text is unbounded, indistinct, has no definable inside and outside, and is subject to textual drifting since "no totalization of the border is even possible" (235). Derrida, she writes, refers to this as "the 'parergonal' logic of the 'frame'", i.e. a supplement to the work or "ergon" (226). Johnson points to the paradoxical nature of the situation in which, for interpretation to exist, a knowingly fictitious frame must be imposed: "The total inclusion of the 'frame' is both mandatory and impossible. The 'frame' thus becomes not the borderline between the inside and the outside, but precisely what subverts the applicability of the inside/outside polarity to the act of interpretation" (235). Derrida insists that texts resist such (necessary) framing, resulting in textual drifting, in which any attempt to locate the text and its meaning results in both being deferred (Storey 98).

With text and meaning as continually evasive, the focus of attention becomes the processes and agendas at work in each attempt to locate or fix a text and its meaning: the "gaps" that Jenkins claims fanfiction is trying to fill. The archontic text's drive to proliferate and deviate, which is conducive to a rejection of totalising, singular definitions, renders adaptations conducive to discussions of boundaries, normativity and the politics at play in their creation. While the theoretical paradigm outlined by the Derridean Archive is demonstrably applicable to works of adaptation, and fanfiction in particular, as texts that announce their literary forebears, it can be extended to all texts.
As remarked above, Deleuze proclaims that a commentary (and hence adaptation) replicates its source to create a kind of "double text". Echoed in Barthes' order in "Theory of the Text" to "Let the commentary be itself a text" (44, original emphasis), this creates an equivalence between all forms of writing as intertextual, whether their relationships are self-professed, exist in the mind of the reader or remain implicit.

Notes
1 Francesca Coppa provides a useful history of fanfiction in "A Brief History of Media Fandom".
2 Data collected on 24 Sep. 2012.
3 For an investigation into fanfiction's focus on relationships, see Elizabeth Woledge's "Intimatopia: Genre Intersections Between Slash and the Mainstream."
4 Highlighting the interrelations between the production of non-canonical SH and DW texts, Anderson was a member of the Sherlock Holmes fan society Baker Street Irregulars, and both Moffat and DW writer Mark Gatiss are responsible for Sherlock.
Long-Term Fertilization Strategy Impacts Rhizoctonia solani–Microbe Interactions in Soil and Rhizosphere and Defense Responses in Lettuce

Abstract
The long-term effects of agricultural management such as different fertilization strategies on soil microbiota and soil suppressiveness against plant pathogens are crucial. Therefore, the suppressiveness of soils differing in fertilization history was assessed using two Rhizoctonia solani isolates and their respective host plants (lettuce, sugar beet) in pot experiments. Further, the effects of fertilization history and the pathogen R. solani AG1-IB on the bulk soil, root-associated soil and rhizosphere microbiota of lettuce were analyzed based on amplicon sequencing of the 16S rRNA gene and ITS2 region. Organic fertilization history supported the spread of the soil-borne pathogens compared to long-term mineral fertilization. The fertilization strategy affected bacterial and fungal community composition in the root-associated soil and rhizosphere, respectively, but only the fungal community shifted in response to the inoculated pathogen. The potential plant-beneficial genus Talaromyces was enriched in the rhizosphere by organic fertilization and presence of the pathogen. Moreover, increased expression levels of defense-related genes in shoots of lettuce were observed in the soil with organic fertilization history, both in the absence and presence of the pathogen. This may reflect the enrichment of potential plant-beneficial microorganisms in the rhizosphere, but also pathogen infestation. However, enhanced defense responses resulted in retarded plant growth in the presence of R. solani (plant growth/defense tradeoff).

Introduction
Preservation of natural environments, including soil quality and fertility, is one of the major global challenges. Crop production, as the main source of our food, depends on soil health. However, high input of synthetic agrochemicals in the long term exhibits negative effects on soil functioning and quality by changing physico-chemical as well as biological soil properties [1-3]. Therefore, agricultural/horticultural plant production systems should be regarded as vital living and biologically active ecosystems. The occurrence of plant pathogens, insects and weeds is responsible for around 25% of yield losses in economically relevant crops [4] and is the major reason for the increasing use of agrochemicals. The introduction of pesticides in plant production systems breaks the link between organic amendments and soil fertility, resulting in a decrease in soil organic matter over time [5]. This affects not only physico-chemical and biological soil properties, but is also relevant for overall soil health. Plant pathogens are an integral part of soil microbial communities, and a decline in soil health was shown to be accompanied by the accumulation of soil-borne pathogens in agroecosystems [6]. The maintenance of soil health, for instance through balanced crop rotation, reduced tillage practices or application of organic fertilizers, is considered to be important for disease control [7,8]. Therefore, such environmentally friendly strategies in plant disease control should be further investigated to gain more relevance in agricultural practice as a sustainable alternative. The ability of soils to suppress plant pathogens can be regarded as a manifestation of ecosystem stability and health, which is mediated to a large extent by soil microorganisms (general suppressiveness).
Soil microbiota may control soil-borne pathogens through competition, antibiosis, parasitism or the improvement of plant immune responses [9]. Mechanisms by which soils inhibit the activity of plant pathogens are described for specific suppressiveness, which is only effective against one or a few pathogens [10,11]. Studies focusing on specific soil suppressiveness demonstrated that soil microbial communities respond to pathogen biomass accumulation, as found, e.g., for the "take-all" causing agent Gaeumannomyces graminis in wheat monoculture [12]. Hence, modification of the soil microbiota may contribute to plant protection through competitive effects or the enrichment of antagonists.

The understanding of the functions and interactions among soil microorganisms in agroecosystems is still limited. Several studies have highlighted how soil microbial communities are influenced by farming practices [13-15]. Many efforts have been made in understanding the essential relationships between soil and plant microbiota for soil functioning and plant performance [13,16-21]. This includes the beneficial effects of organic amendments on microbial diversity in the soil [13,22], linked with suppressive effects against soil-borne pathogens [9,23-27]. However, research on soil suppressiveness has not yet achieved solutions to manage soil-borne pathogens [28,29]. The knowledge of plant responses towards rhizosphere microbiota assemblages shaped by agricultural management strategies is limited, and only a few studies have addressed the long-term effects of mineral and organic fertilization on the soil microbiota and their suppressiveness against soil-borne pathogens [30-33]. A recent study with contrasting soil types from two long-term field experiments (LTEs), each with a long-term organic and mineral fertilization history, showed that soil legacies induced by fertilization strategies shaped the bacterial and archaeal communities in soil, as well as in the rhizosphere of the model plant, lettuce, independent of the soil origin [34]. The results highlighted that several genes involved in plant defense signaling were upregulated in lettuce when grown in soils under long-term organic compared to mineral fertilization, which indicated an induced plant physiological status. These so-called defense-priming beneficial microorganisms [35,36], such as members of Bacillales and other taxa, were enriched in the rhizosphere of lettuce grown in organic-fertilized soils [34].

In the present study, two model plant–pathogen systems, lettuce–Rhizoctonia solani AG1-IB and sugar beet–Rhizoctonia solani AG2-2IIIb (teleomorph Thanatephorus cucumeris), were used to investigate soil suppressiveness depending on the fertilization strategy, based on the spread of the pathogens in soil. For the analysis of plant responses in the presence of R. solani AG1-IB, a pot experiment with lettuce was performed. In addition, this work aimed to answer the question of whether the presence of the inoculated model pathogen R. solani AG1-IB in the soil alters the soil microbiota and consequently the assembly of the rhizosphere microbial communities and health of the host plant lettuce, depending on the long-term fertilization strategy.
It was hypothesized that both the previously observed defense priming/induced systemic resistance (ISR) by beneficial microorganisms and the suppression of plant pathogens by soil microbiota modulation in the rhizosphere of lettuce grown in organic-fertilized soil can contribute to the disease control of Rhizoctonia, as compared to plants grown in mineral-fertilized soil.

Field Site and Soil Sampling Strategy
The long-term field experiment of Humboldt University, Berlin (designated as HUB-LTE), located in Thyrow (Germany; 52°16′ N, 13°12′ E), was established in 2006. The soil was classified as Albic Luvisol [37]. This LTE provides access to soils with long-term organic (HU-org) and mineral (HU-min) fertilization practices. Soils were collected after the growing seasons in 2015, 2016 and 2017. In each year, 15 soil cores were randomly taken from the upper 30 cm soil horizons across the respective fertilization treatments and combined into a composite sample. Afterwards, soil samples were air-dried, sieved (4 mm mesh) and stored in the dark at 6°C until use in growth chamber experiments. Soil characteristics, management practices and physiological parameters of the used soils are summarized in Windisch et al. [38].

Pathogens Used
The soil-borne pathogen R. solani AG1-IB (isolate 7/3/14, accession number AJ868459) causes bottom rot of lettuce (Lactuca sativa L.), and R. solani AG2-2IIIb (isolate BBA69670, accession numbers CYGV01000001-CYGV01002065) causes damping-off disease of sugar beet (Beta vulgaris L.). Both isolates were used in growth chamber bioassays for the assessment of soil suppressiveness in three consecutive years (2015, 2016, 2017). The impact of R. solani on plant health and rhizosphere microbiota was studied in the lettuce–R. solani AG1-IB plant–pathogen system with soils from 2017, since lettuce is a well-established model plant for plant–microbial interaction studies [34]. The inocula of the R. solani isolates were prepared with barley kernels, which were sterilized before pathogen inoculation by autoclaving (121°C for 30 min) three times with 24 h intervals, as described by Schneider et al. [39].

Assessment of Soil Suppressiveness
Disease spread of R. solani AG1-IB and R. solani AG2-2IIIb was determined after pathogen inoculation by scoring brown lesions or damping-off symptoms on the stems of lettuce and sugar beet seedlings at soil level, using a similar method as described by Postma et al. [9]. The experiments were performed in a growth chamber (York, Mannheim, Germany; 20°C/15°C, 420 µmol m−2 s−1 photosynthetic active radiation, 60%/80% relative humidity, 16 h/8 h day/night). Pots (20 × 9.5 × 6 cm) were used with florist's foam blocks at the bottom (Baumann Creative, Westhausen, Germany; water holding capacity approx. 55%). In each pot, 600 mL soil was filled on top of a water-saturated foam block. Lettuce (cv. Tizian, Syngenta, Bad Salzuflen, Germany) or sugar beet (cv. Lisanna, KWS Saat SE & Co. KGaA, Einbeck, Germany) were seeded in eight lines at 2 cm distance (0.5 cm and 2 cm deep, respectively; nine seeds per line). After germination, one infested barley kernel with the respective R. solani isolate was placed slightly beneath the soil surface in front of each lettuce or sugar beet seedling row. Disease spread was assessed weekly by counting the respective host plants exhibiting symptoms per row. Each treatment included three replicates arranged in a randomized design.
This experiment was conducted three times with soils collected in three consecutive years (2015, 2016, 2017).

Growth Chamber Experiments to Study Lettuce Health and Rhizosphere Microbiota
To study the effect of the bottom rot pathogen R. solani AG1-IB on lettuce growth, health and rhizosphere microbiota, soils from the growing season 2017 were applied. Lettuce (cv. Tizian) was grown in the absence (HU-org, HU-min) and presence of the pathogen R. solani AG1-IB (HU-org + Rs, HU-min + Rs). The soils were initially incubated at the intended cultivation conditions for lettuce (20°C/15°C, 60%/80% relative humidity day/night) in the dark for 2 weeks. During the experiment, the water potential was regulated to 100 hPa (T5 tensiometer, UMS AG, Munich, Germany). Single lettuce plants were sown into pots (10 × 10 × 11 cm) filled with the respective soils, exposed to a temperature of 18°C and 80% relative humidity for 2 days and afterwards further cultivated in a growth chamber at the above-mentioned conditions. To ensure the availability of comparable amounts of nitrogen (N) in each treatment, the N content of soils was analyzed in the beginning and adjusted to the recommendations for lettuce (0.32 g N per pot) using calcium nitrate in two portions (each 50%), before sowing and 3 weeks later. Soils were inoculated with the pathogen shortly before lettuce sowing. For the presence of R. solani AG1-IB, each pot was inoculated with 12 infested barley kernels, which were placed at distances of 2 cm from the seed and at 3 cm depth. In control pots, the same number of autoclaved non-infested barley kernels was used. In addition, each treatment included one pot per replicate without lettuce to assess the impact of the pathogen on microbial communities in bulk soil. Four plants for each of the four replicates per treatment were arranged in a randomized block design. After 10 weeks of cultivation, plants were harvested, shoot and root dry masses were determined, and samples for microbial analyses (soil and rhizosphere) and plant gene expression (leaves) were collected.

Analysis of Plant Gene Expression
The expression level of genes was analyzed for lettuce after a cultivation time of 10 weeks. A total of 18 target genes was selected from the Lactuca sativa L. cv. Tizian draft genome at NCBI [40] based on the comparison with functional genes of Arabidopsis thaliana using the "Arabidopsis Information Resource" (www.arabidopsis.org, accessed on 12 April 2019, [41]). All primer pairs and target genes used in this study were previously described [34] and are listed in Supplementary Table S1. The glyceraldehyde-3-phosphate dehydrogenase gene served as an endogenous control for qPCR normalization. Four leaves from two plants per replicate were snap-frozen in liquid nitrogen. The RNeasy Plant Mini Kit (QIAGEN GmbH, Hilden, Germany) was used to extract total RNA from 100 mg pulverized lettuce leaves. After RNA quantification by a NanoDrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), cDNA was synthesized from 2 µg of total RNA with the High-Capacity cDNA Reverse Transcription Kit with RNase Inhibitor (Applied Biosystems, Foster City, CA, USA). The subsequent qPCR was performed in three technical replicates using the same conditions as described previously [34]. Specific PCR products were confirmed by melting curve analysis and gel electrophoresis before relative quantification applying the 2^−ΔΔCt method [42]. Data were first normalized to the endogenous control and then logarithmically transformed to fold change differences. The standard error of the mean was calculated from the average of the technical triplicates. PERMANOVA (10,000 permutations) analysis was performed based on Bray-Curtis dissimilarities calculated from ΔCt values in R (version 3.6.1) using the package vegan [43] and subjected to Principal Coordinates Analysis (PCoA).
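To make the relative quantification concrete, the following minimal sketch in R (the language used for the downstream statistics) computes a 2^−ΔΔCt fold change; the function name and all Ct values are purely illustrative, not the authors' code:

    # 2^-ddCt relative expression (Livak & Schmittgen); Ct values are invented.
    relative_expression <- function(ct_target_trt, ct_ref_trt,
                                    ct_target_ctl, ct_ref_ctl) {
      d_ct_trt <- ct_target_trt - ct_ref_trt  # normalize to endogenous control (GAPDH)
      d_ct_ctl <- ct_target_ctl - ct_ref_ctl
      2^(-(d_ct_trt - d_ct_ctl))              # fold change vs. the control treatment
    }
    fc <- relative_expression(24.1, 18.9, 26.3, 19.0)
    log2(fc)                                  # log-transformed fold change difference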
Collection of Bulk Soils, Root-Associated Soils and Rhizosphere Samples and Total Community DNA Extraction
The complete root systems of two plants per replicate were combined and intensely shaken in order to obtain the loosely adhering soil (here defined as root-associated soil). The roots were then washed briefly in sterile tap water and the remaining adhering soil was defined as rhizosphere. Subsequently, 5 g of roots were transferred to Stomacher bags with saline (1:10) and treated by a Stomacher 400 Circulator (Seward Ltd., Worthing, UK), followed by centrifugation according to Schreiter et al. [44] in order to recover rhizosphere microbial cells. Aliquots of the habitats' bulk soil, root-associated soil and rhizosphere pellets were stored at −20°C until total microbial community (TC) DNA extraction. Subsequently, TC-DNA was extracted from bulk soil, root-associated soil (0.5 g fresh weight) and total rhizosphere pellets using the FastPrep-24 bead-beating system and FastDNA Spin Kit for Soil and subsequently purified with the GeneClean Spin Kit (both MP Biomedicals, Santa Ana, CA, USA). TC-DNA quality was checked by 0.8% agarose gel electrophoresis.

Microbial Community Analyses
Bacterial and archaeal community analysis focused on rhizosphere and bulk soil samples, similar to Babin et al. [45]. Briefly, the V3-V4 region of the 16S rRNA gene was amplified using the primer pair 341F and 806R [46,47], modified after [48] (Supplementary Table S2). In a second PCR, Illumina-specific sequencing adapters and sample identifiers were added, followed by amplicon purification and equimolar pooling, as previously described [45]. High-throughput amplicon sequencing of 16S rRNA genes was performed on an Illumina® MiSeq® platform (Illumina, San Diego, CA, USA) with the MiSeq v2 kit (2 × 250 bp) in paired-end mode, according to the manufacturer's instructions. Unassembled raw amplicon data are available at the NCBI Sequence Read Archive (SRA, https://www.ncbi.nlm.nih.gov/sra, accessed on 16 August 2022) under accession number PRJNA725140. After de-multiplexing and trimming with cutadapt [49], the UPARSE pipeline [50] was applied for sequence merging, dereplication, removal of singletons and clustering of sequences into operational taxonomic units (OTUs, ≥97% sequence similarity). Representative OTUs were classified with the classify.seqs command (80% confidence) from mothur [51] using the RDP classifier [52] training set, version no. 18. Sequences unclassified at the domain level or of non-bacterial origin were discarded, as well as all 16S-OTUs with <10 reads over the whole data set, resulting in a total of 5957 final OTUs and, on average, 49,442 quality-filtered sequences per sample. Since archaeal reads were found only among the rare OTUs (<10 reads), they were not further considered; thus, we will refer to the "bacterial community" in the following. The PCR for fungal community analysis based on the internal transcribed spacer (ITS2) region was conducted for all three habitats (bulk and root-associated soils, rhizosphere) according to Sommermann et al. [14].
In brief, amplification was conducted in three independent PCRs per sample at different annealing temperatures (54°C, 56°C, 58°C), using the primer pair ITS86F/ITS4 [53,54] (Supplementary Tables S2 and S3). For each PCR, 10 ng template DNA and bovine serum albumin (BSA; final concentration 0.5 mg mL−1) were added. Independent PCRs per sample were pooled and purified using the MinElute PCR Purification Kit (QIAGEN, Hilden, Germany) with a final elution step in 12 µL 10 mM Tris-HCl, pH 8.5. Subsequently, the concentration of each sample was determined by a Qubit® fluorometer (Invitrogen, Carlsbad, CA, USA), followed by pooling of the amplicons to equimolar amounts. High-throughput sequencing of the ITS2 pool and the following taxonomic classification were processed as previously described [45] on the Illumina® MiSeq® platform using the MiSeq v3 kit (2 × 300 bp) in paired-end mode. Unassembled raw sequences were submitted to the European Nucleotide Archive (ENA) under the following BioProject accession number: PRJEB53229. Barcode, primer and adapter trimming was performed including the FASTX toolkit [55], and reads were then merged using FLASH v.1.2.10 [56] with a minimum overlap of 10 bp. A database-dependent strategy according to Antweiler et al. [57] with a local GALAXY Bioinformatics Platform in combination with the fungal UNITE database v8.0 [58,59] was applied to the sequences using a closed reference approach. In brief, all sample sequences were aligned with the database (e-value ≤ 0.001) and only results with minimum alignment length ≥200 bp and similarity ≥97% were kept. In summary, 2,940,507 out of 4,252,478 sequences (69.5% ± 11.5% of all 48 samples) remained. The SH numbers of the UNITE database were used as identifiers for the ITS-OTU abundance table generated by counting the sequences per taxonomic assignment. Finally, a total of 1227 OTUs was obtained, with an average of 61,260 reads per sample.

A qPCR approach according to Wallon et al. [60] was conducted to quantify the abundance of the inoculated pathogen R. solani AG1-IB in soils of the growth chamber experiment. Amplification was performed in a final volume of 20 µL containing PowerUp™ SYBR™ Green Master Mix (Applied Biosystems, Vilnius, Lithuania), 0.5 µM of each primer (AG1-IB-F3, AG1-IB-R [60]; Supplementary Table S2), 0.5 mg mL−1 BSA and 10 ng template DNA. The qPCR of four biological replicates was performed with the QuantStudio 5 qPCR System (Applied Biosystems, Darmstadt, Germany), each in four technical replicates. The thermal program consisted of initial heating (50°C for 2 min) and denaturation (95°C for 10 min) followed by 40 cycles of 95°C for 5 s and 62°C for 20 s. For quantification, a standard curve of serially diluted R. solani AG1-IB DNA was generated under the same conditions in five technical replicates (R² = 0.999, efficiency = 94.1%).
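As a hedged illustration of how such a standard curve turns Ct values into template amounts (the qPCR software presumably did this internally; the dilution series below mirrors the reported levels, but the Ct values are invented):

    # Standard curve: Ct is linear in log10(template); the slope gives the efficiency.
    std <- data.frame(ng = c(10, 1, 0.1, 0.01, 0.001, 1e-4),   # usable dilution levels
                      ct = c(14.2, 17.6, 21.0, 24.5, 27.9, 31.4))
    fit <- lm(ct ~ log10(ng), data = std)
    slope <- coef(fit)[["log10(ng)"]]
    10^(-1 / slope) - 1                        # amplification efficiency (~0.94-0.95 here)
    quantify <- function(ct) 10^((ct - coef(fit)[[1]]) / slope)
    quantify(25.3)                             # ng of R. solani AG1-IB DNA in an unknown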
Statistical Analysis
A linear mixed model was used to predict the effects of fertilization strategy and R. solani AG1-IB inoculation on the shoot and root dry masses of lettuce. The model included replicates as a random effect. Tukey's HSD tests were performed post hoc, and heteroscedasticity was accounted for by using group variances. The spread v_d (cm day−1) of the R. solani pathogens was analyzed using the procedure ROBUSTREG in SAS 9.4 (SAS Institute Inc. 2019). The estimated slope a and its 95% confidence limits (CLs) of the model r × d = a × t + b, where r represents the last row of plants showing symptoms, d is the distance between rows (cm) and t is the number of days since inoculation, were used as estimates for v_d and its CLs, respectively. Comparison of treatments was performed by observation of overlapping or non-overlapping CLs.

Multivariate analyses of microbial communities were carried out in R [61] using the following packages: vegan [43], pheatmap [62], car [63], rcompanion [64], agricolae [65], plyr [66], edgeR [67,68], phyloseq [69], ggplot2 [70], RColorBrewer [71], MASS [72] and mvabund [73]. Alpha diversity indices (species richness, Shannon index) were determined based on subsampling to the lowest number of reads (17,505 for the 16S rRNA gene or 31,967 for the ITS2 data set, respectively). The effects of the fertilization strategy and the presence of R. solani AG1-IB on the fungal communities were tested by PERMANOVA (10,000 permutations) based on a Bray-Curtis dissimilarity matrix of count data. For 16S rRNA gene count data, a generalized linear model under negative binomial distribution was used, followed by analysis of deviance to test for the effect of the factors "fertilization" and "R. solani presence" (likelihood ratio test, 999 bootstrap iterations). Non-metric multidimensional scaling (NMDS) analyses were conducted based on Bray-Curtis dissimilarities calculated from count data for both bacterial and fungal data sets. The mean relative abundance of the 30 most abundant microbial genera in each treatment was visualized by heatmaps (Euclidean distance clustering). Fertilization-dependent (HU-org vs. HU-min) or inoculation-dependent (absence vs. presence of R. solani) relative abundances of bacterial and fungal genera were analyzed by likelihood ratio tests under negative binomial distribution and generalized linear models (edgeR) separately per habitat, considering interaction effects between factors. The effects of fertilization and pathogen inoculation on microbial alpha diversity indices, relative abundances of phyla and qPCR abundance of the inoculated R. solani AG1-IB were tested separately for each habitat by using two-way analysis of variance (ANOVA) followed by post hoc Tukey's HSD test (p < 0.05). Data transformation by Tukey's Ladder of Powers was carried out if ANOVA assumptions failed. The qPCR abundance data of R. solani per gram bulk soil had to be transformed into ranks to obtain valid results for normal distribution and variance homogeneity.

Long-Term Mineral Fertilization Reduced the Spread of Rhizoctonia solani
Soil suppressiveness was analyzed by measuring the rate of disease spread of R. solani AG1-IB on lettuce seedlings and of R. solani AG2-2IIIb on sugar beet seedlings in six independent experiments with soil samples from the growing seasons in 2015, 2016 and 2017. By assessing the effect of the long-term fertilization, a significantly lower spread of both R. solani pathogens was observed in mineral- compared to organic-fertilized soils in each sampling year, except for AG2-2IIIb in the soil sampled in 2017 (Table 1). In addition, no significant differences in hyphal spread were determined depending on the sampling year (2015: 0.48 ± 0.09 cm day−1, 2016: 0.40 ± 0.16 cm day−1, 2017: 0.49 ± 0.09 cm day−1), without considering isolates or long-term fertilization history. However, averaged over all three years, a significantly faster spread was revealed for the pathogen R. solani AG2-2IIIb (0.59 ± 0.19 cm day−1) compared to R. solani AG1-IB (0.32 ± 0.09 cm day−1).

Table 1. Disease spread of Rhizoctonia solani AG1-IB in lettuce (cv. Tizian) and of R. solani AG2-2IIIb in sugar beet (cv. Lisanna) 12 days after inoculation of the respective pathogen in soils sampled from HUB-LTE in 2015, 2016 and 2017. Small letters indicate significant differences between long-term mineral (HU-min) and organic (HU-org) fertilization using the procedure ROBUSTREG (p < 0.05). [Columns: Treatment; Spread. Table body not preserved in this extraction.]
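The spread estimates can be retraced along the following lines; the paper used SAS ROBUSTREG, and MASS::rlm is offered here only as a comparable robust M-estimator in R, with invented scoring data:

    library(MASS)
    # last symptomatic row per scoring date; rows are 2 cm apart (invented values)
    obs <- data.frame(days = c(3, 6, 9, 12), last_row = c(1, 2, 3, 5))
    obs$dist_cm <- obs$last_row * 2            # r x d from the model r.d = a.t + b
    fit <- rlm(dist_cm ~ days, data = obs)
    a  <- coef(fit)[["days"]]                  # spread v_d in cm per day
    se <- summary(fit)$coefficients["days", "Std. Error"]
    c(lower = a - 1.96 * se, v_d = a, upper = a + 1.96 * se)  # approximate 95% CLs

Treatments whose confidence limits do not overlap would then be scored as significantly different, as in Table 1.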
Fertilization Strategy and Presence of R. solani AG1-IB Limited Lettuce Growth
Lettuce plants were cultivated for 10 weeks in a growth chamber in soils (from 2017) under a long-term mineral or organic fertilization strategy. Significant effects of the factors long-term fertilization (p < 0.001) and R. solani AG1-IB inoculation (p < 0.05) on lettuce growth were revealed based on the linear mixed model. Significantly lower shoot (23%) and root (40%) dry masses of lettuce were detected in soil under long-term organic fertilization compared to the mineral-fertilized soil (HU-min vs. HU-org; Figure 1a,b). The pathogen R. solani AG1-IB reduced the shoot growth of lettuce (16% in HU-min + Rs vs. HU-min; 20% in HU-org + Rs vs. HU-org) significantly, independently of fertilization strategy. The root dry mass was also significantly reduced (32%) by the pathogen in mineral-fertilized soil but not in the organic-fertilized soil (Figure 1b). At the end of the cultivation period, the shoot nutritional status was analyzed and moderate deficiencies in nutrient concentrations such as N, P, K and S were identified in all treatments (Supplementary Table S4). Only for K, significant differences related to the fertilization strategy were observed, showing higher values in the mineral treatment (HU-min, HU-min + Rs). Other macro- and micronutrients in the shoot tissues, such as Ca, Mg, Mn and Zn, reached the sufficiency range in all treatments.

Figure 1. [Shoot and root dry masses of] lettuce (cv. Tizian): (a) plants grown in soils with long-term organic (HU-org) and mineral (HU-min) fertilization; (b) plants grown in the presence of Rhizoctonia solani AG1-IB in organic-fertilized soil (HU-org + Rs) compared to plants grown in mineral-fertilized soil (HU-min + Rs). Different letters indicate significant differences according to Tukey's HSD test (p < 0.05). [Figure not preserved; caption partially reconstructed from a truncated extraction.]
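A sketch of the growth analysis described in the statistics section (linear mixed model with replicate as a random effect, Tukey-adjusted pairwise comparisons); lme4 and emmeans are assumed implementations, since the paper does not name the packages used for this step, and the data frame is hypothetical:

    library(lme4)
    library(emmeans)
    # 'growth': columns shoot_dw, fert (HU-org/HU-min), rs (control/+Rs), replicate
    m <- lmer(shoot_dw ~ fert * rs + (1 | replicate), data = growth)
    emmeans(m, pairwise ~ fert * rs, adjust = "tukey")  # Tukey HSD-style contrasts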
R. solani AG1-IB and Fertilization Strategy Influenced Gene Expression Profiles of Lettuce
To determine whether gene expression levels in lettuce were influenced by the different long-term fertilization strategies and the presence of the pathogen R. solani AG1-IB, qPCR for 18 different plant genes was performed (Supplementary Figure S1). PERMANOVA analyses confirmed that the fertilization strategy (HU-org/HU-min) had a moderate influence (explained variance 11.6%), while no significant effects of the pathogen on gene expression levels were found (Table 2). The interaction of the factors fertilization strategy and pathogen presence had the highest influence (explained variance 43.7%) on gene expression patterns. When calculated separately for each fertilization strategy, the presence of R. solani AG1-IB explained 53.5% of variance in gene expression in lettuce when grown in mineral-fertilized soil and 60.5% when grown in organic-fertilized soil (both p < 0.05).

In the treatments without R. solani AG1-IB (HU-org vs. HU-min), the expression levels of PR1, PDF1.2, MYB10 and GST6 genes in lettuce shoots significantly increased when grown in organic-fertilized soil compared to plants grown in mineral-fertilized soil (Figure 2a). For all other analyzed genes, a lower level of expression in plants from organic- compared to mineral-fertilized soils was observed. However, the differences were not statistically significant. In the presence of the pathogen R. solani AG1-IB, the plants from soils with long-term organic fertilization showed significantly increased expression of several genes involved in abiotic and biotic stress signaling (PR1, LOX1, MYC2, ERF104, ERF6, GST6, HSP70, BGlu42, OPT3, RbohF and MYB15) in comparison to lettuce grown in soil with mineral fertilization (HU-org + Rs vs. HU-min + Rs; Figure 2b).

Fertilization Strategy but not R. solani AG1-IB Shifted Bacterial Community Composition
Bacterial communities in the bulk soil and rhizosphere differed strongly (Figure 3). Fertilization strategy was the main driver of the bacterial community composition in bulk soil (deviance = 25,004 ***) and in the lettuce rhizosphere (deviance = 14,432 **), resulting in discrete clusters of samples with organic or mineral fertilization history in NMDS analysis (Figure 3). No significant effect of the pathogen R. solani AG1-IB on the total bacterial community was detected in both habitats (analysis of deviance; Figure 3).

Bacterial alpha diversity (species richness, Shannon index) was neither affected by long-term fertilization nor by pathogen inoculation (Supplementary Table S5). Only a few significant differences in the relative abundances of bacterial phyla due to fertilization (only in bulk soil) or pathogen presence (only in rhizosphere) were observed (Supplementary Table S6). In bulk soil, the candidate phylum Saccharibacteria exhibited a higher relative abundance (1-3%) in mineral- than in organic-fertilized soils. In the rhizosphere of lettuce, a significant enrichment of Gammaproteobacteria (5-9%) was observed in organic-fertilized soil in the presence of R. solani AG1-IB (HU-org + Rs; Supplementary Table S6).
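The deviance values cited above come from a multivariate negative binomial GLM; a minimal mvabund sketch consistent with the methods (likelihood ratio test, 999 bootstrap iterations), with illustrative object names:

    library(mvabund)
    abund <- mvabund(otu_table_16s)            # samples x OTUs count matrix (hypothetical)
    fit <- manyglm(abund ~ fert * rs, data = meta, family = "negative.binomial")
    anova(fit, nBoot = 999, test = "LR")       # analysis of deviance per factor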
When looking at the 30 most dominant bacterial genera (Figure 4), it became apparent that different taxa predominated in the bulk soil (e.g., Virgibacillus, Pseudarthrobacter, acidobacterial groups Gp1, Gp3, Gp6) and rhizosphere (e.g., Clostridium, Agrobacterium, Rhizobium, Asticcacaulis, Devosia). In bulk soil, fertilization-dependent differences in relative abundance were revealed among bacterial taxa. For instance, acidobacteria Gp1 (Figure 4), Tumebacillus and sequences with the closest affiliation to Ktedonobacterales were significantly higher in soils under mineral fertilization, independent of pathogen presence (Table 3). Bulk soils with organic fertilization history (HU-org, HU-org + Rs) exhibited a higher relative abundance of acidobacteria Gp4 than mineral-fertilized soils (HU-min and HU-min + Rs). However, pathogen presence-dependent responders to fertilization were also identified in bulk soil (Table 3A,B). In response to R. solani AG1-IB, a significantly lower relative abundance of Actinoallomurus (0.1 ± 0%) or Fontibacillus (0 ± 0%) was observed in bulk soils with mineral or organic fertilization history, respectively, when compared to the controls without pathogen inoculation (0.7 ± 1.3%; 0.5 ± 0.7%, respectively). Moreover, only a few minor pathogen presence-dependent differences (FDR < 0.05; abundance < 0.5%) were observed in bulk soils (data not shown).

Table 3. Bacterial genera in bulk soils differing significantly (FDR < 0.05) in relative abundance depending on long-term organic (HU-org) and mineral (HU-min) fertilization strategy (A) in the absence and (B) in the presence of Rhizoctonia solani AG1-IB (+Rs). Mean ± standard deviation of taxa with >0.5% relative abundance is displayed. Bold numbers indicate significant enrichment. [Columns: Phylum; Family; Genus; HU-org; HU-min. Table body not preserved in this extraction.]

The comparison of the relative abundances of bacterial genera in the rhizosphere of lettuce grown in soils with different fertilization history or in the presence of R. solani AG1-IB showed statistically significant differences only among minor abundant genera (mean < 0.5%; FDR < 0.05; data not shown). Notably, sequences identified only at higher taxonomic levels, e.g., Selenomonadales, exhibited a significantly higher relative abundance in the rhizosphere of mineral-fertilized soil (HU-min; 3 ± 3%) when compared to the organic treatment in the absence of R. solani (HU-org; 0.2 ± 0.2%) and to the mineral treatment in the presence of R. solani (HU-min + Rs; 0 ± 0%).

Fertilization Strategy and R. solani AG1-IB Shaped Fungal Community Composition
In each habitat (bulk soil, root-associated soil, rhizosphere), the effect of fertilization practice was the main driver of fungal community structures, especially in the root-associated soil (Table 4). NMDS ordination of the fungal communities showed a clear separation between organic and mineral fertilization in all habitats (Figure 5). Furthermore, fungal communities were significantly influenced by R. solani AG1-IB inoculation in the bulk soil and in the rhizosphere, but not in the root-associated soil (Table 4).
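A corresponding vegan sketch for the fungal data (Bray-Curtis PERMANOVA as in Table 4, NMDS as in Figure 5, and rarefied alpha diversity); the matrix and metadata names are illustrative:

    library(vegan)
    adonis2(its_counts ~ fert * rs, data = meta,
            method = "bray", permutations = 10000)       # PERMANOVA, cf. Table 4
    nmds <- metaMDS(its_counts, distance = "bray", k = 2, trymax = 50)
    plot(nmds, type = "t")                               # ordination, cf. Figure 5
    rar <- rrarefy(its_counts, min(rowSums(its_counts))) # subsample to lowest read depth
    data.frame(richness = specnumber(rar), shannon = diversity(rar, "shannon"))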
Higher heterogeneity was observed among fungal communities in the rhizosphere of lettuce grown in organic-fertilized soil compared to mineral fertilization. Additionally, a distinct separation of fungal communities in the rhizosphere in organic soils depending on the presence/absence of the pathogen (HU-org, HU-org + Rs) was found (Figure 5b).

Table 4. Effects of long-term mineral and organic fertilization strategy (Fertl) and the presence of the pathogen Rhizoctonia solani AG1-IB on fungal community composition in bulk soil (BS), root-associated soil (RA) and the rhizosphere (RH) of lettuce (cv. Tizian). PERMANOVA analysis based on Bray-Curtis distances (10,000 permutations). Ev: explained variance. [Columns per habitat: Factor; Ev [%]; p. Table body not preserved in this extraction.]

Alpha diversity indices (species richness, Shannon index) were calculated to assess the effects of the studied factors on fungal communities in each habitat. A significant influence of fertilization strategy on the fungal diversity was found in root-associated soil, resulting in higher diversity indices in organic vs. mineral treatments (Supplementary Table S7). Alpha diversity in bulk soil and the rhizosphere of organic treatments (HU-org, HU-org + Rs) was also higher as compared to mineral treatments (HU-min, HU-min + Rs), but not significantly. The presence of R. solani AG1-IB reduced fungal diversity in all treatments except in root-associated soil with organic fertilization history, but the differences were not significant (Supplementary Table S7B). The fertilization strategy affected the relative abundance of various phyla in all three habitats, whereas R. solani influenced only single phyla, such as Mortierellomycota in bulk soil and Chytridiomycota in the rhizosphere (Supplementary Table S8).

Rhizoctonia solani AG1-IB Affected Relative Abundance of Fungal Taxa in Organic-Fertilized Soils
The fertilization strategy and the pathogen R. solani AG1-IB altered the relative abundances of the most predominant genera (Figure 6). A higher impact than that of R. solani was given by fertilization, resulting in prevalence differences in fungal genera between organic and mineral fertilization (Tables 5 and 6). A high relative abundance of the genus Rhizopus was observed in all habitats of HU-min, especially in the presence of R. solani (HU-min vs. HU-org; HU-min + Rs vs. HU-org + Rs; Figure 6, Table 5). In the presence of R. solani in organic-fertilized soils, this genus exhibited a lower relative abundance in the rhizosphere of lettuce compared to R. solani absence (HU-org vs. HU-org + Rs; Figure 6, Table 6A).
In contrast, the relative abundance of Talaromyces increased significantly in all habitats of the organic-fertilized soils with R. solani (HU-org + Rs vs. HU-org; Figure 6, Table 6A) and was also enriched in bulk soils with mineral fertilization in the presence of the pathogen (HU-min + Rs vs. HU-min; Table 6B). A higher relative abundance of Talaromyces in the rhizosphere of lettuce was also found when grown in mineral- compared to organic-fertilized soil (HU-min vs. HU-org; Table 5A).

Table 6. Fungal genera in bulk soil, root-associated soil and rhizosphere of lettuce (cv. Tizian) differing significantly (FDR < 0.05) in relative abundance depending on the absence and the presence of Rhizoctonia solani AG1-IB (Rs) in long-term (A) organic- and (B) mineral-fertilized soils. Mean ± standard deviation of taxa with >0.5% relative abundance is displayed. Bold numbers indicate significant enrichment. [Table body not preserved in this extraction.]

Several genera showed fertilization-dependent alterations in relative abundance. The genus Didymella was more prevalent in all habitats of organic-fertilized soils compared to mineral-fertilized soils, irrespective of the pathogen (Table 5, Figure 6). Humicola was more prevalent in organic-fertilized soils in all habitats in the absence of R. solani (Table 5A), but decreased similarly to Arthrobotrys in the rhizosphere in the presence of R. solani (Table 6A). With respect to sequence reads only identified at higher taxonomic levels, Sordariales were found to be enriched in bulk and root-associated soils of the organic treatments in the absence of R. solani AG1-IB (Table 5A). The genus Funneliformis (Glomeromycota) was also more prevalent in the rhizosphere of the organic treatments in the absence of R. solani (Table 5A), whereas the genera Ilyonectria and Rhizophagus were more prevalent in the rhizosphere of the organic treatments, independent of the presence of R. solani (Table 5A,B). Umbelopsis was enriched in all habitats of mineral-fertilized soils, independent of the presence of R. solani (Table 5A,B). The genera Apiotrichum and Fusicolla and sequences with the highest affiliation to the higher taxonomic level Bionectriaceae were enriched in root-associated soils and in the rhizosphere of mineral-fertilized soils in the absence of R. solani (Table 5A), but their relative abundances decreased in the rhizosphere of lettuce in the presence of R. solani (Table 6B).

No Clear Indication of R. solani AG1-IB Establishment in the Differently Fertilized Soils
The specific amplification of pure R. solani AG1-IB DNA by qPCR was performed under similar PCR conditions as ITS2 amplicon generation using universal primers. Both approaches yielded clearly positive results in conventional PCR. For quantification of R. solani AG1-IB by qPCR, the standard curve was based on seven dilution levels (10-1.0 × 10−5 ng), but the lowest level could not be determined (below detection limit).
R. solani was detected by qPCR only in three samples (one each in root-associated soil of HU-min + Rs and in the rhizosphere of HU-org + Rs and HU-min + Rs, respectively) within the calibration range. The remaining samples exhibited lower abundances outside the calibration range and were thus based on extrapolation. Rhizoctonia solani AG1-IB had a significantly higher abundance in the rhizosphere of inoculated soils compared to the absence of the pathogen (Table 7). A similar trend was observed in bulk soils. R. solani could not reliably be quantified in root-associated soils (close to/below detection limit). In addition, a significant effect of the fertilization strategy on R. solani AG1-IB abundance was observed in bulk soils, resulting in higher abundances in mineral fertilization. A similar trend was observed in the rhizosphere.

Table 7. Quantity of Rhizoctonia solani AG1-IB (in pg DNA per gram soil) in bulk soil (BS), root-associated soil (RA) and rhizosphere (RH) of lettuce (cv. Tizian) grown in soils with organic (HU-org) or mineral (HU-min) fertilization strategy in the absence and in the presence of R. solani AG1-IB (isolate 7/3/14, +Rs). Mean ± standard deviation is displayed. Different lower case characters indicate significant differences, tested separately per habitat by two-way ANOVA followed by Tukey's test (p < 0.05). [Table body not preserved in this extraction.]

Additionally, the presence of R. solani in the habitats was estimated by ITS amplicon sequencing based on four OTUs with the highest affiliation to Thanatephorus cucumeris (teleomorph of R. solani). Thanatephorus was detectable only in the bulk soil under organic fertilization in the presence of R. solani, and only in low relative abundances (Supplementary Table S9). Furthermore, this genus showed no significant alterations depending on fertilization or presence/absence of R. solani. In contrast, the related genus Waitea, represented by one OTU with the highest affiliation to W. circinata (teleomorph of Rhizoctonia zeae), showed significantly higher relative abundances in the presence of R. solani AG1-IB in organic-fertilized bulk soils (HU-org + Rs vs. HU-org; Table 6A), however, depending on fertilization (HU-org + Rs vs. HU-min + Rs, Table 5B). Additionally, the relative abundances of W. circinata were significantly higher in the presence of R. solani in root-associated soils and in the rhizosphere of soils with organic fertilization (HU-org + Rs vs. HU-min + Rs, Supplementary Table S9B), as well as in the absence of R. solani in root-associated soils (HU-org vs. HU-min, Supplementary Table S9A).

No Inhibition of the Spread of Rhizoctonia Pathogens in Organic-Fertilized Soil
Organic fertilization was reported to alter soil microbial communities and to enhance their diversity [13] and thus decrease the incidence of plant diseases caused by soil-borne pathogens [11]. It was hypothesized that these microbial factors play a key role in the inhibition of pathogens such as R. solani. Contrary to our hypothesis, a consistently higher suppressiveness of mineral-fertilized soil (HU-min) against both R. solani model pathogens was found. Bonanomi et al. [26] reported that disease suppressiveness varied largely under organic fertilization depending on the pathogen and was effective against R. solani only in 26% of studied cases. Based on our data, we suggest that organic fertilizers provide saprotrophic pathogens such as R. solani with substrates and support their growth and spread [74]. Genome analysis of R. solani AG1-IB indicated its ability to feed on organic substrates and to produce toxic compounds [75], which may explain its high competitiveness.
solani AG1-IB indicated its ability to feed on organic substrates and to produce toxic compounds [75], which may explain its high competitiveness. Furthermore, the lower spread of R. solani in the soil with mineral fertilization history may be related to long-term pesticide use. Long-Term Organic Fertilization Impacted R. solani AG1-IB Interaction with Indigenous Soil Fungi Fungal communities are critically important components in soil processes such as nutrient cycling, organic matter decomposition, and crop health and growth [76]. The results highlighted that the fertilization strategy strongly modified the fungal community in all studied habitats (bulk soil, root-associated soil, rhizosphere), likely due to changes in food web associations, as also reported by other studies [77-79]. This resulted in changes in the relative abundance, notably of the phyla Ascomycota, Glomeromycota and Mucoromycota, in all habitats. Organic fertilization (HU-org) led to the enrichment of Ascomycota and Glomeromycota, especially of the genera Funneliformis and Rhizophagus in the lettuce rhizosphere (Table 5). Zhu et al. [80] identified organic fertilization as an important factor impacting the composition and activity of mycorrhizal fungi, resulting in enhanced plant fitness. Less is known about how high fungal pathogen abundances affect soil fungal communities. In our pot experiment, the inoculation of the pathogen led to striking shifts in the fungal community structure. Interestingly, the genus Talaromyces (phylum Ascomycota, order Eurotiales) predominated in the fungal communities of the organic treatments (root-associated soil, rhizosphere) in the presence of R. solani AG1-IB (Table 6A). Marois et al. [81] suggested that organic fertilization supports the population density of Talaromyces in the rhizosphere, as also observed in this study. This soil-inhabiting genus, notably T. flavus, is known to suppress fungal pathogens such as Verticillium dahliae and to parasitize R. solani [82-84]. Moreover, the presence of the pathogen seems to promote T. flavus, which is able to produce cell wall-degrading enzymes, antifungal secondary metabolites and volatile compounds that contribute to its biocontrol activity [83,85-88]. Talaromyces also responded to pathogen presence in mineral-fertilized bulk soils, which is an indicator of antagonistic activity. The genus Rhizopus (phylum Mucoromycota, order Mucorales) was represented by one main OTU with the closest affiliation to the saprotrophic fungus R. arrhizus (syn. R. oryzae), which dominated the soils in the present study (up to 65%, compared to up to 3% in our previous study [38]), especially under mineral fertilization in the presence of R. solani AG1-IB (Table 5B). Hence, Mucoromycota was one of the most dominant phyla (at least 10% relative abundance per habitat and treatment), besides Ascomycota, Basidiomycota and Mortierellomycota. The known ability of Rhizopus strains (e.g., R. arrhizus) to release 1,3-1,4-β-glucanases and glucoamylases allows the hydrolysis of plant cell wall components, and thus these fungi act as decomposers [89,90]. The barley kernels used for inoculation (R. solani-infested and non-infested controls) may have served as a nutrient and energy source and may thus explain the up to 20-fold higher relative abundance of Rhizopus compared to our previous study [38].
However, in contrast to the mineral treatment, the genus Rhizopus showed strongly decreased relative abundances in the organic treatment (root-associated soil, rhizosphere) in the presence of R. solani AG1-IB (Table 5B). The high relative abundance of Talaromyces in these samples could have contributed to the decrease in Rhizopus; Miyake et al. [91] reported antagonistic activity of Talaromyces against Rhizopus oryzae. However, an increased relative abundance of Talaromyces would then also have been expected in the non-inoculated organic soils, which could not be shown. Mycoparasitism of R. solani on Rhizopus was also reported earlier by Butler [92]. In contrast to the bacterial community, organic fertilization increased the alpha diversity of the fungal community, particularly in the root-associated soil, and this may have increased the competition among fungal taxa, including the inoculated pathogen, as was similarly observed for wheat [77]. Additionally, the higher alpha diversity was probably due to the enrichment of fungi involved in saprophytic processes and was in accordance with our previous study [38]. The better establishment of Rhizopus contributed to the decreased alpha diversity in mineral-fertilized soils. We simulated an increased density of R. solani AG1-IB in soil by inoculation in the pot experiment. However, a low abundance of the pathogen was revealed by molecular tools at the end of the experiment. Contrary to expectation, a higher abundance of R. solani AG1-IB was determined in the bulk soil of mineral- compared to organic-fertilized soils, but not in the rhizosphere (Table 7). We analyzed soil and rhizosphere samples after 10 weeks of lettuce growth; at earlier sampling time points, a clearer differentiation in pathogen density between the treatments might have been observed. Furthermore, it must be considered that, under field conditions, natural infestation takes place via infected plant residues and sclerotia formation [93]. This could not be replicated in the pot experiment, which could explain the observed low abundances of the pathogen after 10 weeks. Bacterial Community Shifts in Response to Fertilization Practice but Not to Pathogen Inoculation The bacterial community structure in bulk soil and in the rhizosphere shifted in response to the fertilization strategy, similar to the findings of Chowdhury et al. [34] and Windisch et al. [38], but not in response to the pathogen R. solani AG1-IB. Our results confirmed the previous observation that rhizosphere bacterial communities differ significantly from those of the bulk soil. It was expected that organic fertilization would increase bacterial diversity, but this was not the case. In accordance with the findings of Chowdhury et al. [34] and Schreiter et al. [44], an enrichment of, e.g., Devosia, Rhizobium, Saccharibacteria and Asticcacaulis in the rhizosphere of lettuce was found. In contrast to previous results with soils from the same field trial (HUB-LTE), the significant enrichment in the rhizosphere of genera belonging to Bacillales [34] under organic and of Pseudomonadaceae [38] under mineral fertilization was not observed in this pot experiment. Variability among rhizosphere replicates most likely hampered the ability to discriminate bacterial genera in the present study. Nevertheless, distinct rhizosphere communities differ in their ability to interact with cultivated plants and therefore affect their performance, as observed here in terms of plant gene expression in response to pathogen challenge.
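The genus-level comparisons reported throughout this discussion (Tables 5 and 6) rest on two routine computational steps: converting amplicon counts into relative abundances per sample, and correcting the many per-taxon tests for multiple comparisons so that significance can be stated at FDR < 0.05. The sketch below illustrates that workflow in Python; the toy count matrix, the three-versus-three group design, and the use of a Mann-Whitney test are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def relative_abundance(counts):
    """Convert an OTU/genus count matrix (samples x taxa) to per-sample relative abundances."""
    return counts / counts.sum(axis=1, keepdims=True)

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate control)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty_like(ranked)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# toy example: 6 samples (3 organic, 3 mineral) x 4 genera
counts = np.array([[120, 30, 5, 45], [100, 25, 8, 40], [110, 35, 6, 50],
                   [40, 90, 7, 48], [35, 95, 9, 42], [30, 85, 5, 47]])
rel = relative_abundance(counts)
pvals = [mannwhitneyu(rel[:3, i], rel[3:, i]).pvalue for i in range(rel.shape[1])]
significant = bh_fdr(pvals) < 0.05  # genera passing FDR < 0.05
```

The Benjamini-Hochberg step is what the FDR < 0.05 thresholds in Tables 5 and 6 refer to: it bounds the expected fraction of false positives among the taxa declared significantly different.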
The fact that AG1-IB attacks not the lettuce roots [93] but the stem base and lower leaves in contact with the soil seems to be the reason for the only minor changes in the soil bacterial communities in the pot experiment, which is in line with the findings of Schreiter et al. [44,94] at field scale. Correspondingly, only a few taxa with significantly changed relative abundances upon pathogen inoculation were detected. In mineral-fertilized soils, the relative abundance of the actinobacterial genus Actinoallomurus decreased in the presence of the pathogen. Strains of Actinoallomurus possess several pathways for the production of secondary metabolites with antimicrobial properties [95] and therefore have the potential to interact directly with R. solani. However, their decreased relative abundance may indicate the strong competitiveness of R. solani AG1-IB. Moreover, indirect effects of the pathogen on rhizosphere bacteria via altered plant root exudation and activation of antagonistic traits must be assumed [96,97]. Gammaproteobacteria were enriched in the presence of R. solani in the rhizosphere of lettuce grown in organic soil. Since many members of the Gammaproteobacteria are considered to be plant-beneficial [98], we suggest that their higher relative abundance in HU-org + Rs might have contributed to the defense priming of the plants and consequently to the observed upregulated gene expression. R. solani AG1-IB Induced Systemic Expression of Defense-Related Genes in Lettuce Plants Grown in Soils with Long-Term Organic Fertilization After a cultivation time of 10 weeks in the absence of R. solani AG1-IB, the upregulation of genes involved in (a)biotic stress responses was detected in lettuce plants grown in organic- compared to mineral-fertilized soils (HU-org vs. HU-min; Figure 2a), as previously also found, independently of the field site [34,38]. For instance, the jasmonic acid (JA) marker gene PDF1.2, which results in the production of a defensin-like protein with antimicrobial functions, the salicylic acid (SA) marker gene PR1 [99], and the GST6 gene involved in stress protection [100] were upregulated (Figure 2a). Possibly, this observation was due to the presence of the genus Waitea in the indigenous fungal community, observed in higher relative abundances in organic-fertilized soil (Supplementary Table S9). This could explain the increased defense responses of lettuce against Rhizoctonia-like structures compared to mineral-fertilized soil. Additionally, the significant enrichment of putative pathotrophs (e.g., Didymella) in organic-fertilized soils, in combination with higher gene expression levels (e.g., PDF1.2), was in accordance with our previous study [38]. As a second possibility, it was previously discussed that the higher expression levels of defense-related genes in lettuce from organic-fertilized soils were induced by potentially beneficial microbes (e.g., Bacillales, Gammaproteobacteria) in the rhizosphere [34,38]. Rhizosphere microorganisms are able to induce MYB72/BGLU42-dependent ISR responses [101,102]. Liu et al. [103] reported the upregulation of the gene MYB15, a member of the R2R3 MYB family of transcription factors, in Arabidopsis under (a)biotic stress conditions. The BGLU42 gene encodes a β-glucosidase known to play a role in plant protection through reactive oxygen species (ROS) scavenging [104].
Although, in contrast to our recent findings [34,38], no significant fertilization-dependent differences in beneficial bacterial microorganisms could be determined, the impact of other taxa with similar functions cannot be excluded. In the presence of the pathogen, increased transcription levels of several genes such as PR1, LOX1, MYC2, ERF104, ERF6, GST6, HSP70, BGlu42, OPT3, RbohF and MYB15 were found in lettuce plants grown in organic soils compared to the plants grown in mineral soils (Figure 2b). It was hypothesized that the upregulation of defense-related genes indicates ISR or "defense priming" in the plants, which may have contributed to Rhizoctonia disease control in the organic treatments. However, it seems that R. solani AG1-IB induced the observed upregulation of genes involved in plant stress responses through a direct interaction with lettuce tissue in the organic treatments (HU-org + Rs). These genes have been shown to function in (a)biotic stress signaling [103,105] and are modulated by the different hormone signaling pathways involved in plant immune responses. As mentioned, PR1 is regulated by SA, while ERF104 and OPT3 are regulated by ethylene (ET) signaling pathways [100,106,107]. The SA- and ET/jasmonic acid (JA)-mediated signaling cascades are considered important for plant immune responses against pathogen attacks [23]. The ability of plants to perceive and rapidly respond to pathogens is regarded as critical for survival; such defense responses are known as pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) and effector-triggered immunity (ETI). The enhanced expression of the SA marker gene PR1 in leaves in the presence of the pathogen indicated the induction of such defense responses. The genes RbohF, GST6, HSP70 and OPT3 are also involved in the regulation of ROS [108,109], which are important chemical signals in systemic acquired resistance (SAR). Pathogen recognition by the plant triggers oxidative bursts required for further defense reactions. ROS-derived signaling interacts with SA, an essential downstream component of the SAR pathway [110]. Therefore, the observed induction of several genes involved in oxidative stress and in SA- and ET-mediated defense responses in lettuce shoots seems to be the result of defense reactions due to encounters with effectors of the pathogen. The enhanced defense responses to R. solani in organic-fertilized soil could also be a result of previous priming by microbe-associated molecular patterns (MAMPs) of beneficial rhizosphere microorganisms, as found in an earlier study [34]. Moreover, the increased relative abundance of Talaromyces could also have induced systemic resistance in lettuce, which then showed enhanced defense gene expression in the presence of R. solani AG1-IB [111]. Lettuce grown in organic-fertilized soils had 23% less shoot growth than plants grown in mineral-fertilized soils (Figure 1), which is in line with previous findings [34,38]. All plants faced moderate K deficiency, but the plant K status was significantly higher under mineral than under organic fertilization (Supplementary Table S4). This might, of course, have contributed to the better plant growth. The pathogen reduced lettuce growth independently of the fertilization strategy (Figure 1), as also observed in previous studies [112,113]. Based on the faster spread of R.
solani AG1-IB in organic-fertilized soils, a better establishment of the pathogen than under mineral fertilization was expected, which would result in earlier pathogen attack and thus a stronger impact on the more susceptible young lettuce plants. Indeed, a more negative impact of R. solani on lettuce growth was found in the organic compared to the mineral treatment (20% vs. 16%), but the difference was smaller than expected considering the spread results. Nevertheless, reduced lettuce growth was observed in organic soils in the presence of the pathogen. Plant defense responses demand energy resources, which may explain the lowered lettuce growth [107], a phenomenon known as the plant "growth/defense tradeoff" [114]. The absence of differences in root dry mass between organic-fertilized soils with and without the pathogen may support the hypothesis of higher defense reactivity. Lettuce growth was less strongly reduced by R. solani attack in plants grown in soils with mineral fertilization (HU-min + Rs). Based on the gene expression analyses, it can be concluded that, when challenged by the pathogen, the plants grown in organic soil showed enhanced expression of several genes involved in plant stress and defense signaling pathways in comparison to the plants grown in mineral soil. It is possible that induced defense regulation helped lettuce to survive the early and continuous confrontation with the aggressive pathogen, with a tradeoff in growth. However, an additional analysis of plant stress metabolites would be helpful to answer the question of whether organic fertilization considerably improves plant health. Conclusions Changes in the structure and increased diversity of the soil microbiota due to organic fertilization are postulated as possible factors in the control of soil-borne phytopathogens, by enabling microorganisms to enhance plant defenses and to suppress pathogens. In contrast to mineral fertilization, organic fertilization supported the spread and activity of the R. solani pathogens in our study, most probably because of their ability to efficiently use organic compounds as energy sources. In the pot experiment with lettuce/R. solani AG1-IB, analysis of the microbiota in the different habitats (bulk soil, root-associated soil, rhizosphere) showed that fertilization history shaped the microbial community structure (Figure 7). In contrast to the bacterial community, organic fertilization enhanced the alpha diversity of the fungal community in root-associated soil, with consequences for the competition/interaction between the indigenous soil fungi and the artificially applied pathogen. Interestingly, the presence of R. solani AG1-IB shifted the fungal but not the bacterial community structure (Figure 7). In accordance with previous results, an induced physiological status (defense priming) of lettuce plants was observed in organic compared to mineral-fertilized soils. Moreover, when confronted with the pathogen R. solani AG1-IB, the plants grown in organic soil showed enhanced expression of genes involved in plant stress and defense signaling pathways. Interestingly, microbial taxa with putative plant-beneficial traits were enriched in the rhizosphere of lettuce grown in organic-fertilized soils in response to pathogen inoculation (e.g., Talaromyces, Gammaproteobacteria).
Hence, it can be concluded that the upregulation of genes involved in defense pathways as a systemic response to the pathogen was probably enhanced by the priming effect of beneficial microorganisms in the rhizosphere. This was, however, offset by retarded lettuce growth in the presence of R. solani AG1-IB. In summary, our results suggest that lettuce grown in soil with an organic fertilization history exhibited higher fitness, despite presumably better conditions for the pathogen compared to mineral fertilization. Therefore, further research is needed to elucidate the underlying plant-microbial interactions, and especially the interactions between microbial populations and target pathogens, taking into account the consequences for plant health. In addition, more research regarding the effects of beneficial microorganisms enriched in response to agricultural management practices is required to support the development of sustainable plant production systems. Figure 7. Graphical model summarizing the main effects of the fertilization strategy and the pathogen Rhizoctonia solani AG1-IB on shoot and root growth, on gene expression levels and on the most relevant microorganisms in root-associated soil (fungi, outside of the dashed lines) and in the rhizosphere (bacteria and fungi, inside of the dashed lines) of lettuce. In summary, long-term organic fertilization altered the competition of indigenous soil microorganisms and led to a better establishment of R. solani in organic-fertilized soils.
In the absence of the pathogen, higher relative abundances of Rhizoctonia-like structures (Waitea) and putative pathotrophs (Didymella) in organic-fertilized soils likely resulted in increased gene expression (defense priming). In the presence of the pathogen in organic-fertilized soils, plant-beneficial microbial taxa (Talaromyces, Gammaproteobacteria) were significantly enriched (green arrow) and genes that are part of systemic defense responses were more strongly upregulated, resulting in reduced lettuce growth. In general, the genus Rhizopus was more abundant in mineral-fertilized soils and less abundant in organic-fertilized soils due to the mycoparasitism of R. solani (red arrow). The figure was created with BioRender. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/microorganisms10091717/s1, Figure S1: Principal coordinates analysis (PCoA, Bray-Curtis dissimilarity) based on the expression of 18 selected lettuce genes; Table S1: List of plant genes selected for expression analysis; Table S2: Primers and probes used in this study; Table S3: Overview of used Illumina barcodes and ITS2 primer combinations for each sample; Table S4: Nutritional status of lettuce (cv. Tizian); Table S5: Bacterial alpha diversity indices; Table S6: Relative abundance of the prevalent bacterial phyla; Table S7: Fungal alpha diversity indices; Table S8: Relative abundance of the prevalent fungal phyla; Table S9: Fungal genera Thanatephorus and Waitea. Reference [115] is cited in the supplementary materials.
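To make the standard-curve quantification described above concrete, the following sketch shows how R. solani abundance (ng of template DNA) is typically read off a dilution-series calibration: Ct values are regressed against log10 of the template amount, unknowns are inverted through the fitted line, and samples outside the calibrated range are flagged as extrapolations. All Ct numbers below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# calibration: serial dilution from 10 ng down to 1.0e-4 ng (illustrative values;
# the study's lowest level, 1.0e-5 ng, fell below the detection limit)
std_amounts = np.array([10, 1, 1e-1, 1e-2, 1e-3, 1e-4])  # ng template DNA
std_ct = np.array([16.1, 19.5, 22.9, 26.4, 29.8, 33.2])  # hypothetical Ct values

# linear fit: Ct = slope * log10(amount) + intercept
slope, intercept = np.polyfit(np.log10(std_amounts), std_ct, 1)
efficiency = 10 ** (-1 / slope) - 1  # amplification efficiency (~1.0 is ideal)

def quantify(ct):
    """Estimate the template amount (ng) from a sample Ct; flag extrapolations."""
    amount = 10 ** ((ct - intercept) / slope)
    in_range = std_amounts.min() <= amount <= std_amounts.max()
    return amount, in_range

amount, in_range = quantify(34.5)  # outside the curve -> extrapolated estimate
```

Flagging extrapolated values, as done for most samples underlying Table 7, matters because the log-linear relation is only validated inside the calibration range.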
Protective Effect of Adipose-Derived Mesenchymal Stem Cell Secretome against Hepatocyte Apoptosis Induced by Liver Ischemia-Reperfusion with Partial Hepatectomy Injury Ischemia-reperfusion injury (IRI) is an inevitable complication of liver surgery and liver transplantation. Hepatocyte apoptosis plays a significant role in the pathological process of hepatic IRI. Adipose-derived stem cells (ADSCs) are known to repair and regenerate damaged tissues by producing bioactive factors, including cytokines, exosomes, and extracellular matrix components, which collectively form the secretome of these cells. The aim of this study was to assess the protective effects of the ADSC-secretome after liver ischemia-reperfusion combined with partial hepatectomy in miniature pigs. We successfully established laparoscopic liver ischemia-reperfusion with partial hepatectomy in miniature pigs and injected saline, DMEM, ADSC-secretome, and ADSCs directly into the liver parenchyma immediately afterwards. Both ADSCs and the ADSC-secretome improved the IR-induced ultrastructural changes in hepatocytes and significantly decreased the proportion of TUNEL-positive apoptotic cells along with caspase activity. Consistent with this, P53, Bax, Fas, and Fasl mRNA and protein levels were markedly decreased, while Bcl-2 was significantly increased in the animals treated with ADSCs and the ADSC-secretome. Our findings indicate that ADSCs exert therapeutic effects in a paracrine manner through their secretome, which can be a viable alternative to cell-based regenerative therapies. Introduction Hepatic ischemia-reperfusion injury (HIRI), a precursor to liver dysfunction and liver failure [1], is an inevitable complication of shock, trauma, hepatectomy, liver transplantation, and other surgical procedures [2-4]. The pathophysiological process of HIRI involves excessive production of reactive oxygen species (ROS), activation of Kupffer cells and other inflammatory cells, and calcium overload, which eventually lead to hepatocellular apoptosis [5]. Although orthotopic liver transplantation (OLT) is an effective treatment for terminal liver dysfunction, it is limited by organ shortage, high costs, immune rejection, and transplant-related complications [6]. HIRI is still an unresolved clinical issue, and an effective strategy is urgently needed to alleviate HIRI and improve patient prognosis. Stem cell therapy is a promising approach for tissue repair and regeneration [7-9]. Mesenchymal stem cells (MSCs) in particular have shown encouraging results against inflammatory, degenerative, and ischemia-reperfusion diseases [10-12] and can be isolated from multiple sources, including adipose tissue [13], bone marrow, dental pulp [14], umbilical cord blood [15], tonsils [16], the oral cavity [17], and amniotic fluid [18]. Adipose-derived stem cells (ADSCs) are increasingly being considered a promising tool for cellular therapy and tissue engineering [19]. Depending on the environmental stimuli, ADSCs can differentiate into osteoblasts, adipocytes, and hepatocytes [20] and are therefore highly suitable for cell-based therapy in multiple organ systems. However, the clinical application of stem cells is limited by long-term safety concerns, such as unwanted differentiation [21], potential tumorigenicity [22], and elimination by the recipient immune system [23]. Therefore, stem cell transplantation is still at the experimental stage [24].
The regenerative effects of transplanted stem cells are mainly attributed to the paracrine regulation of endogenous cells via secreted factors [25]. The secretome of a cell population refers to the biologically active factors secreted by the cells into the extracellular space, including soluble proteins, free nucleic acids, lipids, and extracellular vesicles [26], which aid in intercellular communication and transport. Several studies have demonstrated the regenerative potential of the stem cell secretome, which can obviate some of the pressing concerns of cell-based therapies, including immune rejection, tumorigenicity, and emboli formation. Adipose-derived stem cell conditioned medium (ADSC-CM), or secretome, has shown remarkable therapeutic effects in small-animal models of angiogenesis [27], diabetic pain [28], wound healing [29], glucose metabolism [30], etc. While stem cell therapy has been investigated in animal models of partial hepatectomy, little is known regarding the effect of the ADSC-secretome on HIRI in large animals. In this study, we established laparoscopic hepatic ischemia-reperfusion and partial hepatectomy in miniature pigs and transplanted ADSCs or the ADSC-secretome directly into the liver parenchyma. The ADSC-secretome alleviated apoptosis in the hepatocytes and improved cellular ultrastructure. Our findings show that the ADSC-secretome is a safe and effective strategy against HIRI. Materials and Methods 2.1. ADSC Culture and Preparation of Conditioned Medium (CM). Adipose tissues were obtained from the subcutaneous abdominal fat and digested with collagenase I at 37°C for 45 min with continuous shaking. After neutralizing enzyme activity with L-DMEM (low-glucose Dulbecco's modified Eagle medium) supplemented with 10% FBS (Clark, USA), the homogenate was filtered and centrifuged, and the ADSCs were suspended in L-DMEM supplemented with 10% FBS, 2 mM L-glutamine, and 100 μg/ml penicillin and streptomycin (Solarbio, China). The cells were cultured at 37°C under 5% CO₂ in a humidified incubator (Galaxy 170 S, Eppendorf, Germany). The ADSCs were characterized as previously described [31] and cultured in serum-free L-DMEM for 48 h. The medium was then aspirated and centrifuged at 1000 g for 15 min at 4°C to remove cell debris. The supernatant was then centrifuged at 5000 g for 50 min at 4°C using a 3 kDa MWCO filter unit (Millipore, Billerica, USA) to concentrate it 25-fold. The CM aliquots were transferred to sterile 1.5 ml EP tubes and stored at −80°C. 2.2. Surgical Procedure. Twenty-four miniature pigs (age: 4-6 months, body weight: 20-25 kg) were provided by the Miniature Pig Farm of the College of Life Sciences (Harbin, China). The animals were housed at 20°C under a 12 h light-dark cycle and fed piglet diet (Shenzhen Jinxinnong Feed, China) and tap water ad libitum. The animals were randomly divided into the (untreated) IRI, DMEM control, CM, and ADSC groups (n = 6 per group). After 12 h of fasting and 2 h of water deprivation, the animals were anesthetized with isoflurane inhalation and subjected to laparoscopic left hepatectomy after right hepatic ischemia for 60 min. Immediately after the operation, the liver parenchyma was injected with saline (IRI group), DMEM, ADSCs (1 × 10⁶ passage-4 (P4) cells/kg body weight) or ADSC-secretome (CM equivalent to 1 × 10⁶ P4 ADSCs/kg). The animals were euthanized by injecting 4% tolfedine (Vetoquinol S.A., France). Liver tissues were laparoscopically harvested preoperation and 1, 3, and 7 days postoperation.
All the animals survived for the entire duration of the experiment due to the minimally invasive surgery. 2.3. Transmission Electron Microscopy. The liver samples were cut into 1 mm³ pieces and fixed with 2.5% glutaraldehyde. After routine dehydration, the tissues were embedded, sectioned, and stained with lead citrate and uranyl acetate. The ultrastructural changes in the hepatocytes were observed using an H-7650 transmission electron microscope (Hitachi, Japan). 2.4. TUNEL Analysis. The liver tissues were fixed with 4% paraformaldehyde (n = 6 per group), embedded in paraffin, and cut into sections. After dewaxing and rehydration, the sections were stained using the TUNEL assay with an In Situ Cell Death Detection Kit (Roche, Germany) according to the manufacturer's instructions. The TUNEL-positive cells were counted in five random fields of each section, and their percentage relative to the total number of hepatocytes was calculated. 2.5. Caspase Activity Analysis. Liver tissues were homogenized with lysis buffer (n = 6 per group), and the activities of caspase 3, caspase 8, and caspase 9 were determined using specific Caspase Activity Assay Kits (Solarbio, China) according to the manufacturer's instructions. 2.6. Real-Time Quantitative PCR Analysis. Total RNA was extracted from the liver tissues using TRIzol reagent (Invitrogen, Shanghai, China) according to the manufacturer's instructions. Reverse transcription was performed using a PrimeScript™ RT Reagent Kit (Takara, Japan). RT-qPCR was performed using the SYBR Green Kit in a LightCycler 480 System (Roche Applied Science, Penzberg, Germany) with the following cycling parameters: predenaturation at 95°C for 30 s, followed by 40 cycles of denaturation at 95°C for 5 s and annealing and elongation at 60°C for 1 min. Relative expression was calculated from the threshold cycle (Ct) values by the 2⁻ΔΔCt method (Livak and Schmittgen, 2001). The primers are listed in Table 1. 2.7. Western Blotting. Liver tissues were homogenized using a Tissue Protein Extraction Reagent (Beyotime, Shanghai, China) for 30 min at 4°C. The homogenates were centrifuged at 12,000 g for 15 min at 4°C, and the protein concentration was determined using a Bicinchoninic Acid (BCA) Protein Assay Kit (Beyotime, Shanghai, China). Equal amounts of protein were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto nitrocellulose (NC) membranes. After blocking with 5% nonfat milk in TBST for 2 h at room temperature, the membranes were incubated overnight with anti-P53, anti-Bcl-2 (Wanlei Biology, Shenyang, China), anti-Bax, and anti-β-actin (Sangon Biotech, Shanghai, China) primary antibodies. The membranes were then washed with TBST and incubated with a horseradish peroxidase-conjugated secondary antibody (ImmunoWay, Plano, USA) at room temperature for 2 h. Following another wash with TBST, the blots were developed using a Meilunbio® fg super-sensitive ECL luminescence reagent (Meilunbio, Dalian, China) and imaged using the Tanon 5200 Imaging System (Tanon Science & Technology Co., Shanghai, China). The relative density of the target bands was quantified using ImageJ software. 2.8. Immunohistochemistry. Paraffin-embedded tissue sections were deparaffinized, rehydrated, and treated with 3% hydrogen peroxide in the dark for 10 min to inactivate the endogenous peroxidases.
After heating in 0.01 M citrate buffer in a microwave for 10 min for antigen retrieval, the sections were cooled to room temperature, blocked with bovine serum albumin (BSA), and incubated overnight with anti-Fas and anti-Fasl primary antibodies (ImmunoWay, Plano, USA) at 4°C. The sections were then incubated with a streptavidin-labeled HRP secondary antibody (ZSGB, Beijing, China) for 30 min at room temperature, followed by a DAB solution for 3 min. After counterstaining with hematoxylin, the positively stained sections were quantified with the Image-Pro Plus 6.0 software (Media Cybernetics, USA). Six tissue sections per group and five random fields per slide were analyzed. 2.9. Statistical Analysis. All data were analyzed with GraphPad Prism 7.0 (GraphPad Software, USA) and expressed as mean ± SD. One-way ANOVA was used to compare the different groups, and P < 0.05 was considered statistically significant. Results 3.1. ADSCs/ADSC-Secretome Relieved the Ultrastructural Damage in Injured Hepatocytes. As shown in the electron micrographs in Figure 1, the nuclei, mitochondria, and endoplasmic reticulum (ER) of the preoperative hepatocytes were normal. Within a day after HIRI, however, significant ultrastructural changes were observed, such as nuclear membrane shrinkage, chromatin condensation at the edges, mitochondrial swelling, and severe ER expansion. These changes were less evident 3 days postoperation and had largely subsided by day 7, with only a slight expansion of the ER. Transplantation of either ADSCs or the ADSC-secretome significantly alleviated mitochondrial swelling and ER expansion postoperation, whereas DMEM had no effect. The results indicate that the ADSCs/ADSC-secretome can improve the ultrastructural changes in hepatocytes after ischemia-reperfusion and partial hepatectomy. 3.2. ADSCs/ADSC-Secretome Decreased Postischemic Hepatocyte Apoptosis. As shown in Figure 2(a), numerous TUNEL-positive apoptotic cells were present in the hepatic tissues 1 and 3 days after surgery. The apoptosis rates at both time points were significantly higher in the untreated and DMEM control animals compared to those transplanted with ADSCs and the ADSC-secretome (Figure 2(b); P < 0.01). Thus, the ADSCs and their secretome can alleviate IRI-induced apoptosis in the hepatocytes. 3.3. ADSCs/ADSC-Secretome Decreased Caspase Activity in Hepatocytes. To further elucidate the mechanistic basis of the antiapoptotic effects of the ADSCs/ADSC-secretome, we analyzed the activity of multiple caspases in the liver tissues after ischemia-reperfusion and partial hepatectomy. As shown in Figure 3, the activities of caspase 3, caspase 8, and caspase 9 peaked 1 day after surgery but were significantly reduced in the ADSC/CM-treated groups (P < 0.01). In addition, caspase 8 activity remained significantly higher on day 3 postoperation in the IRI and DMEM control groups compared to the CM-treated group (P < 0.01 and P < 0.05, respectively). Caspase 3 activity levels dropped by day 3 even in the untreated groups, although the reduction was more significant in the CM-treated versus DMEM groups (P < 0.05). In contrast, caspase 9 activity was similar across all groups 3 days after the operation. Caspase activity levels were restored to normal in the untreated animals at day 7 postoperation. 3.4. ADSCs/ADSC-Secretome Altered the Expression of Apoptosis-Related Factors.
The antiapoptotic effects of the ADSCs/ADSC-secretome were further confirmed by analyzing the expression levels of apoptosis-related proteins, including Bax, Bcl-2, P53, Fas, and Fasl. As shown in Figure 4(a), Bax mRNA levels increased significantly after surgery and were downregulated by both ADSCs and the ADSC-secretome on day 1 (P < 0.01) and day 3 (P < 0.05) postoperation. On the other hand, the antiapoptotic Bcl-2 was downregulated after surgery and increased in the animals treated with ADSCs/ADSC-secretome on days 1 and 3 postoperation (P < 0.01 compared to the DMEM group; Figure 4(b)). Consistent with this, the ADSCs and ADSC-secretome significantly decreased the Bax/Bcl-2 ratio at both time points (P < 0.01, Figure 4(c)). The upstream regulator P53 was also downregulated by the ADSC-secretome and ADSCs on days 1 (P < 0.01 for both) and 3 (P < 0.05 and P < 0.01) compared to the untreated IRI and DMEM groups (Figure 4(d)). The Fas and Fasl transcripts showed similar trends (P < 0.01 for all; Figures 4(e) and 4(f)). Taken together, the ADSCs and ADSC-secretome upregulated the antiapoptotic genes and suppressed the proapoptotic genes. In situ expression of Fas and Fasl was examined by immunohistochemistry (Figures 5(a) and 5(b)); both ADSCs and the ADSC-secretome significantly reduced the in situ expression of both on days 1 and 3 after surgery (P < 0.01 and P < 0.05, respectively, as indicated in Figures 5(c) and 5(d)). Discussion Laparoscopic hepatectomy has been successfully used to establish liver injury in large animal models [32,33]. Multiple studies show that the stem cell-derived secretome plays an active role in alleviating the symptoms of ischemia-reperfusion [34-36]. However, it is unclear whether the ADSC-secretome in particular exerts an active therapeutic effect on HIRI. Therefore, the aim of our study was to evaluate the effect of ADSCs and their secretome on hepatocyte apoptosis after HIRI combined with partial hepatectomy. ADSCs are adult stem cells with capacities for immune regulation, secretion of growth factors, promotion of blood vessel formation, and tissue regeneration. Compared with the secretome from other stem cells, the ADSC-secretome has obvious advantages, including freedom from the bioethical restrictions of embryonic stem cells, large-scale production, easy storage and transportation, and a fast therapeutic effect. It is therefore economical and practical in clinical application and brings hope for cell-free therapy. The miniature pig is an experimental animal with abundant adipose tissue. In addition, miniature pigs are suitable experimental animals for studying pathological changes in organs due to their anatomical and physiological similarities with humans. ADSCs from miniature pigs secrete proteins such as ANG-1, ANG-2, VEGF, and b-FGF [37]. Previous studies have demonstrated that ANG-1 can promote the expression of the Bcl-2 protein [38]. b-FGF can participate in the process of cellular mitosis and induce cell proliferation and differentiation, which prevents cell apoptosis [39]. Therefore, these secreted proteins may have a role in the antiapoptotic effect of the ADSC-secretome. TUNEL staining is a widely used method to detect cell apoptosis. After HIRI, TUNEL staining showed an increase in the number of apoptotic liver cells [40]. Using TUNEL staining, Yi-Xing et al. found that MSC-CM has a direct inhibitory effect on sinusoidal endothelial cell apoptosis [41]. The CM of human umbilical cord mesenchymal stem cells can also reduce the percentage of TUNEL-positive cells, exerting a protective effect on the cells [42].
Similarly, our results show that the ADSC-secretome from miniature pigs reduced the number of TUNEL-positive apoptotic cells after HIRI combined with partial hepatectomy. HIRI is a complex pathophysiological process that involves ischemia, hypoxia, early reperfusion, and reperfusion injury. Liver tissue reperfusion generates a large amount of ROS, and the resulting oxidative stress accelerates tissue inflammation and cell death [43]. In addition, the Ca²⁺ overload during ischemia-reperfusion alters mitochondrial membrane permeability, which lowers ATP production and oxygen consumption, thereby affecting the survival of liver cells. Apoptosis was first described by Kerr [44] in hepatocytes. We detected a significant increase in apoptotic cells in the liver tissue after ischemia-reperfusion, which correlated with ultrastructural changes such as chromatin disintegration, mitochondrial swelling, and endoplasmic reticulum expansion. Although antioxidants improve the symptoms of HIRI, they are not feasible for clinical application [45]. MSC-CM protects cells from apoptosis [42,46] and can improve mitochondrial function and reduce hepatocyte apoptosis in nonalcoholic fatty liver disease [47]. In addition, previous studies have demonstrated antioxidative and anti-inflammatory effects of ADSCs [11,37]. Consistent with this, both ADSCs and the ADSC-secretome alleviated apoptosis following HIRI, as indicated by improved organelle structure, lower levels of caspases, and downregulation of proapoptotic genes and proteins. Hepatocyte apoptosis is involved in maintaining the normal physiological functions of the liver and plays an important role in acute and chronic diseases of the liver, such as I/R injury, viral hepatitis, alcoholic and nonalcoholic liver diseases, and cholestatic diseases [48]. Therefore, understanding the mechanism of hepatocyte apoptosis is of great significance for the treatment of liver diseases. Apoptosis is mediated via the endogenous mitochondrial pathway, the exogenous death receptor pathway, and the endoplasmic reticulum pathway, and is regulated by the caspase family, Bcl-2, and P53, among others. The P53 protein forms a complex that transports the Bax protein to the nucleus, which promotes Bax expression and inhibits Bcl-2 to mediate early apoptosis [49]. Furthermore, various death signals depolarize the mitochondrial membrane by opening the transition pore, which releases cytochrome C into the cytoplasm. Cytochrome C forms multimers with Apaf1 and ATP/dATP, which activate the caspase 9 precursor by promoting self-cleavage. Cleaved caspase 9 triggers the downstream caspase 3 and caspase 7 cascade, eventually culminating in apoptosis. A previous study showed that the CM of bone marrow mesenchymal stem cells (BMSCs) alleviated neuronal apoptosis by downregulating Bax and the cleaved caspase 3/caspase 3 ratio and increasing Bcl-2 levels [50]. The Bcl-2 protein family is currently the most studied protein family regulating apoptosis. The antiapoptotic proteins Bcl-2, Bcl-x, and Bcl-w block the apoptotic cascade by inhibiting cytochrome C release [51]. Another study demonstrated that ADSC-CM can significantly reduce the expression of proapoptotic proteins such as Bax during ischemia-reperfusion-induced cardiac injury [52]. P53 directly activates Bax to permeabilize the mitochondrial membrane and initiate the apoptotic program [51].
In our study, ADSCs and the ADSC-secretome downregulated P53 and Bax levels in the injured hepatocytes, reduced caspase 3 and 9 activity, and upregulated Bcl-2 following liver ischemia-reperfusion. The exogenous death receptor apoptosis pathway includes the tumor necrosis factor receptor (TNFR) signaling pathway, the TNF-related apoptosis-inducing ligand (TRAIL) signaling pathway, and the Fas ligand (Fas/Fasl) signaling pathway [29]. Studies show that all three receptors can activate caspase 8 after binding to their corresponding ligands, resulting in caspase 8 and caspase 3 cleavage and causing cell apoptosis [53]. Among them, the Fas/Fasl pathway is the most thoroughly studied pathway of the death receptor family. Following Fas-Fasl binding, Fas undergoes trimerization, and its death domain activates caspase 8, which subsequently triggers the apoptotic cascade. Previous studies have shown that Fas and Fasl expression levels, as well as caspase 8 activity, increased rapidly after liver ischemia-reperfusion and partial hepatectomy [20]. Figure 6: Expression levels of apoptosis-related proteins in liver tissue: (a) representative immunoblot showing P53, Bax, Bcl-2, and β-actin levels in the indicated groups. (b-e) Quantification of P53, Bax, Bcl-2, and the Bax/Bcl-2 ratio. *P < 0.05 and **P < 0.01 compared to the IRI group; #P < 0.05 and ##P < 0.01 compared to the DMEM group. We found that ADSCs and the ADSC-secretome significantly reduced the expression of the above factors. Consistent with our findings, Kappy found that human ADSC-derived CM protected neuroblastoma cells from apoptosis by significantly reducing Fas expression levels [54]. In addition, MSC-CM effectively reduced radiation-induced apoptosis in hepatic sinusoidal endothelial cells [41], and BMSC-CM alleviated hepatocyte apoptosis in the carbon tetrachloride-induced acute liver injury mouse model [55]. Thus, the secretome of stem cells can target both endogenous and exogenous apoptotic pathways. At present, the routes of administration of stem cells, secretome, and exosomes include systemic and local administration [56]. Intravenous injection is a convenient method of systemic administration in experimental animals, but the amount of effective components that home to the liver via the peripheral venous blood is limited, so it takes a long time to exert a therapeutic effect. The procedure of portal vein injection in large animals is complicated. In addition, there is a risk of vascular embolism, and most of the stem cells may be cleared by the liver in the early stage after portal vein injection [57]. Liver parenchymal injection was used for ADSC and ADSC-secretome transplantation in this study. However, this method of administration also causes some damage to the liver. Therefore, it is necessary to develop a new route of administration that is simple to perform, widely applicable, safe, and effective, such as noninvasive nasal inhalation [58] or application in combination with a hydrogel [9]. Nevertheless, the ADSC-secretome injected into the liver parenchyma still exerted an antiapoptotic effect after HIRI combined with partial hepatectomy. Conclusion The ADSCs and ADSC-secretome can mitigate liver injury after HIRI combined with partial hepatectomy by blocking the endogenous and exogenous apoptotic pathways. Our findings indicate that the ADSC-secretome can inhibit hepatocyte apoptosis. The paracrine therapeutic effects of ADSCs are mediated by their secretome.
Therefore, the ADSC-secretome can overcome the limitations of cell-based therapies and is a viable alternative for stem cell-based tissue repair and regeneration. Data Availability The datasets used and/or analyzed during this study are available from the corresponding author upon reasonable request. Conflicts of Interest The authors declare no conflict of interest.
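As a companion to the RT-qPCR methods above, the 2⁻ΔΔCt calculation of Livak and Schmittgen reduces to two normalizations: the target gene Ct against a reference gene, then the treated condition against a calibrator. A minimal worked sketch follows; the Ct numbers and the pairing of Bcl-2 with β-actin are hypothetical examples, not measurements from this study.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak and Schmittgen, 2001)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to the reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # normalize to the calibrator condition
    return 2 ** (-dd_ct)

# hypothetical example: Bcl-2 vs beta-actin, CM-treated vs IRI control
fc = fold_change(ct_target_treated=24.0, ct_ref_treated=17.0,
                 ct_target_control=26.5, ct_ref_control=17.2)
# dCt = 7.0 vs 9.3 -> ddCt = -2.3 -> fold change ~ 4.9 (upregulation)
```

A fold change above 1 indicates upregulation relative to the calibrator; the method assumes roughly equal amplification efficiencies for the target and reference genes.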
The relative effects of upwelling and river flow on the phytoplankton diversity patterns in the ria of A Coruña (NW Spain) Phytoplankton species assemblages in estuaries are connected to those in rivers and marine environments by local hydrodynamics leading to a continuous flow of taxa. This study revealed differential effects of upwelling and river flow on phytoplankton communities observed in 2011 along a salinity gradient from a river reservoir connected to the sea through a ria-marine bay system in A Coruña (NW Spain, 43° 16-21′ N, 8° 16-22′ W). With 130 phytoplankton taxa identified, the assemblages were dominated in general by diatoms, particularly abundant in the bay and in the estuary, but also by chlorophyceae and cyanobacteria in the reservoir. Considering the entire seasonal cycle, the local assemblages were mainly characterized by changes in cryptophytes and diatoms, small dinoflagellates and some freshwater chlorophyceae. Salinity, nitrate, and organic matter variables were the main environmental factors related to the changes in the phytoplankton communities through the system, while phosphate and nitrite were also important for local communities in the estuary and the bay, respectively. The corresponding local phytoplankton assemblages showed moderate levels of connectivity. The estuarine community shared a variable number of taxa with the adjacent zones, depending on the relative strength of upwelling (major influence from the bay) and river flow (major influence of the reservoir), but had on average 35% of unique taxa. Consequently, local and zonal diversity patterns varied seasonally and were not simply related to the salinity gradient driven by the river flow. Introduction Estuarine phytoplankton is typically under the influence of river and marine fluxes. Gradual changes in salinity and temperature caused by changes in these fluxes are recognized as the main environmental factors affecting phytoplankton composition (Olli et al. 2015). Both freshwater and marine species are transported to the estuary, where they must survive within their salinity tolerance limits. In addition, biological processes such as grazing are also able to modify estuarine phytoplankton communities, with alterations propagated through the food web at multiple temporal and spatial scales (Cloern and Dufford 2005; Lucas et al. 2016). The influence of spatial and temporal scaling in estuarine ecosystems is larger than for any other aquatic system. Connectivity operates across the various scales and within marine coastal habitats, where changes in species composition were mainly driven by the balance between freshwater and marine inputs (Muylaert et al. 2009; Dorado et al. 2015). This feature has major implications for phytoplankton diversity. First, by increasing the total number of species expected when considering the whole salinity gradient and including all the connected local habitats. Second, by decreasing the local diversity in the different habitats created by salinity (and other environmental factors), as only a subset of species is adapted to survive in a particular combination of environmental conditions. Finally, by allowing a continuous flow of species to the estuary and marine coastal habitats that may affect the persistence of populations in the context of environmental fluctuations (Aiken and Navarrete 2011). Connectivity of habitats in space and time implies the migration of species or their reproductive products, such as spores and eggs (McKinnon et al.
2015), but also has large implications for ecosystem properties, affecting nutrient availability, productivity and food web processes (Cloern and Jassby 2010). In well-mixed estuaries (for instance, when tides are strong), a homogenization of plankton populations is expected in the marine-influenced habitats, thus enhancing population connectivity by symmetric dispersal. Intense mixing will allow the coexistence of species even when some of them are not fully adapted to local conditions, at least at short timescales when biological interactions (e.g., competition or grazing) are relatively less important than physical transport (Leibold and Norberg 2004). Such mixing would affect diversity by increasing the local component and decreasing between-habitat diversity. In addition, the flow of a dominant current (such as the river flow or the currents induced by coastal upwelling) can be considered as a mechanism of asymmetric dispersal in the brackish water domain. This flow will increase the probability of persistence of the populations when facing large environmental perturbations (Aiken and Navarrete 2011). In the coast of Galicia (NW Spain), most estuaries are included in larger hydrological systems or rias composed of the river, a marine embayment and the adjacent shelf (Alvarez-Salgado et al. 2000; Prego 2002). The flow of the rivers is generally slow, compared to the major flow of seawater driven by tides and the seasonal upwelling (Barton et al. 2015). Upwelling is driven by northerly winds with maximum intensity and frequency between March and October and forces shelf water into the rias, where the input of nutrients is effectively translated into an increase in phytoplankton growth rates. The export of surface water from the rias to the adjacent shelf and the subsequent reinjection of coastal water with remineralized nutrients into the rias by repeated upwelling events greatly amplifies the phytoplankton productivity cycle and sustains high biomass of plankton and benthos in this region (Figueiras et al. 2002). In contrast, southerly winds, more frequent between November and February, produce downwelling conditions and accumulate warm, low salinity and less dense shelf surface waters towards the coast and inside the rias (Alvarez-Salgado et al. 2000; Gómez-Gesteira et al. 2003). The upwelling affects large spatial areas and water turnover inside the rias. Typically, wind-driven alongshore flow dominates in the marine domain of the ria, while the downwelling-upwelling cycle is the main driver of the circulation in the middle areas, and tides and freshwater inputs determine water exchanges in the inner, estuarine zone (Barton et al. 2015). In addition, the flow of the rivers contributing to most rias is regulated by the operation of reservoirs for urban and industrial freshwater supply (Gómez-Gesteira et al. 2003; Gago et al. 2005). Upwelling has a major impact on the composition of phytoplankton communities in some of the rias where the exchanges with the shelf are favored by a geographical orientation parallel to the dominant wind flow (e.g., Varela et al. 2004) and a lower impact on rias with other orientations (e.g., Bode et al. 2005). In the latter rias, the interactions between upwelling conditions and freshwater inputs greatly affect the species composition (Varela et al. 2001).
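Because the upwelling index used later in the methods derives from this Ekman-transport balance, a simplified calculation may help fix ideas: alongshore wind stress drives a cross-shore Ekman transport of magnitude τ/(ρ_w f), directed to the right of the wind in the Northern Hemisphere, so northerly winds along this roughly north-south coast move surface water offshore. The Python sketch below uses a standard bulk drag formulation; the drag coefficient and the sign convention for a west-facing coast are our assumptions, and the study itself relied on an index computed by the Instituto Español de Oceanografía from WXMAP sea-level pressure fields.

```python
import numpy as np

RHO_AIR, RHO_SEA, CD = 1.22, 1025.0, 1.4e-3  # typical constants (assumed values)

def upwelling_index(wind_speed, wind_dir_from_deg, lat_deg=43.5):
    """Cross-shore Ekman transport (m^3 s^-1 per m of coast) for a roughly N-S,
    west-facing coastline; positive = offshore transport = upwelling-favorable.
    Multiply by 1e-6 to express in km^3 s^-1 km^-1 as in the study."""
    f = 2 * 7.2921e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
    theta = np.radians(270 - wind_dir_from_deg)        # meteorological -> math convention
    u, v = wind_speed * np.cos(theta), wind_speed * np.sin(theta)
    tau_y = RHO_AIR * CD * np.hypot(u, v) * v          # alongshore (meridional) wind stress
    return -tau_y / (RHO_SEA * f)                      # offshore Ekman transport per unit coast

# a northerly wind of 8 m/s (blowing from the north) gives a positive index (upwelling)
print(upwelling_index(8.0, wind_dir_from_deg=0.0))
```

With this convention, southerly winds yield negative values, corresponding to the downwelling and coastal accumulation of surface water described above.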
While most studies of phytoplankton communities in estuaries and rias focus on the description of the assemblages as a result of appropriate combinations of environmental variables (e.g., Figueiras and Pazos 1991; Figueiras et al. 2002; Varela and Prego 2003; Brito et al. 2014; Sin and Jeong 2015), in this study we focus on changes in diversity patterns to address the differential role of the major drivers of water exchange on the taxonomical composition of phytoplankton. This approach was applied to a system composed of a marine bay affected by coastal upwelling and downwelling processes, an estuary, and a river regulated by a reservoir. The objective is to analyze the differential effects of upwelling intensity and river flow on the similarity and diversity of phytoplankton communities along the salinity gradient in the coast of A Coruña (Galicia, Spain). Study area The study was conducted in the ria of A Coruña, a complex system characterized by a salinity gradient provided by the river Mero, the estuary of Ria do Burgo and the bay of A Coruña (Fig. 1). The river Mero is the main freshwater contributor to the estuary and its flow is regulated by the Cecebre reservoir, which provides water supply for A Coruña and nearby urban areas. The river Mero and its tributaries have a catchment area of 345 km² and a main channel 46 km long before reaching the Ria do Burgo. With 70% of the total flow obtained from precipitation reaching the estuary, this river basin has relatively little seepage and runoff. It has a mean annual flow of 6.6 m³ s⁻¹, with the high water period between December and March, and the minimum in September (Gómez-Gesteira et al. 1999). Because of the Cecebre reservoir, the flow of the river into the ria can be adjusted independently of precipitation. The Ria do Burgo, with a total length of 4 km and an average depth of 2 m, has the characteristics of a tidal estuary where the influence of the river Mero is detected at the surface by the salinity gradient (González 1975; Gómez-Gesteira et al. 1999). The Bay of A Coruña, with an average depth of 25 m, an area of 24 km² and a mouth of 3 km, is characterized by a dominant marine influence (Fraga 1996; Varela et al. 2001; Varela and Prego 2003). Physical and chemical properties of water Ten sampling campaigns were carried out, at approximately monthly frequency, between February and December 2011. Samples were collected during the flood tide from eight stations distributed along the salinity gradient between the Cecebre reservoir and the internal side of the breakwater of the port of A Coruña (Fig. 1). Surface water was collected from the existing bridges or from the shore using a 10-L acid-washed polycarbonate container equipped with a polyester rope of 5 m. Subsamples for determination of particulate and dissolved substances were collected from the container and stored according to the specific analyses. Water temperature (t) and salinity (Sal) were measured with a probe (YSI Model 30). Salinity measurements were given according to the Practical Salinity Scale (UNESCO 1984). Concentration of inorganic nutrients (nitrate, nitrite, phosphate, ammonium and silicate) was determined in samples preserved by freezing (−20 °C) and then analyzed colorimetrically on a segmented flow system Braun-Luebbe AAII (Grasshoff et al. 1983).
Samples for the determination of chlorophyll were collected in 150-mL dark bottles and stored in a cool, dark place until arrival at the laboratory, where they were filtered through glass fiber filters (GF/F, 25 mm in diameter) under vacuum. Chlorophylls a, b and c were extracted in cold (−20 °C) 90% acetone and quantified on a Perkin Elmer LB-50s spectrofluorimeter using the procedure of Neveux and Panouse (1987). In this study we only consider chlorophyll a values. Particulate organic carbon and nitrogen concentrations (POC, PON) were determined using an elemental analyzer (Carlo Erba CHNSO 1108) on 0.5 to 1 L subsamples vacuum-filtered through glass fiber filters (GF/F). Total organic carbon (TOC) was determined in subsamples fixed with 1 mL of H₃PO₄ (25% v/v) until pH <2 to remove inorganic carbon. In the laboratory, samples were analyzed by high-temperature catalytic combustion on a Shimadzu 5000A analyzer (Doval et al. 2016). Dissolved organic carbon concentrations (DOC) were estimated by the difference TOC − POC. The concentrations of two types of DOC (humic acids and amino acids) were estimated from direct measurements of induced fluorescence on a Perkin Elmer LS50 B spectrofluorimeter. Fluorescence values were converted into concentrations in ppb (µg L⁻¹) equivalents of quinine sulfate (humic acids) or tryptophan (amino acids) using calibration lines. The samples for these determinations were collected in 15-mL Teflon-capped glass tubes and stored in a cool place until measurement (less than 6 h after collection). In this study the concentrations equivalent to the fluorescence maxima corresponding to generic humic acids (HG; excitation: 250 nm, emission: 435 nm) and tryptophan (TRP; 280, 350 nm) were determined following Nieto-Cid et al. (2005). Humic acids were used as descriptors of dissolved organic matter of low biological degradability (recalcitrant), while TRP was used as an indicator of easily degradable organic matter. Raw data of physical and chemical variables can be accessed through the PANGAEA repository (https://issues.pangaea.de/browse/PDI-13428, submitted on 04/11/2016).

Phytoplankton determinations

Taxonomic characterization of phytoplankton was made in samples from three stations representative of the bay (St. 1), the estuary (St. 5) and the river-reservoir (St. 10) collected during each of the 10 sampling events. Previous studies in the area showed that monthly sampling in the selected zones provided information representative of the main states of the phytoplankton community during the year (Casas et al. 1999; Varela et al. 2001). Water subsamples (50 mL) were preserved with Lugol's solution and kept in darkness until phytoplankton identification and counting using Utermöhl's technique (Casas et al. 1999; Varela et al. 2001). Depending on phytoplankton concentration, 10-25 mL of sample were allowed to settle in the Utermöhl chamber for up to 24 h. Observation of samples was carried out using a Nikon Eclipse TE3000 inverted microscope with a Nomarski interference contrast system. Magnifications of 100×, 200× and 400× were used, according to the size of the organisms. The entire slide was examined at 100× to account for large species, while only transects or smaller areas were examined at higher magnification. At least 250 cells were counted for each sample. Only well-preserved cells were counted, excluding damaged or dead cells (e.g., diatom frustules without visible organic content), which were particularly abundant in the estuarine station.
Taxonomic identification was carried out at the lowest (species) level where possible. Species nomenclature was validated according to the World Register of Marine Species (http://www.marinespecies.org).

Meteorological and hydrological drivers

Daily irradiance and rainfall data were provided by the observatory of the Agencia Española de Meteorología (AEMET) in A Coruña (http://www.aemet.es/). The upwelling intensity was estimated by calculating the Ekman transport from surface winds as an upwelling index (km³ s⁻¹ km⁻¹) computed by the Instituto Español de Oceanografía (http://www.indicedeafloramiento.ieo.es/) in a cell of 1° × 1° centered at 44°N, 9°W, using data on atmospheric pressure at sea level derived from the WXMAP model (González-Nuevo et al. 2014). Positive values of this index indicate net upwelling periods, when surface water is transported offshore, while negative values indicate an accumulation of surface water against the coast (downwelling). River flow was estimated from the discharge values of the Cecebre reservoir (m³ s⁻¹) provided by the regional water authority (http://augas.cmati.xunta.es/). Daily discharge values were decreased by 1 m³ s⁻¹ to account for the average water flow diverted from the river for the urban supply of the city of A Coruña. The water inputs to the river Mero downstream of the reservoir were considered negligible (Gómez-Gesteira et al. 1999). In this study, values of precipitation, upwelling and river discharge were accumulated over the 15 days prior to each sampling date. These values represent the accumulated effect of the main meteorological and hydrological factors on phytoplankton dynamics, as shown by other studies in the Galician upwelling (e.g., Nogueira et al. 1997).

Statistical analysis

Composition and connectedness of phytoplankton assemblages were studied using several diversity indices. At the local scale (i.e., for each combination of station and sampling date), species richness (S, number of lowest-level taxa), the Shannon index H (bits indiv⁻¹) and equitability J (the evenness with which individuals are divided among the taxa present) were computed:

H = -\sum_{i=1}^{S} \frac{n_i}{n} \log_2\left(\frac{n_i}{n}\right), \qquad J = \frac{H}{\log_2 S}

where n_i and n are the abundance of taxon i and the total abundance, respectively. Equitability assumes a value between 0 and 1, with 1 being complete evenness (Magurran 2004). For combinations of stations and sampling dates we examined two measures of zonal diversity: the number of shared species (co-occurring low-level taxa between two or more stations or sampling dates), and β-diversity (difference between neighboring assemblages) using the index of Harrison et al. (1992):

\beta = \frac{(S_p / \bar{S}) - 1}{N - 1}

where S_p is the total taxon richness of the pooled set of N samples compared and \bar{S} is their average number of taxa. This index is based on the relative number of taxa and measures the proportional change in richness, reaching maximum values when the percentage of species shared between neighboring assemblages is small and the percentages gained and lost are similar (Koleff et al. 2003). Differences in the Shannon index between stations were studied with a modified version of the t test (Hutcheson 1970). This test and all diversity indices were computed using the PAST package v 3.0 (Hammer 2015). Ordination of phytoplankton assemblages was made using multidimensional scaling (MDS) on a Bray-Curtis similarity matrix constructed from log-transformed abundance data after excluding the categories without a clear taxonomic allocation (e.g., microflagellates).
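Before turning to the clustering of samples, note that the indices defined above are straightforward to compute from an abundance vector. The following is a minimal sketch (the study itself used the PAST package; all function names and example counts below are ours, for illustration only):

```python
import numpy as np

def shannon_bits(abundances):
    """Shannon index H (bits indiv-1): H = -sum(p_i * log2(p_i))."""
    n_i = np.asarray(abundances, dtype=float)
    n_i = n_i[n_i > 0]
    p = n_i / n_i.sum()
    return -np.sum(p * np.log2(p))

def equitability(abundances):
    """Evenness J = H / log2(S), bounded in (0, 1]."""
    s = np.count_nonzero(abundances)
    return shannon_bits(abundances) / np.log2(s) if s > 1 else 1.0

def harrison_beta(presence_matrix):
    """Harrison et al. (1992) beta-diversity for N samples (rows = samples,
    columns = taxa): beta = (S_p / S_mean - 1) / (N - 1), where S_p is the
    pooled richness and S_mean the average per-sample richness."""
    pm = np.asarray(presence_matrix, dtype=bool)
    n_samples = pm.shape[0]
    pooled_richness = np.count_nonzero(pm.any(axis=0))
    mean_richness = pm.sum(axis=1).mean()
    return (pooled_richness / mean_richness - 1.0) / (n_samples - 1)

def shared_taxa(presence_a, presence_b):
    """Number of taxa co-occurring in two samples."""
    a = np.asarray(presence_a, dtype=bool)
    b = np.asarray(presence_b, dtype=bool)
    return int(np.count_nonzero(a & b))

# Illustrative counts per taxon for two stations (not the study's data).
bay = np.array([120, 80, 60, 40, 10, 5])
estuary = np.array([900, 30, 0, 15, 0, 2])
print(shannon_bits(bay), equitability(bay))
print(harrison_beta([bay > 0, estuary > 0]), shared_taxa(bay > 0, estuary > 0))
```

High β (near 1 for two samples) indicates strong turnover between neighboring assemblages, consistent with the interpretation given above.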
Related samples were grouped by hierarchical cluster analysis by applying the group-average method to the similarity matrix. Species characteristic of each group were identified with the SIMPER procedure (Clarke and Warwick 2001). Selection of environmental variables for comparison with the phytoplankton data was made after correlation analysis to exclude highly correlated or redundant variables (e.g., ammonium and phosphate; Table 1S in the Supplement). The relationships between normalized environmental variables and phytoplankton taxa for each station were analyzed with the BEST procedure (Clarke and Warwick 2001), based on a weighted Spearman correlation between the environmental and phytoplankton abundance similarity matrices. Partition of the spatial (station) and temporal (sampling date) variance components of both the environmental and phytoplankton variables was examined with PERMANOVA+ tests on the corresponding similarity matrices (Anderson et al. 2008). All similarity-related analyses were made using PRIMER v 6 (Clarke and Gorley 2006) and PERMANOVA+ (Anderson et al. 2008).

Meteorological variability

The study area is characterized by a seasonal cycle with high values of solar irradiance and upwelling index, and low rainfall and river flow, during spring and summer (March to September; Fig. 2). In 2011 there was a relatively long rainfall period from mid-October to December, in addition to the episodic rains recorded in the previous winter and spring (January-May). Several upwelling events occurred along the year, but more persistently during spring and summer, while downwelling prevailed in winter and also in autumn. The river flow showed high values during the winter-spring period but was between 0.5 and 3 m³ s⁻¹ for most of the summer and autumn (Fig. 2c). River flow was uncorrelated with rainfall when data from the same calendar day were compared but showed a positive correlation at lags of up to 7 days, with a maximum correlation with the rainfall recorded 5 days earlier (r = 0.212, N = 365, P < 0.05). This latter correlation, along with the low flow measured during spring and summer, points to the major role of the reservoir in regulating the freshwater flow to the estuary while meeting the demands for urban freshwater and flood control during periods of heavy rain. When accumulated over periods of 15 days, values of the upwelling index were also negatively correlated with rainfall and positively with irradiance (Table 1S in the Supplement). Values of precipitation, irradiance and upwelling index were significantly correlated whether accumulated over periods of 7, 15 or 30 days (values not shown); therefore, values accumulated over the 15-day periods prior to each sampling date were used in subsequent analyses.

Variability in physical and chemical properties of water

The thermal cycle of progressive warming of surface water during spring and summer, and cooling during autumn and winter, was more pronounced in the river and reservoir compared to the bay and estuary (Fig. 3a). The seasonal variability in salinity was small compared with the large spatial variability, with a marked saline front delineating the influence of the saline waters in the estuary near the location of St. 5 (Fig. 3b). The range of salinity values observed was 3.0, 28.3 and 0.0 at stations 1 (bay), 5 (estuary) and 10 (reservoir), respectively. These ranges are considerably smaller than the range of salinity observed across the 15 km separating the bay station from the reservoir at all sampling times (salinity range >35).
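The lagged-correlation and 15-day accumulation steps described above are simple to reproduce. A minimal sketch with synthetic daily series (not the study's data; the 5-day echo is built in deliberately so the peak lag is visible):

```python
import numpy as np

def lagged_pearson(rainfall, flow, max_lag=7):
    """Pearson r between daily river flow and the rainfall recorded
    lag days earlier, for lag = 0..max_lag."""
    out = {}
    for lag in range(max_lag + 1):
        x = rainfall[: len(rainfall) - lag] if lag else rainfall
        out[lag] = np.corrcoef(x, flow[lag:])[0, 1]
    return out

def accumulate_15d(daily_values, sampling_day, window=15):
    """Accumulate a daily series over the window days prior to a sampling date."""
    return float(np.sum(daily_values[max(0, sampling_day - window):sampling_day]))

# Synthetic daily series: flow loosely echoes rainfall ~5 days later,
# mimicking reservoir routing of precipitation.
rng = np.random.default_rng(1)
rain = rng.gamma(1.5, 4.0, size=365)
flow = 0.05 * np.roll(rain, 5) + rng.normal(1.0, 0.2, size=365)
print(lagged_pearson(rain, flow))   # r should peak near lag = 5
print(accumulate_15d(flow, sampling_day=120))
```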
Similarly, all water variables mainly showed spatial gradients, while temporal (i.e., seasonal) variability was comparatively smaller (Table 2S in the Supplement). For instance, nitrate had higher concentrations in freshwater than in marine water (Fig. 3c), and phosphate displayed maximal values in the estuary (Fig. 3d). However, relative increases in nitrate concentrations in the bay, and decreases in the estuary and river waters, during summer should be noted. Phosphate concentrations were more variable near the saline front in the estuary. Silicate and ammonium concentrations (not shown) displayed variability similar to that of nitrate or phosphate, respectively, as indicated by their correlations (Table 2S in the Supplement). Maximum values of particulate organic matter concentrations were found in the estuary in spring, in the river and reservoir in late summer, and in the bay in autumn (Fig. 3e). In general, POC was significantly correlated with chlorophyll a (Table 1S in the Supplement), which always showed the highest values in the reservoir (Fig. 3f). However, concentrations exceeding 5 µg L⁻¹ indicated blooms during spring and late summer in the estuary and in the bay. In turn, dissolved organic matter was always higher in freshwater, with concentrations increasing during spring and summer and reaching maximum values during autumn (Fig. 3g, h).

Phytoplankton communities

A total of 130 phytoplankton taxa were identified (118 at least at genus level), including 63 diatoms (Bacillariophyceae), 32 dinoflagellates (Dinophyceae), 25 Chlorophyceae and other groups with less than 10 taxa each (Tables 1, 3S). Considering the entire annual cycle, the number of taxa decreased progressively from the bay (St. 1) to the reservoir (St. 10), while the Shannon index reached minimum values in the estuary (St. 5), which also showed the highest abundance (Table 1). There was an even distribution of abundance among taxa in the bay, a moderately even distribution in the reservoir, but a highly uneven one in the estuary, as indicated by the values of equitability. The differences in Shannon index were significant for all pairs of stations (Hutcheson t test: t(St. 1-5) = 470.73, t(St. 5-10) = −1810.8, t(St. 1-10) = 386.93; P < 0.001 in all cases). Unique taxa (i.e., those found only at one of the sampling stations) accounted for more than half of all taxa recorded in the bay and the reservoir, but only 35% of those found in the estuary (Table 1).

Table 1. Number of taxa of the lowest level (species whenever possible) of phytoplankton groups, accumulated abundance (×10⁷ cells mL⁻¹), H (Shannon index, bits indiv⁻¹) and equitability (bits indiv⁻¹ taxa⁻¹) observed at each station. Diversity indices were computed after integrating all samples by station. The total number of taxa and the number of taxa found exclusively in each station (unique taxa) are also given.

Most of the variations in abundance were due to cyanobacteria (Cyanophyceae), almost permanent in the reservoir but also present in the estuary and even reaching the bay (Fig. 4a). Cyanophyceae, mainly Chroococcus spp. (Table 3S in the Supplement), reached maximum abundance in February and decreased during spring and summer. The second group in abundance was composed of small (2-8 µm) flagellate monads, which increased in abundance from February to December in all zones (Fig. 4b). This group was not employed in further analysis because it was not possible to separate the autotrophic and heterotrophic organisms with the counting technique employed.
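The Hutcheson (1970) comparison used for the station-wise Shannon indices above can be sketched as follows. This is a simplified first-order implementation in bits (to match the units reported); function names and example counts are ours:

```python
import numpy as np
from scipy import stats

def hutcheson_t(counts1, counts2):
    """Hutcheson (1970) t-test comparing Shannon indices of two samples.
    Returns (t, df, p). Uses log2 so that H is in bits indiv-1."""
    def h_and_var(counts):
        n_i = np.asarray(counts, dtype=float)
        n_i = n_i[n_i > 0]
        n = n_i.sum()
        p = n_i / n
        h = -np.sum(p * np.log2(p))
        # First-order approximation of the variance of H.
        var = (np.sum(p * np.log2(p) ** 2) - h ** 2) / n
        return h, var, n
    h1, v1, n1 = h_and_var(counts1)
    h2, v2, n2 = h_and_var(counts2)
    t = (h1 - h2) / np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / n1 + v2 ** 2 / n2)  # Welch-type df
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

print(hutcheson_t([120, 80, 60, 40], [900, 30, 15, 2]))
```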
Apart from these groups, the phytoplankton communities were dominated by Bacillariophyceae (Fig. 4c), Dinophyceae (Fig. 4d) and Cryptophyceae (Fig. 4e) in all zones, and by Chlorophyceae in the stations under the influence of freshwater (Fig. 4f). The number of taxa and the Shannon index values generally increased from spring to late summer at all stations, with the highest values almost always in the bay and the lowest in the reservoir (Fig. 5a, b). There were taxa present in several zones (Fig. 5c); only a few taxa were shared between the estuary and the reservoir at any single sampling time, while the bay and the estuary showed an increase from spring to autumn in the number of taxa present in both zones. Four of the shared taxa were high-level taxa not resolved at the species level, such as Cryptophyceae or unidentified dinoflagellates and diatoms, but in all cases there were characteristic species that were found in several stations (Table 3S). For instance, F. crotonensis was identified not only in the reservoir and estuary but also in the bay. The highest number of shared species occurred when comparing the bay and the estuary stations. Conversely, β-diversity showed high values for the assemblages of St. 5 and St. 10 throughout the year, and also for those of St. 1 and St. 5 in spring (Fig. 5d), when the number of shared species was relatively low (Fig. 5c). In the latter case, the decrease of β-diversity in summer and autumn was accompanied by a sharp increase in the number of shared taxa. However, this correspondence was lower in the case of the assemblages of the end-member stations.

The taxonomic composition defined the characteristics of each sampling zone throughout the year, as shown both by the MDS (Fig. 6) and the cluster analysis (Fig. 1S in the Supplement). As observed for the environmental variables, the composition of the phytoplankton communities varied mainly with the spatial component (i.e., station), while the temporal variability was comparatively smaller (Table 2S in the Supplement). The samples from the bay (St. 1) were at all times clearly separated from the other stations (Fig. 6), and their similarity was mainly due to medium-sized Cryptophyceae, the diatom Nitzschia longissima and small dinoflagellates (Table 2). Samples from the estuary (St. 5) and the reservoir (St. 10) were also separated, but in this case there were more similarities in the composition of the communities between stations in some periods of the year. For instance, half of the samples from the reservoir clustered with either summer and autumn samples or winter and spring samples of the estuary at the 20% similarity level (Fig. 6). The main contributors to the similarity of St. 5 were Cryptophyceae, small diatoms, N. longissima, and F. crotonensis, which was also the main contributor to the similarity of St. 10 along with the Chlorophyceae Ankistrodesmus falcatus, Cryptophyceae and small dinoflagellates (Table 2). It must be noted that the contribution of most taxa to the within-station similarity was small (<1%), and that the main contributors were none of those identified above as bloom producers.

Environmental effects on the phytoplankton communities

Salinity, nitrate, and dissolved organic carbon concentration were the main variables correlated with the composition of the phytoplankton communities when all zones were considered (Table 3). In the bay, where variations in salinity were relatively low, the main variables contributing to
the correlation between environmental and taxonomic data were the concentration of phosphate and dissolved organic matter. In the estuary, salinity, phosphate, and the concentrations of humic acids and tryptophan-like substances contributed to a relatively high correlation compared to that found at other stations. In turn, nitrate, phosphate and organic matter components were the main environmental variables correlated with phytoplankton composition in the reservoir. In general, the meteorological variables showed a low correlation with phytoplankton taxa, but there was asymmetric covariation of the zonal diversity indices with the meteorology (Fig. 7; Table 4S; Fig. 2S in the Supplement). Rainfall and river flow were related with both the number of shared taxa and β-diversity by a saturation-type function, but the pattern differed between the combination of the bay and estuarine stations and that of the estuarine and reservoir stations. In the former case, there was a rapid decrease in the number of shared taxa (and conversely an increase in β-diversity) with the increase in river flow (Fig. 7a, b). Apparently, the effects on zonal diversity depend on a critical value of the flow (ca. 20 hm³ per 15 days). River flow values larger than this critical value had little effect on zonal diversity, while there were large changes at lower flow values. For instance, when the flow was lower than the critical value there were more taxa shared between the bay and the estuary than during periods of high flow (Mann-Whitney test, p < 0.05, n = 10). Conversely, there was a slight increase in the number of shared taxa (and a decrease in β-diversity) between the estuary and the reservoir when the river flow exceeded the critical value. Similar patterns applied to the accumulated rainfall, but with lower confidence than for the river flow (Fig. 7c, d; Table 4S). In contrast, upwelling did not show the described saturating response, and its covariation with the zonal diversity indices was less clear (Fig. 7e, f; Table 4S and Fig. 2S in the Supplement). The only significant effect of upwelling was a linear and negative effect on the number of taxa shared between the estuary and the reservoir (Fig. 7e). The combination of river flow and upwelling conditions thus affected the number of phytoplankton taxa found in nearby zones (PERMANOVA+ test, Table 5S in the Supplement). As summarized in Fig. 8, the number of taxa shared between the bay and the estuary peaked in periods of upwelling and low river flow (<20 hm³ in 15 days), when the number of estuarine taxa also reached a maximum. In turn, the maximum number of taxa shared between the estuary and the reservoir was found in downwelling conditions and during periods of high river flow (≥20 hm³ in 15 days). For all combinations of upwelling and river flow, the number of taxa found only in the estuary was higher than the number of taxa shared with the other zones.

Unique vs. imported species in the estuary

The results of this study align with the current paradigm of salinity as a primary environmental driver of estuarine communities, taking into account also the interactions with hydromorphology (Elliot and Whitfield 2011). Tolerance to salinity variations, rather than tolerance to a specific salinity, is thus the principal environmental factor regulating the distribution of estuarine organisms, including phytoplankton.
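As a side note on the saturation-type response of zonal diversity to river flow reported above: the functional form fitted in the study is not specified here, but a Michaelis-Menten-type curve is one plausible choice for such a levelling-off pattern. The sketch below is purely illustrative, with made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(flow, s_max, k):
    """Michaelis-Menten-type saturation: the response rises steeply at low
    flow and levels off beyond a characteristic flow value k."""
    return s_max * flow / (k + flow)

# Hypothetical accumulated river flow (hm3 per 15 days) vs. beta-diversity.
flow = np.array([2, 5, 8, 12, 18, 25, 40, 60], dtype=float)
beta = np.array([0.05, 0.12, 0.20, 0.26, 0.31, 0.33, 0.35, 0.34])

params, _ = curve_fit(saturation, flow, beta, p0=[0.4, 10.0])
print("fitted s_max=%.2f, k=%.1f hm3 per 15 days" % tuple(params))
```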
Taxonomic composition and the phylogenetic relatedness of estuarine phytoplankton communities are strongly correlated with salinity gradients (Olli et al. 2015). Because of the large range of salinity variation, only a subset of species is well adapted to the salinity range experienced at a given location (e.g., Balzano et al. 2011). As found in the present study, minimum values of local diversity (in the Shannon index and equitability, but not in species richness) occurred in the estuarine zone characterized by the strongest salinity gradient. Other studies in estuaries, however, found no local minima of phytoplankton diversity near the salinity front but reported increases in zonal diversity that were attributed to the contribution of allochthonous taxa from nearby locations (Muylaert et al. 2009). In our study, some of the taxa imported to the estuary from the sea and the river may eventually not have survived the salinity changes, but they still form a pool of taxa that increases species richness. However, these rare taxa will have a low impact on Shannon index values because this index weights taxa proportionally to their abundance. While our results also indicate a large number of taxa shared between the estuary and the other zones, almost a third of the total number were unique to the estuary, which can also be considered an ecotone because of the abrupt change in salinity conditions. Previous studies in the Galician rias have stressed the major role of upwelling-downwelling cycles on the composition of phytoplankton assemblages by means of changes in water column stratification and its influence on nutrient availability (e.g., Figueiras and Pazos 1991; Nogueira et al. 2000), but most studies did not include samples in the zone of maximum salinity gradient nor in the contributing rivers. Only a few studies reported the presence of freshwater species, generally associated with runoff in periods of intense rainfall (Varela et al. 2001, 2004). The inclusion of communities from the end-member zones of the salinity gradient in this study allows a first analysis of the importance of the connectivity of these potential sources of taxa in determining the composition of phytoplankton assemblages in the estuary. The taxa found in this study were already described in previous studies in marine (e.g., Casas et al. 1999; Nogueira et al. 2000; Varela et al. 2001, 2005; Varela and Prego 2003) and freshwater habitats (e.g., Vasconcelos and Cerqueira 2001; Negro et al. 2000) in this region. The most characteristic taxa in all zones were diatoms, along with an increasing importance of Cryptophyceae and Chlorophyceae in the zones with the highest freshwater influence, as reported for other estuaries (Cloern and Dufford 2005; Ferreira et al. 2005; Muylaert et al. 2009; Carstensen et al. 2015; Sin and Jeong 2015). Diatom dominance is expected when high nutrient supply and turbulent conditions prevail, but also in areas of siliceous rocks such as Galicia, where they are the main bloom-forming species (Varela and Prego 2003; Varela et al. 2004, 2005). More interesting is the importance of Cryptophyceae as characterizing taxa for the different communities. Even though species identification from morphological details is generally not achieved, Cryptophyceae have been used as indicators of major changes in estuarine communities (Seoane et al. 2012; Brito et al. 2014; Šupraha et al.
2014; Sin and Jeong 2015), and molecular studies further illustrate the relevance of this taxon for the analysis of changes in phytoplankton diversity (Bazin et al. 2014). The increasing presence of Cryptophyceae in estuaries and bays has been related to the warming (Brito et al. 2014) and eutrophication of waters, mainly due to increasing loads of phosphate (Šupraha et al. 2014; Sin and Jeong 2015) and nitrogen (Brito et al. 2014). In addition, the presence of high numbers of Cyanophyceae in the estuary during periods of high river flow is reported here for the first time for a Galician ria, while it seems a common feature of estuaries receiving much larger river inputs (Galvão et al. 2008). The abundance of taxa from Cryptophyceae and Cyanophyceae found in our study challenges the expected dominance of diatoms in the estuary and bay areas and suggests a potential eutrophication of these areas due to river inputs. Salinity was thus the major environmental factor affecting phytoplankton community distribution, through the selection of species characteristic of each salinity domain. Nitrate was the main nutrient correlated with changes in phytoplankton assemblages through the salinity gradient, as expected not only from the influence of the upwelling in the marine domain (Figueiras and Pazos 1991; Varela et al. 2001, 2004, 2005; Varela and Prego 2003) but also in the reservoir. Cyanobacterial blooms during summer and autumn are a common feature of most reservoirs in the region and can be related, at least in part, to nitrogen and phosphorus availability (Galvão et al. 2008). In other estuaries, cyanobacteria (including picocyanobacteria) reach peak biomass levels during summertime, when the temperature is maximal and there is an increase in inorganic phosphorus released from the sediments (Gaulke et al. 2010). However, we found unusual patterns of cyanobacterial abundance in the reservoir, with maximal abundances in winter, which affected the estuarine zone. These winter blooms could be induced by the accumulation of phosphate in the previous autumn, as suggested by the increase in phosphate observed during the study in the reservoir and the estuary. The co-occurrence of minimum phosphate concentrations and maximum abundance of cyanobacteria also supports this hypothesis. In addition, phosphate concentration was correlated with the phytoplankton assemblages in the estuary (Table 3), where local maximum concentrations in late summer and autumn may be related to point sources (González 1975); otherwise (as for silicate), its correlation was not particularly high in the other zones. Because of the high concentrations of nutrients found along the entire salinity gradient (up to 160 µM for nitrate or silicate and 10 µM for phosphate) compared with those observed in typical coastal waters (e.g., Varela and Prego 2003), nutrient limitation does not appear to be a direct factor influencing phytoplankton composition in our study area, or at least it has less influence than the hydrological fluxes. In contrast, other studies attributed a major role to the alteration of nutrient ratios as one of the main causes of change in the taxonomic composition of phytoplankton assemblages in estuaries influenced by reservoirs (e.g., Galvão et al. 2008) and in rias influenced by upwelling (Figueiras and Pazos 1991).
Seasonal accumulations of organic matter have been described in the region as the result of the biogeochemical processing of phytoplankton blooms occurring in spring and summer, both in the rias and coastal waters (Bode et al. 2005; Doval et al. 2016) and in reservoirs.

Hydrological effects on diversity patterns

Notwithstanding the existence of locally adapted species, water fluxes strongly influenced the connectedness of phytoplankton assemblages in the estuary. The different measurements of diversity provide complementary descriptors of the communities. While the number of taxa shared between zones is one of the most intuitive and explored measures (Koleff et al. 2003), it only records the continuity in taxonomic composition and does not take into account the turnover of species, i.e., the gains and losses between zones. In contrast, the β-diversity index selected in this study (Harrison et al. 1992) provides a measure of turnover because it measures the proportional change in richness rather than absolute changes in the species pool (total richness). Taxonomic turnover (and β-diversity values) between two zones is high when the fraction of shared taxa is low and the relative gains and losses are similar (Koleff et al. 2003). This is observed in this study as the negative correlation between shared taxa and β-diversity values. Other measures of zonal diversity rely on the rate of change of taxa with the distance between zones, as shown by Muylaert et al. (2009) for estuarine phytoplankton. In this case, maximum values of zonal diversity were found in zones with frequent inputs of species from other domains (thus focusing on species gains). Our results also agree with the prediction of a decrease in zonal diversity (β-diversity) with increasing connectivity between zones differing in taxonomic composition (Sin and Jeong 2015). Connectedness of the studied phytoplankton communities was affected by the combination of fluxes driven by upwelling and river dynamics (Fig. 8). This result agrees with the effects of habitat connectivity on estuarine phytoplankton in regions affected by coastal upwelling, although described at larger time and space scales (Cloern and Jassby 2010). In contrast, phytoplankton in other estuaries was only affected by river flow and tides. Tidal mixing can be the main driver of taxa distribution in the water column, superimposing its effects on those of the salinity gradient (Brito et al. 2014), while freshwater inflows greatly modify phytoplankton diversity (Muylaert et al. 2009; Bazin et al. 2014; Brito et al. 2014; Dorado et al. 2015; Sin and Jeong 2015) by altering the residence time of water and species in each zone (Ferreira et al. 2005). Our study points to a new role of upwelling in determining the composition of phytoplankton assemblages in the Galician rias. Upwelling-downwelling cycles increase connectivity between the estuary and both the bay and the river, whereas reservoir discharges only increase connectivity with the river. Maximum connectivity is expected when salinity gradients are maintained in the estuary during periods of downwelling and neap tides. All these hydrological processes displace biological populations and their functions through the whole salinity gradient, transcending the value of individual habitat types, as shown for phytoplankton productivity in estuaries (Lopez et al. 2006).
The hydrological drivers facilitate the survival of the estuarine assemblages even when local or regional conditions change, by ensuring a continuous supply of species from source zones (Aiken and Navarrete 2011). However, high water fluxes and mixing do not immediately favor local and zonal adaptation, as shown for estuaries receiving freshwater flushes from reservoirs (Ferreira et al. 2005; Galvão et al. 2008; Sin and Jeong 2015) or rivers (Brito et al. 2014; Dorado et al. 2015), as adaptation is maximized at intermediate levels of connectedness (Leibold and Norberg 2004). In this context, estuaries receiving different hydrological influences, such as the one studied here, are model systems for analyzing the response of ecosystems to multiple drivers. The connectedness of such systems has major implications for management. For instance, the results of this and previous studies indicate that the regulation of freshwater discharges by reservoirs greatly affects phytoplankton assemblages along the entire salinity gradient, even with delayed effects after the peak in the discharge (Sin and Jeong 2015). In addition, upwelling affects the transport of cells and nutrients but also interacts with local drivers. One example is the variability in the use of anthropogenic versus marine nutrients caused by the confinement of phytoplankton in the estuary by upwelling (Cloern and Jassby 2010). Local food webs can also be affected, as filter-feeders may be able to control the increase of phytoplankton populations despite the availability of nutrients (Lucas et al. 2016). While upwelling and runoff are largely regulated by climatic factors operating at the regional scale that are difficult to overcome, the management of local drivers, such as freshwater fluxes, must maintain moderate levels of connectivity to maximize phytoplankton diversity at the regional scale.

Challenges for future monitoring

A complete assessment of phytoplankton diversity is a major challenge. The classical morphological identification of species is limited to relatively large organisms (generally >10 µm) because of the limitations of microscopy techniques and the lack of sufficient external differences among the flagellated forms (e.g., Casas et al. 1999; Varela et al. 2001). Molecular techniques allow much greater taxonomic detail but have rarely been implemented in field studies (Bazin et al. 2014). In addition, sample size is a limitation when recording rare species (e.g., Rodriguez-Ramos et al. 2014). While these limitations can be overcome in part by statistical approaches, such as the use of rarefaction methods (Magurran 2004), the consideration of species traits in addition to simple records of abundance or biomass will reveal different ecological strategies among major phytoplankton taxa and their responses to environmental changes, as shown for cell size (Segura et al. 2013). Monitoring the resilience of estuarine phytoplankton must include all zones connected by the local and regional hydrology, as the intensity of the taxonomic fluxes is largely determined by the fluxes of water.

Conclusions

The phytoplankton assemblages in the transitional waters system of A Coruña are the result of the seasonally variable influence of marine and freshwater components driven by the relative strength of upwelling and river flow discharges. However, the moderate connectedness of local assemblages allows the persistence of unique taxa at local scales.
Consequently, local and zonal diversity patterns vary seasonally and are not simply related to the salinity gradient driven by the river flow, as found in other estuaries. These results suggest that alteration of the hydrological regime, whether by influencing freshwater discharges, rainfall or upwelling dynamics, would modify the connectedness of phytoplankton communities in transitional waters affected by these drivers. The final effect on species diversity and composition would depend on the resilience of the assemblages, implying the analysis of phytoplankton diversity at increasing spatial and temporal scales.
Promoting appetitive learning of consensual, empowered vulnerability: a contextual behavioral conceptualization of intimacy

Louisiana Contextual Science Research Group

Vulnerability is emphasized in a number of theoretical models of intimacy (e.g., Intimacy Process Model), including from behavioral and contextual behavioral perspectives. Vulnerability is generally defined as susceptibility to harm and involves behaviors that have been historically met with aversive social consequences. From these perspectives, intimacy is fostered when vulnerable behavior is met with reinforcement. For example, interventions have trained intimacy by building skills in emotional expression and responsiveness, with promising results. Vulnerability has divergent functions, however, depending on the interpersonal context in which it occurs. Functional intimacy is explored through the lens of functional relations, which play a key role in interpersonal processes of power, privilege, and consent. This conceptualization suggests that vulnerability must be under appetitive functional relations, consensual, and empowered for safe intimacy to emerge. The responsibility to promote appetitive learning of consensual, empowered vulnerability to foster intimacy falls to the person with more power in a particular interaction and relationship. Recommendations are offered for guiding this process.

KEYWORDS: intimacy, vulnerability, consent, power, well-being, appetitive, context, behavioral

Introduction

Intimacy has long been considered a fundamental aspect of human well-being and development (e.g., Erikson, 1950, 1963), and remains a key social factor in modern scientific explorations of well-being. In children, friendship intimacy buffers the relationship between symptoms of attention deficit hyperactivity disorder (ADHD) and social problems such as rejection by peers, emotional regulation, and social reciprocity (Becker et al., 2013). Naturally occurring increases in physical intimacy predict concurrent and subsequent decreases in somatic symptoms for people in romantic relationships (Stadler et al., 2012).
Intimacy also mediates the positive effects of decreased loneliness and increased happiness associated with social media use (Pittman, 2018). At the societal level, overall experiences of intimacy attenuate the impact of negative outgroup experiences on attitudes toward that outgroup (Graf et al., 2020). In short, intimacy is considered a hallmark of both relational and personal well-being, despite the homogeneity of the sample populations in the research (Williamson et al., 2022). The English words "intimacy" and "intimate" are derived from the Latin roots intimus (innermost) and intimare (to make the innermost known; Partridge, 2006). By literal definition, intimacy is "the state of being intimate; something of a personal or private nature" (Merriam-Webster, n.d.). In the behavioral sciences, several conceptual models of intimacy have emerged (e.g., Waring, 1985; Reis and Shaver, 1988; Wilhelm and Parker, 1988; Register and Henley, 1992; Prager, 1997; Gaia, 2002), each of which varies slightly on common themes. These models converge on defining intimacy as dynamic, contextually bound (see Gaia, 2002), and involving the disclosure of thoughts, feelings, and personal information with reciprocal trust and emotional closeness (see Timmerman, 2009). In other words, historical accounts of intimacy emphasize a dynamic interpersonal process of reciprocal vulnerability. The role of reciprocal vulnerability is seen explicitly in behavioral and contextual behavioral models of intimacy, which emphasize intimacy as the product of interactions in which vulnerable behaviors are reinforced by one's partner's responsiveness (Cordova and Scott, 2001). Likewise, a contextual behavioral reformulation of the Interpersonal Process Model (IPM; Reis and Shaver, 1988) posits the evolution of intimate relating as involving vulnerability being met with reinforcing responsiveness, thereby increasing the likelihood of vulnerable behaviors being emitted in the future. Thus, intimacy emerges when an interaction evokes and reinforces bidirectional vulnerability.

Vulnerability

The English word "vulnerability" is derived from the Latin roots vulnus (wound) and habilitatem (ability or capacity; Partridge, 2006). Defined literally, to be vulnerable is to engage in behavior that results in an increased capability "of being physically or emotionally wounded" (Merriam-Webster, n.d.). In other words, vulnerability colloquially involves socially risky behavior. Research on vulnerability typically revolves around describing populations that are at risk of being taken advantage of (e.g., Msall et al., 1998) and considering individual differences in emotional responding (e.g., Timmers et al., 2003). Vulnerability is increasingly being explored, however, as an aspect of well-being rather than a threat. For example, social worker, speaker, and author Brown (2013) stated that vulnerability entails "uncertainty, risk, and emotional exposure," and is understood as necessary for personal growth and well-being. Recognizing and accepting personal vulnerability, or an "openness to attack," is seen as a critical aspect of shame resilience (Brown, 2006). A similar definition of emotional vulnerability, as an "aversive state" of openness to feeling hurt or rejected, can be found in Vogel et al. (2003). Vulnerability particular to the relational context has been observed as fear of abandonment emerging in some relationships but not others (e.g., with real or potential threats of rejection; Fowler and Dillow, 2011).
In this way, a person's experience of vulnerability may change as a function of the relational context(s) that are present (Jordan, 2008). Behaviorally, vulnerable behaviors are those that have historically been punished in social situations (Cordova and Scott, 2001). According to this perspective, which behaviors, topographically speaking, are vulnerable (i.e., which behaviors have been punished) varies between individual learning histories interacting with cultural norms. Extending from a behavioral to a contextual behavioral perspective, Kanter et al. (2020) further characterize this class of previously interpersonally punished behaviors as including self-disclosure, emotional expressiveness, and emotional responsiveness. In other words, contextual behavioral explorations of vulnerability consider the effects of sharing personal information, communicating emotional state, and shifting verbal and affective communication to respond to another's emotional state. This line of research positions vulnerability as a feature of relational closeness (Aron et al., 1997), emotional regulation (Panayiotou et al., 2019), relational aggression (Shea and Coyne, 2017), anxiety sensitivity associated with posttraumatic stress disorder (Bardeen et al., 2015), and more. The centrality of vulnerability to important outcomes has further supported its role in interventions designed to directly train intimacy (e.g., see Kanter et al., 2020).

Vulnerability-based intimacy interventions

Interventions have been developed to improve intimacy, but traditionally with a fairly narrow scope. Specifically, most have targeted persons in romantic relationships (see Kardan-Souraki et al., 2016, for a review of interventions to increase marital intimacy). Contextual behavioral interventions designed to promote intimacy (i.e., Functional Analytic Psychotherapy; FAP) have aimed for a broader scope. FAP involves directly training functionally vulnerable interactions, in which emotional expressions (i.e., emotional expressiveness combined with self-disclosure and invitations to self-disclose) evoke and reinforce emotional responsivity, and vice versa. In this way, contextual behavioral interventions for building intimacy emphasize interlocking behavioral contingencies (IBCs; Glenn, 2004), in which one person's behavior is functionally related to (i.e., sets the context for) another person's behavior. These interventions also allow for consideration of cultural norms in terms of metacontingencies (Glenn, 2004), or the aspects of context that select for particular IBCs across groups. Functional Analytic Psychotherapy (FAP; Kohlenberg and Tsai, 1991; Holman et al., 2017; Tsai et al., 2019) is a talk therapy approach wherein therapists address a client's presenting problems by intervening on the client's in-session clinically relevant behaviors (CRBs) to enhance the client's intimate relationships. Put another way, therapists working from a FAP perspective work to evoke and reinforce vulnerable interactions with their clients (CRB2s) as alternatives to the behaviors contributing to their difficulties (CRB1s). Systematic reviews investigating the effectiveness of FAP (e.g., Kanter et al., 2017; Singh and O'Brien, 2018) call for additional research with improved rigor, but emphasize that techniques and identified mechanisms of change (i.e., shifts in CRBs) are well supported when considering the therapist-as-social-reinforcer functions of FAP.
FAP has been proposed as particularly appropriate for establishing a therapeutic relationship in contexts where clients are likely to have punishing interpersonal histories, making these clients inherently more vulnerable (e.g., racially diverse client-therapist dyads, Miller et al., 2015; people struggling with gender and sexual minority stress, Skinta et al., 2018; transcultural or culturally sensitive services, Vandenberghe, 2008; Vandenberghe et al., 2010). FAP has also been extended beyond the psychotherapy context to training emotional rapport and responsiveness in ways that significantly improve medical doctors' interactions with Black patients. Finally, FAP has been applied in groups to promote intimacy (i.e., connectedness) in college students across racial differences (Kanter et al., 2019) with promising results, particularly for white participants (Williams et al., 2020). One topic of particular importance in vulnerability-based intimacy interventions, especially as they are extended to benefit inherently vulnerable interactions outside of the therapy context, is safety. Kanter et al. (2020) describe safety as foundational to emotional responsiveness. The authors described promoting safety functionally as "engaging in non-verbal and verbal responses that decrease a speaker's perceptions of threat and emotional arousal when engaged in non-verbal vulnerable emotional expressions" (p. 79). Kanter et al. (2020) further specify three formal categories of safety-providing responses: (1) synchronized emotional expressiveness, (2) indicators of interest, care, and affiliative intent, and (3) reciprocal vulnerable self-disclosures. This model acknowledges that these responses are "functionally complex" (i.e., have multiple functions), but that safety functions are imperative. Whether such "safety-providing responses" function to decrease threat and nervous system activation to foster intimacy may require further conceptualization of the range of complex functions vulnerability can take on.

Re-considering functions of vulnerability

The vulnerability of behavior in a particular context has been functionally defined using its historical consequences (i.e., previously interpersonally punished; Cordova and Scott, 2001). Similarly, the intimacy of an interaction in a particular context is functionally defined in terms of both historical and immediate consequences (i.e., previously interpersonally punished, currently interpersonally reinforced; Cordova and Scott, 2001). No distinction has been made, however, between the overarching effects of different types of immediate reinforcement (i.e., positive or negative reinforcement) and the corresponding antecedents (i.e., motivating operations and discriminative stimuli) involved in the IBCs that comprise a vulnerable interaction. In particular, it may be that vulnerability can emerge in appetitive or aversive functional relationships with a context, a distinction having important practical implications for facilitating intimacy in applied contexts.

Aversive vs. appetitive functional relations

Punishment and negative reinforcement both involve behavior interacting with aversive events, or situations that the organism will work to avoid or escape (see Hineline, 1984; Hineline and Rosales-Ruiz, 2013).
Punishment is a process in which a behavior decreases in probability or frequency due to contact with aversive contexts, and negative reinforcement is a process in which behavior increases due to decreased contact with aversive contexts. In other words, punishment and negative reinforcement contingencies can be collectively described as involving aversive control, or, more broadly speaking, aversive functional relations between behavior and context. Aversive functional relations are characterized by a narrowing of the entire contingency, or the field of factors comprising the interaction between behavior and context [e.g., conditioned suppression; Lyon (1968)]. Aversive functional relations thus involve a narrowing of context, where the stimuli available and accessible (i.e., to serve eliciting, evocative, discriminative, and/or consequential functions) are limited to aversive events and events that predict their reduction or absence. Aversive functional relations also involve a narrowing of behavior, where the available repertoire is limited to those operant behaviors involved in escape or avoidance and the co-occurring elicited subtle behaviors (e.g., Lovibond, 1970). The relative constriction of ongoing aversive functional relations between context and behavior results in an insensitivity to shifts in context (Ramnerö et al., 2015), thereby making aversive functions particularly persistent (e.g., Hoffman et al., 1966). The cumulative effect of aversive learning is increased sensitivity to aversive contexts and, in turn, an increasingly narrow and rigid repertoire (Hineline, 1984; Ramnerö et al., 2015). Positive reinforcement, on the other hand, involves behavior interacting with appetitive events, or those that the organism will work to access. Indeed, positive reinforcement is a process in which a behavior increases in probability or frequency due to resulting increased contact with appetitive contexts. As such, positive reinforcement contingencies can be described as involving appetitive control, or, more broadly speaking, appetitive functional relations between behavior and the contexts, antecedent and consequential, in which that behavior occurs. Appetitive functional relations are characterized by a broadening of the entire contingency, or the field of factors comprising the interaction between behavior and context (Wilson and DuFrene, 2009). Appetitive functional relations thus involve a broadening of context, where the stimuli available and accessible to serve eliciting, evocative, discriminative, and/or consequential functions are expansive and flexible. Access to a broader range of events that may function as context comes with a broader range of accessible behaviors, including operant behaviors generally involving seeking, exploring, and engaging, and the co-occurring elicited subtle behaviors. The relative breadth and flexibility of ongoing appetitive functional relations between context and behavior results in sensitivity to shifts in context (Skinner, 1958). In this way, appetitive functional relations are associated with increased degrees of freedom (i.e., alternative accessible behaviors; Goldiamond, 1975, 1976) and the subjective experience of choice. In contrast with aversive functional relationships, the cumulative effect of appetitive learning is increased sensitivity to appetitive contexts and, in turn, an increasingly broad and flexible repertoire (Louisiana Contextual Science Research Group, 2022).
Intimacy involves vulnerability under appetitive functional relations

Vulnerability is central to intimacy, but it may not be a sufficient condition for intimacy to emerge. Instead, the current conceptualization suggests that intimacy requires that vulnerable behaviors (i.e., self-disclosure, emotional expressiveness, and emotional responsiveness), despite a history of being met with aversive consequences, emerge under appetitive functional relations with the context. Appetitive functional relations are observable in both operant forms of vulnerability (where behaviors are shaped by a broad range of appetitive consequences and the evocative and discriminative contexts associated with them) and respondent forms of vulnerability (where emotions and their neurological correlates naturally and easily co-vary with the changing interpersonal context). Consequently, the vulnerability repertoire that contributes to intimacy emerges as broad, flexible, and sensitive to expansive appetitive learning experiences and continual adaptation to new interpersonal connections. Unfortunately, not all contexts that foster vulnerability are appetitive. The present conceptualization of vulnerability also suggests that self-disclosure, emotional expressiveness, and emotional responsiveness can emerge in aversive functional relations. In fact, because vulnerable behaviors have, by definition, been historically met with aversive consequences, contexts where vulnerability is available (i.e., situations that are emotionally evocative) necessarily have some aversive functions. Kanter et al. (2020) note the salience of aversives in vulnerable interactions in their discussion of safety, emphasizing that safety-providing behaviors reduce threat and nervous system activation. This conceptualization suggests the importance of safety (i.e., the reduction of threat and activation) being offered as an antecedent for vulnerable behavior, rather than as a consequence. To the extent that vulnerability is consequated with reduced contact with aversives (i.e., via negative reinforcement), vulnerability becomes more probable, but the functional relations at play are aversive. Aversive functional relations are observable in both operant aspects of vulnerability (where behaviors are shaped by a narrow range of aversive consequences and their antecedent evocative and discriminative contexts) and respondent aspects of vulnerability (where emotions and their neurological correlates diverge). Thus, the vulnerability repertoire that prevents intimacy emerges as narrow, rigid, insensitive to learning experiences outside of those that foster quicker or more effective avoidance, and overgeneralized to any emotionally evocative interpersonal situation. Certain contexts may include aversive functional relations that call for vulnerability but vary in the extent to which appetitive antecedents and consequences promote intimacy and subsequent well-being. For example, a student may recognize the need for accommodations in a course taught by a new professor, which would require an uncomfortable disclosure of their medical or psychological history. If the professor has not made explicit what accommodations may be available, how they can be accessed, or how they influence learning, the student may be forced to either initiate a vulnerable exchange without the safety of intimacy or simply proceed without the needed accommodations.
Conversely, the professor could pre-emptively describe certain easily accessible accommodations as part of the learning environment, with clear instructions on how to access them, how to know when they are needed, and how learning outcomes might be impacted. In doing so, the context, despite having some aversive aspects for some inherently vulnerable students, is now better organized to foster appetitive functional relations with the vulnerable behavior involved in accessing needed accommodations. This allows not only for the appetitives available in the intimate exchange, but also access to the broader appetitives available in the course.

Intimacy involves vulnerability with consent

Considerations of functional relations in terms of their appetitiveness and aversiveness bring to bear a behavioral conceptualization of freedom vs. coercion. Skinner (1971) stated that freedom was defined by (1) the absence of aversive control via negative reinforcement or punishment, and (2) the absence of control via immediate positive reinforcement with deferred long-term aversive consequences. Freedom has also been related to the possibility or availability of choice, either choice among response options (Baum, 2017) or choice among alternative conditions (Catania, 1980). Similarly, coercion has been defined as control mediated by threats of punishment (Sidman, 1989, 1993), limited availability of choices (Goldiamond, 1975, 1976; Catania, 1980), and reduced access to the resources needed to generate responses (Goltz, 2020). Said functionally, appetitive functional relations are associated with genuine choices and more degrees of freedom (Goldiamond, 1975, 1976): the greater the sensitivity to various contexts (antecedents and consequences), the greater the range of alternative accessible behaviors, and the greater the freedom associated with the behavioral repertoire. Likewise, aversive functional relations are associated with limited options and greater degrees of coercion (Goldiamond, 1976). According to this conceptualization, contexts that foster vulnerability will only foster intimacy to the extent that they maximize degrees of freedom and minimize degrees of coercion. Kanter et al. (2020) approach this issue by specifying asking-giving relations as part of their model of intimacy. In this model, asking involves requests by the speaker for relational and/or non-relational needs to be met, and giving involves responding to the specific needs of the speaker by the listener. The authors discuss the risks inherent in the asking-giving interaction both for the speaker engaging in a vulnerable disclosure and for the listener accurately and empathically responding with emotional validation for such disclosures. For example, people asking may fear that their expression will result in conflict, rejection, or threats to their autonomy. This heightens the aversive functional relations involved in their vulnerable behavior. Furthermore, individuals giving may respond to the speaker's requests inaccurately, insufficiently, or excessively. Therefore, asking behaviors may function aversively for the speaker. In line with the current conceptualization, the more that aversive functions dominate asking and giving at the individual level, the more likely they are to dominate the IBCs involved in the interaction. The asking-giving exchange can be extended functionally by considering the negotiation of consent between interacting individuals.
Consent is a complex interpersonal phenomenon with ethical implications in a range of contexts (Miller and Wertheimer, 2010). Affirmative consent, involving asking for and earning enthusiastic approval for an interaction, was first introduced in the context of sexual interactions (see Mettler, 2018) and is increasingly applied in functionally similar interactions (e.g., online interactions on social media; Im et al., 2021). Behaviorally, affirmative consent is an appetitive functional response class that (1) allows the interacting people to tact (i.e., a verbal response evoked by an event or aspect of an event; Skinner, 1957) appetitive contingencies for themselves and each other, (2) allows the interacting people to mand (i.e., a verbal response reinforced by a characteristic consequence associated with setting events; Skinner, 1957) for others to do the same, and (3) expands the degrees of freedom for the interacting behavioral repertoires with an ongoing availability of genuine choices that are responsive to shifting contingencies (Louisiana Contextual Science Research Group, 2021). In extension, this conceptualization would suggest that vulnerability fosters intimacy not only to the extent that that vulnerability is under appetitive functional relations, but also to the extent that an affirmative consent process has taken place. In other words, to foster intimacy, asking and giving should involve the naming of and responding to needs (1) under appetitive functional relations, and (2) with specification of not only the aversive but also the appetitive contingencies involved in those needs. Both requesting and providing ongoing consent mitigate some of the risks of contacting aversives for all persons in the interaction and increase emotional closeness, as consent signals shared values around safety and well-being, both of which are necessary to foster intimacy. Certain contexts may include aversive aspects that call for vulnerability but vary in the extent to which they foster affirmative consent. For example, two queer therapists (A and B) are having lunch in their practice's kitchen when one (Therapist A) brings up the topic of discrimination at their practice and in the profession broadly. Therapist A speaks with great emotion about their past experiences and fears about taking on queer trainees. They also offer to listen and to provide support if Therapist B has similar experiences to share. An affirmative consent process is likely to begin if Therapist A not only tacts the aversive contingencies involved in their present vulnerability (e.g., "I'm feeling really upset by an unpleasant interaction I had with the boss, particularly considering the pressure to hire more queer trainees next term!") but also (1) tacts the appetitive contingencies (antecedents and consequences) present in this context (e.g., "I'd really like to share what happened and how I'm carrying it. I think I'm looking for a sort of gut check.") and (2) effectively mands for Therapist B to do the same (e.g., "How are you hearing all this? Do you have the space to listen? Do you have something you'd like to use lunch today for instead?"). The consent process continues to the extent that Therapist B is able to offer the same tact-mand combination (e.g., "Whoa. I wasn't actually prepared for all that. And I do not know that I've thought about my experiences through the lens that you are asking for.
I think I'd like more time to process what you have shared already before we go any further. I'd love to schedule a time to revisit this when I'm not hungry and stressed. Could I also help brainstorm some other ways you could get some support around this? Does that feel ok?"). Such affirmative consent interactions might be even more important when vulnerability is being invited in relationships with apparent disparities in power, such as in challenging training activities, therapy exercises, or employee feedback sessions. Intimacy involves vulnerability with empowerment Relative aspects of interacting repertoires with respect to the availability and accessibility of appetitives may contribute to the likelihood of vulnerability being (1) under appetitive functional relations, and (2) functionally consensual, both of which may be necessary for fostering functional intimacy. Maximizing appetitive functional relations involved in IBCs necessarily involves addressing and mitigating barriers in access to appetitives, and thus addressing and responding effectively to privilege and power. Privilege A feminist understanding of privilege as an "unearned advantage... [and] ...conferred dominance" (McIntosh, 1988, p. 1) has enabled a prior contextual behavioral conceptualization of privilege as differential access to important reinforcers (Terry et al., 2010). A similar behavioral conceptualization expands upon this idea, describing privilege as a dynamic ratio of appetitives to aversives accessible in any given context (Louisiana Contextual Science Research Group, 2022). In this way, disparities in privilege can be understood in terms of relative access to appetitives proportional to aversives, both in one's learning history and as brought to bear in the immediate context. Thus, the repertoire of a person with more relative privilege is broader, more flexible, more sensitive to appetitives, and more likely to enter appetitive functional relations with the context. In contrast, the repertoire of a person with less relative privilege is narrower, more rigid, more sensitive to aversives, and more likely to enter aversive functional relations with the context. For example, a Black woman serving as the dean of a college may experience microaggressions and tone-policing based on gendered and racial stereotypes (e.g., "angry Black women;" Walley-Jean, 2009) when delivering a call-to-action to a predominantly white faculty body following a publicized occurrence of police brutality and systemic racism. Despite her leadership position as the dean and the appetitives that that position makes available, a learning history involving intersecting dimensions of racism, sexism, and misogyny brings aversives to bear in the current context, including speaking in group meetings, crafting written statements, and even processing her personal emotional reaction to the tragedy. The same gendered and racialized stereotypes contribute to disparate performance evaluations and leadership assessments (see Motro et al., 2022), serving to further the aversive contextual functions that contribute to her lack of privilege in this context. Power Power is a central theme in feminist theory, defined in a number of ways, including as a resource, as domination of others (i.e., "power-over"), and as empowerment to foster change (i.e., "power-to"; Allen, 2005). Contextual behavioral conceptualizations of power have also varied along similar themes.
For example, Baum (2005) defined power as "the control that each party in a relationship exerts over the other's behavior" (p. 235). This access to control remains central to other proposed definitions of power (Guerin, 1994; Biglan, 1995). It has also been specified that this access to control is exerted relationally via control over a relatively greater number of significant reinforcers (Terry et al., 2010). Consistent with the contextual perspective on privilege, power has been conceptualized as the degrees of freedom afforded by access to appetitives and the resulting expansive repertoire (Louisiana Contextual Science Research Group, 2022). In this way, disparities in power can be understood in terms of relative degrees of freedom fostered by one's relative privilege. More power involves greater degrees of freedom fostered by greater privilege and the associated ease of access to appetitives relative to aversives. Less power, on the other hand, involves fewer degrees of freedom fostered by less privilege and the associated dominance of aversives relative to a scarcity of available and accessible appetitives. Thus, power is contextually-bound: some contexts may function as empowering (i.e., fostering greater degrees of freedom via improved access to appetitives and buffering the impact of aversives) and others as disempowering (i.e., fostering reduced degrees of freedom via increased salience of aversives and reduced access to appetitives). For example, a gender-marginalized faculty member working in a graduate school is at increased risk of interpersonal threats, ranging from microaggressions to overt harassment, and self-advocacy in these contexts may adversely impact their work experience and career trajectory (see Blithe and Elliott, 2020). Here, the graduate school could be described as a disempowering context for that faculty member. Applied to interpersonal interactions, this conceptualization suggests that privilege and power are not static, finite resources allocated in an interaction according to persistent identities. Instead, privilege and power are dynamic, contextually-bound functional aspects of the stimulating context and the current repertoire, respectively. IBCs need not function in such a way as to empower one person (i.e., increasing access to appetitives and increasing degrees of freedom) by disempowering the other (i.e., increasing access to aversives and decreasing degrees of freedom). Rather, IBCs could emerge that empower all parties involved in a vulnerable interaction. In fact, this centering of appetitive functional relations that are mutually expansive may be exactly what is necessary for vulnerability to cultivate intimacy. The less power and privilege a person has in an interpersonal interaction, the more likely they are to respond to invitations to vulnerability under aversive functional relations, due to the relative dominance of aversive learning in their history in similar contexts. In other words, the more disempowering an interpersonal context is (i.e., the fewer degrees of freedom available there), the more likely invitations for vulnerability will function aversively, evoking more vulnerability or less, depending on which has historically allowed the person to minimize contact with the aversive in similar contexts.
The gender-marginalized faculty member mentioned above will require more support (i.e., appetitives) in their vulnerable interpersonal interactions with other faculty to overcome the broadly disempowering context and connect intimately. Promoting mutually appetitive vulnerable IBCs may be most challenging when power and privilege are disparate between people in an interpersonal interaction. Disparities in power and privilege involve disparities in the distribution of aversive vs. appetitive functional relations obtaining in any one moment and, thus, the relative likelihood of aversive vs. appetitive learning opportunities in that situation. Such disparities are problematic in several ways (see Louisiana Contextual Science Research Group, 2022), but perhaps most so in vulnerable interactions, where the probability of vulnerable behaviors occurring under appetitive functional relations can be significantly reduced despite best efforts. While disparities in privilege and power are unavoidable in most interpersonal interactions, introducing vulnerability to those interactions is likely to evoke behaviors that emerge from and maintain such disparities in power and privilege and prevent true intimacy (i.e., sociopolitical problematic behaviors, SP1s; Terry et al., 2010). The effects of aversive functional relations around vulnerability vary depending on the person's repertoire with such contexts. To the extent that the disempowered person's lack of privilege and power is generalized across interpersonal contexts (e.g., with intersecting identities that limit power and privilege broadly), they are also more likely to have an explicit learning history about the emotions of more powerful people being aversive. For example, the phenomenon of white tears, where people of color are oppressed by the emotional expressions of white people, is well documented (Accapadi, 2007). Here, a less powerful person may learn to engage in vulnerability (i.e., self-disclosure, emotional expressiveness, and emotional responsiveness) as a way of calming the more powerful person, not in pursuit of connection or soothing for themselves, but as a way to escape a historically threatening interaction (see Menakem, 2017). This dynamic would also be considered problematic when the more powerful and privileged person's vulnerability is under aversive functional relations. This can occur due to some aversive aspect of context outside of the interpersonal interaction (e.g., an upsetting conflict with a family member, a stressful financial challenge, a frightening storm outside). This can also occur given a learning history in which interacting with people with less power is aversive in and of itself, reducing degrees of freedom without equalizing the disparity. For example, some conceptualizations of racialized trauma highlight the pervasive socialization in the U.S. around Black bodies as impervious, dangerous, hypersexual, and dirty, along with the resulting physical, emotional, and mental constriction experienced in their presence (see Menakem, 2017). If the more powerful person's current behavior is being dominated by aversives, the interaction is likely to be increasingly and rigidly focused on reducing their distress. In other words, if the emotional expressions and responsiveness of the person with relative ease of access to appetitives are still under aversive functional relations, those functional relations are likely to dominate the IBCs for both members of the interaction.
Consider the example of a professor serving as a thesis advisor arriving late for a meeting with their graduate student. The interaction may begin with the professor apologizing and explaining to the student that they had been fighting with their partner, which resulted in them leaving home late. As they are sharing this story, the professor offers some background as to why their conflict with their partner is so upsetting, becoming teary-eyed and expressing other overt signs of emotional distress. The professor is demonstrating vulnerability and may struggle to contact the empowering appetitives available in the thesis work, the mentoring relationship, or the pride in their professional position. Meanwhile, the student is confronted with the professor's vulnerability without the power and privilege that would allow them to contact their own empowering appetitives. For example, it is unlikely that the student would have the degrees of freedom, shaped by an appetitive learning history, to initiate a consent process in which they could name their desire to return to the meeting's original agenda (their thesis), their need for support around that work, and their preference to reschedule the meeting if their professor cannot meet that need. The student's learning history may also involve specific aversive consequences for engaging in such behaviors, such as acute punishing feedback or longer-term damage to the relationship. So instead, the student is likely to find themselves trying to calm their professor to allow them relief. Such aversive functional relations around vulnerability could also arise with the more empowered and privileged person inviting vulnerability. For example, a therapy trainee finds themselves in a clinical supervision meeting, being asked by their supervisor to share their painful feelings, self-deprecating thoughts, and patterns of unworkable action. The supervisor is alarmed by the trainee's rigidity and wants to offer them an opportunity to build their repertoire before it negatively impacts their therapy work. The therapy trainee is aware of their suffering and how important their personal growth could be to their professional development but finds themselves feeling overwhelmed by their supervisor's softened tone and intense eye contact. The therapy trainee may additionally experience concerns about their vulnerabilities (e.g., painful feelings, self-deprecating thoughts, unworkable actions) being used against them in formal evaluations. In this way, both the supervisor's and the trainee's repertoires are dominated by aversive functional relations. The disparity in power and privilege further limits the trainee's capacity to object to the line of questioning. The trainee discloses as requested by the supervisor but leaves the meeting confused about what the purpose of their disclosure was and how to move forward with their therapy sessions. On the one hand, the trainee feels heard and accepted by their supervisor; on the other hand, they are primarily dreading having their personal psychological struggles present in their future meetings. Further, this dread may be well-founded, as the supervisor experienced relief at the trainee's openness and their probing was reinforced. Elaborated contextual behavioral conceptualization of intimacy This conceptualization builds on existing behavioral and contextual behavioral approaches to understanding and intervening on intimacy.
From this perspective, intimacy involves vulnerable behaviors, or responses to aversive antecedents, that are under appetitive functional relations, consensual, and empowered (see Table 1). To this end, contexts that aim to intervene to increase intimacy will, in the presence of aversive antecedents: (1) evoke behavior that functions to increase ongoing contact with shifting appetitives, (2) involve tact-mand combinations that make appetitive contingencies salient, and (3) include behaviors that support increasing degrees of freedom across IBCs. Implications for creating contexts for intimacy vary across power and privilege disparities, relational goals, and time points within the interaction.
Table 1. Relevant terms for each component of the conceptualization (Intimacy involves… / Relevant terms).
Vulnerability: socially risky behavior; behavior that has been previously punished; has aversive antecedents.
Vulnerability under appetitive functional relations: Appetitive functional relations (seeking, exploring, and engaging behaviors; broad and flexible repertoire and context; subjective experience of choice; strengthening of appetitive learning) vs. aversive functional relations (running, fighting, and hiding behaviors; narrow and rigid repertoire and context; subjective experience of coercion; strengthening of aversive learning).
Vulnerability with consent: Affirmative consent (requesting and receiving enthusiastic approval for an interaction; an appetitive functional response class involving a tact-mand combination; interacting people tact the appetitive contingencies at play and mand for others to do the same; expands degrees of freedom with an ongoing, shifting availability of genuine choices) vs. coercion (the interaction persists with absent, limited, or threatening communication; an aversive functional response class focused on promoting the interaction with little attention to current function; if tacted at all, appetitives are presented in ways that suggest scarcity or otherwise narrow degrees of freedom).
Vulnerability with empowerment: Empowerment (positions one with the power to act to resource current needs; fosters greater degrees of freedom via improved access to appetitives and buffering the impact of aversives) vs. disempowerment (positions one to act to resource the needs of the more powerful person; fosters reduced degrees of freedom via increased salience of aversives and reduced access to appetitives).
Note: Functionally intimate contexts will, in the presence of aversive antecedents: (1) evoke behavior that functions to increase ongoing contact with shifting appetitives, (2) involve tact-mand combinations that make appetitive contingencies salient, and (3) include behaviors that support increasing degrees of freedom across IBCs.
Creating contexts for intimacy According to this conceptualization, the person with more relative power and privilege in an interaction and overall relationship bears responsibility for managing vulnerability in such a way as to promote intimacy instead of coerced vulnerability. A person with more relative power and privilege is likely to be more sensitized to available appetitives and to have appetitive learning as a more robust aspect of their repertoire. Sensitivity to appetitives is necessary for consensual interactions and empowerment, both being critical features of training or intervening on intimacy. In this way, the reader is invited to reflect upon the relational contexts in which their ratio of appetitive to aversive functional relations (i.e., privilege) maximizes their degrees of freedom (i.e., power), and to consider the following recommendations for fostering appetitive functional relations when aversives (i.e., conditions for vulnerability) are present. Figure 1 offers a process for moment-to-moment assessment of conditions for functional intimacy along with response options based on observed conditions. 5.1.1. Modulate mands for intimacy according to relative power, the consented relationship, the consented purpose of the interaction, and other aspects of the immediate context Disparities in power and privilege will always be present in relationships and will fluctuate across the different contexts in which relating occurs. Further, relationships, both personal and professional, come with distinct responsibilities that may or may not involve intimacy. For example, a Psychology professor's responsibilities to their students differ from those to their applied supervisees, which differ still from those to their clients in their therapy or consultation work. While intimacy is central to well-being, this conceptualization suggests that intimacy is simply not always available or necessary, and vulnerability should only be invited where it is. The responsibility for determining if intimacy is available in a particular relationship and context falls to the person with more power, and involves moment-to-moment assessment of: (1) power dynamics as the relative ratio of appetitives to aversives available for each member of the interaction, (2) consistency of intimacy with the consented relationship and the purpose of the current interaction in terms of explicitly tacted interlocking appetitives, and (3) other aspects of the immediate context in terms of their appetitiveness or aversiveness. We might ask: How much breadth, flexibility, and freedom do they seem to have in this interaction? How much breadth, flexibility, and freedom do I have? What are we each working for here? Is there anything we seem to be working to minimize, delay, or escape? Is there anything we seem to be grasping? Is building our intimacy an aspect of why we are in a relationship with one another? Is building our intimacy an aspect of why we are interacting right now? What seems to be supporting or limiting my freedom? What seems to be supporting or limiting their freedom? When this assessment suggests that the context offers limited support for intimacy (that is, that either the relationship or extra-relational aspects of context are limiting empowerment, consent, or overall access to appetitives), invitations to vulnerability should be tempered or withdrawn. When this assessment suggests that the context supports intimacy (that is, that the relationship is appetitive, empowering, and consensual), invitations to vulnerability have the potential to foster intimacy. 5.1.2. Foster accessibility of appetitives in terms of the detection, discrimination, and tacting of appetitives with contextually appropriate resourcing Sometimes the consented relationship and purpose of the interaction do involve intimacy; that is, sometimes intimacy is an explicitly-tacted appetitive process or outcome for the relationship broadly and the current interaction. For example, a psychotherapy relationship is, by definition, intimate.
However, just because intimacy is a consented part of the relationship and the interaction does not mean it is available from its initiation. For example, a psychotherapist may have to put significant effort into establishing the context for vulnerability to foster intimacy. Repertoires involved in accessing appetitives (both relational and otherwise) vary considerably between people and contexts, and are challenged by vulnerability. Thus, the person with more power in the consented intimate relationship and current interaction bears responsibility for fostering the accessibility of appetitives for the person with less power, both prior to and during the introduction of aversives involved in vulnerability intended to promote intimacy. This involves creating a context that evokes and reinforces the detection of appetitive functional relations, the discrimination of behavior necessary to access them, and the tacting of shifting appetitive functions as the interaction unfolds. The psychotherapist working from this perspective might invite the client to contact appetitives in their interaction, from the most simple (e.g., inviting the client to give themselves a kind and resourcing breath) to the fairly complex (e.g., inviting the client to share what life is or has been like when they were not struggling, or to react to the therapist's shared intentions for psychotherapy). Here, the psychotherapist bases their invitations on the client's responses, interacting to support the gradual cultivation of this skill of detecting, discriminating, and tacting appetitive contingencies in the therapy interaction. It is also necessary for the more powerful person to determine if the current context can be appropriately resourced with appetitives salient to the person with less power, to foster accessibility for them and promote intimacy in the relationship. If a psychotherapy client struggles to interact appetitively in a particular session, in the therapy relationship, or at all, the vulnerability of the interaction might already be outstripping the appetitives available, and the appetitive learning repertoire and/or the therapeutic relationship may need to be developed before vulnerability is explicitly invited (see Figure 1, which depicts the ongoing assessment of conditions for functional intimacy). 5.1.3. Foster one's own access to appetitives in terms of the detection, discrimination, and tacting of appetitives with contextually appropriate resourcing As vulnerability necessarily involves contact with aversives, contexts that evoke vulnerable behavior can disempower even people with more relative power in the interaction. To the extent that the more powerful person continues to engage vulnerably, then, their behavior is likely to come under aversive functional relations. In other words, their behavior is likely to narrow and become increasingly rigid, reducing their sensitivity and responsiveness to the varying context created by the less powerful person's behavior. In this way, individual-level aversive functional relations are likely to extend into IBCs and further limit access to appetitives for all members with varying degrees of power in the interaction. Thus, the person with more power in the interaction is responsible for fostering ongoing accessibility of appetitives not only for those with less power, but also for themselves. This involves assessing one's own current repertoire for the detection, discrimination, and tacting of appetitives both prior to and during the interaction.
Prior to initiating a vulnerable interaction, the person with more power may find themselves rigid, narrow, and highly oriented to aversives (i.e., aversive functional relations may be dominant in their repertoire). For example, a highly paid keynote presenter who is jet lagged, hungry, and dehydrated may find themselves struggling to connect fully with an emotionally compelling personal story they planned to use to introduce their talk. Despite being positioned to have the most power and degrees of freedom at the event, they simply do not have the resources to interact appetitively with the memories, the images on slides, and the audience. They rehearse the opening again and again, noticing themselves feeling more and more distracted, anxious, and disconnected from the memory with each attempt. If they are not able to engage appetitively with this vulnerable expression, it is highly unlikely that any of their audience members will. Here, the vulnerability is likely to be alienating instead of connecting. Such an effect might be even more pronounced if the audience is small and intended to be intimate. In such a situation, intimacy is unavailable and should not be pursued. Instead, the person with more power may focus temporarily on resourcing themselves (i.e., engaging in behavior to increase the salience of and contact with appetitives) to reassess the importance, availability, and necessity of intimacy. This might include actions that vary in complexity, including those that address physiological needs (e.g., resting, breathing deeply and mindfully, eating or drinking, exercising), interpersonal needs (e.g., seeking consultation, validation, or support from a similarly powered peer), or intrapersonal needs (e.g., reflecting on relevant values, affirming relevant aspects of identity). If there is sufficient time, instead of rehearsal, the talk might be better served by resourcing the speaker themselves with some fluids, a snack, and a nap. If time is limited, the speaker might replace the story with one that is less personal or emotionally compelling that they can interact with effectively. Notably, accessibility of appetitive functional relations for the person with more power is necessary but insufficient for intimacy. If self-assessment reveals increased breadth and flexibility and corresponding access to appetitives, they may initiate the intimate interaction, with careful attention to ongoing accessibility of appetitive functional relations for themselves and other members of the interaction, and a commitment to resource themselves as needed to initiate, maintain, or withdraw intimacy. 5.1.4. Assess for increasing prominence of aversives in the ongoing intimate interaction and intervene to maintain the dominance of appetitive functional relations Interactions are a dynamic process (as are consent and power), and an interaction can shift at any point in such a way as to increase the prominence of aversive functional relations, increasing the likelihood of disempowerment and withdrawal of consent. For example, a parent may invite their adolescent child to talk about a long-term friendship that seems to be preoccupying them. The child accepts, disclosing a number of quite dangerous things their friend has been doing. As the parent's concern increases, they may notice the child becoming defensive and sounding like they might try to end the conversation.
The responsibility for ongoing assessment of aversives associated with vulnerability falls to the person with more power, as does the responsibility for shifting the context to maintain the dominance of appetitive functional relations. This involves watching other members of the interaction for narrowing or increased rigidity of their repertoire, and resourcing them as needed (i.e., shifting the context as needed to increase the salience of and contact with appetitives). This might begin with inviting actions that address physiological needs (e.g., inviting resting, breathing deeply and mindfully, eating or drinking, exercising) and extend into interpersonal needs (e.g., inviting validation-seeking or support from a similarly powered peer) or intrapersonal needs (e.g., inviting reflection on relevant values, affirming relevant aspects of identity). For example, the parent may express gratitude for the disclosure and invite their child to pause to see if they need a hug or a glass of water before they go on. If ongoing assessment reveals increased breadth and flexibility and corresponding access to appetitives, they may re-initiate the intimate interaction by inviting vulnerability (i.e., re-introducing aversives), with careful attention to ongoing accessibility of appetitive functional relations. For example, the parent might acknowledge how hard it must have been to keep these secrets about someone they care so much for, and invite them to discuss it further. The parent might also stop short of sharing their own concerns for the friend, noticing that their child does not seem well-positioned to respond appetitively to their expression. Ideally, the increased prominence of aversive functional relations is detected prior to their dominance, allowing for a shift in functional relations to maintain the dominance of appetitive functional relations before the person engages in avoidance behavior. Otherwise, intimacy may become unavailable for the remainder of the interaction, as the capacity to contact aversives associated with vulnerability without them becoming dominant is limited. 5.1.5. Establish interlocking appetitive functional relations Many relationships call for some degree of consented intimacy (e.g., coworkers, neighbors, colleagues, community members, etc.), even if that is not the primary purpose of the relationship. In the context of such relationships, interactions that are not particularly vulnerable can provide a foundation for future intimacy through the development of robust and easily accessible interlocking appetitive functional relations (i.e., appetitive IBCs). Members of the interaction can learn early on in a relationship how to behave in ways that are mutually appetitive. In other words, people can learn to engage in behaviors that are under appetitive functional relations with the behaviors of the other, where each is evoking and reinforcing the other's behavior appetitively. Even if there is no apparent power differential associated with respective roles in the relationship, power will still vary across contexts and interaction dynamics. Thus, here too, fostering interlocking appetitive functional relations is the responsibility of the person with more power in the interaction.
This involves the person with more power introducing appetitives that evoke and reinforce a range of behaviors for the person with less power, increasing the breadth and flexibility of their unfolding behavioral stream and sensitizing them to a range of appetitives in the context. Once the relationship is established to serve appetitive functions, the person with more power can specifically evoke and reinforce behaviors to support their own repertoire of assessing and intervening on appetitive functional relations (i.e., to evoke and reinforce improved sensitivity and responsiveness of the person with more power to the behavior of the person with less power). If possible, vulnerability should not be introduced until these interpersonal appetitives are well-established and easily accessible to both members of the relationship, in order to optimize the likelihood of intimacy. If not possible, vulnerable interactions should be resourced as described above and interspersed with interactions that are more exclusively appetitive, so as to continue to establish interpersonal appetitives to the point that intimacy is more readily accessible. 5.1.6. Aversive consequences are not used to train behavior and aversive antecedents are limited to the consented relationship In any relationship, there are behaviors in both people's repertoires that are aversive to the other. In other words, there are behaviors in one person's repertoire that narrow and rigidify the behavior of the other, and that motivate the other person toward running, fighting, or hiding to decrease that behavior. The quickest, most acutely effective way for a person to decrease a behavior of another is to punish it by presenting aversive consequences. For example, if a research assistant is dominating conversations in lab meetings, the quickest and most effective way to stop it would be to consequate it with an aversive. The research supervisor might, having seen how the assistant seeks the other members' approval, admonish them firmly and publicly, watching them grow red with shame and withdraw. Regardless of whether the dominating behavior was under appetitive or aversive functional relations prior to the admonishment, aversive functions will now be prominent in lab meetings, for that assistant, and perhaps also for the supervisor and even other lab members on the team. Similarly, the quickest, most acutely effective way to increase alternative, more desired behaviors is to reinforce them negatively by removing aversives. For example, a supervisor may lead their team in a way that is quite intimidating, offering only intermittent praise for employee work and frequently demonstrating high-level skills with fluency instead of offering instruction on how to reach that level. Here, the supervisor's intermittent praise for technical improvements may actually function not as an appetitive, but as a temporary removal of threat, allowing the praised team member some relief from the constant intimidation. Aversive functions, however, will still be most prominent, as that relief is temporary, and technical improvements will be overly focused on supervisor reactions. Indeed, the quickness and acute effectiveness of aversive functional relations in changing another's behavior allow the behaviors that serve aversive functions to emerge quite easily, with their characteristic narrowness and rigidity.
In short, the use of aversive functional relations by the person with more power, even when used with intention and care, fosters their dominance in the IBCs. Further, the quickness and acute effectiveness of behavior change via aversive functional relations are accompanied by serious costs to the availability of intimacy. To the extent that a person uses the application and removal of aversive consequences to train desired repertoires, they are increasing the likelihood that their presence will function aversively for the other person, such that aversive functional relations easily become prominent in IBCs. This risk is elevated in vulnerable interactions, which necessarily involve the presence of aversives, and further elevated when power disparities emerge. Here, the use of aversive consequences is associated with decreased availability of appetitive consent and increased probability of disempowerment, undermining two conditions that are necessary for intimacy. In both examples above, the researcher and the supervisor might find their teams not only struggling to grow expansively and flexibly, but also failing to meet the kinds of vulnerabilities that show up in any workgroup (e.g., errors, interpersonal conflicts, etc.) with intimacy. Thus, according to this conceptualization, it is the responsibility of the person with more power in a particular interaction in any relationship to approach behavior change in terms of expanding repertoires under appetitive functional relations. When the person with less power engages in behavior that is aversive to the person with more power, the role of the person with more power is to evoke and reinforce a range of new, more effective behaviors. When aversives are introduced by a person with more power, they are preceded by foundational interpersonal appetitives and presented as antecedents to expand repertoires in vulnerable situations, instead of as consequences to limit the behavior they find aversive. In both situations above, the researcher's and the supervisor's teams would be well served by discussions to establish appetitives in the work, to nurture interpersonal appetitives, to develop appetitive functional relations in team processes, and to support self-resourcing with appetitives outside of the work. Conclusion Intimacy is considered integral to personal and relational well-being (see Reis, 1990), and, in most models, involves bidirectional vulnerability (see Gaia, 2002; Timmerman, 2009). FAP has provided a strong contextual behavioral foundation for the ongoing development of intimacy-based interventions, emphasizing contexts that foster vulnerability, intimacy, and safety. This conceptualization expands on that foundation, suggesting that safe intimacy is only possible in contexts that use appetitive functional relations to promote consensual, empowered vulnerability. Most crucially, this conceptualization of intimacy places responsibility on people with more relative power to create appetitive contexts for intimacy and to avoid vulnerability where intimacy is not possible. It is our hope that this conceptualization may both guide intentional responses to the needs that vulnerable interactions present, and inform future empirical and applied developments in the science of intimacy in research, practice, and community culture.
Exploring the use of internal and external controls for assessing microarray technical performance Background The maturing of gene expression microarray technology and interest in the use of microarray-based assays for clinical and diagnostic applications call for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assess technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies. Results A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping" genes) was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification). External spike-in, hybridization and RNA labeling controls provide information related to both assay and hybridization performance, whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC) metrics. Conclusions These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray experiments. The observed consistency amongst the information carried by internal and external controls and whole-array quality measures offers promise for rationally-designed control standards for routine performance monitoring of multiplexed measurement platforms. Background Expression profiling using DNA microarrays is increasingly being used for clinical and diagnostic applications and in support of regulatory decision-making. These applications require the technology to be robust and reliable and that the data be well characterized [1]. The quality of data generated varies considerably between laboratories [2,3] as well as between platforms [4,5]. One initiative working to provide tools for technical performance assessment of microarray gene expression data is the External RNA Control Consortium (ERCC) [6][7][8][9]. The external, "spike-in" controls from this group are intended to be informative about the quality of a gene expression assay independent of microarray platform, experiment, or species. This paper presents evidence that the spike-in controls carry the essential quality information about an experiment. Data obtained from spiked-in controls were compared with the information carried by full-array quality metrics, which typically depend on platform, experiment, and species. These results support the proposition that spike-in controls can be used on their own as tools for assessing data quality and comparing data generated as part of different experiments.
Data quality can be assessed at a number of stages within the microarray experiment (from the integrity of the biological sample to the accessibility of the data stored in a databank repository) [10]. Few universal data quality metrics are available, as there are a large number of array types, labeling methods, scanner types, and statistical approaches available to summarize and analyze the data. The determination of integrated whole-array data quality indicators is not yet a standard practice, and is considered an important research topic area in biostatistics [11,12], as highlighted by Brettschneider et al. [13]. The need for better quality metrics is not limited to gene expression measurements generated using microarrays: a number of other high-throughput technologies (e.g., multiplex protein arrays) lack obvious simple scalar metrics that can be used to assess quality [14,15]. A number of initiatives, including the Microarray Quality Control (MAQC) project of the FDA (http://www.fda.gov/nctr/science/centers/toxicoinformatics/maqc/) and the ERCC, are working to develop reference data sets, reference RNAs, and standard external controls intended for use in the evaluation of microarray performance [6][7][8][9]. The ERCC seeks to employ external spike-in control measurements to assess technical performance with a standard set of controls in a consistent manner, using metrics that can be compared across experiments, labs, platforms, and other factors as they arise. The ERCC is developing the standard controls, analysis tools, and protocols for using these controls and tools to enable consistent assessment and monitoring of technical performance. The MAQC project has examined the use of a diverse set of external controls for a number of platforms [16], noted that external controls have yet to be widely used for performance assessment, and made recommendations for doing so. Analysis of the control signals to assess performance was largely through quantitative characterization of the slope of the signal-concentration curve. A significant observation from this work was the identification of outlier data at one participant's site using principal component analysis (PCA) of the external controls. More recent analysis of the various spike-in controls employed in the measurements for the MAQC project demonstrated promise that the spike-in controls were informative of "outlying" arrays, and that they exhibit behavior that is independent of the sample type [17]. This work characterizes the internal and external control data, separate from the signal derived from the biological sample, from a microarray experiment generated on the Affymetrix GeneChip platform. The internal controls are Affymetrix-specified probesets that represent RNA degradation internal controls or "housekeeping" genes and are routinely examined to reveal the quality of the sample RNA (Figure 1a). The external, or "spike-in", controls are typically RNA transcripts produced by in vitro transcription that are added at a particular stage in the generation of the labeled sample transcriptome extract, at a known concentration (Figure 1a and 1b). The expression measures of these controls carry information about variation arising from a number of sources; both classes of internal controls should carry information on all of the sources of the variability in the experiment (Figure 1a).
The polyA+ controls should carry information about the technical variation associated with amplification and labeling procedures only, and not variation arising from sampling, whereas the hybridization controls should carry information about variability arising from hybridization and scanning only. Employing PCA as an exploratory data analysis tool, we anticipated that the variance structure associated with the individual steps of the microarray experiment would be revealed through the resultant scores and loadings profiles of the PCA models of these four separate classes of control data. Knowledge of the quantity of each spike added and the relative intensities of the signals can be compared against the expression measures obtained from global gene expression; this has been used as the basis of comparison between data generated on different arrays [18]. Deviations from the expected signal-concentration relationship for the spike-in controls should be informative about the technical performance of the measurement [7,[19][20][21][22][23][24]. Critically, the utility of the information carried by the spike-in controls relies on the assumption that the controls act as meaningful proxies for the endogenous genes and that their behavior is representative of these genes of interest. The retrospective study undertaken here tests that assumption. Hybridization-wise PCA was also used to compare the results of individual PCA models obtained from the control probeset data with independent laboratory measures of RNA- and hybridization-specific quality and full-array metrics [13]. Our results underscore the importance of assessing data quality and reveal some of the strengths and limitations of using spike-in and endogenous controls for assessing data quality. Methods This study uses data generated on the Affymetrix GeneChip platform at the Clinical Sciences Centre/Imperial College (CSC/IC) Microarray Centre. These data are stored in, and were accessed via, the Centre's Microarray data Mining Resource (MiMiR) database [25,26]. These data were generated using a stock of external controls (polyadenylated, or polyA+, controls) prepared at the Centre and distributed to individual research groups along with standard protocols for generating labeled cRNA in their own laboratories. Prelabeled hybridization controls were purchased from Affymetrix and added to the labeled samples at the Centre prior to hybridization. The polyA+ controls are a cocktail of 5 polyA-tailed Bacillus subtilis transcripts (Lys, Phe, Dap, Thr, and Trp) (Figure 1b). These controls are spiked into total RNA in a fixed ratio to a fixed amount of total RNA, were carried through the sample preparation, and were used to monitor the efficiency of cRNA labeling and data quality. The hybridization controls (BioB, BioC, BioD, and Cre biotin-labeled transcripts) were spiked into the hybridization cocktail according to the manufacturer's instructions. They are used to align the grid and assess the efficiency of hybridization, washing and staining. Extensive whole-array quality assurance metrics and BioConductor-based summary statistics [27][28][29][30] related to scanner/array performance and RNA quality are routinely assembled for each of the datasets, with a report generated at the CSC/IC Microarray Centre. These reports are included in the MiMiR database, together with the individual hybridization files and experimental ontology and annotation information [25,26].
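The expected signal-concentration relationship mentioned above can be made concrete with a simple regression of control intensities against their nominal spike-in amounts. The following is a minimal sketch in Python rather than the R/MATLAB tooling used in the study itself; the function name and the illustrative concentration and intensity values are hypothetical, not values from this study.

```python
import numpy as np

def spike_response(log2_signal, log2_conc):
    """Fit log2(signal) against log2(nominal spike-in concentration).

    A slope near 1.0 indicates the expected proportional dose-response;
    a flattened slope suggests signal compression, and a low correlation
    flags noisy or outlying control measurements.
    """
    slope, intercept = np.polyfit(log2_conc, log2_signal, 1)
    r = np.corrcoef(log2_conc, log2_signal)[0, 1]
    return slope, intercept, r

# Illustrative (hypothetical) values for four hybridization controls,
# e.g., BioB, BioC, BioD, and Cre at increasing relative doses.
conc = np.log2([1.5, 5.0, 25.0, 100.0])
signal = np.array([7.2, 8.9, 11.1, 13.0])
print(spike_response(signal, conc))
```

Comparing the fitted slope and correlation across arrays is one simple way to express the "deviation from expected behavior" idea quantitatively.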
The Microarray Centre QA report metrics are based on .CEL file signal intensity data from GeneChip arrays and include summary statistics of all the hybridizations within a particular experiment, generated using the BioConductor (BioC Release 1.9) open source software. This report provides quality assessment metrics based on: 1) Diagnostic Plots, 2) Probe-level Robust Multichip Average (RMA) Model Estimates, 3) Probe Metrics and 4) Principal Component Analysis. The first two sections include summaries of log2 probe RMA intensities before and after normalization as well as the RMA model fit residuals, relative log2 expression (RLE) and normalized unscaled standard error (NUSE) plots for the identification of outlier arrays within an experiment dataset. In addition, RNA degradation plots show the log2 mean intensity by probe pair position (5' end to 3' end) for each array and are used to identify samples that may have been subject to degradation. The metrics in the third section, Probe Metrics, are obtained from BioConductor MAS 5.0-based statistical algorithms and are used to assess both RNA assay and hybridization performance. These include measures of scanner variability (e.g., RawQ), summarized exogenous control intensities with respect to their spike-in concentration levels, correlation measures between exogenous polyA+ controls and raw signal values, and 3'/5' ratio measures for both exogenous and endogenous controls to assess the efficiency of labeling and/or sample RNA integrity. The fourth and last section provides a simplified PCA scores plot generated from the complete set of probes (including background and all exogenous and endogenous control probes) to identify gross outliers within the experimental dataset as a whole. A recent review of these metrics as they relate to the quality assessment of microarray data after statistical processing is provided by Brettschneider et al. [13]. Data Examined in this Study Data from 525 hybridizations representing 22 publicly-available experiments generated over a five-year period at the CSC/IC Microarray Centre on multiple types of GeneChips were analyzed as part of this study and included human (HG-U133A, HG-U133B, HG-U133plus2), rat (RG-230_2, RAE230A, RAE230B) and mouse (MG-430_2, MOE430A, MOE430B, MG-U74v2A, MG-U74v2B, MG-U74v2C) microarrays. A single, exemplary experiment containing data from 137 Rat Genome RAE230A arrays is highlighted for this manuscript. This included data generated on different days over a 10-month period, with different experimenters, array batches, and QC measures from the whole-array QC report. This example was analyzed using PCA and the results compared to the QC and factor information available within the MiMiR database. PCA was conducted using only data from the control-based probesets (excluding all the non-control (background) probeset signals). There are four groups, or classes, of controls, external and internal to the biological sample (exogenous and endogenous). The external controls were either polyA+ RNAs spiked into the sample before amplification and labeling or prelabeled hybridization controls spiked into the sample prior to hybridization. The internal controls are those suggested by Affymetrix as a measure of RNA degradation, and report on relatively invariant 'housekeeping' genes. Microarray probesets for the same external controls are present on all Affymetrix GeneChip arrays; probesets for the endogenous controls are organism-specific and are common to all arrays of that type (i.e., rat).
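As an illustration of the RLE summary described above, the sketch below computes per-array RLE centers and spreads from a matrix of log2 RMA values. This is a simplified stand-in for the BioConductor implementation (NUSE is omitted because it additionally requires the standard errors from the probe-level RMA fit); the matrix layout and names are assumptions, not part of the original pipeline.

```python
import numpy as np

def rle_summary(log2_expr):
    """Relative Log Expression (RLE) per array.

    log2_expr: (probesets x arrays) matrix of log2 RMA expression values.
    Each value is expressed as a deviation from its probeset-wise median
    across arrays. Well-behaved arrays have RLE distributions centered
    near 0 with comparable spread, so a shifted center or an inflated
    interquartile range flags a candidate outlier array.
    """
    deviations = log2_expr - np.median(log2_expr, axis=1, keepdims=True)
    centers = np.median(deviations, axis=0)                 # one value per array
    iqrs = (np.percentile(deviations, 75, axis=0)
            - np.percentile(deviations, 25, axis=0))
    return centers, iqrs
```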
Dataset Construction and Preprocessing Probeset data from the individual hybridizations on RAE230A arrays (EXP_CWTA_0103_01; Array Express ID E-MIMR-222) are described in this manuscript. In brief, this experiment is a comparison of gene expression profiles of peritoneal fat of 6-week rats from 30 recombinant inbred (RI) strains derived from the spontaneously hypertensive rat (SHR/Ola) and Brown Norway congenic carrying polydactyly-luxate syndrome (BN-Lx) strains. A single hybridization (HFB2003080611Aaa) was missing annotation for experimental QC and was thus omitted from the data analysis. A summarized version of the annotation QC information pertaining to the individual hybridizations used in this experimental dataset is provided in Additional File 1: Supplemental Table S1. Measures representing expression were generated from the raw data using the RMA "affy" package (Bioconductor 1.8 release) within the R environment (v 2.6.0). The data were preprocessed using background correction and quantile normalization to the global median [27]. A hybridization-specific normalization protocol was also used that adjusts each probeset intensity to the 75th percentile of the non-control (background) probes and is an alternative to the quantile normalization approach typically employed with RMA-based methods. Using the expression values determined from the RMA summarization method (with only background correction), the 75th percentile of the log2 intensities for the background probesets associated with the individual hybridization was determined and then subtracted from the probesets of interest (i.e., hybridization and polyA+ spike-in controls and the internal Affymetrix-designated cRNA degradation and endogenous control/housekeeping gene controls). This "brightness-scaled" normalization approach was employed to support control data aggregation across multiple array types, so that control data can be placed on a similar scale and thus directly compared, permitting the identification of sample-associated variability. This 75th percentile normalization was carried out for several datasets that were generated across multiple array types (data not shown) when aliquots of the same samples were hybridized to arrays of the same or different type (e.g., RAE230A and RAE230B). The 75th percentile normalization was the default data analysis method for our investigations. Mean/SD Plots The mean and standard deviation (SD) of the RMA values were calculated for all probesets within an experiment conducted on a single array type, comparable to other informatic methods for generating probeset-level precision metrics [2,[31][32][33]. All mean and associated SD data pairs were employed to generate mean/SD plots that highlight control probesets associated with the hybridization, polyA+, RNA degradation, and endogenous control/'housekeeping gene' groups (as defined by Affymetrix for specific array types). The remaining non-control sample probesets were displayed as background for the mean/SD plots; the background average line of these data was determined as a 100-point moving average of the mean values for all the non-control probesets. All calculations were carried out using Excel. Chemometric Analysis PCA was conducted for all experimental datasets using PLS_Toolbox 4.2.1 (Eigenvector Research, Inc., Wenatchee, WA) within a MATLAB v. 7.5.0.342 (R2007b) (MathWorks, Inc., Natick, MA) computational environment.
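To make the preprocessing concrete, here is a minimal Python sketch of the per-hybridization 75th percentile normalization and the per-probeset mean/SD summaries used for the dispersion plots. The original analysis used R/Bioconductor and Excel; the variable names and matrix layout here are assumptions for illustration only.

```python
import numpy as np

def percentile75_normalize(expr, background_mask):
    """Subtract, per array, the 75th percentile of the non-control
    (background) probeset log2 intensities from every probeset.

    expr            : (probesets x arrays) background-corrected log2 RMA values
    background_mask : boolean vector over probesets, True for non-control probes
    """
    p75 = np.percentile(expr[background_mask, :], 75, axis=0)  # one value per array
    return expr - p75                                          # broadcasts over rows

def mean_sd(expr_norm):
    """Per-probeset mean and SD across hybridizations, the coordinates
    plotted in the mean/SD dispersion plots."""
    return expr_norm.mean(axis=1), expr_norm.std(axis=1, ddof=1)
```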
Each experimental dataset was separated into four subsets representing the: 1) spike-in hybridization controls, 2) spike-in polyA+ controls, 3) internal RNA degradation controls (Affymetrix-designated) and 4) endogenous or normalization control genes (see http://www.affymetrix.com/support/technical/mask_files.affx). Each PCA data subset was organized into a single data block structure with dimensions of N rows × K columns that correspond to N samples (hybridizations) and K variables (probesets) (see Table 1). Each variable in the dataset was centered to have a mean of zero but was not variance scaled. A complete list of the spike-in control probeset identifiers, together with the internal RNA degradation and endogenous control probeset identifiers, is provided in Additional File 1: Supplemental Table S2. The optimal number of components to include in the PCA model was determined by the minimum of both the root mean square error of calibration (RMSEC) and of cross-validation (RMSECV), employing a venetian blinds algorithm for which each dataset was split according to its size (here, 10 splits for 137 hybridizations). Datasets that contain duplicate hybridizations were subject to replicate sample trapping, as the presence of related samples in test and training sets may lead to skewed cross-validation results. Here, an additional cross-validation using a random subset scheme was employed and checked for consistency with the venetian blinds approach. A summary of the PCA models, including the cumulative % variance captured for each model, is provided in Table 1.

Results and Discussion

In this evaluation of internal and external controls for assessing microarray performance, it is assumed that these controls act in a manner similar to and consistent with endogenous transcripts in the biological sample when all are assayed with gene expression microarrays. To provide an initial quality assessment of probeset-specific performance, the variance behavior of the individual control probesets was examined in relation to average signal level across the entire experiment. Similar approaches have been employed to illustrate relationships between probeset signal level and precision metrics in microarray data [2,31-33]. The mean and standard deviation (SD) of the RMA values for all probesets for the 137 hybridizations of the rat experiment are illustrated in Figure 2 for preprocessing with (a) no normalization, (b) quantile normalization and (c) 75th percentile normalization. A comparison of the normalization approaches on this dataset illustrates that the dispersion patterns of the external spike-in controls, as well as the internal Affymetrix controls relative to the mean of the background probesets, are comparable for (b) quantile normalization and (c) 75th percentile normalization, particularly for intensities greater than 2^8. The greatest difference is observed for probesets with intensities less than 2^6, for which the data resemble a "non-normalized" pattern. The different classes of controls are distinct in terms of the overall variability (SD) across their inherent RMA intensities; this observed difference among the control groups can be used as a screening tool to distinguish high-quality experimental datasets from lower-quality or more "noisy" datasets [2].
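As a rough base-R illustration of the model-building procedure just described (mean-centering without variance scaling, then choosing the component count by cross-validation), a sketch might look like the following; 'ctrl' is the hypothetical control matrix from the previous snippet, and the fold assignment only mimics the interleaved venetian-blinds splits of PLS_Toolbox, not its full RMSECV machinery.

X <- t(ctrl)                                   # N hybridizations x K control probesets
Xc <- scale(X, center = TRUE, scale = FALSE)   # mean-center each probeset, no variance scaling

pca <- prcomp(Xc, center = FALSE, scale. = FALSE)
pct_var <- 100 * pca$sdev^2 / sum(pca$sdev^2)
cumsum(pct_var)                                # cumulative % variance captured, as in Table 1

# Venetian-blinds fold assignment: every 10th sample shares a fold, so
# 137 hybridizations give 10 interleaved splits for RMSECV estimation
folds <- ((seq_len(nrow(Xc)) - 1) %% 10) + 1
table(folds)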
The experimental dataset shown in Figure 2 is considered "high-quality", given that the variability of the various controls (as a group) increases in a systematic fashion with the amount of experimental processing that each group has experienced (Figure 1a). The hybridization controls are expected to have the lowest variability as they are added at the last experimental stage, whereas the polyA+ and endogenous controls are subject to amplification/labeling and degradation steps, respectively, and are thus expected to exhibit greater variability. The overall dispersion of the non-control (background) probesets lends insight into the relative "noise" of the data. For this experiment, the spike-in hybridization controls are at or below the average of the non-control probesets, whereas the spike-in polyA+ controls are well above this average and near the upper limit of the background probesets. Notably, the 100 internal endogenous controls or "housekeeping genes" have consistently lower variability across the range of RMA intensities. The mean/SD plots also reveal the relative precision of individual probesets within a control group relative to other probesets in the experimental dataset. A few of the internal RNA degradation probesets are considerably more variable than both the average background signal and the internal endogenous genes. As shown in Figure 2, the control probesets with the greatest variability include the AFFX_Rat_GAPDH_5_at and AFFX_Rat_GAPDH_M_at RNAd controls (RG5 and RGm, respectively) and the Dap, Thr, Phe and Lys polyA+ controls (v/V, w, Y and x/X, respectively). Greater variability, likely attributable to differences in processivity during cRNA labeling, is generally observed for the 5' probesets (denoted with "5"), followed by a moderate level of variability for the probesets that target the middle of the transcript (denoted with "m"). As indicated by the quality metrics in the Microarray Centre Quality Assessment (QA) report [26], the majority of hybridizations from this experiment are of acceptable quality; however, several hybridizations are of lower quality and may contribute to the greater variability observed in these probesets. The QA report for Experiment CWTA_0103_01 is included as Additional File 2. The mean/SD dispersion plots provide an overview of quality through an assessment of probeset-specific performance within the experimental dataset, but do not definitively identify particular samples that may be outliers. Samples that contribute the greatest amount of variance to the experiment can be resolved through a PCA of the spike-in controls, which can be used to identify problems with discrete sample preparation steps (e.g., hybridization or RNA amplification). Likewise, PCA models of the internal controls can be utilized to verify sample RNA integrity or to account for other sample degradation issues.

Spike-in Hybridization Controls

In an effort to identify individual arrays that may be problematic, PCA was employed to explore the variability within the spike-in hybridization control dataset. PCA score plots for the first three principal components (PCs) of the hybridization control data subset of the rat CWTA dataset are shown in Figure 3. The data are classified by the date on which a hybridization was performed.
For this experiment, a total of 13 hybridization dates were recorded, ranging from May 7, 2003 (20030507) to February 25, 2004 (20040225); these are color-coded and denoted by letters ranging from "A" to "M". The first PC represents roughly 85% of the model variance and highlights a shift in hybridization intensities between those of date class "E" (20030806) and those of date class "F" (20030929). PC 2 captures an additional 5% of the overall model variance and separates hybridizations (F64 and I90) that have both low-quality Scan QC measures (values of 4) and are also outliers with respect to the Normalized Unscaled Standard Error (NUSE) plot [28], with shifted log2 probe intensities as well as relatively high average array background values and RawQ noise values, the latter of which is a measure of pixel-to-pixel variation among the probesets that is used to calculate the array background [34]. Notably, I90 (NNC2003102101A, Aliquot ID FMTA0048_a; see Supplemental Table S1) is a re-hybridization of sample F64 (NNC2003092901A); however, there was little improvement in the overall hybridization metrics (i.e., Scan QC, NUSE). Consistent with the relatively high abundance of the biotin-labeled spike-in controls, the scores for PC 2 and PC 3 (< 3% variance) separate hybridizations (F67, F68 and E60) that have relatively low-quality Scan QC measures (3 or 4) and more moderate-to-high average array background and RawQ values. The Q residuals of the PCA model (Additional File 1: Supplemental Figure S1) can be used as a diagnostic tool to identify hybridizations that have unusual variation (those that reside outside the PCA model space). In addition, Hotelling T² values can be used to identify outlier samples that might possess relatively high leverage along the principal axes of the model, analogous to the end points of a linear regression model. The Q residuals in Supplemental Figure S1(a) highlight hybridization B22, which has also been flagged as a potential outlier by the NUSE plot. Hotelling T² values consistently highlight hybridizations F64, E60, I90 and F68, for which scanner QC measures have been denoted as problematic (values of 3 or 4).

Spike-in PolyA+ Controls

A cocktail of RNA controls with artificial polyA+ tails is spiked into each RNA sample over a range of concentrations (Table 2) to monitor the entire sample labeling process. All of the polyA+ controls should be scored as "Present" with signal values: Lys > Phe > Dap > Thr > Trp. For this experiment, an extremely low correlation (R² = 0.4498) between the polyA+ spike-in concentration and the raw signal value was observed for hybridization NNC2004020512Aaa (sample J111), as reported in the MiMiR QA report. Correlation values of R² > 0.95 are expected for typical samples. Outliers such as these are easily identified through an examination of the relative RMA intensities; as an example, the relative RMA intensities for this extreme polyA+ control outlier are shown in Table 2. The difference observed between the average experiment RMA intensity values and those of sample J111 is linearly correlated with the log2 concentrations of the polyA+ spike-in controls. The PCA model for the polyA+ controls comprises 4 PCs. The first PC captures the largest variance (76.8%) and primarily separates hybridization J111 from the other 136 hybridizations within the experimental dataset (data in Additional File 1: Supplemental Figure S2(a)).
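Before turning to the lower PCs of the polyA+ model, note that the two diagnostics just used for the hybridization controls, Q residuals and Hotelling T², can be computed from any fitted PCA model. A hedged sketch for a k-component model built from the hypothetical 'pca' and 'Xc' objects introduced earlier:

k <- 3
P <- pca$rotation[, 1:k]                    # loadings
Tk <- pca$x[, 1:k]                          # scores
E <- Xc - Tk %*% t(P)                       # reconstruction residuals

Q <- rowSums(E^2)                           # Q residual: variation outside the model space
lambda <- pca$sdev[1:k]^2                   # per-component variance
T2 <- rowSums(sweep(Tk^2, 2, lambda, "/"))  # Hotelling T^2: leverage within the model

# Arrays with large Q and/or T2 (e.g., F64, I90, B22 in the text) are the
# candidates to check against the scanner QC and NUSE metrics
head(sort(T2, decreasing = TRUE))
head(sort(Q, decreasing = TRUE))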
PCs 2, 3 and 4 describe the remaining 20% of the variance captured by this model and illustrate more subtle patterns of spike-in polyA+ control quality (Figure 4) that are not readily seen by examining the relative intensities of the controls alone. An unfolded 3-dimensional PCA scores plot of these lower PCs illustrates the various outlying hybridizations that correspond to definitive quality control parameters associated with both assay and hybridization performance. PC 2 (11% of variance) separates hybridizations with the most extreme differences in probe intensities and array background (F64, the I90 re-hybridization of F64, and B22), whereas PC 3 has a primary contribution from the polyA+ control level differences observed for hybridization J111. PC 4 (≈ 4% of variance) uniquely identifies hybridizations conducted on Date "G" (20031007), for which the 3'/5' ratios for the Phe and Lys polyA+ controls are substantially above the Affymetrix-defined tolerance ratio of 3, which is usually indicative of either insufficient labeling efficiency or poor sample quality. For example, the hybridizations denoted as G73, G74, G75, G82 and G77 had 3'/5' ratios for the relatively high-concentration Phe polyA+ control of 30.32, 18.91, 11.10, 6.70 and 6.82, respectively. The J111 outlier can also be identified in the high Hotelling T² values for the overall model (Additional File 1: Supplemental Figure S2(b)). The loadings for PC 1 have comparable contributions from the probesets (X/x, Y/y, V/v, and W/w) that represent the four polyA+ controls (Lys, Phe, Dap and Thr) (Additional File 1: Supplemental Figure S2(c)). This result is consistent with the obvious difference in RMA intensity; the log2 probe intensities for these four polyA+ controls in hybridization J111 were several orders of magnitude lower than in the other hybridizations in the experiment. In contrast, the log2 intensities for the Trp polyA+ control probesets (Z5, Zm and Z3) deviated only slightly from the rest of the experiment (median z-score of 0.7). Consistent with the observed intensity data, these probesets contribute little to the loadings for PC 1. In addition, the 5'-middle-3' probeset loading pattern observed for the higher-concentration controls (Lys and Phe in Additional File 1: Supplemental Figure S2(c)) indicates that the 5' probeset signals carry more of the variance of the dataset. This is likely attributable to low processivity in the in vitro transcription reaction used to synthesize the polyA+ controls (which proceeds in the 3' to 5' direction).

Internal RNA Degradation and Endogenous Controls

The PCA model results for the Affymetrix-designated RNA degradation internal control data (Figure 5) illustrate a pattern complementary to the PCA results obtained for the polyA+ external spike-in control dataset, but with some subtle differences. For this dataset, the primary contribution of RNA degradation is captured in the first component of the model (PC 1), followed by the separation of hybridizations that differ in log2 probe intensities and overall array quality in the subsequent PCs (2 and 3). This is observed for the group of hybridizations flagged for elevated 3'/5' ratios for the GAPDH and/or β-Actin controls (G73, G74, G82, G75, G80, G78 and G77, and to a lesser extent G79, I100 and A4), which are separated in PC 1, representing 68% of the model variance.
Likewise, the major variables that contribute to the loadings for PC 1 correspond to the 5'-end and middle segments of the Affymetrix GAPDH and β-Actin probesets (RG5, RGm, RbAct5, RbActm; see Additional File 1: Supplemental Figure S3(b)). Hybridizations that correspond to shifted log2 probe intensities and elevated NUSE values (F64, I90, B22) are separated on PC 2. Notably, hybridizations B20 and D46 are partially separated from the other hybridizations on PC 3 (≈ 7%); the former shows a slight indication of cRNA degradation (3'/5' ratio of 3.16 for β-Actin), but it is unclear how D46 (hybridization ID NNC2003070706Aaa) differs from the others with regard to the Affymetrix cRNA degradation internal controls. In all, the PC 1 × PC 2 × PC 3 scores profile illustrated in Figure 5 represents ≈ 95% of the total model variance. In contrast to the RNA degradation control dataset, the PC 1 × PC 2 × PC 3 scores profile for the PCA model of the endogenous control data (comprising 100 Affymetrix-identified "housekeeping genes") captures only 53% of the total model variance, with the remainder dispersed among subsequent PCs (Figure 6). The PC 1 × PC 2 × PC 3 profile does, however, show some similarities to the patterns observed for both the external polyA+ and the internal RNA degradation control PCA models. The sample F64 and its re-hybridization I90 are present as outliers in PC 1, as is the group of hybridizations (G73, G74, G75, G77, G78, G80, G82, I100) flagged for elevated 3'/5' ratios in PC 2. Notably, PC 3 (8.5% variance) contains additional samples from the Date "B" group (B17, B20), for which the source of the variance contribution is not apparent. The samples that were considered outliers with respect to hybridization and/or scanning issues (F67, F68, E60) are indistinguishable in the PC 1 × PC 2 × PC 3 profile, but are apparent in the lower-PC profile (the PC 4 × PC 5 × PC 6 layout within Figure 6). Sample J111 is not identified as an outlier within either the internal RNA degradation or endogenous control PCA models; this hybridization is deemed an outlier only by the polyA+ control model (Figure 4), as its only significant variance is measured via the probesets attributable to the four polyA+ controls (Lys, Phe, Dap and Thr). This exemplifies the utility of controls that probe data quality at multiple stages of data generation (Figure 1a).

Conclusions

Different types of controls provide distinct levels of data quality information that can be readily resolved through principal component analysis. A layered PCA modeling of the four classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, and endogenous or "housekeeping" genes) is valuable for evaluating data quality at a number of stages within the experiment (e.g., hybridization, RNA amplification). The variance at each stage, whether spike-in or internally present, provides information on data quality complementary to that provided by the QA/QC metrics. This work supports the use of both external and internal control data to assess the technical quality of microarray experiments. In the results presented here, using a layered PCA approach, we find that both the external and internal controls carry critical information about technical performance that is consistent with whole-array quality assessment. This information is obtained for every sample generated using spike-in controls and permits assessment of technical performance for each array.
This study is thus a key element in our efforts to develop control methods, materials and designs that support the use of genome-scale data with confidence. Furthermore, these results validate the proposal to use such controls with large datasets generated on multiple platforms or with other multiplexed technology applications.

Additional material

Additional file 1: contains the PCA model results, including both diagnostic Q/Hotelling T² plots and loadings plots for the spike-in hybridization and polyA+ control data and the internal cRNA degradation control data subsets, in Supplemental Figures S1, S2, and S3, respectively. Additionally, two supplemental tables are provided to aid in the interpretation of the data within the manuscript: Table S1 provides condensed annotation information for the single rat experiment, and Table S2 lists the probeset identifiers for the spike-in hybridization and polyA+ controls together with the internal Affymetrix cRNA degradation (RNAd) and endogenous controls for the RAE230A array.

Additional file 2: contains a copy of the Microarray Centre Quality Assessment of Affymetrix Data for EXP_CWTA_0103_01, comprising 138 hybridizations on Rat Expression Set 230A Arrays. Hybridization HFB2003080611Aaa listed in the QA report was excluded from the PCA dataset as the full annotation information was not available at the time of this study.
Estimation of genetic diversity between three Saudi sheep breeds using DNA markers

The genetic variation of the Najdi, Harri and Awassi breeds of Saudi sheep prevailing in Raniah province of Makka district was assessed and compared to Sudanese Desert sheep using the random amplified polymorphic DNA polymerase chain reaction (RAPD-PCR) technique. Five primers successfully amplified 40 distinguishable bands with an average of 96% polymorphism, revealing that Saudi sheep breeds possess the genetic variation required for further genetic improvement. The resulting dendrogram showed two main separate clades. The Desert sheep is genetically distant and appeared as an out-group from the Saudi sheep breeds. The first main clade included all of the Najdi individuals and only two individuals from the Harri breed, while the second main clade comprised two subgroups: the first included individuals from the Harri breed and the second included both Harri and Awassi individuals. The cluster analysis shows that the Najdi breed is genetically different from both Harri and Awassi and that some Harri individuals showed genetic closeness to Awassi. The present study will help to clarify the picture of the genetic diversity of these local Saudi sheep breeds in Raniah province and should be followed by further studies using advanced DNA markers and all available breeds in the kingdom to obtain a precise estimation of the phylogeny of these local genetic resources.

INTRODUCTION

The population of sheep in the Kingdom of Saudi Arabia is about 5.2 million head (Saudi Ministry of Agriculture, 2011). In Raniah Province of Makka district alone, there are 250,000 head of sheep (Al Faraj, 2003). Harri and Najdi are local Saudi sheep breeds that show good adaptive traits to the local environmental conditions and meet the
Saudi consumer needs. Najdi is the principal native sheep breed in the eastern province of Saudi Arabia (Aljumaah et al., 2014). Awassi (also known as Naemi) sheep have been exported from their origin in the east of the Mediterranean to more than 30 countries on all continents of the world, including KSA (Galal et al., 2008). The breed is known for its good milk production (Al-Atiyat and Aljumaah, 2014). Indigenous sheep breeds are a valuable source of genetic material due to their adaptation to local harsh environmental conditions, nutritional fluctuations and resistance to diseases and parasites (Nsoso et al., 2004; Galal et al., 2008). Unfortunately, an accelerated decline of biodiversity worldwide has been reported, and 20% of domestic animal breeds are at risk of extinction (FAO, 2000; Kunene et al., 2009). For sheep in particular, it is estimated that 180 sheep breeds (14%) are extinct (Cardellino, 2004; FAO, 2007). There is a serious risk that many breeds may perish before they have been fully recognized and exploited. Conservation and maintenance of the animal genetic biodiversity of local breeds will facilitate the effective management of farm animal genetic resources. There is a need to genetically re-evaluate these breeds to assess the existing population structure and differences, which would serve to facilitate future conservation programs. On the basis of microsatellite data, considerable genetic differentiation was recently reported in Saudi Najdi sheep (Musthafa et al., 2012). The first step in the conservation and utilization of indigenous sheep breeds is the characterization and evaluation of genetic diversity, which is a prerequisite for improving any species (FAO, 2007; Bjornstad and Roed, 2001; Notter, 1999).

The traditional phenotypic characterization can now be complemented by molecular markers and sophisticated statistical techniques for data analysis. Random amplified polymorphic DNA (RAPD) is a polymerase chain reaction (PCR)-based fingerprinting technique that amplifies random DNA fragments with single short primers of arbitrary nucleotide sequence under low annealing stringency (Williams et al., 1999; Awad et al., 2010). RAPD markers have been described as a simple and easy method for estimating genetic variability among breeds or species (Kumar et al., 2008; Ruane, 2000). The objective of the present study was to utilize the RAPD technique to characterize three Saudi sheep breeds (namely Awassi, Harri and Najdi) in Raniah province and to estimate the genetic diversity within and between these breeds and Sudanese Desert sheep as an outbreed.

Study area

The study area pertains to Raniah Province of Makkah district (12° 30' N, 42° E) in the western part of Saudi Arabia, extending over 62,000 km². Raniah Province lies about 870 km south-west of Riyadh, 380 km west of Taif and 150 km north of Bishah. The province shares the meteorological and ecological attributes of the rest of the Arabian Peninsula. It is characterized by a hot arid desert climate, with average annual rainfall of 90 mm, maximum temperatures between 34 and 45°C in summer and between zero and 20°C in winter, and an average relative humidity of 22% (Al Faraj, 2003).

Animals

Full-mouthed unrelated females were randomly selected from three Saudi sheep breeds, namely Awassi (Naeimi), Harri and Najdi, to serve as blood donors. Twenty individuals were sampled from Najdi, 14 from Harri and five from Awassi. Blood was also collected from three Sudanese desert sheep to serve as an outbreed for comparison.

Genomic DNA extraction

Blood samples were collected from the jugular vein of each full-mouthed unrelated female. At least 5 ml of blood was drawn from the vein in the neck of each animal and collected in EDTA vacutainers. The blood was gently mixed with anticoagulant and kept at -20°C. Genomic DNA was extracted from peripheral blood lymphocytes according to the instructions of a blood DNA preparation kit (Jena Bioscience, Germany).

PCR amplification

PCR amplification was performed in a 25 µl reaction volume using Promega PCR master mix according to the manufacturer's instructions, with 30 pmol of each primer. Cycling conditions were: initial denaturation at 94°C for 2 min, followed by 35 cycles consisting of denaturation at 94°C for 30 s, annealing at 55°C for 30 s and extension at 72°C for 2 min, with a final extension at 72°C for 2 min. Amplified products were electrophoresed on 1.5% agarose gel at constant voltage in 1X TBE for approximately 1.5 h. They were visualized by staining with ethidium bromide and photographed under ultraviolet light, and molecular weights were estimated using a 1 kbp DNA ladder.

Scoring and statistical analysis

PCR products were scored across the lanes as variables. The presence of a band of amplified DNA was scored as (1) and absence as (0). The data generated were used for the calculation of a similarity matrix based on Nei and Li (1979). Very faint bands were excluded from the analysis. Similarity coefficients were utilized to generate a phylogenetic tree (dendrogram). Pairwise genetic distances between individuals were calculated by the percentage disagreement method. These data were used in cluster analysis with the unweighted pair-group method using arithmetic averages (UPGMA), in which samples were grouped based on their similarity, with the aid of the statistical software package STATISTICA version 9 (StatSoft Inc., 2009).

RESULTS AND DISCUSSION

Information on genetic relationships in livestock within and between species has several important applications for genetic improvement and breeding programmes (Appa Rao et al., 1996). Comprehensive knowledge of the existing genetic variability is the first step for the
Animals Full mouthed unrelated females were randomly selected from three Saudi sheep breeds, namely, Awassi (Naeimi), Harri and Najdi to serve as blood donors.Tw ent y individuals w er e s am p led from N aj di, 14 from H arr i and five fr om A w assi.B lo od w as als o c ollect ed fr om thr ee S udanes e des ert sh eep to ser ve as outbr eed f or c om par is on. Genomic DNA extraction Blood samples from Jugular vein were collected from full mouthed unrelated female.At least 5 ml blood sample was drawn from the vein in the neck of each animal and collected in EDTA vacutainers.The blood was gently mixed with anticoagulant and kept at -20°C.Genomic DNA was extracted from peripheral blood lymphocytes according to instructions of blood DNA preparation kit (Jena Bioscience, Germany). PCR amplification The PCR amplification was performed in a 25 µl reaction volume, using Promega PCR master mix according to the instructions by the manufacturer with 30 Pmol from each o f t h e primers: Initial denaturation at 94°C for 2 min, followed by 35 cycles consisting of denaturation at 94°C for 30 s, annealing at 55°C for 30 s, extension at 72°C for 2 min and a final extension at 72°C for 2 min.Amplified products were electrophoresed on 1.5% agarose gel at constant voltage and 1X TBE for approximately 1.5 h.They were visualized by staining with ethidium bromide and photographed under ultraviolet light and molecular weights were estimated using 1 Kbp DNA ladder. Scoring and statistical analysis PCR products were scored across the lanes as variables.The presence of a band of amplified DNA was scored as (1) and absence as (0).The data generated was used for calculation of similarity matrix based on Nei and Li (1979).Very faint bands were excluded from the analysis.Similarity coefficients were utilized to generate a phylogenetic tree (dendrogram).Pairwise genetic distance between individuals were calculated by the percentage disagreement method.These data were used in cluster analysis with the unweighted pair-group method using arithmetic averages (UPGMA), in which samples were grouped based on their similarity with the aid of statistical software package STATISTCA-version 9 (StatSoft Inc., 2009). RESULTS AND DISCUSSION Information on genetic relationships in livestock within and between species has several important applications for genetic improvement and breeding programmes (Appa Rao et al., 1996).Comprehensive knowledge of the existing genetic variability is the first step for the Table 1.The sequences of primers used and their polymorphic bands among three Saudi and one Sudanese sheep breeds. Primer Sequence of primer (5'-3') conservation and exploitation of domestic animal biodiversity.Therefore, the objective of this study was to evaluate the genetic diversity of sheep breed in Raniah province of Saudi Arabia based on RAPD analysis.Five, out of 17 tested primers successfully amplified polymorphic bands between the different sheep breeds. 
The amplified PCR products showed identical band patterns with similar intensity (Figure 1). Of the total of 40 distinguishable amplified fragments, 39 were polymorphic, with an average of 7.8 bands per primer. The maximum number of fragments (9 bands) was produced by three primers with 100% polymorphism, while the minimum number of fragments was produced by primer OPB-5 with 83.33% polymorphism (Table 1). The very high polymorphic rate (96.67%) indicated that the studied sheep breeds possess the genetic variation needed for potential future preservation and breed development. Although all studied populations of Saudi Najdi, Hbsi, Arb and Naemi sheep had substantial levels of genetic variation, Najdi sheep had the highest gene diversity (Aljumaah et al., 2014).

Table 2 shows the genetic distance between individuals of the four sheep breeds. Individuals designated with numbers from 1 to 20 are Najdi, from 21 to 34 are Harri, from 35 to 39 are Awassi and from 40 to 42 are Sudanese Desert sheep. The highest genetic distance (0.53) was found between a Najdi individual (N14) and the three Desert sheep individuals (Desert sheep 1-3). On the other hand, the least genetic distance (0.0) was found between Najdi individuals N1 and N2 and also between two Desert sheep individuals (40 and 42). A genetic distance value of 0.0 reflects very high similarity between two individuals. The distance measure between two clusters is calculated from the formula D = 1 - C, where D is the distance and C the correlation between object clusters. If objects are highly correlated, they will have a correlation value close to 1 and a genetic distance value close to zero; therefore, highly correlated clusters are nearer to the bottom of the dendrogram. Object clusters that are not correlated have a correlation value of zero and a corresponding genetic distance value of 1, and objects that are negatively correlated will have a correlation value of -1 and a genetic distance of 2. As shown in Figure 2, the dendrogram constructed from the RAPD-PCR data shows that the Desert sheep is genetically distant and appears as an out-group to the Saudi sheep breeds. The result also shows that there are two main separate clades, and most of the individuals belonging to the same breed clustered together. The first main clade included the Najdi individuals (N1-N20) and only two individuals from the Harri breed, while the second main clade comprises two subgroups, of which the first contains only individuals from the Harri breed (H3-H9) and the second includes both Harri and Awassi individuals. The cluster analysis shows that the Najdi breed is genetically different from both Harri and Awassi and that some Harri individuals are genetically close to Awassi.

In a study aimed at characterizing the genetic constitution of Awassi, Harri and Habsi Saudi sheep using the random amplified polymorphic DNA (RAPD) technique, the highest homogeneity was observed within the Harri breed, followed by the Habsi and Awassi breeds (40 and 24.2%, respectively) (Sabir et al., 2013). The genetic structure of Saudi sheep populations including Najdi, Hbsi, Arb and Naemi was investigated using microsatellites, revealing substantial genetic variability, with an average heterozygosity range of 0.759 to 0.811 (Aljumaah et al., 2014). Genetic characterization, however, should be a continuous process of surveying and monitoring the existing indigenous breeds.
Conclusion and recommendation

The very high polymorphic rate (96.67%) indicated that the studied sheep breeds possess the genetic variation needed for further preservation and breed development. The results of this study show that the Najdi breed is genetically different from both Harri and Awassi and that some individuals from Harri showed genetic closeness to Awassi. The present study will help to clarify the picture of the genetic diversity of these local Saudi sheep breeds and should be followed by further studies using larger numbers of animals from different geographical regions in the kingdom to obtain a precise estimation of the phylogeny of these local genetic resources.

Figure 2. Phylogenetic tree showing relationships among the four sheep breeds obtained by RAPD-PCR analysis using five primers. Individuals designated with N are Najdi, with H are Harri and with A are Awassi sheep.

Table 2. Matrix of RAPD dissimilarity among three Saudi sheep breeds and the Desert sheep breed based on Nei and Li coefficients.
Uncommon Coordination Modes of a Potential Heptadentate Aminophenol Donor

This work describes the synthesis, characterization and reactivity towards HoIII of a potential heptadentate N4O3 aminophenol donor. The crystal structure of the [Ho(1,1,4-H3L)(1,1,4-H6L)] complex (1,1,4-H6L = 6,6'-(2-(5-bromo-2-hydroxy-3-nitrobenzyl)-2,5,8,11-tetraazadodecane-1,12-diyl)bis(4-bromo-2-nitrophenol)) shows that the holmium atom binds two aminophenol ligands, one acting as trianionic hexadentate and the other as neutral monodentate. As far as we know, both coordination modes of the aminophenol are hitherto unknown for this kind of scarcely reported ligand. This leads to coordination number 7 for the HoIII ion, which is in a capped trigonal prism environment.

Introduction

Since the discovery of the first single-ion magnet (SIM) in 2003 [1], the bis-phthalocyanine terbium complex [Tb(Pc)2], the field of molecular magnetism began to focus on the coordination chemistry of lanthanoid elements. These elements, by themselves, fulfil two of the necessary requirements for a molecule to behave like a magnet: they present intrinsic anisotropy, and they usually have a high-spin ground state. However, according to Reinhart and Long [2], the anisotropy of the molecule is modulated by the interaction between the single-ion electron density and the crystal field environment in which it is placed. In this sense, for oblate ions like DyIII or HoIII, a strong axial crystal field should maximize the uniaxial anisotropy. Indeed, it has been demonstrated that an axial pentagonal bipyramidal (pbp) environment usually increases the anisotropy of the complexes, improving their magnetic properties. Accordingly, the blocking temperature record for an air-stable molecular magnet (20 K) is held by a dysprosium(III) complex with pbp geometry [3]. Nevertheless, this temperature is still very low and, consequently, more research into the coordination chemistry of lanthanoid complexes with ligands that can lead to pbp geometries is still needed in order to improve the magnetic behaviour of this kind of complex. With these considerations in mind, in this study we describe the synthesis of a new potentially heptadentate ligand, which could by itself predetermine a pbp geometry, and its reactivity towards holmium(III).

Materials and General Methods

All chemical reagents and solvents were purchased from commercial sources and used as received without further purification. Elemental analyses of C, H and N were performed on a Thermo Scientific FLASH SMART analyzer. 1H-NMR spectra of 3NO2,5Br-H3L and 3NO2,5Br-H6L 1,1,4 were recorded on a Varian Inova 400 spectrometer, using DMSO-d6 as solvent. The infrared spectrum of 3NO2,5Br-H3L was recorded in ATR mode on a Varian 670 FT-IR spectrophotometer in the range 4000-500 cm⁻¹.

Single-Crystal X-Ray Diffraction Studies

Single crystals of [Ho(3NO2,5Br-H3L 1,1,4)(3NO2,5Br-H6L 1,1,4)]·1.5CH3C6H5 (2·1.5CH3C6H5) were obtained as detailed above. An ellipsoid diagram for 2 is shown in Figure 2 and the main distances and angles are recorded in Table 1. The crystal structure shows that the unit cell is composed of neutral [Ho(3NO2,5Br-H3L 1,1,4)(3NO2,5Br-H6L 1,1,4)] complexes, with toluene as solvate. In the complex, two aminophenol ligands are joined to the holmium(III) ion. One of them acts as a trianionic hexadentate donor, using all its oxygen atoms and three of the four nitrogen atoms to coordinate to the metal centre.
The Ho···N11 distance of 2.759(10) Å seems too long to be a real coordination bond and is best considered a secondary intramolecular interaction [9]. Thus, this ligand provides an N3O3 environment to the HoIII centre. The coordination sphere of the metal ion is completed by an oxygen atom (O23) coming from the second aminophenol ligand, which acts as a neutral monodentate donor. Curiously, in this second ligand, the coordinated phenol oxygen atom is deprotonated and the nitrogen (N21) bearing two benzyl substituents is protonated. Thus, this second neutral aminophenol ligand is a zwitterion, with the charge distribution shown in Scheme 3. As a result of the described features, HoIII reaches coordination number 7. Calculations of the distortion from an ideal HoN3O4 core with the SHAPE program [10] indicate that the geometry is closest to a capped trigonal prism. The main distances and angles about the metal centre agree with those expected for holmium complexes with polydentate N,O donors [9], and this aspect does not deserve further consideration. Nevertheless, it should be noted once again that in this complex one of the aminophenol ligands acts as trianionic hexadentate and the other as neutral monodentate. Neither of these coordination modes has been previously described for this kind of scarcely reported aminophenol ligand, which, as far as we know, in the only three previous crystallographically characterised examples [7,8], behaves as trianionic heptadentate. Therefore, this work contributes to increasing the knowledge of the coordination chemistry of lanthanoids with a barely reported potentially heptadentate aminophenol ligand.
Microsporidial stromal keratitis in a cat

A 12-year-old female spayed felid presented after a 35-day history of right eye pain. On examination, a sub-epithelial opacity was identified in the cornea. A lamellar keratectomy was performed, and histopathological analysis revealed low numbers of 2 × 4 µm, Gram-, hematoxylin-eosin- and Gomori methenamine-silver-positive spores. Transmission electron microscopy found ultrastructural findings consistent with the phylum Microspora. To the authors' knowledge, this is only the second case of microsporidial stromal keratitis reported in a felid.

Introduction

There are almost 1300 recognised species of microsporidia, which were formerly placed in the kingdom Protista and have recently been re-classified as fungi in the phylum Microspora; they infect both vertebrates and invertebrates [1]. Microsporidia are spore-forming, unicellular, obligate intracellular organisms sharing a unique organelle, the polar filament or polar tubule; they are ubiquitous in nature and distributed worldwide [1,2]. Encephalitozoon cuniculi is the classic microsporidial pathogen of mammals, and is the most extensively studied. Spontaneous infections with E. cuniculi have been documented in rabbits, rodents, ruminants, horses, domestic dogs, wild and captive foxes, domestic cats, psittacine birds, and non-human primates [3]. Corneal stromal microsporidiosis is rare in both humans and animals, but has been previously reported, with few human cases and one feline case [4,5]. The clinical manifestations of ocular microsporidiosis vary and depend on both the genus involved and the immune status of the patient. There are two classical clinical presentations of ocular microsporidial infection: corneal stromal keratitis, occurring in immunocompetent patients, and an epithelial keratopathy and conjunctivitis, seen in immunosuppressed patients [6]. However, the condition's phenotypic presentation can be mixed irrespective of the patient's immune status [7]. In immunocompromised people, especially the Human Immunodeficiency Virus (HIV)-positive and organ-recipient populations, microsporidia are recognised as opportunistic organisms [8,18]. Animal and environmental reservoirs of microsporidia, as well as zoonotic potential, are hypothesised but not yet proven [9]. Treatment of human microsporidial infection with therapeutic agents is well documented; however, there are relatively few reports of drug efficacy in animals [10]. The single case of stromal keratitis in a feline reported prior to this case was thought to be due to E. cuniculi and was cured with a keratectomy [5].

Case

A 12-year-old female spayed domestic shorthair feline presented on day 0 with a 5-week history of ocular pain, corneal edema and moderate episcleral injection. The cat had lived on the west coast of the USA for several years before living in Washington, D.C., where the case presented, and was kept exclusively indoors. The cat lived in an apartment building that faced the aviary of the National Zoo, approximately 50 yards away. Topical triple antibiotic ointment (neomycin, bacitracin, polymyxin B) had been prescribed by the referring veterinarian and did not improve the eye clinically. The blepharospasm worsened, and the cat was referred to a center of veterinary ophthalmology for examination. On examination, the cat was visual and ably navigated the examination room. The most obvious clinical sign was ocular pain of the right eye, manifested by severe blepharospasm.
The cornea of the right eye was vascularised inferiorly and temporally, and the vessels extended centrally to an area of corneal sub-epithelial opacification. The corneal opacity of the right eye was yellow to white and covered the entire axial and inferior cornea, and there was moderate chemosis and hyperaemia (Fig. 1). Both pupils were responsive to light directly and consensually, and the menace response was intact in both eyes. Anterior segment examination by biomicroscopy revealed +2 aqueous flare in the right eye; the left eye was normal. The fundus was normal in both eyes by indirect ophthalmoscopy. Schirmer tear test (STT) results were measured as >15 mm in a minute bilaterally. Intraocular pressures were measured via applanation tonometry and were 6 mmHg in the right eye and 12 mmHg in the left eye. The lower intraocular pressure of the right eye was attributed to uveitis [1]. Fluorescein tests were negative bilaterally. Corneal opacification may be due to a variety of causes; the most likely, considering the history and clinical appearance in this case, would be infectious, specifically bacterial, fungal, parasitic or viral. A feline comprehensive blood screen was performed and included a complete blood count, serum chemistry, thyroid evaluation, and serologic tests for feline leukaemia virus (FeLV) and feline immunodeficiency virus (FIV) antibody. Feline herpesvirus (FHV-1) tests were not performed because of the poor sensitivity and poor positive predictive value of available tests [2]. The red blood cell count (10.58 × 10^6/µL; reference range 5.28-9.97 × 10^6/µL) and hematocrit (52.4%; reference range 25.8-48.1%) were mildly elevated, and the eosinophil count was elevated, with a percentage twice the normal value (12%; reference range 0-6%). Serum chemistry revealed elevated amylase (2440 U/L; reference range 600-1600 U/L), elevated alkaline phosphatase (87 IU/L; reference range 10-42 IU/L), elevated triglycerides (320 mg/dL; reference range 30-90 mg/dL), and mildly elevated globulins (5.4 g/dL; reference range 2.0-5.0 g/dL). Feline immunodeficiency virus antibody and feline leukaemia virus antigen were not detected by ELISA. Urine was not available for analysis. Medical and surgical interventions were discussed; the goals were to relieve pain and obtain a definitive diagnosis and resolution. Considering that the corneal opacity was sub-epithelial, the disease had lasted five weeks, and the clinical signs were worsening, surgery was recommended. A lamellar keratectomy was performed in the right eye on day 0. A sample for bacterial culture and sensitivity was submitted prior to preparation of the cornea for surgery; results received on day 7 revealed no bacterial growth. Pre-medication was a combination of 0.3 mg/kg butorphanol (Torbugesic, Fort Dodge Animal Health), 0.02 mg/kg acepromazine (Aceproject, Vetus Animal Health) and 0.02 mg/kg atropine (Atroject S.A., Vetus Animal Health) given IM; propofol (Baxter Healthcare Corporation) was given IV to effect for induction, and the patient was maintained on isoflurane (Attane, Minrad Inc.) via an endotracheal tube. The cornea was prepared for surgery using a triclosan antiseptic (Septisol NPD with Triclosan, Steris Corporation). The excised corneal sample was submitted for histopathologic analysis in 10% formalin. The cat was sent home on topical erythromycin every 8 hours and atropine ointment every 24 hours, to prevent infection and relieve reflex ciliary body spasm, respectively.
The keratectomy bed healed routinely, with minimal vascularisation and fibrosis observed on day 120 (Fig. 2). The definitive diagnosis of microsporidial keratitis was made histologically. Light microscopy revealed multifocal corneal epithelial hyperplasia, intra-epithelial pustules, transmigration of neutrophils across the corneal epithelium, vascularisation and collagenolysis of the corneal stroma, and corneal edema. Within the areas of corneal inflammation, there were low numbers of 2 × 4 µm, Gram-, hematoxylin-eosin (HE)- and Gomori methenamine-silver (GMS)-positive, oval protistan spores (Fig. 3). Transmission electron microscopy (TEM) showed mature spores containing sporoplasm with a single nucleus, a polar tubule with nine to eleven coils, a thin electron-dense exospore, an inner, thicker, electron-lucent endospore, a unit membrane, an anchoring disc at the anterior pole, and an electron-lucent posterior vacuole. These ultrastructural findings were consistent with the phylum Microspora and typical of the family Nosematidae. Within the family Nosematidae, this organism had features most suggestive of the genus Nosema, containing diplokaryotic nuclei and approximately 11 polar filament coils (Fig. 4 and Fig. 5).

Discussion

All microsporidia have an obligate intracellular life cycle, existing outside the host as environmentally resistant spores, which are their infectious stage [1]. The life cycle is postulated to be simple and direct, and most infections are acquired through ingestion or inhalation, transplacentally, or from trauma to the epithelium [2]. Transmission of microsporidia is believed to occur primarily by fecal-oral or urinary-oral routes. Waterborne transmission and direct ocular contact with contaminated soil have also been documented as modes of transmission [11,12]. Extrusion of the polar filament from the spore, and injection of the infectious sporoplasm into the host cell, is postulated to be the primary mechanism by which all microsporidia establish intracellular infection [1]. However, phagocytosis of spores by host cells may be an alternate mechanism of infection [13]. The organism then divides by a process of merogony, followed by differentiation into spores by a process called sporogony [1]. Microsporidia are recognised as opportunistic pathogens of immunocompromised people, especially the Human Immunodeficiency Virus (HIV)-positive and organ-recipient populations [8,18]. Animal and environmental reservoirs of microsporidia, as well as zoonotic potential, are hypothesised but not proven [1]. Treatment of human microsporidial infection with therapeutic agents is well documented; however, there are relatively few reports of drug efficacy in animals [10]. Corneal stromal microsporidiosis is rare in humans, with few cases of microsporidial stromal keratitis reported previously [4]. The ophthalmic manifestations of ocular microsporidiosis exhibit characteristic clinical features depending on the genus involved and the immune status of the patient.
There are two clinical presentations of ocular microsporidial infection: corneal stromal keratitis occurring in immunocompetent patients, and an epithelial keratopathy and conjunctivitis seen in immunosuppressed patients; mixed presentations occur irrespective of the patient's immune status [4,6,7]. Stromal keratitis had previously been reported in a single feline patient prior to this case; it was thought to be due to E. cuniculi and was cured with a keratectomy [5]. The organisms were present throughout the corneal stroma of that cat; however, TEM to speciate the organism was not performed. If infection is limited to the corneal epithelium and conjunctiva, producing a diffuse punctate epithelial keratoconjunctivitis, the genus Encephalitozoon is likely, whereas with the genera Nosema and Microsporidium the infection typically involves the stroma and keratocytes [4,6]. Diagnosis of microsporidial stromal keratitis cannot easily be made by culture, as microsporidia grow only in tissue culture [15]. Diagnosis is made by cytologic or histopathological tissue examination, with histopathology being preferred, with up to 92.3% sensitivity in the human literature [15]. TEM can be utilised for diagnosing microsporidiosis by visualising the polar tubule, a unique structure found only in microsporidia [1]. This case shows the ultrastructural features, such as the number of coils in the polar tubule and the arrangement of the nuclei, that can further speciate the genus of microsporidia [2]. While morphologic studies are able to distinguish between genera of microsporidia, TEM may not always distinguish species within a genus. Histochemical methods for detecting microsporidial spores are commonly used in clinical diagnostic laboratories. Microsporidia stain poorly with hematoxylin-eosin, but the Brown-Brenn Gram stain and Warthin-Starry silver stain can be used for detection of the organism in tissue samples [8,16]. Serology, utilizing enzyme-linked immunosorbent assay (ELISA), may be used for screening purposes, although immunodeficient animals will not mount a reliable antibody response [8]. Polymerase chain reaction (PCR) tests have been developed and validated for multiple microsporidial species and are now considered the gold-standard diagnostic tool for microsporidia, owing to their ability to detect low levels of pathogens with high sensitivity and specificity [17]. Various genetic targets have been used with PCR detection, the specific targets varying with the microsporidial species examined; often a component of the small subunit ribosomal RNA is targeted [17,18]. This case represents a unique clinical finding, as it is only the second reported case of corneal microsporidial infection in the feline. This cat was systemically healthy prior to, during, and after the removal of the diseased cornea. The cat was FeLV- and FIV-negative, following the pattern of immune competence seen with corneal stromal microsporidial keratitis in humans. The elevated eosinophil count may have been an indicator, but without clinical signs of parasitic infection or disseminated fungal disease, no therapy was instituted. Although animal-to-animal transmission has not been documented, we question where an indoor cat may have acquired such an infection. One hypothesis is the aviary, which faced a window this cat sat in all day long.
Microsporidiosis has previously been documented in birds, and infectious spores are able to survive in the environment, but the exact modes of transmission have not been elucidated [8,9]. It is possible that the cat traumatized its cornea and that microsporidia, being ubiquitous nearby, were able to opportunistically invade the cornea. This case was cured with a lamellar keratectomy, and the cornea remained disease-free at day 400 post-operatively. The mode of infection in this case remains an enigma. Microsporidial keratoconjunctivitis has previously been identified as an occupational hazard for veterinarians [19]. An increased risk of infection may be associated with animals that have active microsporidial infections; however, a definitive reservoir of species pathogenic to humans has yet to be discovered [20]. This disease should be considered as a potential differential diagnosis and possible workplace hazard for veterinarians due to the potential risk of zoonotic infection.

Declaration of competing interest

The authors have no personal or financial conflicts of interest.
Emerging role of exosomal microRNA in liver cancer in the era of precision medicine; potential and challenges

Exosomal microRNAs (miRNAs) have great potential in the fight against hepatocellular carcinoma (HCC), the fourth most common cause of cancer-related death worldwide. In this study, we explored the various applications of these small molecules while analyzing their complex roles in tumor development, metastasis, and changes in the tumor microenvironment. We also discussed the complex interactions that exist between exosomal miRNAs and other non-coding RNAs such as circular RNAs, and show how these interactions coordinate important biochemical pathways that propel the development of HCC. The possibility of targeting exosomal miRNAs for therapeutic intervention is paramount, even beyond their mechanistic significance. We also highlighted their growing potential as cutting-edge biomarkers that could lead to tailored treatment plans by enabling early identification, precise prognosis, and real-time treatment response monitoring. This thorough analysis revealed an intricate network of exosomal miRNAs that leads to HCC progression. Finally, strategies for the purification and isolation of exosomes and advanced biosensing techniques for the detection of exosomal miRNAs are also discussed. Overall, this comprehensive review sheds light on the complex web of exosomal miRNAs in HCC, offering valuable insights for future advancements in diagnosis, prognosis, and, ultimately, improved outcomes for patients battling this deadly disease.

Introduction

Hepatocellular carcinoma (HCC) is the most common type of primary malignancy of the liver, occurring most frequently in patients with underlying chronic hepatic dysfunction. HCC is the fourth most prevalent cause of cancer-related deaths worldwide and the sixth most common malignant tumor worldwide (Craig et al., 2020). The healthcare and economic burden of HCC continues to rise, and its incidence is projected to exceed one million cases by 2025 (Estes et al., 2018). HCC is a severe threat to people's physical and mental health because of its covert onset, high degree of malignancy, and poor prognosis. It is also considered a primary cause of mortality for individuals with liver cirrhosis and encephalopathy (Aravalli et al., 2008; Pinzani et al., 2011). Chronic infection with the hepatitis B virus (HBV) or hepatitis C virus, and dietary aflatoxins, are major risk factors (Granata et al., 2006; Miceli et al., 2009; El-Serag, 2011; Sukocheva, 2018). The overall initiation, development and progression of HCC is a multi-step and intricate process involving hepatocyte regeneration and necrosis linked to fiber deposition, as well as ongoing inflammatory damage. The high molecular heterogeneity of HCC is explained by the combination of epigenetic modifications and the accumulation of somatic genome mutations (Schulze et al., 2016). Therefore, the search for effective biomarkers for the detection and diagnosis of HCC has great clinical utility (Liu X.-N. et al., 2019). The detection of multiple hepatocellular carcinoma stem cell surface biomarkers (CD44, CD90, CD133/2 and OV-6) using electrochemical immunosensors has been demonstrated (Eissa et al., 2017). HCC can be treated primarily with surgery, transplantation, ablation, transarterial chemoembolization (Llovet et al., 2002; Lo et al., 2002; Llovet and Bruix, 2003), and drug therapy with agents such as regorafenib and lenvatinib (Bruix et al., 2017; Kudo et al., 2018).
Exosomes are nanovesicles 50-150 nm in diameter that are released into the extracellular milieu by fusion with the cell membrane (Théry et al., 2002). Tumor cells can influence nearby cells by means of exosomes, creating an environment that is conducive to tumor growth (Wu et al., 2019). Meanwhile, immune cells and stromal cells (such as stellate cells and mesenchymal stem cells) can act on tumor cells through exosomes to encourage or prevent carcinogenesis (Zhou et al., 2017). Methods for isolating exosomes and the main roles of exosomal microRNAs are shown in Scheme 1. Exosomes contain a variety of genetic material, such as mRNA, microRNAs and other non-coding RNAs, as well as proteins (Figure 1) (Dai et al., 2020). They have crucial significance in chemoresistance, angiogenesis (Cho et al., 2012), epithelial-mesenchymal transition (EMT) (Tauro et al., 2013), and tumor metastasis (Kahlert and Kalluri, 2013) because they mediate signaling pathways in recipient cells and are involved in intercellular communication and microenvironment regulation.

MicroRNAs are a class of non-coding RNAs that range in length from 17 to 24 nt. They participate in post-transcriptional control by causing the RNA-induced silencing complex (RISC) to degrade target mRNA or stop its translation by forming complementary base pairings with it (Ruvkun, 2006; Chen S. et al., 2017). Numerous biological activities, such as cell division, proliferation and migration, and the onset and progression of disease, have been linked to microRNAs (Gee et al., 2008; Tay et al., 2008; Kota et al., 2009; Png et al., 2012). Exosomal microRNA expression imbalances can hasten the course of a disease and affect the pathophysiological state of tumors. Additionally, the occurrence and development of tumors are closely linked to the aberrant expression of these microRNAs (Kosaka et al., 2013; Sun et al., 2018). According to the most recent studies, exosome-mediated microRNAs play a crucial role in the onset and progression of liver cancer (Li and Xu, 2019). As a result, the identification of serum exosomal microRNAs for the early diagnosis and prognostication of HCC becomes appealing. Furthermore, in contrast to cell therapy, cell-free exosome therapy is easier to store and produce in large quantities, and poses fewer risks and challenges. Cell-free exosome therapy thus represents a potentially effective new therapeutic approach (Fais et al., 2013; Liu et al., 2018).

Exosomes have been shown in studies to influence tumor growth by establishing an immunosuppressive environment via signal transduction between stromal cells and tumor cells (Zhou Y. et al., 2021). Exosomal microRNAs derived from cancerous cells and nearby stromal cells tend to stimulate the development of the metastatic environment (He et al., 2015; Liu et al., 2015; Li et al., 2016; Zhang H.
FIGURE 1 Schematic illustration of exosome biogenesis: it begins with budding toward the inner side of the plasma membrane, which allows the generation of early endosomes. Next, the endosomal sorting complex required for transport (ESCRT) protein family promotes the formation of late endosomes that collect various microparticles and apoptotic contents, including nucleic acids, proteins, and lipids, ultimately leading to the formation of multivesicular bodies (MVBs). Finally, MVBs amalgamate with the plasma membrane to form exosomes, which are secreted into the extracellular environment by exocytosis. Adapted from Dai et al., 2020, with copyright permission under the terms of the CC-BY-NC-ND 4.0 license.

Research indicates that loss of exo-miRNA-320a in exosomes derived from cancer-associated fibroblasts (CAFs) in HCC can trigger hepatocytes, the recipient cells, to activate downstream ERK signaling, leading to lung metastasis (Zhang Z. et al., 2017). Likewise, exo-miRNA-1247-3p in exosomes secreted by CAFs may facilitate HCC lung metastases (Fang et al., 2018). Additionally, adipocytes can secrete exo-miRNA-23a/b, which can be delivered to cancer cells to stimulate the growth and migration of HCC cells (Liu Y. et al., 2019). According to other research, macrophages can facilitate hepatoma cell invasion by secreting exosomes that contain exo-miRNA-92a-2-5p (Liu et al., 2020). Exosomes from cancer cells have the ability to influence tumor growth and metastasis. Some researchers propose that exo-miRNA-21 and exo-miRNA-10b in HCC exosomes, which are generated by an acidic microenvironment, may facilitate the growth and spread of cancerous cells; thus, they could be employed as HCC therapeutic targets and prognostic molecular markers (Tian et al., 2019). These findings suggest that miRNAs in exosomes can be delivered to target cells within the HCC microenvironment, control the growth of cancer cells, and create an environment that is conducive to tumor development and cancer metastasis.

Roles of microRNA in the liver

MiRNAs in liver metabolic processes

The liver is a crucial organ in maintaining metabolic homeostasis, playing a central role in both glucose and lipid metabolism. It receives glucose from dietary carbohydrates and releases it from glycogen stores or through gluconeogenesis, ensuring a steady supply to fuel essential tissues like the brain and muscles. Additionally, the liver efficiently processes lipids, absorbing them from the gut, packaging them into lipoproteins for transport, and regulating cholesterol levels to prevent both deficiency and overabundance (Feingold, 2024). Understanding these intricate metabolic processes in the liver is essential for comprehending metabolic disorders like obesity and diabetes (Adeva-Andany et al., 2016; Trefts et al., 2017). Many miRNAs have emerged, and continue to emerge, as essential regulators in every part of lipid biology. Although often dismissed as mere fine-tuners of gene expression, loss-of-function investigations in both animal and cell models unequivocally demonstrate the crucial functions that miRNAs play in metabolism, illness, and cellular and animal phenotypes (Figure 2) (Sedgeman et al., 2019; Paul et al., 2021).
According to a recent study by Kaur et al., the increase in gluconeogenesis caused by miRNA-22-3p and its target, Tcf7, is critical for the development of diabetes. The results of this investigation confirm miRNA-22 as a novel metabolic regulator and show that it targets Tcf7 to increase the expression of gluconeogenic genes in the liver. These results offer important new information for developing therapeutic approaches that effectively manage diabetes (Kaur et al., 2015). miRNA-206 was shown to be inhibited by fat buildup in the livers of obese mice and in human hepatocytes during a study examining the protective effects of miRNA-206 against hepatosteatosis and hyperglycemia. Obese mice showed considerable improvements in hepatosteatosis and hyperglycemia after receiving miRNA-206 injections into their livers. Mechanistically, the degradation of PTPN1 (protein tyrosine phosphatase, non-receptor type 1) was caused by miRNA-206's interaction with its 3′ untranslated region. Tyrosine kinases and PTPN1 are two different types of enzymes that cooperate to control signaling pathways. One possible therapeutic strategy could be to inhibit PTP1B, SH2, DEP1, and other PTP family members, as they have been connected to a higher risk of developing several human disorders (Verma and Sharma, 2018). miRNA-206 inhibited hepatic lipogenesis by blocking Srebp1c transcription and improved insulin signaling by encouraging phosphorylation of the insulin receptor (INSR) via downregulation of PTPN1 expression. In both human hepatocytes and the livers of obese mice, miRNA-206's dual regulation of lipogenesis and insulin signaling led to a decrease in the synthesis of lipids and glucose. miRNA-206's inhibitory effects were reversed in the livers upon reintroduction of PTPN1, indicating that PTPN1 is involved in mediating the protective effects of miRNA-206 against hyperglycemia and hepatosteatosis (Wu et al., 2017). According to a study by Castaño et al., lean mice can be efficiently made to develop glucose intolerance, adipose tissue inflammation, and hepatic steatosis by administering exosomes laden with synthetic miRNAs similar to those present in the blood of obese mice. These results support the idea that exosomal miRNAs regulate the metabolism of fats and carbohydrates in mice. Furthermore, the research demonstrated that obesity modifies the miRNA profile of circulating exosomes in mice, resulting in the increased expression of several miRNAs. Therefore, the early phases of the development of the metabolic syndrome, which are marked by the advent of glucose intolerance, dyslipidemia, and central obesity in mice, are actively influenced by obesity-associated exosomal miRNAs (Castaño et al., 2018).

MiRNAs are essential for controlling the metabolism of lipids and glucose in the liver, among other metabolic processes. An array of miRNAs has been found to act as important modulators of hepatic metabolic pathways. These miRNAs can affect insulin signaling, gluconeogenesis, lipogenesis, exosome-mediated communication, and ultimately the liver's general metabolic health. It is crucial to understand the processes by which miRNAs control liver metabolism in order to create innovative treatment approaches for metabolic diseases such as diabetes and obesity.
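Group comparisons of this kind reduce to straightforward differential-expression statistics. The minimal Python sketch below shows the usual log2 fold-change plus significance-test computation; the miRNA names and count values are invented for illustration and do not come from the studies cited above.

```python
import numpy as np
from scipy import stats

# Hypothetical exosomal miRNA counts (rows: miRNAs, columns: replicates)
# for lean vs. obese mice; names and values are illustrative only.
mirnas = ["miR-A", "miR-B", "miR-C"]
lean = np.array([[520, 480, 505],
                 [300, 310, 295],
                 [400, 390, 410]], dtype=float)
obese = np.array([[900, 870, 940],
                  [610, 580, 650],
                  [120, 110, 130]], dtype=float)

for name, l, o in zip(mirnas, lean, obese):
    log2fc = np.log2(o.mean() / l.mean())           # group-mean fold change
    _, p = stats.ttest_ind(l, o, equal_var=False)   # Welch's t-test
    print(f"{name}: log2FC = {log2fc:+.2f}, p = {p:.3g}")
# Real analyses would also normalize counts and correct p-values for
# multiple testing (e.g., Benjamini-Hochberg).
```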
Innate and adaptive immunity in hepatic inflammation and anti-inflammatory effects

The liver is an essential component of the immune system that works to keep viruses and other foreign substances out of the body while carefully balancing tolerance and immunity. Maintaining this equilibrium is crucial to avoid both excessive inflammation and inadequate infection control. This balance, and general tissue health, depends largely on the dynamic interactions between the various immune cells in the liver (Kubes and Jenne, 2018). The incidence of nonalcoholic fatty liver disease (NAFLD) has increased globally in parallel with the growth in diabetes and metabolic syndrome. NAFLD, a range of liver disorders that includes nonalcoholic steatohepatitis (NASH) and nonalcoholic fatty liver (NAFL), can progress in different ways and result in liver cancer and cirrhosis (Friedman et al., 2018). According to recent research, NAFLD affects other organs and regulatory mechanisms in addition to the liver: it raises the risk of cardiovascular disease, chronic kidney disease, and type 2 diabetes mellitus. Even though cirrhosis, liver failure, and hepatocellular carcinoma can result from the primary liver damage in NAFLD, cardiovascular disease accounts for the majority of deaths among NAFLD patients (Byrne and Targher, 2015).

It has been shown that miRNA-26a affects cellular development, differentiation, death, and metastasis. He et al. demonstrated that the miRNA-26a/IL-6/IL-17 axis has an immunoregulatory role in the development of NAFLD. Reduced IL-17 expression and slower NAFLD progression are caused by overexpression of miRNA-26a, which is partly mediated by IL-6 inhibition (He et al., 2017). Through positive regulation of the NF-κB-TNF-α axis, miRNA-378 plays a critical role in the development of hepatic inflammation and fibrosis and has come to light as a possible therapeutic target for NASH management. The incidence of NAFLD has dramatically increased in correlation with the recent rise in obesity, yet effective treatment methods are still inadequate and the underlying mechanisms are largely unknown. These results show that miRNA-378 stimulates the growth of hepatic fibrosis and inflammation, indicating the therapeutic potential of miRNA-378 inhibitors for the management of nonalcoholic fatty liver disease (Zhang T. et al., 2019).

Fibrosis signaling pathway

Liver fibrosis, which can lead to cirrhosis, liver cancer, and liver failure, is the body's wound-healing reaction to liver injury. The primary process in liver fibrosis is the activation of hepatic stellate cells (HSCs); myofibroblasts and cells derived from bone marrow are further significant elements. The molecular and cellular mechanisms underlying liver fibrosis are poorly understood because the liver is a complex organ (Aydın and Akçalı, 2018). A large number of studies have shown that the expression levels of miRNAs in the serum and liver tissue of patients with liver fibrosis are significantly changed (Yu et al., 2023). MiRNAs are implicated in the liver fibrosis process by affecting the proliferation, apoptosis, and activation of HSCs, immune cells, and hepatocytes (Tian et al., 2016).
A study states that the parasitic trematode Clonorchis sinensis, which inhabits the bile ducts of animals, releases extracellular vesicles (EVs) that can activate M1-like macrophages and cause biliary damage and fibrosis. This is accomplished by delivering a particular miRNA, Csi-let-7a-5p, which targets the NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) signaling pathway regulated by Clec7a (C-type lectin domain family seven member A) and Socs1 (suppressor of cytokine signaling 1) (Na et al., 2020). Tumor formation depends on the protein SOCS1, which is targeted by Csi-let-7a-5p and is essential for cell signaling and protein breakdown (Yan et al., 2021). Furthermore, another study by Chen et al. discovered that, by blocking mitochondrial fusion protein 2 (MFN2), the elevation of exosomal miRNA-500 in macrophages could accelerate liver fibrosis and encourage the growth and activation of hepatic stellate cells (HSCs). Understanding the role of these molecules in parasite-host interactions could lead to new therapeutic approaches for biliary injuries and fibrosis (Chen et al., 2021). MiRNA-103-3p is present in exosomes produced by THP-1 macrophages treated with lipopolysaccharide (LPS). It promotes the activation and growth of HSCs by targeting Krüppel-like factor 4 (KLF4), a transcription factor involved in cell division, proliferation, and growth. This interaction between HSCs and macrophages significantly influences the progression of liver fibrosis. Exosomes enriched with miRNA-223 are released more readily in individuals with NAFLD when myeloid cells expressing IL-6 signaling are activated. By transferring antifibrotic miRNA-223 to hepatocytes, these exosomes prevent liver fibrosis and decrease the hepatocyte expression of the profibrotic transcriptional activator with PDZ-binding motif (TAZ) (Ghaleb and Yang, 2017; Hou et al., 2021). During chronic liver damage, hepatocyte miRNA-221-3p activity can be inhibited to facilitate the rapid removal of accumulated extracellular matrix and hasten liver healing. Liver fibrosis, a major cause of death from liver illnesses, can be lessened by lowering the levels of miRNA-221-3p in hepatocytes; targeting miRNA-221-3p may therefore be a useful therapeutic approach. Furthermore, hepatocytes exhibiting reduced expression of miRNA-221-3p also exhibit elevated levels of GNAI2 (G protein subunit alpha i2),
a protein that prevents the release of C-C motif chemokine ligand 2 (CCL2). The resulting decrease in hepatic stellate cell (HSC) activation and reduction in liver fibrosis demonstrate the potential therapeutic utility of miRNA-221-3p in liver disorders and its capacity to hasten the clearance of fibrosis (Tsay et al., 2019; PubChem, 2022). The levels of α-SMA and Col1a1, two indicators of liver fibrosis, can be significantly lowered by raising the expression of miRNA-148a-5p in activated LX-2 cells, the cells that create liver scars. Notch2, a gene implicated in the onset of liver fibrosis, is the target of miRNA-148a-5p. The mechanism by which mesenchymal stem cells (MSCs) provide therapeutic benefits in the treatment of liver fibrosis may be attributed to their ability to enhance the production of miRNA-148a-5p, thereby inhibiting the Notch signaling pathway. The potential of miRNA-148a-5p as a biomarker to track the development of liver fibrosis seems encouraging (Zhou et al., 2022). Increased expression of miRNA-30a, whose levels are low in liver fibrosis, can prevent liver scarring by directly inhibiting autophagy, a mechanism that breaks down cellular components. An important modulator of this connection is Beclin1, a protein implicated in both autophagy and apoptosis (Prerna and Dubey, 2022). Consequently, miRNA-30a may provide a novel therapeutic target in the management of liver fibrosis. Because of its anti-fibrotic characteristics, miRNA-30a may be able to treat liver fibrosis by inhibiting the activation of hepatic stellate cells (HSCs), the primary scar-producing cells in the liver. This results in less collagen being produced and more scar tissue breaking down. The research findings indicate that miRNA-30a exerts an anti-fibrotic effect on HSCs by directly inhibiting Beclin1, which in turn inactivates the Beclin1 signaling pathway and suppresses autophagy in HSCs (Chen J. et al., 2017).
It was discovered that three mouse models of hepatic fibrosis, as well as activated HSCs treated with TGF-β1 (transforming growth factor beta 1), had lower levels of miRNA-488-5p. MiRNA-488-5p was found to decrease HSC multiplication and the expression of fibrosis-related markers in in vitro tests. Mechanistically, it was found that the 3′UTR of TET3 mRNA is directly bound by miRNA-488-5p, which lowers TET3 (tet methylcytosine dioxygenase 3) protein expression. Consequently, this led to the inhibition of the TGF-β/Smad2/3 signaling cascade. By suppressing TET3 expression, overexpression of miRNA-488-5p decreased extracellular matrix deposition and ameliorated liver fibrosis in mice (Qiu et al., 2023). The expression of miRNA-150-5p in liver tissue increases with the progression of hepatic fibrosis and decreases with its reversal. Hepatocytes undergoing apoptosis show upregulation of this miRNA, whereas proliferating hepatic stellate cells (HSCs) show downregulation of it. Overexpression of miRNA-150-5p makes hepatocytes more susceptible to apoptosis and encourages apoptosis in HSCs. Interestingly, miRNA-150-5p has a stronger effect on transcriptome stability in HSCs than in hepatocytes. MiRNA-150-5p is thought to trigger interferon signaling pathways, which could aid in HSC apoptosis. Overall, during liver fibrosis, miRNA-150-5p shows differing regulation and function in hepatocytes and HSCs (Chen et al., 2020). MiRNAs play a crucial role in the development and progression of liver fibrosis. Various miRNAs, including miRNA-103-3p, miRNA-221-3p, miRNA-148a-5p, miRNA-30a, miRNA-488-5p, and miRNA-150-5p, have been shown to regulate the activation, proliferation, and apoptosis of hepatic stellate cells (HSCs), hepatocytes, and immune cells, ultimately influencing the fibrotic process. These findings highlight the potential of miRNAs as novel therapeutic targets for the treatment and management of liver fibrosis.

Exosomal-miRNAs in cellular processes

MicroRNAs (miRNAs) are small, endogenous RNAs that post-transcriptionally regulate gene expression. These RNAs play a pivotal role in the regulation of gene expression and have been increasingly recognized for their involvement in various cellular processes, particularly in the context of HCC (Figure 3) (Ye et al., 2018). In HCC, the aberrant expression of specific miRNAs has been linked to the development and progression of this malignancy. The complexity of their role in HCC becomes evident when considering their interaction with other non-coding RNAs, such as circular RNAs (circRNAs), in the regulation of key molecular pathways (Li et al., 2022).

The circRNAs may play a direct role in miRNA pathways in HCC progression. While circRNAs are generally found to be highly stable and conserved, they can also play multiple roles in disease development, including cancers.
Li et al. have shown that the increased expression of circMRPS35, a non-coding circular RNA, directly promotes malignant processes through the inhibition of miRNA-148, thereby inhibiting the miRNA-148a-STX3 (syntaxin 3)-PTEN (phosphatase and tensin homolog) axis (Li et al., 2022). As miRNA-148 is inhibited, PTEN is consistently ubiquitinated, leading to decreased expression of intact PTEN and resulting in the promotion of malignant progression. Furthermore, chemotherapy induces the translation of circMRPS35, amplifying the malignant progression while simultaneously promoting chemotherapeutic resistance. In another study, miRNA-130b-3p expression was shown to be significantly increased in HCC; it downregulated the expression of HOXA5 (homeobox protein Hox-A5) by direct targeting, which further activated the PI3K/AKT/mTOR pathway, thereby stimulating HCC cells to induce capillary tube formation, endothelial cell migration, and proliferation (Li et al., 2023).

Diagnostic and therapeutic application of exosomal microRNAs

The current research on miRNAs reveals their significant impact on various cellular processes in the pathogenesis and progression of HCC. MiRNAs are found to interact with other non-coding RNAs, such as circular RNAs (circRNAs), influencing pathways related to gene regulation, autophagy, and cellular signaling in HCC. These interactions play a critical role in the development, progression, and therapeutic response of HCC. These insights highlight the potential of miRNAs as biomarkers for early detection, prognostic indicators, and therapeutic targets in HCC (Shen et al., 2016). Understanding the complex roles of miRNAs in HCC opens new avenues for innovative treatment strategies and improved patient outcomes. Ongoing research in this field is crucial for unraveling the intricate molecular mechanisms of HCC and developing more effective, targeted therapies (Figure 4) (Syn et al., 2017; Li et al., 2023).

The use of microRNAs as biomarkers for the effectiveness of different HCC treatments is starting to emerge, with multiple studies showing promising results. One such study found a differential expression of nine miRNAs, with miRNA-30A, miRNA-122, miRNA-125B, miRNA-200A, and miRNA-374B levels increased, miRNA-15B, miRNA-107, and miRNA-320B levels decreased, and miRNA-645 completely absent, correlating them with an increased survival benefit from regorafenib and increased overall survival in patients with the Hoshida S3 subtype of the tumor (Teufel et al., 2019). An alteration in the expression of miRNAs has also been determined with the use of sorafenib: a comparison of HepG2 cells and primary hepatocytes revealed a differential expression of miRNAs, with nine miRNAs downregulated and 24 miRNAs upregulated in HepG2 cells. These miRNAs are known to target genes involved in cancer-related processes (de la Cruz-Ojeda et al., 2022). Furthermore, the analysis of circulating microRNAs revealed that miRNA-200c-3p in patients treated with sorafenib was predictive of improved survival, whereas increased levels of miRNA-222-5p and miRNA-512-3p after 1 month of treatment were indicative of poorer survival outcomes (de la Cruz-Ojeda et al., 2022).
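Survival associations such as those above are conventionally assessed with Kaplan-Meier estimates and a log-rank test. The sketch below illustrates the computation on synthetic data, assuming the third-party lifelines package is available; the miRNA grouping, survival times, and event rates are hypothetical.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic cohort: overall survival in months plus event indicators
# (True = death observed), split by serum exosomal miRNA level.
t_low, e_low = rng.exponential(18, 40), rng.random(40) < 0.8
t_high, e_high = rng.exponential(30, 40), rng.random(40) < 0.7

km = KaplanMeierFitter()
km.fit(t_low, e_low, label="low miRNA")       # Kaplan-Meier per group
print("median OS, low group:", km.median_survival_time_)
km.fit(t_high, e_high, label="high miRNA")
print("median OS, high group:", km.median_survival_time_)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(t_low, t_high, e_low, e_high)
print("log-rank p-value:", result.p_value)
```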
Another aspect of miRNAs could be their use for the overall prognosis of HCC: a study showed that low levels of miRNA-320d in serum exosomes were associated with more advanced tumor stages, lymph node spread, and poorly differentiated tumors (Li et al., 2020). Patients with lower levels of miRNA-320d in their serum exosomes had shorter overall and disease-free survival, and low serum exosomal miRNA-320d was independently associated with a worse prognosis for HCC. In addition, overexpression of miRNA-320d in HCC cells inhibited their proliferation and invasion, and BMI1 was shown to be a direct target of miRNA-320d (Li et al., 2020). Another study highlighted that low serum miRNA-122 has a strong association with poor progression-free survival and overall survival, although serum miRNA-122 levels alone are not sufficient to predict overall survival (Zhang Y. et al., 2019). However, there is strong evidence for a correlation between CHST4, SLC22A8, STC2 (carbohydrate sulfotransferase 4, solute carrier family 22 member 8, stanniocalcin-2), hsa-miRNA-326, and hsa-miRNA-21 and prognosis in HCC patients; hsa-miRNA-326 and hsa-miRNA-21-5p in particular have been found to be associated with multiple cancer-related pathways (Hu et al., 2021). The likelihood of developing HCC increases exponentially in the event of an HCV infection, which makes this virus of great importance for prevention and control. MiRNA expression is altered during HCV infection; for example, miRNA-135 has a "proviral" effect due to its ability to increase HCV RNA replication in hepatocytes (Badami et al., 2022). Furthermore, miRNA-135a has been shown to suppress the expression of CXCL2, MyD88, and RIPK2 (chemokine (C-X-C motif) ligand 2, myeloid differentiation primary response 88, and receptor-interacting serine/threonine-protein kinase 2), which are host restriction factors that are essential components of the antiviral immune response (Sodroski et al., 2019). Another example is miRNA-146a-5p, with its dual function of downregulating inflammatory signaling and inhibiting the hepatocyte immune response (Badami et al., 2022).

Isolation and detection of exosomes

Most miRNAs in body fluids are bound to free proteins, which makes the detection of EV-derived miRNAs without prior isolation of the EVs complicated. Therefore, it is necessary to isolate the EVs from body fluids and use different methods to quantify the EV-derived miRNAs (Hu et al., 2018; Yang et al., 2018). Isolation of exosomes is thus a key step in the detection of exosomal miRNAs, and many methods are available for it. The most frequently used approaches are ultracentrifugation (UC), size exclusion chromatography (SEC), density gradient centrifugation (DGC), immunoaffinity capture, and co-precipitation (Xu et al., 2022). Recently, aptamers specific to exosome membrane proteins have also been used as capture reagents for exosomes (Chinnappan et al., 2023a; Chinnappan et al., 2023b).
Centrifugation methods

The separation of exosomes by UC is based on the physical and chemical properties of the exosomes and is the classical gold-standard method for exosome isolation. Differential ultracentrifugation is used to separate exosomes from other biological components. Despite its wide use, it has many limitations: the instruments are costly, exosomes aggregate and stick to other components, and high-speed centrifugation can lead to morphological changes in the exosomes. Exosome isolation by density gradient centrifugation offers high resolution and high purity. However, UC and DGC cannot be used to isolate exosomes from large volumes of biological fluids for exosomal miRNA detection.

Size exclusion chromatography (SEC)

The isolation of exosomes by SEC is based on particle size. The sample passes through a column packed with porous polymer beads: particles too large to enter the pores elute quickly, while smaller particles diffuse into the pores and are retarded. Exosomes isolated by SEC are pure, largely free of soluble components, viruses, and proteins. This method is therefore well suited to clinical applications and basic research.

Ultrafiltration

This is a simple and efficient method for the isolation of exosomes that does not alter the morphology or the biological behavior of the particles. A membrane with a specific pore size is used to collect exosomes, which makes the method useful for isolating exosomes from large volumes of biological samples. However, because it separates exosomes by particle size only, it cannot remove all impurities and is not specific.

Co-precipitation

In this method, a polymer co-precipitating agent significantly reduces the solubility of exosomes, so that they precipitate readily. The method is very simple and rapid, and its isolation efficiency is 2.5-fold higher than that of the ultracentrifugation technique. However, it cannot be used for large-scale applications, and organelle-related proteins co-precipitate together with the exosomes. The added precipitating reagents also remain as contaminants, which limits the further application of the isolated exosomes.

Immunoaffinity enrichment

This is an efficient method for the isolation of specific exosomes. Antibodies against exosome-specific biomarkers, such as CD9, CD63, CD81, EGFR, and EpCAM, are immobilized on magnetic beads, chips, or ELISA plates. Immunoaffinity capture binds specifically to the exosome component, resulting in the isolation of specific and pure exosomes. The separation of exosomes from the solid support is challenging, however, owing to the strong interaction between antibody and antigen. In addition to antibodies, aptamers specific to exosome components are also used for the isolation of exosomes and further analysis (Chinnappan et al., 2023a).

Field flow fractionation (FFF)

Unlike size exclusion chromatography, the FFF method works in a single phase. The sample flows through the FFF channel in a parabolic profile: smaller, more diffusive particles accumulate toward the faster-moving center of the channel, while larger particles stay closer to the channel wall and move more slowly (Zhang and Lyden, 2019). This method is ideal for the separation of different particle sizes, but sample preparation is tedious and time-consuming, which limits its wide application.
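As a practical note on the ultracentrifugation step described above, pelleting times are commonly estimated from the rotor's clearing factor (k-factor). The sketch below implements the standard relations k = 2.53×10¹¹·ln(r_max/r_min)/rpm² and t ≈ k/s; the rotor geometry and the ~100 S sedimentation coefficient assumed for small EVs are illustrative figures rather than values taken from this review.

```python
import math

def k_factor(rpm: float, r_min_cm: float, r_max_cm: float) -> float:
    """Rotor clearing factor: k = 2.53e11 * ln(r_max / r_min) / rpm**2."""
    return 2.53e11 * math.log(r_max_cm / r_min_cm) / rpm**2

def pelleting_time_hours(k: float, s_svedberg: float) -> float:
    """Approximate run time t = k / s, with s in Svedberg units."""
    return k / s_svedberg

# Geometry similar to a fixed-angle ultracentrifuge rotor (illustrative).
k = k_factor(rpm=45_000, r_min_cm=3.59, r_max_cm=10.38)
print(f"k-factor: {k:.0f}")
# Assuming roughly 100 S for small EVs/exosomes (a rough literature figure):
print(f"estimated pelleting time: {pelleting_time_hours(k, 100):.1f} h")
```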
Acoustic-based isolation method

Acoustic-based microfluidic separation of exosomes is highly precise. An ultrasonic wave is used to separate the particles: under acoustic pressure, particles are separated on the basis of their characteristic physical properties, such as size and density. This is a rapid, label-free, contact-free method of exosome isolation (Hassanpour Tamrin et al., 2021).

Absorbent polymer-based method

This method is based on the high water-absorbing ability of hydrogels. In the presence of a hydrogel, small molecules are absorbed into the pores of the hydrogel, while exosomes and larger particles are excluded, allowing their concentration and purification. Yang et al. successfully enriched exosomes by this method from culture media and urine (Yang et al., 2021). The interaction between hydrophobic surfaces and the microbes in a urine sample can be used to concentrate the microorganisms, and tuning the surface hydrophobicity has served as a sensing platform for the detection of nucleic acids and other metabolites (Sudarsan et al., 2023; Uttam et al., 2024). A similar methodology was used for the isolation of exosomes; for example, hydrophobic interaction chromatography with a polyester capillary-channeled polymer fiber phase was used to isolate exosomes from urine and plasma samples (Wang L. et al., 2019; Huang et al., 2019).

Exosomal microRNA detection methods

Several methods have been developed for the quantitative and qualitative detection of exosome-derived miRNAs. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is the gold standard for the quantitative detection of exosomal miRNAs. In addition, many other methods have been developed, such as surface-enhanced Raman scattering (SERS), microarrays, molecular beacon fluorescence assays, isothermal amplification, and next-generation sequencing. Most of these methods use probe molecules or complementary primers for the detection of miRNAs.

Quantitative reverse transcription polymerase chain reaction (qRT-PCR)

Exosomal miRNA quantification by qRT-PCR consists of two steps. In the first step, complementary DNA (cDNA) of the target miRNAs is produced by reverse transcription. In the second step, the cDNA is used as a template for real-time PCR amplification, which is monitored through changes in the fluorescence of the probe dye over time (Chinnappan et al., 2019). A standard reference is needed for the quantification of exosomal miRNAs, because no miRNA is stably expressed in exosomes that could serve as an internal standard. A magnetic nanoparticle-based portable nucleic acid detection (PNAD) device has been designed by integrating sample processing and PCR amplification in a single instrument. This device can work in three different modes, namely high-precision heating, rapid thermal cycle control, and rate-adjustable constant heating/cooling control, used for nucleic acid extraction, PCR, and melting curves, respectively (Fang et al., 2021). Droplet digital PCR (ddPCR) is an advanced nucleic acid amplification technology that is highly precise and accurate in the quantification of nucleic acids. The outstanding performance of ddPCR has been demonstrated in the quantification of miRNAs from serum samples for the diagnosis of cancer (Hindson et al., 2013).
Wang et al. demonstrated that the quantitative detection of exosomal miRNAs from urine samples by ddPCR exhibits excellent sensitivity compared to conventional qPCR: it could detect miRNA-29a at concentrations as low as 50 copies/µL (Wang C. et al., 2019). Exosome-derived miRNAs from the plasma samples of endometrial cancer (EC) patients have also been quantified by PCR; miRNA-15a-5p, miRNA-106b-5p, and miRNA-107 were found to be upregulated compared to healthy individuals (Zhou L. et al., 2021).

In situ detection of miRNAs by molecular beacons

Fluorescence assays are used for the specific in situ detection of target nucleic acids. Several types of molecular beacons have been used for the sensitive detection of miRNAs and other RNA targets (Raja et al., 2006; Chinnappan et al., 2013; Chinnappan et al., 2019). A molecular beacon consists of a stem-loop DNA with a fluorophore and a quencher attached at the 5′ and 3′ ends of the stem; it binds the target RNA specifically, which restores the fluorescence. The increase in fluorescence intensity is directly proportional to the quantity of miRNA present in the sample. Lee et al. designed two different molecular beacons for the simultaneous detection of two miRNAs (miRNA-375 and miRNA-574-3p) specific to prostate cancer; the urine samples were used directly for the quantification of miRNAs without any sample processing steps (Lee et al., 2018). Lee et al. also demonstrated the in situ single-step detection of exosomal miRNA-21, specific for breast cancer, using a molecular beacon probe on patient serum samples (Lee et al., 2015). Many other exosomal miRNAs have been detected using target-specific molecular beacons (Xu et al., 2022). However, false-positive results are possible owing to autofluorescence, the low abundance of the target miRNAs (which increases noise), and light scattering due to the inhomogeneity of the samples.

Microarray

The microarray assay is based on the hybridization of a predesigned probe to the target sequences. The total RNA extracted from the samples is labeled with a fluorescent probe and hybridized with complementary DNA immobilized on a glass slide. The signal intensity after hybridization correlates with the quantity of miRNA in the sample. The fluorescence emission from the different miRNAs hybridized to their respective probes at different positions can be detected; from the signal intensity and the position, the identity of the miRNAs and their quantity can be determined. Exosome-derived miRNAs from type 1 autoimmune pancreatitis (AIP) samples, chronic pancreatitis (CP) samples, and healthy adults (HA) have been analyzed in this way, and over-expression of miR-21-5p was observed compared to healthy adults (Nakamaru et al., 2020). Two hundred and ten different exosome-derived miRNA expression patterns were identified using TaqMan OpenArray technology in the peritoneal lavage fluid of patients suffering from colorectal cancer (CRC) (Roman-Canal et al., 2019). The expression levels of Alzheimer's disease (AD)-specific miRNAs have been studied using the 5XFAD mouse model: microarray analysis showed that 48 miRNAs were differentially expressed, of which six play important roles in the gene targets and signaling pathways of AD (Song et al., 2021). Despite enabling multiplex analysis, microarray technology has certain limitations, such as low sensitivity, high cost, and a narrow range of detection.
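The ddPCR readout described above converts droplet counts into an absolute concentration through Poisson statistics. A minimal sketch follows; the droplet volume (~0.85 nL) and the example counts are assumptions typical of commercial droplet systems, not values from the cited studies.

```python
import math

def ddpcr_copies_per_ul(n_droplets: int, n_positive: int,
                        droplet_volume_nl: float = 0.85) -> float:
    """Absolute target concentration from droplet counts.

    Poisson correction: lambda = -ln(negative fraction) is the mean
    number of copies per droplet; dividing by the droplet volume gives
    copies per microliter. The ~0.85 nL droplet volume is an assumption
    typical of commercial droplet generators.
    """
    negative_fraction = (n_droplets - n_positive) / n_droplets
    lam = -math.log(negative_fraction)        # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nL -> uL

# Example: 20,000 accepted droplets, 900 positive for the miRNA assay.
print(f"{ddpcr_copies_per_ul(20_000, 900):.1f} copies/uL")
```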
Next-generation sequencing (NGS)

NGS is an advanced technology for high-throughput sequencing of the transcriptome and can be used for sequencing DNA or RNA. The total RNA from the sample is purified, universal adaptors are ligated to both the 5′ and 3′ ends of the RNA strands, and this is followed by reverse transcription, PCR amplification, and sequencing (Miller et al., 2022). NGS has advantages over microarrays, such as high sensitivity and accuracy, and many unknown miRNAs can be detected. NGS is often used for the detection of miRNAs in specific diseases; for example, overexpression of exosome-derived miRNA-10a-5p and miRNA-29b-3p was detected in prostate cancer patients' plasma samples by NGS technology (Worst et al., 2019). This methodology can detect new sequences; however, it is not well suited to routine detection because of its high cost and complex data analysis.

Isothermal amplification technique

This technique is one of the easiest and simplest methods for the detection of miRNAs. It allows the amplification of nucleic acids at a constant temperature without the aid of a thermocycler, which makes it suitable for the detection of short sequences like miRNAs (Gines et al., 2020). Isothermal amplification methods can be either enzymatic or enzyme-free. Enzymatic amplification includes loop-mediated isothermal amplification (LAMP), nucleic acid sequence-based amplification (NASBA), rolling circle amplification (RCA), and the exponential amplification reaction (EXPAR). RCA technology has been used for the sensitive, simultaneous detection of miRNA-21, miRNA-122, and miRNA-155 from exosomes (Wang et al., 2020). Catalytic hairpin assembly (CHA) and the hybridization chain reaction (HCR) are enzyme-free methods; electrochemical detection of miRNA-122 has been demonstrated using HCR amplification (Guo et al., 2020).
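At its core, miRNA quantification from small RNA-seq data comes down to adapter trimming followed by counting reads that match known mature sequences. The toy Python sketch below illustrates the idea; the adapter and the two mature miRNA sequences are widely published sequences used here purely for illustration, and real pipelines additionally handle quality filtering, mismatches, and isomiRs.

```python
from collections import Counter

ADAPTER = "TGGAATTCTCGGGTGCCAAGG"   # a widely used small-RNA 3' adapter
MATURE = {                          # example mature miRNA sequences (DNA form)
    "miR-21-5p": "TAGCTTATCAGACTGATGTTGA",
    "miR-122-5p": "TGGAGTGTGACAATGGTGTTTG",
}

def trim_adapter(read: str) -> str:
    """Clip the read at the first occurrence of the 3' adapter."""
    i = read.find(ADAPTER)
    return read[:i] if i != -1 else read

def count_mirnas(reads):
    """Count reads whose trimmed sequence exactly matches a mature miRNA."""
    lookup = {seq: name for name, seq in MATURE.items()}
    counts = Counter()
    for read in reads:
        name = lookup.get(trim_adapter(read))
        if name is not None:
            counts[name] += 1
    return counts

reads = [MATURE["miR-21-5p"] + ADAPTER + "AATT",
         MATURE["miR-122-5p"] + ADAPTER]
print(count_mirnas(reads))  # Counter({'miR-21-5p': 1, 'miR-122-5p': 1})
```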
Clinical application of exosome-derived miRNAs

The exosome-mediated intercellular transmission of miRNAs represents a new paradigm in clinical research. Short non-coding RNAs can be transferred from one cell to another through exosomes and create an RNA-induced silencing complex (RISC), which can cause the degradation of target mRNA or prevent protein translation. Therefore, exosomal miRNAs play an important role in gene-regulation pathways in the recipient cells (Ghafouri-Fard et al., 2023). Exosome-derived miRNAs influence various disorders, including pulmonary, neurological, cardiovascular, and gastrointestinal disorders, as well as cancers. Serum exosome-derived miRNA-638 has been identified as a significant and independent prognostic biomarker for HCC, and its overexpression is associated with the recurrence of the tumor. The cancer cell-secreted miRNAs promote vascular permeability through downregulation of the endothelial expression of VE-cadherin and ZO-1 (Yokota et al., 2021). Mesenchymal stem cell-secreted exosomal miRNA-15a hinders the progression of HCC by downregulating Sal-like protein 4 (SALL4) levels (Ma et al., 2021). Serum exosomal miRNA-720 has been identified as an excellent biomarker for the detection of HCC, giving more accurate results than AFP or PIVKA-II (protein induced by vitamin K absence); exosomal miRNA-720 is not influenced by aminotransferase levels (Jang et al., 2022). Several other miRNAs are utilized as potential biomarkers in clinical applications for the diagnosis of HCC. The major limitations are the following. There is no standardized method for the detection of exosomal miRNAs; only very limited numbers of clinical samples have been used in the studies, and the experimental settings and detection methodologies vary from laboratory to laboratory. There is also no standard optimized method for the isolation of exosomes. Most studies have been conducted using serum and blood samples; however, most biological fluids contain exosomes, so more studies need to be done using other biological fluids. Another big challenge is the production of large quantities of exosomes for clinical trials. 3D scaffolds or microfluidics can be used for large-scale production; however, raising the purity of the isolated exosomes to clinical levels with these methods is another challenge, since other kinds of EVs and biomolecules of exosome size will contaminate the preparation. The variety of exosome types and their complexity make miRNA-based HCC diagnosis even more challenging.
Conclusions and future perspectives

MicroRNAs have recently attracted attention for their potential as biomarkers for the detection of cancer and other diseases, as well as for predicting prognosis. Although much research has been done on miRNAs and on specific features of their roles, many aspects are still under investigation, including sample preparation methods, analysis, and the selection of controls. There are several variables to consider when studying miRNAs and their role as biomarkers and mediators of disease. The sex, age, and body mass index of the patient may result in significant variation in miRNA levels. The samples used as healthy controls are often questionable: the age- and sex-matched control samples used in a study may not have the disease of interest, yet it is not clear whether the miRNAs in question are associated with age and sex, and other disease factors may differ between the control samples and the disease samples (Takizawa et al., 2022). In most cases, these issues have not been highlighted, and only very limited reports have considered them. A study by Ameling et al. examined the expression levels of 179 miRNAs from 372 healthy volunteers selected from a previous population-based cohort study; 12 and 19 miRNAs were significantly associated with age and BMI, respectively, after adjusting for blood cell parameters, and of 35 associated miRNAs, only 7 remained after adjustment for age, BMI, and blood cell parameters (Ameling et al., 2015). Additionally, there is a great lack of standardized protocols for the collection and processing of samples for miRNA studies (Takizawa et al., 2022). Studies may use either plasma or serum, which could introduce variation across studies; for example, Binderup et al. demonstrated significant differences between miRNA levels in recentrifuged biobank plasma compared to platelet-poor plasma (Binderup et al., 2016). Finally, miRNA quantification methods, such as next-generation sequencing and reverse transcriptase quantitative polymerase chain reaction, result in method-dependent variation (Kloten et al., 2019). Hence, it is important to take these variables into consideration when analyzing the role of miRNAs in HCC and interpreting potentially conflicting data in the field. Future research should focus on overcoming these challenges by developing efficient isolation techniques, standardizing detection methods, and conducting extensive clinical trials.

FIGURE 2 Schematic sketch of signaling pathways linked to carbohydrate and lipid metabolism. The orange arrows depict the part of the pathway that builds glucose and plasma triacylglycerol from the byproducts of the tricarboxylic acid (TCA) cycle and the glycolytic cycle. Some of the representative miRNAs that play vital roles in the metabolism of carbohydrates and lipids are highlighted in green. Adapted from Paul et al., 2021, with copyright permission, Elsevier.

FIGURE 3 Overview of the role of microRNAs (e.g., miR-223) in normal liver physiology and pathobiology. Adapted from Ye et al., 2018, with copyright permission under the terms of the CC-BY-NC-ND 4.0 license.

FIGURE 4 Overview of the potential therapeutic application of exosomes for cancer diagnostics and treatment. (A) Targeted drug delivery approaches. (B) Immunotherapeutic approaches. Adapted from Syn et al., 2017, with copyright permission, Elsevier.
Mast Cells Respond to Candida albicans Infections and Modulate Macrophages Phagocytosis of the Fungus

Mast cells (MCs) are long-lived immune cells widely distributed at mucosal surfaces and are among the first immune cell types that can come into contact with the external environment. This study aims to unravel the mechanisms of reciprocal influence between mucosal MCs and Candida albicans as a commensal/opportunistic pathogen species in humans. Stimulation of bone marrow-derived mast cells (BMMCs) with live forms of C. albicans induced the release of TNF-α, IL-6, IL-13, and IL-4. Quite interestingly, BMMCs were able to engulf C. albicans hyphae, rearranging their α-tubulin cytoskeleton and accumulating LAMP1+ vesicles at the phagocytic synapse with the fungus. Candida-infected MCs increased macrophage crawling ability and promoted their chemotaxis against the infection. On the other side, resting MCs inhibited macrophage phagocytosis of C. albicans in a contact-dependent manner. Taken together, these results indicate that MCs play a key role in the maintenance of the equilibrium between the host and the commensal fungus C. albicans, limiting pathological fungal growth and modulating the response of resident macrophages during infections.

INTRODUCTION

Mast cells (MCs) are immune cells belonging to the innate arm of immunity. They originate in the bone marrow from a hematopoietic progenitor through the myeloid lineage but, unlike other myeloid-derived cells, MC progenitors leave the bone marrow at an early stage of differentiation to enter the circulation. Once in the bloodstream, they rapidly migrate to the periphery and complete their differentiation into mature MCs with tissue-specific phenotypes (1). These cells mainly localize at mucosal sites and are found in close contact with epithelial cells and venules. They differentially express a wide plethora of pathogen recognition receptors (PRRs), cytokine and chemokine receptors, as well as costimulatory molecules, by virtue of their tissue specificity. Moreover, triggering of MCs by specific stimuli results in their activation and in the release of different pre-stored and de novo synthesized mediators (2, 3). Albeit relegated to the role of mere effectors of allergic processes for many years, MCs are now believed to be important tissue-resident sentinels and have been described to interact with the host microbiota (4, 5).

Candida spp. are commensal fungi that colonize the mucous membranes and skin of healthy individuals, and among all the species, Candida albicans is the most common in the human mycobiota. However, these fungi can cause severe invasive diseases in patients hospitalized in intensive care units, with solid tumors or hematological malignancies, undergoing surgery, or being treated with broad-spectrum antibiotics. Albeit only poorly considered as a public health concern, over 800 million people worldwide suffer from life-threatening fungal-related diseases, and it is estimated that C. albicans is responsible for more than one half of the cases of candidaemia, whose mortality rates in Europe vary between 28 and 59% (6, 7). Among Candida species, C. albicans is the only one able to grow both as a unicellular yeast and in filamentous hyphal and pseudohyphal forms (8). This property is rather important, as C. albicans hyphal growth is an important virulence factor and represents a key step in tissue invasion processes (9).
The immune response to C. albicans begins with the recognition of specific pathogen-associated molecular patterns (PAMPs) by the innate arm of the immune system. The recognition of fungal PAMPs is mediated by several PRRs, including C-type lectin receptors (CLRs), Toll-like receptors (TLRs), and intracellular NOD-like receptors (NLRs) (10). Dectin-1 is the best-characterized CLR and is fundamental for the recognition of β-glucans and the subsequent production of pro- and anti-inflammatory cytokines. On the other side, TLR2, TLR4, and TLR6 are the main TLRs involved in the recognition of fungal cell wall mannoproteins and can cooperate with dectin-1 to boost cytokine expression in response to β-glucans (11, 12).

Despite their potential role during fungal infections, interactions between MCs and fungi have been only poorly investigated, and published data are often contradictory. Gastrointestinal colonization with C. albicans induced MC infiltration and degranulation, increasing the permeability of the gastrointestinal mucosa (13). Rat peritoneal MCs as well as murine bone marrow-derived MCs (BMMCs) were shown to be able to phagocytose heat-killed and opsonized live C. albicans yeasts and to produce nitric oxide (but not ROS) in a mechanism involving both TLR2 and dectin-1 (14, 15). On the contrary, a recent study demonstrated that BMMCs released ROS and several pro-inflammatory cytokines in response to in vitro stimulation with C. albicans yeasts and hyphae (16). Candida challenge also induced degranulation of human MCs and the release of pro- and anti-inflammatory cytokines, as well as of tryptase-containing MC extracellular traps (17). Our work adds new tiles to the big picture of the role of MCs during fungal infection, describing the tight interaction between these cells and C. albicans, as well as their control of macrophage activation and fungal clearance.

Mice

C57BL/6 mice were purchased from Envigo (The Netherlands) and maintained at the animal facility of the Department of Medicine, University of Udine (Italy). Dectin-1−/− femurs and tibiae were kindly gifted by Prof. Gordon Brown, University of Aberdeen (Aberdeen, UK). All animal experiments were approved by the OPA (Organismo per il Benessere Animale) of the local committee in accordance with institutional guidelines and national law (D.Lgs. 26/2014).

For C. albicans infections, 10⁶ BMMCs were stimulated with C. albicans yeasts (1:1 ratio) or hyphae (1:10 ratio) at a final concentration of 2·10⁶ cells·ml⁻¹ in IL-3-free complete RPMI medium. In order to limit fungal growth, amphotericin B (Sigma-Aldrich) was added to each well at a final concentration of 10 ng·ml⁻¹. RNA extraction was performed before the addition of amphotericin B.

BMMCs Phagocytosis of C. albicans

Phagocytosis of C. albicans by BMMCs was assessed by flow cytometry. C. albicans yeasts were labeled for 20 min with 5 µM Cell Proliferation Dye eFluor 670 (CPD, eBioscience) at 37 °C in a 5% CO₂ atmosphere, following the manufacturer's instructions. CPD-labeled C. albicans was seeded onto BMMCs at a 10:1 ratio in 24-well plates in IL-3-free complete RPMI medium and incubated for 90 min at 37 °C in a 5% CO₂ atmosphere. As a negative control of phagocytosis, some BMMCs were pretreated with 10 µM cytochalasin-D 1 h before the addition of CPD-labeled C. albicans yeasts. Alternatively, the phagocytosis was performed at 4 °C to block active endocytosis processes. Cells were then harvested, stained for cKit, and acquired. Phagocytosing BMMCs were determined as cKit⁺CPD⁺ double-positive cells.
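The cKit⁺CPD⁺ double-positive fraction described above is, computationally, a simple two-channel threshold gate. The sketch below reproduces that logic on simulated fluorescence intensities; the gate values and distributions are invented for illustration and do not reflect the actual cytometry data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Simulated compensated fluorescence intensities (arbitrary units):
# channel 1 = cKit (mast cell marker), channel 2 = CPD (labelled fungus).
ckit = rng.lognormal(mean=5.0, sigma=0.4, size=n)
engulfed = rng.random(n) < 0.3                  # ~30% of MCs engulf yeast
cpd = np.where(engulfed,
               rng.lognormal(5.5, 0.4, n),      # CPD-bright (phagocytosing)
               rng.lognormal(2.0, 0.4, n))      # CPD-dim (no uptake)

CKIT_GATE, CPD_GATE = 80.0, 80.0                # gates set from controls
ckit_pos = ckit > CKIT_GATE
double_pos = ckit_pos & (cpd > CPD_GATE)
pct = 100 * double_pos.sum() / ckit_pos.sum()
print(f"phagocytosing BMMCs: {pct:.1f}% of cKit+ events")
```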
BMMCs Degranulation Assay

The BMMC degranulation response was determined as the percentage of β-hexosaminidase released. 0.5·10⁶ BMMCs were incubated in Tyrode's buffer (10 mM HEPES buffer [pH 7.4], 130 mM NaCl, 5 mM KCl, 1.4 mM CaCl₂, 1 mM MgCl₂, 5.6 mM glucose, and 0.1% BSA), with or without the addition of 10% FBS, and stimulated with the same number of C. albicans yeasts at 37 °C for the indicated time points. As a positive control, 0.5·10⁶ BMMCs were sensitized in complete RPMI medium for 3 h with 1 µg·ml⁻¹ of dinitrophenol (DNP)-specific IgE, then washed twice, resuspended in Tyrode's buffer, and challenged with 100 ng·ml⁻¹ DNP (Sigma-Aldrich). The enzymatic activity of the released β-hexosaminidase was assessed through the cleavage of its synthetic substrate (p-nitrophenyl N-acetyl-glucosaminide, Sigma-Aldrich) into p-nitrophenol, measuring the p-nitrophenol absorbance at 405 nm with a plate spectrophotometer. Results are expressed as the percentage of β-hexosaminidase released over the β-hexosaminidase retained in the cytoplasm. Leukotrienes C4, D4, and E4 were measured in the same samples using a specific detection kit (GE Healthcare) according to the manufacturer's instructions.

Purification of Peritoneal Macrophages

The peritoneum of 8- to 12-week-old C57BL/6 mice was lavaged using a PBS solution containing 100 U·ml⁻¹ penicillin and 100 µg·ml⁻¹ streptomycin (Euroclone). Following lavage, the cells were washed, resuspended in complete RPMI medium, plated in 24-well plates at a concentration of 0.5·10⁶ cells/well, and cultured for 6 h at 37 °C in a 5% CO₂ atmosphere. Non-adherent cells were removed by washing the cells twice with PBS, and adherent cells were cultured overnight in complete RPMI at 37 °C in a 5% CO₂ atmosphere. Peritoneal macrophage purity was confirmed by flow cytometry and immunofluorescence by staining with fluorochrome-conjugated anti-F4/80 (BM8), anti-CD11b (M1/70), and anti-MHC-II (M5/114.15.2) antibodies (BioLegend).

Candida albicans Cultures

Wild-type C. albicans SC5314 strain yeasts were seeded on BBL Sabouraud Dextrose Agar (Becton Dickinson and Company) supplemented with 50 µg·ml⁻¹ chloramphenicol and incubated at 30 °C for 24 h. To generate C. albicans hyphae, 10⁷ yeast cells were resuspended in complete RPMI medium, seeded into T-25 adhesion flasks, and allowed to germinate for 3 h at 37 °C. Hyphae were harvested by scraping, centrifuged at 700 × g for 10 min, and washed with PBS.

Time-Lapse Bright-Field Microscopy

The MC-C. albicans interaction was analyzed by time-lapse epiluminescent microscopy using the Leica AF6000LX system (DMI6000-B microscope equipped with a DFC350FX camera) at a magnification of 40×. Before each experiment, BMMCs were labeled with FAST DiI (Invitrogen) according to the manufacturer's instructions. 0.5·10⁶ BMMCs and 0.5·10⁶ C. albicans yeasts (1:1 ratio) were plated on 8-well Permanox® chamber slides (Lab-Tek, Nunc). The chamber was placed at 37 °C in a 5% CO₂ atmosphere. Phase-contrast images were recorded every 10 min for a total of 12 h, and the resulting video-recorded movies were processed with LAS AF (Leica) and Fiji (ImageJ) software (18).

Macrophage Chemotaxis and Migration Assay

Chemotaxis of peritoneal macrophages was evaluated using the ibidi® µ-Slide Chemotaxis kit according to the manufacturer's instructions. ≈15,000 peritoneal macrophages were seeded in the observation area, and the slide was incubated at 37 °C in a 5% CO₂ atmosphere. After cell attachment, non-adherent cells were removed by washing three times with PBS.
Twenty-four hours after cell seeding, the reservoirs were filled with either complete RPMI medium (10% FBS) or conditioned media, and the slides were immediately placed at 37 °C in a 5% CO₂ atmosphere. DIC images were recorded at 10× magnification every 10 min for a total of 24 h, and the resulting video-recorded movies were processed with LAS AF software (Leica). At least 25 cells per condition were manually tracked with Fiji software (ImageJ), and the resulting data were analyzed with the Chemotaxis and Migration Tool software (ibidi) (18). Macrophage chemotaxis during live C. albicans infection was assessed using 8 µm Transwell® inserts (Corning). Briefly, 10⁵ peritoneal macrophages were seeded in serum-free medium in the upper chamber of a 24-well Transwell® system. The lower chamber was filled with serum-free medium containing or not: conditioned media from C. albicans alone, BMMCs alone, or C. albicans-infected BMMCs; 2·10⁶ ml⁻¹ BMMCs, 2·10⁶ ml⁻¹ C. albicans yeasts, or 2·10⁶ ml⁻¹ BMMCs stimulated with C. albicans yeasts (1:1 ratio); or 100 ng·ml⁻¹ MCP-1. Chemotaxis was allowed overnight; then the inserts were collected, carefully washed, and stained with crystal violet (0.5% in 25% methanol) for 10 min. Migrated cells were counted in 3 random fields, and the percentage of migration was calculated over the total number of seeded macrophages.

Macrophage Phagocytosis Assay

Candida albicans yeasts were labeled for 20 min with 5 µM Cell Proliferation Dye eFluor 670 (CPD, eBioscience) at 37 °C in a 5% CO₂ atmosphere, following the manufacturer's instructions. CPD-labeled C. albicans was seeded onto BMMCs at a 1:1 ratio or plated alone in 24-well plates, incubated for 3 h at 37 °C in a 5% CO₂ atmosphere, and harvested by scraping. Peritoneal macrophages received BMMCs co-cultured with C. albicans, naïve BMMCs plus CPD-labeled C. albicans, or CPD-labeled C. albicans alone, at a 1:1:1 ratio. As a negative control of phagocytosis, some macrophages were pretreated with 1 µM cytochalasin-D for 1 h. After 1 h, cells were harvested by scraping, washed with PBS, and stained with anti-F4/80 (BM8, BioLegend). Flow cytometry was used to quantify the number of F4/80⁺ cells that had engulfed CPD-labeled C. albicans. The percentage of phagocytosis was calculated by subtracting the percentage of double-positive cells in the presence of cytochalasin-D from the percentage of double-positive cells in non-treated macrophages. The phagocytosis index was further determined as the fold-change over the phagocytosis percentage of macrophages stimulated with CPD-labeled C. albicans alone. In some experiments, 50 pg·ml⁻¹ recombinant IL-4 (Peprotech), 100 pg·ml⁻¹ recombinant TNF-α (Immunotools), 10 µg·ml⁻¹ anti-IL-4 neutralizing antibody (11B11, eBioscience), 10 µg·ml⁻¹ anti-TNF-α neutralizing antibody (MP6-XT22, Miltenyi Biotec), or conditioned media were used to stimulate peritoneal macrophages together with BMMCs and C. albicans.

RNA Extraction and Real-Time PCR Analyses

Cells were lysed with EURO GOLD TriFast (Euroclone) and total RNA was extracted with the phenol-chloroform protocol according to the manufacturer's instructions. Total RNA was quantified using a NanoDrop™ spectrophotometer (ThermoFisher) and retrotranscribed with the SensiFAST™ cDNA Synthesis kit (Bioline). Quantitative PCR analyses were performed with SYBR Green chemistry (BioRad) using a BioRad iQ5 real-time PCR detection system. Target gene expression was quantified with the ΔΔCt method using g3pdh (glyceraldehyde 3-phosphate dehydrogenase) as the normalizer gene.
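As a minimal illustration of the relative-quantification step just described, the following sketch implements the standard 2^-ΔΔCt calculation; the Ct values are invented for illustration and are not data from this study.

```python
def fold_change_ddct(ct_target_stim: float, ct_ref_stim: float,
                     ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression by the 2^-ddCt method, g3pdh as reference gene."""
    d_ct_stim = ct_target_stim - ct_ref_stim    # normalize stimulated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize control sample
    return 2.0 ** -(d_ct_stim - d_ct_ctrl)

# Invented Ct values for a target gene in stimulated vs. naive BMMCs:
print(f"fold change: {fold_change_ddct(22.1, 17.0, 26.3, 17.2):.1f}x")  # ~16x
```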
PCR primers used are as follows:

MC-Immunological Synapse

Fungal recognition by immune cells specifically relies on the recognition of fungal PAMPs by cellular PRRs (12). In order to assess whether BMMCs could recognize C. albicans, the expression levels of different PRRs involved in fungal recognition were analyzed by flow cytometry. As reported in Figure 1, BMMCs expressed dectin-1 as well as TLR2 and TLR4. To better dissect the interaction between MCs and C. albicans, BMMCs were co-cultured with live C. albicans both in the yeast and hyphal forms. Intriguingly, after a few hours of co-culture, MCs were found to tightly interact with the hyphal form of the fungus in a way that resembled phagocytosis. Time-lapse bright-field microscopy experiments showed that MCs interacted with C. albicans as soon as it changed its morphology from yeast to hyphae, but not with yeasts alone, suggesting that this phenomenon specifically relies on the progression of Candida germination (Figure 2A and Supplementary Video 1). Flow cytometric analysis of BMMC-C. albicans co-cultures shows that a considerable number of BMMCs are able to phagocytose the fungus (Figure 2B). Interestingly, this process was not a consequence of fungal invasion of the MCs but was rather mediated by the MCs' actin dynamics, as the addition of cytochalasin-D, a potent actin polymerization inhibitor, almost completely inhibited the process (Figure 2B). Dectin-1 signaling is known to be activated only when the receptor binds particulate β-glucans. This interaction induces the receptor to cluster in synapse-like structures (called "phagocytic synapses") to which signaling molecules are recruited (19). Although C. albicans hyphal β-glucans are shielded by a layer of mannoproteins and thus fail to activate dectin-1, it has also been hypothesized that this receptor may be responsible for the recognition of hyphal β-glucans, probably due to the presence of thinner mannan fibrils (20). The interaction of dectin-1−/− BMMCs with C. albicans was comparable to that of WT BMMCs. Immunofluorescence staining showed that partial engulfment of C. albicans hyphae induced the rearrangement of the α-tubulin cytoskeleton in both WT and dectin-1−/− BMMCs (Figure 2C). 3D modeling of α-tubulin-stained BMMCs indicates that BMMCs are able to "wrap" around the fungal hypha (Figure 2C), resembling the phenotype of so-called frustrated phagocytosis (21, 22). In order to define whether this behavior could be classed as phagocytosis or not, BMMCs were stained for two markers of early and late endosomes. During phagosome maturation, phagosomes acquire different surface molecules (e.g., Rab GTPases) which play key roles in the maturation process. The early-endosome antigen 1 (EEA1) is involved in the initial stages of maturation by binding to PIP3 and mediating endosome fusion. On the other side, the late-phase marker lysosomal-associated membrane protein 1 (LAMP1) is acquired at the late stages of maturation, after the endosomes have fused with acidic lysosomes (23). None of the cells stained for EEA1 (not shown), while most of them stained positively for LAMP1. Interestingly, both WT and dectin-1−/− BMMCs stained positively for LAMP1, suggesting that this receptor is not required for the accumulation of LAMP1⁺ vesicles (Figure 2D).

MCs Degranulation in Response to Fungal Challenge

Given that LAMP1 is also considered a marker of degranulation, MC degranulation in response to C. albicans was evaluated.
Fungal challenge was performed in the presence of 10% serum in order to allow C. albicans to switch to the hyphal form, and the release of β-hexosaminidase and leukotrienes C4, D4, and E4 was determined after 30 min, 1 h, and 2 h. The release of β-hexosaminidase was minimally increased over the control only after 2 h of stimulation, while leukotriene levels remained constant at all time points (Figure 3A). IgE/Ag stimulation was used as a positive control of MC degranulation. Taken together, these data indicate that, in our setup, BMMCs do not degranulate in response to the encounter with C. albicans.

Cytokine Release in Response to Fungal Challenge
MCs can release a broad range of de novo synthesized mediators which play an important role in the modulation of the immune response to pathogens (24). To understand the role of MC-derived mediators during fungal infections, MCs were co-cultured with live C. albicans yeasts and hyphae, and culture supernatants were assessed for different cytokines. After 3 h of co-culture, it was already possible to detect IL-6, IL-13, and TNF-α, while IL-4 was detected in culture supernatants only after 24 h (Figure 3B). Interestingly, C. albicans yeasts were more effective than hyphae in inducing cytokine release from MCs. These data were also confirmed by gene expression analyses, which revealed a strong upregulation of the tnf-α, il6, il13, and il4 genes. Again, stimulation with C. albicans yeasts rather than hyphae induced higher levels of cytokine expression (Figure 3C). To assess whether dectin-1 played a role in MC activation by C. albicans, dectin-1−/− BMMCs were co-cultured with the fungus and cytokine levels were assessed after 24 h. Stimulation of dectin-1−/− BMMCs with C. albicans resulted in an impaired release of TNF-α, IL-6, and IL-13 compared to WT controls, both during stimulation with yeasts and with hyphae (Figure 3D). Notably, cytokine release was only impaired, not completely abolished, in line with the hypothesis that dectin-1 is not the only receptor involved in C. albicans recognition (11).

Macrophage Crawling Is Increased in the Presence of Activated MCs
Clearance of fungal pathogens relies mostly on the activity of phagocytic cells, especially neutrophils and macrophages, and depletion of mononuclear phagocytes has been described to worsen fungal proliferation and overall survival (25). MCs interact with many members of the innate and adaptive immune system and can affect monocyte/macrophage behavior during infections (26, 27). Thus, we aimed to determine how BMMCs could induce macrophage migration and modulate their ability to phagocytose C. albicans. The ability of MCs to induce macrophage chemotaxis was determined with ibidi® µ-Slide Chemotaxis slides. For each experiment, peritoneal macrophages were purified from C57BL/6 mouse peritoneal lavages and checked for F4/80, CD11b, and MHC-II expression by flow cytometry and immunofluorescence (Figure 4). To determine the release of chemotactic factors during C. albicans infection, conditioned media from BMMC-C. albicans co-cultures were collected after 3 h and used to assess migration. Conditioned media from live, germinating C. albicans alone and complete RPMI medium (10% FBS) were used as controls. Figure 5A shows all single-cell trajectories during the 24 h incubation. Notably, conditioned media from BMMC + C. albicans co-cultures induced a more evident movement of macrophages.
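Metrics of this kind are derived directly from the tracked trajectories. The following is a minimal sketch of how forward migration indices, accumulated distance, and a first-order Rayleigh test can be computed from cell positions; the track arrays below are synthetic random walks, not the recorded macrophage data, and the assignment of the x axis to the gradient direction is an assumption of the sketch.

```python
import numpy as np

# Chemotaxis metrics from tracked trajectories. Each track is a
# (T, 2) array of x/y positions; tracks here are synthetic.

def chemotaxis_metrics(tracks):
    endpoints = np.array([t[-1] - t[0] for t in tracks])   # net moves
    acc_dist = np.array([np.linalg.norm(np.diff(t, axis=0), axis=1).sum()
                         for t in tracks])                 # path length
    fmi_par = endpoints[:, 0] / acc_dist    # along assumed gradient (x)
    fmi_perp = endpoints[:, 1] / acc_dist   # perpendicular axis (y)
    # Rayleigh test on endpoint angles (first-order p approximation):
    theta = np.arctan2(endpoints[:, 1], endpoints[:, 0])
    n, R = len(theta), np.abs(np.mean(np.exp(1j * theta)))
    p_rayleigh = np.exp(-n * R ** 2)        # small p => directed motion
    return fmi_par.mean(), fmi_perp.mean(), acc_dist.mean(), p_rayleigh

rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(145, 2)), axis=0) for _ in range(25)]
print(chemotaxis_metrics(tracks))
```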
Forward migration indexes (FMI) were calculated and showed no significant differences (FMI∥ = 0.0147 ± 0.0126 and FMI⊥ = −0.0008 ± 0.0228) (28). The Rayleigh test returned a p-value of 0.5327, indicating that cell endpoints were uniformly distributed. On the other hand, macrophages incubated with BMMC + C. albicans culture supernatants moved with a higher velocity, which resulted in a greater accumulated distance compared to controls (Figure 5B). These data might indicate that, during C. albicans infection, BMMCs release soluble factors that increase macrophage crawling but do not promote their chemotaxis. Alternatively, it is possible that the chemotactic factors are unstable in the media. To further test this latter hypothesis, we evaluated macrophage chemotaxis toward a live C. albicans infection, taking advantage of a transwell migration assay. Interestingly, conditioned media from infected MCs only partially induced macrophage chemotaxis; on the contrary, live infection induced a prominent migration of macrophages (Figures 5C,D). Taken together, these data suggest that MCs can release short-lived soluble mediators which improve tissue-resident macrophage crawling and induce their migration toward Candida infections.

Resting MCs Partially Inhibit Macrophage Phagocytosis Ability
MCs are known to modulate macrophages' phagocytic ability (26, 29). To establish whether MC phagocytosis of C. albicans could be responsible for better fungal clearance by providing "eat-me" signals to tissue-resident macrophages, peritoneal macrophages were co-cultured with BMMCs and C. albicans, and their phagocytic ability was determined. BMMCs were stimulated with CPD-stained C. albicans for 3 h in order to allow phagocytosis of germinated Candida yeasts (from now on, these cells will be referred to as "activated MCs"), scraped, and seeded onto peritoneal macrophages. Naïve BMMCs + CPD-stained C. albicans (referred to as "resting MCs") or CPD-stained C. albicans alone were seeded onto peritoneal macrophages as controls. After 1 h of co-culture, the percentage of phagocytosis was determined by flow cytometry (Figure 6A). Although activated MCs had no effect on the phagocytosis of C. albicans by macrophages, resting MCs were able to inhibit macrophage phagocytosis (Figure 6B).

Impaired Phagocytosis of Candida by Macrophages Is Not Dependent on MC Soluble Mediators
MC-dependent inhibition of phagocytosis has already been described during bacterial infections and appears to be mediated by a quick release of IL-4 from MCs upon bacterial encounter (26). To determine whether the inhibition of macrophage phagocytosis by naïve BMMCs was dependent on quickly released IL-4, macrophages were stimulated in the presence of exogenous IL-4 or an anti-IL-4 blocking antibody. The addition of recombinant IL-4 to activated MCs or the neutralization of IL-4 activity on resting MCs did not affect macrophage phagocytosis of C. albicans, suggesting that IL-4 is not involved in the modulation of phagocytosis (Figure 6C). Given that C. albicans-stimulated MCs release TNF-α already after 3 h (Figure 3B), we hypothesized that TNF-α might be responsible for the reversion of the phenotype by activated MCs. However, the addition of recombinant TNF-α to resting MCs or the neutralization of TNF-α activity in activated MCs did not revert the phenotype (Figure 6D). To definitively exclude a role of MC-derived soluble mediators in the modulation of macrophage phagocytosis, conditioned media were collected after 3 h of BMMC and C.
albicans co-culture or from Candida alone and added to macrophages together with resting MCs and C. albicans. Again, phagocytosis inhibition was not reverted, suggesting that this mechanism is independent of soluble mediators and rather contact-dependent (Figure 6E). This hypothesis is further supported by the fact that in vitro macrophages and BMMCs interact during Candida phagocytosis (Figure 6F). Intriguingly, activated MCs lost the inhibitory activity, indicating that fungal-dependent BMMC activation can somehow down-regulate the expression of putative co-stimulatory molecules.

Figure 6 (caption, in part): (B) C. albicans phagocytosis by macrophages is impaired by the presence of naïve (resting) MCs. Neither the addition of exogenous IL-4 nor its neutralization with monoclonal antibodies (C), nor the addition of TNF-α or its neutralization (D), restored or inhibited macrophage phagocytosis. Phagocytosis was assessed by flow cytometry after 1 h of co-culture and the phagocytosis index expressed as the fold-change over the phagocytosis percentage of macrophages stimulated with C. albicans alone. (E) Similarly, the addition of conditioned media (C.M.) to resting MCs did not affect macrophage phagocytosis ability. Data were analyzed with Kruskal-Wallis and Dunn's multiple comparison tests. (F) Immunofluorescence analyses of macrophage-BMMC-C. albicans co-cultures indicate that MCs and macrophages interact during the process of phagocytosis. Scale bar: 10 µm. *p < 0.05, **p < 0.01, ***p < 0.001.

DISCUSSION
Every multicellular organism hosts a rich and diverse microbiota, and their interactions profoundly affect the fitness of both the host and the microbial community. The coexistence of these two entities is based on a fragile equilibrium between commensalism and pathogenesis, which is maintained by proper mechanisms of activation and suppression of the immune system. Importantly, the disruption of this stable host-microbiota equilibrium can lead to pathological consequences (30, 31). A striking example is provided by numerous studies which reported a clear reduction in gut microbiota diversity in patients with autoimmune diseases (32). C. albicans is the most common member of the human and murine mycobiota and is found as a commensal especially in the colon and the vagina. When the equilibrium between the host and the fungus is perturbed (e.g., during broad-spectrum antibiotic treatment, or in conditions of pathological or pharmacologically induced immunosuppression), C. albicans can overgrow and cause severe diseases such as recurrent vulvovaginal candidiasis and invasive candidaemia (7, 10). The main effector cells involved in the control of fungal infections are neutrophils and macrophages, but in recent years several studies have reported that mast cells might also be involved in the outcome of pathological Candida overgrowth (12, 14-16). This should not be surprising, as a growing body of evidence has highlighted the concept that MCs are not mere effectors of allergies and anaphylaxis but are rather involved in the maintenance of tissue homeostasis as well as in many pathological circumstances (3, 24). Accordingly, it was recently demonstrated that MCs play a pivotal role during C. albicans infections also in vivo, contributing to the inflammatory pathology occurring during initial infection, to the control of fungal growth and dissemination, and to the activation of memory-protective Th1 responses upon re-infection (33).
The present study provides novel evidence for the role of MCs as tissue-resident sentinels involved in the recognition of fungal infections and in a wider cross-talk with the commensal microbiota. Previous studies demonstrated that MCs respond to fungal infections with C. albicans but often reported contradictory data. In order to provide additional insight into the interaction between MCs and C. albicans, we set up an in vitro co-culture system using murine WT and dectin-1−/− BMMCs. Time-lapse microscopy experiments showed that MCs tightly interacted with the fungus as soon as it switched to the hyphal form, in a way that resembled frustrated phagocytosis (21, 22). The formation of this phagocytic synapse was further characterized by immunofluorescence analyses, which revealed that MCs were able to re-organize their α-tubulin cytoskeleton and to accumulate LAMP1+ vesicles at the interface with the hyphae. Interestingly, no differences were observed between WT and dectin-1−/− BMMCs, suggesting that receptors other than dectin-1 might be involved in the recognition of C. albicans. This finding is in line with the current belief that C. albicans is able to efficiently shield the β-glucan layer after germination to the hyphal form, thus preventing its recognition by dectin-1 (34). LAMP1 accumulation at the frustrated phagosome was previously described in RAW264.7 macrophages spreading on IgG-ovalbumin micro-patterned surfaces and was found to be accompanied by the release of β-hexosaminidase (21). Very recently, it was also demonstrated that during the frustrated phagocytosis of C. albicans hyphae by RAW macrophages, LAMP1+ vesicles accumulated at the interface with the fungus and in close proximity to the actin cuff, suggesting that complete enclosure of the phagosome is not required for the recruitment of lysosomes to the phagocytic synapse (35). LAMP1 is also considered a marker of degranulation in MCs, but incubation of BMMCs with C. albicans, even in the presence of serum (to allow the germination of hyphae), did not result in the release of either β-hexosaminidase or leukotrienes (36). A possible explanation is that granule cargo was released directly onto the fungal surface due to the close interaction, thus preventing its detection in the supernatants. It has been demonstrated that MC-derived β-hexosaminidase was able to disrupt the Staphylococcus epidermidis cell wall, rendering mice more resistant to bacterial infections (37). Moreover, a similarly polarized degranulation was recently described and named the "antibody-dependent degranulatory synapse": opsonized Toxoplasma gondii tachyzoites induced FcγR triggering on MCs and the localized release of granule contents at the interface with the pathogen (38). On the contrary, we observed the release of TNF-α, IL-6, IL-13, and IL-4 during the co-cultures, especially during stimulation with yeasts, which was partly dependent on the recognition of the fungus by dectin-1. This observation might reflect the fact that MCs recognize the morphological switch from yeasts to hyphae, possibly by discriminating the composition of the outer layer of the fungal cell wall or through the recognition of cell wall debris released during germination. It was demonstrated that the release of IL-4 during C. albicans infections in vivo was fundamental for the induction of a protective TH1 response during reinfection. As such, il4−/− mice were more resistant than WT littermates during the first infection with C.
albicans (probably due to the absence of TH2 skewing) but failed to survive a secondary infection (39). The authors did not identify the source of IL-4 in this context, but our data support the idea that MCs may account for the release of this cytokine. Several reports demonstrated that MCs are able to phagocytose bacteria and fungi, and that under particular conditions they can also present antigens to autologous T cells (40-43). Nevertheless, their ability to kill phagocytosed pathogens is much more limited than that of "professional" phagocytes, so we hypothesized that engulfment of C. albicans could be an early line of defense aimed at recruiting tissue-resident macrophages and promoting clearance of the pathogen. Time-lapse chemotaxis experiments revealed that the soluble factors released by MCs during Candida infection failed to induce peritoneal macrophage chemotaxis but instead markedly improved their crawling ability. These results are in agreement with a previous study by Lopes et al., which reported that conditioned media from C. albicans-infected human MCs were able to recruit neutrophils but not circulating monocytes (17). Interestingly, the velocity of migration toward the pathogen was described to be important in enabling quicker clearance of C. albicans by PMNs in vitro (44). This suggests that increased macrophage crawling might also be important for the clearance of C. albicans by increasing the probability of an encounter with the fungus. To further determine whether the lack of chemotaxis toward infected-MC conditioned media could be due to the short half-life of chemotactic compounds, we set up a chemotaxis protocol against a live infection. Interestingly, live C. albicans-infected MCs induced a prominent migration of macrophages compared to uninfected controls. Contrary to our previous experiments, we noticed a partial migration also toward infected-MC conditioned media. This could possibly be the result of the increased cell crawling observed during live imaging rather than proper chemotaxis. To assess whether MC engulfment could promote macrophage-mediated C. albicans clearance by providing "eat-me" signals (such as PtdSer residues or calreticulin), co-cultures between BMMCs, peritoneal macrophages, and C. albicans were set up. Interestingly, we found that resting MCs were able to inhibit macrophage phagocytosis of the fungus. A similar phenomenon was described in a model of severe septic peritonitis, in which a very fast release of IL-4 by MCs upon bacterial encounter resulted in the inhibition of bacterial clearance by peritoneal macrophages (26). Nevertheless, neither the neutralization of extracellular IL-4 nor the stimulation with infected-MC conditioned media reversed macrophage phagocytosis of the fungus, suggesting that this phenomenon might rely on cell-cell contact. This should not be surprising, since it was demonstrated that MCs modulate T and B cell activation through the OX40-OX40L and CD40-CD40L axes, respectively (45-47). Although no studies on the interaction between MCs and macrophages that might provide mechanistic insight into this phenomenon are available in the literature to date, and although our experimental data do not directly prove this, it is tempting to speculate that MCs might constitutively express inhibitory molecules on their surface which inhibit tissue-resident macrophages in healthy conditions and thus promote tissue homeostasis.
However, during infections these molecules could be rapidly downregulated, allowing a proper and rapid activation of macrophages. Taken together, these data demonstrate that MCs can respond to fungal infections by tightly interacting with C. albicans hyphae and releasing pro-inflammatory mediators such as TNF-α and IL-6. The fungal challenge also induced the release of the TH2 cytokines IL-4 and IL-13. While IL-4 has been correlated with a protective effect during fungal reinfection, IL-13 is known to promote intestinal goblet cell hyperplasia and increased mucin expression during parasitic helminth infections (48). Thus, it is possible that a similar mechanism might be involved in the elimination of C. albicans hyphae. Moreover, we demonstrated that MC-derived soluble mediators can increase tissue-resident macrophage crawling and promote their migration toward the infection. Interestingly, resting MCs were found to limit macrophage phagocytosis of C. albicans: this result might reflect the ability of MCs to restrain the effector functions of myeloid cells in homeostatic conditions, highlighting once more that these cells are important players in the maintenance of the equilibrium between the host and the microbiota.
v3-fos-license
2019-05-23T13:02:49.551Z
2019-05-21T00:00:00.000
162169877
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1371/journal.pone.0215941", "pdf_hash": "979f5d93e3c3438286c46a797a966ad17aaaf0a0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41534", "s2fieldsofstudy": [ "Medicine" ], "sha1": "979f5d93e3c3438286c46a797a966ad17aaaf0a0", "year": 2019 }
pes2o/s2orc
Effectiveness of the prevention of HIV mother-to-child transmission (PMTCT) program via early infant diagnosis (EID) data in Senegal

Abstract
Background: To improve the care and treatment of HIV-exposed children, early infant diagnosis (EID) using dried blood spot (DBS) sampling has been performed in Senegal since 2007, making molecular diagnosis accessible for patients living in decentralized settings. This study aimed to determine the evolution of the HIV transmission rate in children from 2008 to 2015 and to analyze associated factors, particularly the mother's treatment status and/or child's prophylaxis status and the feeding mode.
Methods: The data were analyzed using EID reports from the reference laboratory. Information related to sociodemographic characteristics, HIV profiles, the mother's treatment status, the child's prophylaxis status, and the feeding mode was included. Descriptive statistics were calculated, and bivariate and multivariate logistic regression analyses were performed.
Results: During the study period, a total of 5418 samples (5020 DBS and 398 buffy coat) from 168 primary prevention of HIV mother-to-child transmission (PMTCT) intervention sites in Senegal were tested. The samples were collected from 4443 children with a median age of 8 weeks (1-140 weeks) and a sex ratio (M/F) of 1.1 (2309/2095). One-third (35.2%; N = 1564) of the children were tested before 6 weeks of age. Twenty percent (N = 885) underwent molecular diagnostic testing more than once. An increasing number of mothers receiving treatment (57.4%; N = 2550) and children receiving prophylaxis (52.1%; N = 2315) for protection against HIV infection during breastfeeding was found over the study period. The transmission rate decreased from 14.8% (95% confidence interval (CI): 11.4-18.3) in 2008 to 4.1% (95% CI: 2.5-7.5) in 2015 (p < 0.001). However, multivariate logistic regression analysis revealed that independent predictors of HIV mother-to-child transmission included lack of mother's treatment (adjusted odds ratio (aOR) = 3.8, 95% CI: 1.9-7.7; p < 0.001), lack of child's prophylaxis (aOR = 7.8, 95% CI: 1.7-35.7; p = 0.009), and infant age at diagnosis (aOR = 2.2, 95% CI: 1.1-4.3 for ≤6 weeks versus 12-24 weeks; p = 0.025), along with a protective effect of breastfeeding on ART relative to formula feeding (aOR = 0.4, 95% CI: 0.2-0.7; p = 0.005).
Conclusion: This study demonstrates the effectiveness of PMTCT interventions in Senegal but also indicates that efforts must continue in order to reduce the MTCT rate to less than 2%.

Introduction
Pediatric HIV infection remains a significant public health issue; 2.6 million children, 2.3 million of whom were in sub-Saharan Africa (SSA), were infected worldwide in 2014 [1]. To reduce HIV mother-to-child transmission (MTCT), different strategies have been recommended by the World Health Organization (WHO). The WHO Options evolved (Fig 1) from the initial regimen of zidovudine (AZT) monotherapy at 36 weeks of pregnancy, and later highly active antiretroviral therapy (HAART) from the 14th week of gestation until delivery (Option A), to the more recent Option B and Option B+. In alignment with the WHO guidelines, Option A was adopted in Senegal in 2010; in this regimen, AZT treatment is begun at the 14th week of gestation, a single dose of nevirapine (sdNVP) is provided during labor, daily doses of zidovudine/lamivudine (AZT/3TC) are given for 7 days postpartum, and finally, daily doses of NVP are given from birth up to 4-6 weeks postpartum.
By the end of 2011, Option B was adopted; this option consists of a three-drug regimen provided to the mother from the 14th week of gestation to delivery and continuing during the entire breastfeeding period. Furthermore, prophylactic treatment is provided to newborn infants as part of Option B. At the end of 2012, Senegal adopted Option B+, which provides lifelong ART to all HIV-infected pregnant and breastfeeding women irrespective of CD4 count or clinical stage [2-4]. To prevent HIV infection and ensure the survival of infants, ART during the breastfeeding period is recommended until the infant is 12 months of age, based on evolving WHO guidelines. Other WHO strategies implemented in the country to reduce HIV MTCT include scaling up prevention of MTCT (PMTCT) services, providing rapid screening for pregnant women, and using dried blood spot (DBS) sampling for early infant diagnosis (EID) in decentralized settings (Fig 1). EID should be routinely performed in all children aged 4 to 6 weeks born to HIV-infected mothers [5, 6] to allow and inform the initiation of appropriate life-saving treatment. The early provision of ART has been shown to have the potential to save the lives of approximately 50% of children infected by HIV by the time they reach 2 years of age [7-9]. In Senegal, PMTCT sites and the use of DBS increased (Fig 1) in line with WHO guidelines. The use of DBS sampling for EID became policy in 2007 and scaled rapidly across the country, thus becoming more widely accessible. In 2006, only 61 sites throughout the country used DBS sampling for EID. However, in 2014, DBS sampling was performed at 1045 PMTCT sites, which were classified as primary sites (hospitals and sanitary districts) and secondary sites (health posts) [10, 11]. These improvements were in line with the WHO recommendations and the goal of achieving an MTCT rate lower than 5% by 2015, considered virtual elimination of MTCT (virtual eMTCT). The objective of this study was to evaluate the Senegalese PMTCT response from 2008 to 2015 using EID data and to measure the impact of related efforts on decreasing the HIV MTCT rate.

Data collection
Data from EID request forms and EID report results collected at the bacteriology-virology reference laboratory of Le Dantec hospital in Dakar from 2008 to 2015 were analyzed. These data reflected programmatic efforts and included information routinely collected in mother/child medical records, such as age, site visits, mother's HIV status and treatment status, child's prophylaxis status, feeding mode, and type of samples collected (both whole blood in EDTA tubes and DBS samples). Blood samples from children born to HIV-seropositive mothers were collected from different PMTCT sites throughout the country. The number of reporting sites increased from 258 in 2008 to 1146 in 2015, including 168 primary and 978 secondary sites (Fig 2). Whole blood was collected from Dakar pediatric reference centers, and DBS samples were collected from some Dakar PMTCT sites and from other regions of Senegal. After collection, samples were sent with the standard EID request form to the reference laboratory, an ISO 15189-accredited medical laboratory, where PCR tests, under the CDC-CADU/Afriqualab proficiency testing (PT) program, were performed to determine the infant's HIV status. All positive results were confirmed with a second test using another sample according to the EID algorithm before being entered into the local electronic laboratory database (Fig 3).
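To make the confirmation step concrete, the following is a minimal sketch of a typical EID decision rule of this kind: a first positive PCR is only reported after a confirmatory PCR on a second sample. The rule and its labels are an illustrative assumption of how such algorithms commonly work, not the exact Senegalese national protocol.

```python
from typing import Optional

# Illustrative EID confirmation rule (an assumption of a typical
# algorithm, not the exact national protocol).

def eid_status(first_pcr_positive: bool,
               second_pcr_positive: Optional[bool]) -> str:
    if not first_pcr_positive:
        return "negative"               # report negative result
    if second_pcr_positive is None:
        return "pending confirmation"   # request a second sample
    if second_pcr_positive:
        return "confirmed positive"     # enter into laboratory database
    return "discordant: retest"        # results disagree, test again

print(eid_status(True, None))           # -> pending confirmation
print(eid_status(True, True))           # -> confirmed positive
```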
This database was used to calculate the annual HIV prevalence and to analyze factors associated with HIV infection, such as the mother's treatment status and/or the child's prophylaxis status and the feeding mode. Four feeding modes were reported: formula feeding; mixed feeding, which combines breast and formula feeding; exclusive breastfeeding; and breastfeeding on ART, which consists of breastfeeding for 12 months postpartum, while the mother is on ART, to lower the MTCT risk.

Statistical analysis
The data were entered into a Microsoft Excel database and analyzed using SPSS version 16 and STATA version 12. Descriptive statistics were calculated for selected characteristics, and the MTCT rates were estimated as proportions with 95% confidence intervals (CI) using the Wald method [12]. Bivariate and multivariate logistic regression analyses were performed to determine associations between HIV testing results and other variables. The odds ratio (OR) with a 95% CI was used to measure the degree of association with infant HIV positivity. Bivariate analysis was performed with univariate logistic regression or cross-tabulation analysis, and multivariate analysis was used to calculate adjusted odds ratios and to address confounding. The Chi-square test was used to estimate the difference between proportions in successive years. For expected cell counts lower than 5, Fisher's exact test was also used to estimate the p-value. p-values less than 0.05 were considered statistically significant.

Sample distribution by region
This MTCT analysis was carried out on 5418 samples comprising 398 venous blood samples from Dakar pediatric reference centers and 5020 DBS samples from Dakar and from PMTCT sites in other regions. The distance from the sites to the reference laboratory ranged from 14 to 682 km (Fig 2). The samples were mainly from Dakar and Ziguinchor; 43.2% (2338) and 13% (704) of the samples, respectively, were from these cities (Table 1).

Characteristics of the children
Samples were collected from 4443 children, with more than one sample tested for 885 of these children. The median age was 8 weeks and ranged from 1 week to 24 months; the sex ratio (M/F) was 1.1 (2309/2095). The characteristics of the children and mothers over the study period are presented in Table 2. The greatest increase in mothers receiving treatment and children receiving prophylaxis to protect against MTCT through breastfeeding occurred between 2010 and 2012.

Prevalence of HIV
The number of samples from children receiving EID testing increased from 411 (70.6%) in 2008 to 494 (72%) in 2009. The HIV transmission rate between these two years decreased from 14.8% (95% CI: 11.4-18.3) to 7.5% (95% CI: 5.2-9.8). The MTCT rate continued to decrease through 2015, when it dropped below 5% for the first time in Senegal (Fig 4; Table 3). The number of children being tested increased in all age categories below 48 weeks by 2009, and the proportion of tested children under 12 weeks of age climbed to 60-65% by 2015 (Table 2). Overall, the first diagnostic test was performed before 12 weeks of age in 59% of the children (2623/4443). The proportion of children tested before 12 weeks of age increased significantly (p-values < 0.05) over the study period, while it decreased among those tested after 12 weeks of age (p-values > 0.05) (Table 2).
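As an illustration of the two estimation steps described in the statistical analysis above (a Wald 95% CI for a yearly MTCT proportion, and a multivariate logistic regression yielding adjusted odds ratios), the following is a minimal sketch using statsmodels; the counts, column names, and simulated data are illustrative placeholders, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Wald 95% CI for a proportion (illustrative counts: 61/411 ~ 14.8%).
pos, n = 61, 411
p_hat = pos / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(f"rate {p_hat:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")

# Multivariate logistic regression; variables are hypothetical.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hiv_positive":     rng.binomial(1, 0.08, 500),
    "mother_untreated": rng.binomial(1, 0.4, 500),
    "no_prophylaxis":   rng.binomial(1, 0.5, 500),
})
X = sm.add_constant(df[["mother_untreated", "no_prophylaxis"]])
fit = sm.Logit(df["hiv_positive"], X).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios (aOR)
print(np.exp(fit.conf_int()))  # 95% CIs for the aORs
```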
A coincident decrease in MTCT (<5%) occurred between 2008 and 2009 in children tested between 6 and 24 weeks of age, and a slower decrease in MTCT (<3%) occurred among children tested before 6 weeks of age (Table 4). Children tested after 12 weeks of age presented higher transmission rates than those tested before 12 weeks of age in the same period (Table 4). In addition, Table 2 shows an improvement in prophylaxis over the study period, with the proportion of mothers treated and/or children given prophylaxis reaching approximately 70-88% since 2011. Moreover, the proportion of infected children in each group was 6 to 10 times lower when the mother was treated and/or the child was given prophylaxis than in the absence of treatment and/or prophylaxis (Table 4). According to the analysis of infant feeding mode, breastfeeding on ART increased over the study period, from 60% in 2012 to 78% in 2015 (Table 2). In addition, beginning in 2012, less than 3% of infants breastfed on ART were infected (Table 4). The factors associated with HIV infection in children in the bivariate analysis were a late age at testing, feeding with the exclusive/mixed modes, and the lack of the mother's treatment/child's prophylaxis; the risk associated with these factors decreased over the study period, especially in 2009/2012, 2012/2015, and 2011, respectively (Table 5). Table 6 (multivariate analysis) shows that the earlier the child's HIV diagnosis the better, and that outcomes were better when the mother was under treatment, when the child was on prophylaxis, and when breastfeeding took place on antiretroviral treatment (ART). These factors were shown to help reduce the MTCT rate.

Discussion
The international community has responded to the launch of the Global Plan to eliminate HIV MTCT by 2015 [13]. More pregnant women have been and will be screened for HIV, and more HIV-exposed infants have been and will be tested for HIV infection. The pediatric risk of HIV infection could be reduced to less than 5% by 2015 through PMTCT interventions [14]. The goal of this study was to evaluate the Senegalese national PMTCT program after the increases in both the PMTCT services implemented in primary health care facilities and the accessibility of EID due to the use of DBS sampling from 2008 to 2015. This report showed that Senegal has attained virtual eMTCT due to programmatic efforts related especially to greater access to EID testing, treatment of mothers, and prophylaxis of infants.

Global prevalence
The rate of MTCT decreased from 14.8% to 4.1% between 2008 and 2015, certainly due to a great intensification of efforts towards PMTCT services in Senegal via a combination of factors, including the increase in the number of PMTCT sites from 258 in 2008 to 1146 in 2015. All these factors have contributed to the decrease in the rate of MTCT in Senegal, as in other countries [15-17]; the greatest progress in reducing new infections was seen in 2015 for Uganda, South Africa, Burundi, Namibia, Mozambique, Botswana, and Swaziland due to the rapid increase of integrated PMTCT services [18]. Despite this success in Senegal, a further reduction is needed to truly move towards the elimination of MTCT (<2%), a goal already achieved by other countries. Universal access to integrated PMTCT services led to eMTCT in Cuba in mid-2015 and in Thailand, Belarus, Armenia and the Republic of Moldova in 2016 [19-21].
However, these programmatic data could be biased due to the lack of information regarding prophylaxis and feeding modes in 2008 and 2009. Those missing data, added during some years of the study, showed that additional effort needs to be made to improve the data collection and management system. Moreover, we cannot exclude the possibility that EID testing was offered mainly to symptomatic children, which could explain the high rate of HIV infection observed during those years; the declining MTCT rate between those years should therefore be interpreted with this limitation in mind.

HIV transmission-associated factors
As reported in other studies, factors associated with HIV transmission were a late age at diagnosis, the lack of ARV provision for both mother and child, and feeding with the exclusive/mixed modes. In this study, in addition to the year-to-year increase in the number of PMTCT sites, factors contributing to the reduction in MTCT were earlier testing and the increased treatment of mothers/prophylaxis of infants to protect against HIV infection from breastfeeding. Indeed, the incidence of earlier testing improved over the years; an increased number of children were tested before 3 months of age, and these children had an HIV infection rate 2 to 9 times lower than that estimated for children tested after 3 months of age. However, data regarding the time lag in returning the results to the patients were missing from this analysis, and the impact of this lag on the timely initiation of ART could not be assessed. Indeed, the delayed diagnosis of HIV-exposed infants due to the delayed receipt of the collected samples and the delayed return of the results to the testing site will in turn delay treatment initiation [22-26]. The results from several studies [22, 27-29] highlighted the importance of implementing efficient strategies, like the integration of innovative point-of-care (POC) instruments into the conventional laboratory diagnostic network, to facilitate sample collection and timely results reporting [30]. Moreover, EID has been found to be an excellent indicator for evaluating PMTCT success in some countries in SSA, as it was in this study [22, 31, 32]. The proportion of breastfeeding mothers on HAART increased over time, and the impact of breastfeeding with ART on reducing MTCT was 10 times more significant than that of formula feeding, as has been described in other studies in African countries [33-36]. When considering treatment of mothers and/or prophylaxis of children, we found that the significant improvement in the provision of ARVs to infected pregnant women and exposed children was associated with a decrease in the MTCT rate over the study period. However, the study confirmed that the lack of treatment and/or child prophylaxis remains the main factor correlated with vertical transmission, as has been shown in several countries where ARV intervention was late or unsatisfactorily implemented [29, 37-41], and highlighted the benefits of antenatal HAART (Option B+) in reducing the risk of MTCT [42]. However, in this study, no information was available regarding the adherence of the mothers to treatment or that of the children to prophylaxis, which is a broader challenge and limitation when measuring the effectiveness of PMTCT programs.

Conclusion
This report indicated the effectiveness of PMTCT in Senegal, showing that the MTCT rate decreased to less than 5% between 2008 and 2015.
This decrease could be due to the greater and earlier access to EID enabled by DBS sampling, combined with an increase in PMTCT services. Task shifting to integrate primary health care centers, the adoption of Option B+ for pregnant women, the improved coverage of antiretroviral prophylaxis for babies, and the use of breastfeeding on ART were probably the key factors underlying the improved organization of maternal and infant health care services. However, due to the possible postpartum infection of uninfected newborns via breastfeeding, efforts must be strengthened, especially towards improving the adherence of mothers to treatment and of children to prophylaxis, and enhancing counseling and the monitoring of infant feeding, in order to achieve the goal of an MTCT rate of less than 2% by 2020 for a generation without AIDS in 2030.

Supporting information
S1 File. Data underlying this study. (XLS)
v3-fos-license
2019-11-19T14:04:43.165Z
2019-11-18T00:00:00.000
208143192
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1002/cphc.201900963", "pdf_hash": "23383f788484b4652e36b16b793877741b785f9e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41535", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "d820ff2872be40ff2c657c030dee0d19d18e4542", "year": 2020 }
pes2o/s2orc
Reversible Photoswitching of Isolated Ionic Hemiindigos with Visible Light

Abstract
Indigoid chromophores have emerged as versatile molecular photoswitches, offering efficient reversible photoisomerization upon exposure to visible light. Here we report the synthesis of a new class of permanently charged hemiindigos (HIs) and the characterization of their photochemical properties in the gas phase and in solution. Gas-phase studies, which involve exposing mobility-selected ions in a tandem ion mobility mass spectrometer to tunable-wavelength laser radiation, demonstrate that the isolated HI ions are photochromic and can be reversibly photoswitched between Z and E isomers. The Z and E isomers have distinct photoisomerization response spectra with maxima separated by 40-80 nm, consistent with theoretical predictions for their absorption spectra. Solvation of the HI molecules in acetonitrile displaces the absorption bands to lower energy. Together, gas-phase action spectroscopy and solution NMR and UV/Vis absorption spectroscopy represent a powerful approach for studying the intrinsic photochemical properties of HI molecular switches.

Theoretical methods and calculated conformer structures and energies
To assess the potential contribution of different conformations of the alkyl chain and aniline/julolidine moiety to each ATD peak, a non-exhaustive conformer search was performed using the Force Field tool in Avogadro [5]. Conformations with relative energies <10 kcal/mol were re-optimized at the DFT ωB97X-D/cc-pVDZ level of theory using the Gaussian16 package [6]. Cartesian coordinates for these structures are given in the SI Appendix. These geometries were then used to calculate collision cross sections using a version of the MOBCAL package parametrized for N2 buffer gas [7]. Vertical excitation wavelengths for the conformers were determined at the df-CC2/aug-cc-pVDZ (aug-cc-pVDZ-RI auxiliary basis set) level of theory using the MRCC program [8]. Figures S1, S2 and S3 show the optimized three-dimensional structures of the Z and E conformers of hemiindigos 1, 2 and 3, respectively. Calculated energies, vertical excitation wavelengths and collision cross sections for these conformers are presented in Table S1.

Figure S1 Optimized structures for Z (upper) and E (lower) conformers of HI 1, computed at the ωB97X-D/cc-pVDZ level of theory.
Figure S2 Optimized structures for Z (upper) and E (lower) conformers of HI 2, computed at the ωB97X-D/cc-pVDZ level of theory.
Figure S3 Optimized structures for Z (upper) and E (lower) conformers of HI 3, computed at the DFT ωB97X-D/cc-pVDZ level of theory.

Table S1 Optimized ground-state energies, wavelengths and oscillator strengths for vertical S1←S0 and S2←S0 transitions, and calculated collision cross sections in N2 buffer gas for the series of Z/E conformers of hemiindigos 1-3 shown in Figures S1-S3. The energies were computed at the DFT ωB97X-D/cc-pVDZ level of theory. Collision cross sections were calculated using the MOBCAL program with appropriate parameters for N2 buffer gas [7]. Vertical excitation wavelengths for the S1←S0 and S2←S0 transitions were calculated at the df-CC2/aug-cc-pVDZ (aug-cc-pVDZ-RI auxiliary basis set) level of theory using the MRCC program, with the italic numbers in brackets indicating the corresponding oscillator strengths obtained from the CIS wavefunctions [8].
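Relative conformer energies of the kind tabulated in Table S1 are often converted into room-temperature population estimates via Boltzmann weighting. The following is a minimal sketch of that conversion, assuming illustrative relative energies in kcal/mol rather than the computed values from Table S1.

```python
import numpy as np

# Boltzmann populations from relative conformer energies (kcal/mol).
# The energies below are illustrative placeholders, not Table S1 values.
R = 0.0019872   # gas constant, kcal mol^-1 K^-1
T = 298.15      # temperature, K

rel_E = np.array([0.0, 0.4, 1.1, 2.3])   # relative energies
w = np.exp(-rel_E / (R * T))
populations = w / w.sum()
for i, p in enumerate(populations):
    print(f"conformer {i}: {p:.1%}")
```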
Gas phase photoisomerization experiments
The photoisomerization of the isolated charge-tagged hemiindigos 1-3 was investigated using a home-built tandem ion-mobility mass spectrometer (IMS) [4]. The principle of ion-mobility spectrometry rests on the spatial and temporal separation of charged molecular isomers due to differences in their drift velocity (vd) as they travel through buffer gas propelled by an electric field (E):

v_d = K E  (eq. 1)

The mobility K can be expressed by the Mason-Schamp equation (eq. 2), while the measured drift time gives K = l^2 / (t_d V):

K = \frac{3ze}{16N}\left(\frac{2\pi}{\mu k_b T}\right)^{1/2}\frac{1}{\Omega} = \frac{l^2}{t_d V}  (eq. 2)

Here, z is the ion's charge number, e the electron charge, N the number density of the buffer gas, µ the reduced mass of the collision partners, kb the Boltzmann constant, T the buffer gas temperature, l the length of the drift region, td the drift time, and V the total voltage applied across the drift region. Ω is the integral collision cross section, which depends on the interaction between the ion and the buffer gas molecules, and is therefore influenced by the structure of the molecular ion. Bulky, unfolded molecular ions have larger collision cross sections and therefore drift more slowly (larger td) than more compact molecular ions.

Figure S4 Tandem ion mobility mass spectrometer. Further details can be found in ref. [4].

The first ion gate (IG1) could either be held open or opened momentarily to select target isomer ions, which successively passed through the second ion mobility stage, through an ion funnel, an octupole ion guide, and a quadrupole mass filter before being sensed by a Channeltron ion detector connected to a multichannel scaler. Arrival time distributions (ATDs) were obtained by plotting ion count against arrival time. Example ATDs for hemiindigos 1-3 are shown in Figure S5. Two baseline-resolved peaks were obtained for hemiindigo 2 in N2, whereas only one broad peak was observed for compounds 1 and 3 with N2 buffer gas (see Figure S5, upper row). Better separation was achieved by seeding the N2 buffer gas with ≈1% 2-propanol (Figure S5, lower row). This allowed separation of the E and Z isomers for hemiindigos 1 and 3, allowing individual isomers to be isolated and irradiated.

ATD peak assignments and determination of isomer yields upon irradiation in solution
To assign the ATD peaks to specific isomers, a series of experiments was carried out in which the hemiindigo solutions in the syringe connected to the electrospray source were irradiated with visible light. ATDs were monitored after exposure of the sample in the syringe to the output of either a blue (39.5 mW), green (Thorlabs CPS533, 4.5 mW, 532 nm) or red (<15 mW, 632.8 nm) CW laser for 5-10 minutes, which served to establish a photostationary state (PSS). These ATDs were compared to ATDs obtained using solutions protected from light (see Figure 2 in the manuscript). The effects of irradiating the samples on ATD peak intensities are apparent in Figure S6, where there is clear evidence for the interconversion of Z and E isomers, with the relative isomer abundances depending on wavelength. The measured ATDs were fitted by the sum of two Gaussian functions having equal widths to estimate the relative isomer abundances.
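A minimal sketch of this equal-width two-Gaussian fit is given below, using scipy on a synthetic ATD; the peak positions, shared width, and noise level are illustrative, not measured values. Because the two components share a width, their relative areas equal their relative amplitudes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an ATD with two Gaussians constrained to a common width, then
# report relative isomer abundances. The ATD below is synthetic.

def two_gauss(t, a1, t1, a2, t2, w):
    g = lambda a, t0: a * np.exp(-0.5 * ((t - t0) / w) ** 2)
    return g(a1, t1) + g(a2, t2)

t = np.linspace(9.0, 12.0, 300)                  # arrival time (ms)
atd = two_gauss(t, 1.0, 10.1, 0.45, 10.8, 0.12)  # two isomer peaks
atd += np.random.default_rng(1).normal(0, 0.01, t.size)

p0 = [1.0, 10.0, 0.5, 10.9, 0.1]                 # initial guesses
popt, _ = curve_fit(two_gauss, t, atd, p0=p0)
a1, _, a2, _, _ = popt
# Equal widths => relative areas are proportional to amplitudes.
print(f"isomer 1: {a1/(a1+a2):.1%}, isomer 2: {a2/(a1+a2):.1%}")
```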
Figure S6 Fitted arrival time distributions (ATDs) for HI 1-3 ions obtained using electrosprayed solutions exposed to light of different wavelengths. The left column shows ATDs obtained after 5 minutes of exposure of the respective sample to blue light (430-480 nm) prior to electrospray ionization, whereas the right column shows ATDs obtained following exposure of the solution to green light (532 nm) or red light (632.8 nm). The fitted contributions of each isomer are given. The isomer PSS abundances derived from the ATDs are consistent with the abundances measured in solution through analysis of UV-vis spectra (see Table S2).

Photoisomerization action spectroscopy experiments
To investigate the photoisomerization of the hemiindigo ions in the gas phase, a particular isomer was selected using a pulsed Bradbury-Nielsen ion gate (IG2) situated midway along the drift region, which was opened for 120 μs at an appropriate delay with respect to IG1. As shown in Figure S4, shortly after passing through the gate, the ions were exposed to a light pulse from a tunable optical parametric oscillator (OPO, EKSPLA NT342B, 20 Hz, 5 ns pulse width). The photoisomers were separated from the parent isomers in the second stage of the drift region and were then guided through a second ion funnel (IF2) followed by a differentially pumped octupole ion guide, a quadrupole for mass selection, and a Channeltron detector. The OPO operated at 20 Hz and overlapped alternate ion packets, allowing 'laser on' and 'laser off' ATDs to be collected, the difference between which reflects the effect of light on the parent cation. Thus, a given photoisomer appeared as a separate peak in the 'laser on' ATD.

Power dependence of the photoisomerization yield
To evaluate the effect of light intensity on the photoisomerization yield, the Z isomers of hemiindigos 1-3 were exposed to blue light (430-480 nm) over a range of fluences. The resulting power dependence plots are shown in Figure S7. The photoisomer yield is directly proportional to light fluence for hemiindigos 1 and 2, consistent with single-photon isomerization. For hemiindigo 3, the linear dependence is not followed at fluences exceeding 1 mJ/pulse/cm², indicating the onset of saturation/multiphoton processes. Although power dependence measurements were not performed for E→Z photoisomerization, we expect power dependences similar to those of the Z→E processes.

Figure S7 Normalized yield of E photoisomer as a function of light fluence. The experiments were performed at 450 nm (hemiindigos 1 and 3) and 430 nm (hemiindigo 2), respectively.
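A minimal sketch of how the linearity of this fluence dependence can be checked is given below; the fluence and yield values are illustrative placeholders, not the measured points from Figure S7.

```python
import numpy as np

# Check single-photon behavior: yield vs. fluence should be linear
# through the origin at low fluence. All values below are illustrative.
fluence = np.array([0.1, 0.25, 0.5, 0.75, 1.0])   # mJ/pulse/cm^2
yield_E = np.array([0.021, 0.050, 0.103, 0.148, 0.201])

slope, intercept = np.polyfit(fluence, yield_E, 1)
residuals = yield_E - (slope * fluence + intercept)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, "
      f"max residual={np.abs(residuals).max():.3f}")
# A near-zero intercept and small residuals support a one-photon
# process; systematic negative residuals at high fluence would
# indicate the onset of saturation (as seen for hemiindigo 3).
```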
Solution photoisomerization experiments
Determination of the UV-vis absorption spectra of Z and E isomers
The UV-vis absorption spectra of the pure Z and E isomers of 1 to 3 in acetonitrile were obtained by subtracting one E/Z-mix spectrum with known isomer composition (previously determined by integration in 1H NMR spectroscopy) from another E/Z-mix spectrum with a different but also known isomer composition, followed by multiplication with a weighting factor. Weighting was done by multiplying the first E/Z-mix spectrum by the Z (or E) isomer percentage of the second E/Z mixture (determined via 1H NMR spectroscopy) and vice versa. The obtained absorption spectrum of the respective pure isomer was multiplied by compensation factors to match the previously determined absorption values at isosbestic points.

This method of spectrum determination relies on the following conditions:
- The system must consist of two chromophores which interconvert without side reactions or decomposition.
- The total chromophore concentration must be constant during determination of isosbestic points; thus, absorption spectra of mixtures result solely from the addition/subtraction of pure E and Z isomer spectra.
- Distinct isosbestic points must be observed.

Eq. 2 defines the spectrum S of, e.g., the E isomer, S(E), as a matrix of colligated numeric values:

S(E) = S(w_{E,i=1...n}, a_{E,i=1...n})  (eq. 2)

with w_{E,i} (wavelength in nm) as fixed experimental parameter values and a_{E,i} (absorption in a.u.) as experimental observable values representing the absorption spectrum. Eqs. 3/4 define the measured E- and Z-enriched mixture spectra, S_mix(E+) and S_mix(Z+), as composites of the pure S(E) and S(Z) spectra:

S_mix(E+) = f_1 S(E) + f_2 S(Z)  (eq. 3)
S_mix(Z+) = f_3 S(E) + f_4 S(Z)  (eq. 4)

with f_1, ... being factors to account for the concentrations of each isomer in the mixture, which were determined by NMR measurements and the corresponding magnitudes of the absorption spectra. Solving the system of linear equations for S(E) and S(Z) results in eqs. 5/6:

S(E) = [f_4 S_mix(E+) − f_2 S_mix(Z+)] / (f_1 f_4 − f_2 f_3)  (eq. 5)
S(Z) = [f_1 S_mix(Z+) − f_3 S_mix(E+)] / (f_1 f_4 − f_2 f_3)  (eq. 6)

Factors f_1-f_4 were obtained from the integrated indicative signals in the 1H NMR spectrum (percentage divided by 100) for the E or Z isomer in the E- or Z-enriched mixture; that is, f_1 and f_2 are the E and Z fractions of the E-enriched mixture, and f_3 and f_4 are the E and Z fractions of the Z-enriched mixture. The spectra determined in this way consist of 100% E isomer, S(E), and 100% Z isomer, S(Z), respectively. The isomeric yields obtained in the photostationary state (PSS) at different irradiation wavelengths were determined by irradiation in NMR tubes with subsequent analysis of the isomer composition by 1H NMR spectroscopy, or by irradiation in 10 mm quartz cuvettes followed by UV-vis measurements. In the latter case, isomer abundances were determined by first scaling the UV-vis spectra obtained at the PSS to the absolute positions of previously obtained isosbestic points and then calculating the Z/E ratio in the PSS from the known extinctions of the pure isomers at distinct wavelengths away from isosbestic points. Isomer abundances obtained in the PSS at different irradiation wavelengths are given in Table S2.

Figure S9 PSS UV-vis spectra at different irradiation wavelengths for 2 Z/E in acetonitrile.
Figure S10 PSS UV-vis spectra at different irradiation wavelengths for 3 Z/E in acetonitrile.

Appendix: Cartesian coordinates for calculated structures
Cartesian coordinates for the lower-energy conformations of HI 1, HI 2 and HI 3. Structures were optimized at the DFT ωB97X-D/cc-pVDZ level of theory using the Gaussian16 package [6] and are depicted in Figures S1, S2 and S3.
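As a concrete check of the unmixing step in eqs. 5/6 above, the following is a minimal sketch that recovers pure-isomer spectra from two mixture spectra with numpy; the Gaussian stand-in spectra and the NMR-derived fractions are illustrative, not the measured data.

```python
import numpy as np

# Recover pure-isomer spectra from two mixture spectra (eqs. 3-6).
# All spectra and fractions below are illustrative placeholders.
wl = np.linspace(350, 650, 301)                 # wavelength grid (nm)
S_E_true = np.exp(-((wl - 450) / 40) ** 2)      # stand-in pure E spectrum
S_Z_true = np.exp(-((wl - 510) / 45) ** 2)      # stand-in pure Z spectrum

f1, f2 = 0.85, 0.15   # E and Z fractions in the E-enriched mixture
f3, f4 = 0.20, 0.80   # E and Z fractions in the Z-enriched mixture
S_mix_E = f1 * S_E_true + f2 * S_Z_true         # eq. 3
S_mix_Z = f3 * S_E_true + f4 * S_Z_true         # eq. 4

det = f1 * f4 - f2 * f3
S_E = (f4 * S_mix_E - f2 * S_mix_Z) / det       # eq. 5
S_Z = (f1 * S_mix_Z - f3 * S_mix_E) / det       # eq. 6
print(np.allclose(S_E, S_E_true), np.allclose(S_Z, S_Z_true))  # True True
```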
v3-fos-license
2019-10-17T09:05:50.438Z
2019-10-15T00:00:00.000
204965694
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-7796-8", "pdf_hash": "c7c79857e7b4bbdc2d41e605975bee75aa888456", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41537", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7446b0f527e35986be55b8ec89bb254ab2577760", "year": 2019 }
pes2o/s2orc
The drivers of antibiotic use and misuse: the development and investigation of a theory driven community measure

Abstract
Background: Antimicrobial resistance is a global public health concern, with extensive associated health and economic implications. Actions to slow and contain the development of resistance are imperative. Despite the fact that overuse and misuse of antibiotics are highlighted as major contributing factors to this resistance, no sufficiently validated measures aiming to investigate the drivers behind consumer behaviour amongst the general population are available. The objective of this study was to develop and investigate the psychometric properties of an original, novel and multiple-item questionnaire, informed by the Theory of Planned Behaviour, to measure factors contributing to self-reported antibiotic use within the community.
Method: A three-phase process was employed, including literature review and item generation; expert panel review; and pre-test. Investigation of the questionnaire was subsequently conducted through a cross-sectional, anonymous survey. Orthogonal principal component analysis with varimax rotation, Cronbach's alpha, and linear mixed-effects modelling analyses were conducted. A 60-item questionnaire was produced encompassing demographics; social desirability; three constructs of the Theory of Planned Behaviour, including attitudes and beliefs, subjective norm, and perceived behavioural control; behaviour; and a covariate, knowledge.
Results: Three hundred seventy-three participants completed the survey. Eighty participants (21%) were excluded due to social desirability concerns, with data from the remaining 293 participants analysed. Results showed modest but acceptable levels of internal reliability, with high inter-item correlations within each construct. All four variables and the outcome variable of antibiotic use behaviour comprised four items, with the exception of social norms, for which there were two items, producing a final 18-item questionnaire. Perceived behavioural control, social norms, the interaction between attitudes and beliefs and knowledge, and the presence of a healthcare worker in the family were all significant predictors of antibiotic use behaviour. All other predictors tested produced a nonsignificant relationship with the outcome variable of self-reported antibiotic use.
Conclusion: This study successfully developed and validated a novel tool which assesses factors influencing community antibiotic use and misuse. The questionnaire can be used to guide appropriate intervention strategies to reduce antibiotic misuse in the general population. Future research is required to assess the extent to which this tool can guide community-based intervention strategies.

Introduction
An antibiotic is an antimicrobial agent, defined as "a chemical substance produced by a microorganism that kills or inhibits the growth of another microorganism" [1]. Since the introduction of the first effective antimicrobial in 1937 [2], there has been persistent growth and spread of drug-resistant bacteria, broadly referred to as antimicrobial resistance (AMR). AMR is defined as the phenomenon whereby infection-causing microorganisms, such as bacteria, have the ability to survive exposure to medicines which would normally inhibit their growth or kill them [3].
The health implications of AMR are extensive, affecting not only the treatment of a primary bacterial infection, but also the prophylactic use of antibiotics in routine surgical procedures, such as caesareans and hip replacements [3, 4]. O'Neill (2016) estimates that, unchecked, the growth of AMR will result in 10 million preventable deaths per year by 2050. In addition to the human cost, the increase in AMR is associated with significant economic consequences [5]. AMR is associated with increased expenditure on health services, with greater resource utilisation and higher levels of routine health care costs [6-8]. The additional impact of AMR has downstream effects on health service productivity [9]. Unfettered, it is estimated that by 2050 AMR will have cost the world US$100 trillion in lost global production [3]. From an evolutionary standpoint, AMR is unavoidable [10] due to bacteria's inherent ability to survive, mutate and adapt following stress and greater exposure to antimicrobials [4]. Given that AMR cannot be reversed or eradicated [11], actions to slow and contain the development of resistance are imperative [12]. The rate of AMR development is widely understood to be facilitated by indiscriminate and unnecessary antibiotic use [3, 13-16]. The World Health Organisation (WHO) Global Strategy for Containment of Antimicrobial Resistance (2001) defines appropriate antimicrobial use as the "cost effective use of antimicrobials which maximises clinical therapeutic effect whilst minimising drug-related toxicity and development of antimicrobial resistance" [17]. Existing literature highlights consumer or patient demand and behaviour as a driving force behind antibiotic misuse [18-20]. Understanding the extent of global trends in consumer demand for, and knowledge about, antibiotics is therefore an important component in the battle to curtail the growth of AMR and has precipitated multinational surveys. For example, a survey carried out by Taylor Nelson Sofres (TNS) Opinion and Social for the European Commission (2010) gathered information from 26,761 individuals across the (then) 27 member states of the European Union. The survey found that 40% of respondents had taken antibiotics in the previous 12 months, with 95% reporting that they (appropriately) obtained them from a medical practitioner. However, the survey also reported that only 20% of respondents were able to correctly answer four knowledge statements regarding antibiotics, including 53% who believed that antibiotics kill viruses, and 47% who believed antibiotics were effective against colds and influenza. These results suggest that while Europeans report obtaining antibiotics through appropriate means (doctors), their intended use is often inappropriate [21]. A subsequent survey conducted by the WHO (2015) questioned 9772 individuals across two member states in each of the six WHO regions. This survey found that 65% of respondents had used antibiotics in the previous 6 months, with 81% (range 56-93%) indicating that they had obtained them from a medical professional. The WHO survey reported that 25% of respondents believed it acceptable to use antibiotics given to them by a friend or family member, 43% thought it acceptable to buy antibiotics or seek them from a doctor if they were sick with symptoms that they believed had been effectively treated by antibiotics in the past, and 64% incorrectly believed viruses such as colds and influenza could be treated by antibiotics [15].
According to Wise et al. (1998), 20% of human antibiotic use occurs within the hospital sector, whilst 80% occurs within the community sector. Within this community portion, 20-50% may be questionable and unnecessary [22]. Within Australia specifically, the antibiotic consumption rate exceeds the Organisation for Economic Cooperation and Development (OECD) average [23]. Thus, an understanding of the drivers of Australian consumer antibiotic seeking and use is warranted. Both the TNS Opinion & Social (2010) and WHO (2015) surveys had numerous limitations, including inconsistent sampling techniques, bias toward more educated responders, and an absence of checks upon socially desirable responding. Furthermore, neither survey was theory-informed in a way that would enable prediction of consumer antibiotic use, other than the potential impact of poor knowledge about antibiotics and AMR, and neither reported detailed psychometric properties of the questionnaires. They do, however, confirm previous research which has identified a range of key factors contributing to patient behaviour with respect to antibiotic use, including attitudes and beliefs, subjective norms, self-efficacy and knowledge [6,24]. Few measures currently exist in this area. Many are specific to population sub-groups, including physicians, parents [6,[25][26][27], medical students [28] and pharmacists [29,30]. To our knowledge, there exists no sufficiently validated measure which aims to investigate factors influencing antibiotic use within the general populace [31]. The current study sought to develop a questionnaire that predicts the factors influencing a consumer's intentions to indiscriminately obtain and use antibiotics. Given that attitudes and beliefs [32], the opinions of others within a person's social or professional network [33][34][35], and the self-perceived (and actual) ability to obtain antibiotics [36][37][38] have all been independently associated with the use of antibiotics, the current study aimed to construct a questionnaire informed by the Theory of Planned Behaviour (TpB) [39], a respected and highly cited model which predicts health-related behaviours [40][41][42][43]. The TpB, which has yet to be used in the context of consumer antibiotic use, would suggest that a person's actual use of antibiotics is best predicted by their intentions, which are influenced by three major components (see Fig. 1): (a) attitudes, referring to one's positive or negative evaluation of indiscriminate antibiotic use (e.g. 'the negatives of taking antibiotics outweigh the positives'); (b) subjective norm, involving one's perception of the social expectations of indiscriminate antibiotic use (e.g. 'my friends and family would follow recommendations for antibiotic use'); and (c) perceived behavioural control (PBC), reflecting beliefs regarding the ease or difficulty in accessing antibiotics (e.g. 'I would easily be able to get antibiotics if I wanted them'). PBC was the only control measure, as actual behavioural control (added to later TpB models) [44] could not be measured within this study protocol. A condition of the strength of this PBC-behaviour relationship is that 'perceptions of behavioural control must reflect actual control in the situation with some degree of accuracy'. When perceptions of control are accurate, PBC is expected to predict behaviour [45][46][47].
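To make the structure of the TpB concrete, the sketch below expresses its core claim, that intention is a weighted combination of attitude, subjective norm and PBC, and that behaviour follows from intention and PBC, as a minimal Python function. The weights are illustrative placeholders of our own, not estimates from this study.

# Minimal sketch of the Theory of Planned Behaviour structure described above.
# The weights are illustrative placeholders only, not coefficients from the study.

def tpb_intention(attitude, subjective_norm, pbc,
                  w_att=0.5, w_norm=0.3, w_pbc=0.2):
    """Intention as a weighted combination of the three TpB components."""
    return w_att * attitude + w_norm * subjective_norm + w_pbc * pbc

def tpb_behaviour(intention, pbc, w_int=0.7, w_pbc=0.3):
    """Behaviour predicted from intention, with PBC contributing directly
    when perceived control reflects actual control."""
    return w_int * intention + w_pbc * pbc

# Example: a respondent with easy access to antibiotics (high PBC) but
# unfavourable attitudes toward indiscriminate use (scales are arbitrary here).
intention = tpb_intention(attitude=2.0, subjective_norm=3.0, pbc=4.5)
print(tpb_behaviour(intention, pbc=4.5))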
One of the most extensive TpB reviews, focusing on prospective behaviours across 237 studies, was conducted by McEachan, Conner, Taylor and Lawton [48], who found that the TpB could explain 19.3% of the variance in behaviour and 44.3% of the variance in intention to behave. McEachan and colleagues further demonstrated that the TpB provides strong predictions of intention and behaviour across a range of health behaviours, with the attitude component being the strongest predictor of behavioural intention. Further, Ajzen (1991) suggests that the TpB is highly adaptive, possessing the ability to incorporate additional predictors where required, provided that they capture a significant proportion of variance in intention or behaviour and that the initial variables have been considered. Given previous research, knowledge about antimicrobials, and AMR specifically, would be expected to influence attitudes [32]. Limitations surrounding the TpB include its sole reliance upon self-reported behaviour, potentially inviting socially desirable responding and less accurate prediction of objective behaviour [47]. Armitage and Conner undertook a meta-analysis of 161 articles containing 185 independent empirical tests of the TpB, concluding that the model is effective in predicting intention and behaviour, more so in the context of subjective self-reported behaviour than observed behaviour (R-squared 0.31 and 0.20 respectively) [40]. This is not a limitation specific to the TpB, but applies broadly to the area of social psychology, and is not a large cause for concern given that the model still captures a good amount of variance in prospective measures of actual behaviour [40]. Moreover, the TpB shows high consistency between intention and behaviour, even in contexts of differing emotional states [47]. Nonetheless, attention to social desirability would enhance the predictive validity of the TpB as applied to consumer antibiotic use. Thus, the aim of the current study is to develop and investigate the psychometric properties of an original, novel and multiple-item quantitative questionnaire, informed by the TpB, aiming to identify factors contributing to antibiotic use within the community. Considering the adaptive nature of the TpB [39], knowledge was added as a variable of interest within the current study, due to the array of literature which indicates a relationship between knowledge and antibiotic-use behaviour [31]. (Figure 1: the TpB model as applied here. Attitudes, one's positive or negative evaluation of indiscriminate antibiotic use; subjective norm, one's perception of the social expectations of indiscriminate antibiotic use; and perceived behavioural control, one's beliefs regarding the ease or difficulty in accessing antibiotics, feed behavioural intention, the intention to indiscriminately use antibiotics, which in turn predicts behaviour, the indiscriminate use of antibiotics.) Questionnaire development A three-phase process was employed to develop the Antibiotic Use Questionnaire (AUQ) utilised within this study. Phases included: a literature review and item generation; expert panel review; and pre-test. Investigation of the AUQ was subsequently conducted through a cross-sectional, anonymous and voluntary survey. Phase one. Literature review and item generation An opening list of 43 items was drawn from a literature review of previous studies investigating consumer characteristics and self-reported antibiotic use, using search terms such as 'antibiotic use', 'AMR', and 'antibiotic use influences'.
Questions were then grouped under discrete categories, including demographics, knowledge, TpB constructs (attitude, subjective norm and PBC), and an outcome factor: self-reported antibiotic use behaviour. Items were adapted where required to suit the cultural and sociodemographic context of the target population. For example, 'Aboriginal or Torres Strait Islander' was added as an option for ancestry to reflect the Australian population. Phase two. Expert panel review The original 43-item questionnaire was examined for content validity by a panel of eight experts, organised to represent a range of fields including psychology, business and health. Questions were evaluated with respect to the extent to which, on face value, they aligned with the TpB variables (including knowledge), their repetition, clarity and cultural relevance. Additional questions were generated in under-represented areas, such as social norms, and redundant questions were removed. A subset of six items from the Marlowe-Crowne Social Desirability Scale [49] was randomly selected and incorporated into the questionnaire, to allow for measurement of the honesty and reliability of respondent answers. Following agreement by the expert panel on the established questions, selected items (excluding demographics) were randomised using Stat Trek: Random Number Generator (no date) to mitigate response bias. Review of the questionnaire involved 10 iterations with the expert panel and yielded an initial (pre-assessment) questionnaire of 60 items, organised as per Table 1, with questions requiring multiple choice, dichotomous or Likert-scaled responses. Phase three. Pre-test Before administration of the questionnaire to participants, a group of 10 participants pre-tested the survey. Feedback was gathered on time to completion, question clarity, perceived relevance, and face validity. Minor adjustments were made based on feedback received. Data collection and ethics The finalised questionnaire was distributed via an anonymous cross-sectional survey conducted between July and August 2018. Tacit consent was obtained, inferred through anonymous completion and return of the questionnaire. Survey Monkey was used to create a soft copy version of the questionnaire, with the link being distributed via non-moderated e-mail services and social media, predominantly incorporating snowballing techniques. Hard copy questionnaires were also distributed, mainly to participants who were unable to be reached via e-mail or social media, and for the purposes of purposive sampling after a mid-data-collection review identified disparities in demographic representation. All hard copy questionnaires were completed in the presence of a researcher. Purposive sampling took place in popular public spaces, including a local shopping mall and retirement club, with a desire to achieve balance from older age groups, males, and those of lower socioeconomic status and education level. Hard copies were returned directly to the researcher after completion, and manually entered into a Microsoft Office Excel, Version 10, spreadsheet. The current study was approved by the Health and Medical Human Research Ethics Committee (Joint University of Wollongong and Illawarra Shoalhaven Local Health District, 2018/330). Sample All recipients of the questionnaire, aged 18 years and over, were invited to partake in the research. Completed questionnaires were received from 373 participants.
The majority of participants were recruited via online platforms (91%, n = 338), with the remaining participants recruited in person (9%, n = 34). Eighty participants (21%) were excluded from the analysis, due to concerns regarding the accuracy and reliability of their responses, after scoring equal to or higher than five on social desirability. Data from the remaining 293 (79%) participants were analysed. Data analysis Data analysis was carried out using MATLAB R2018a (The MathWorks Inc.). All completed questionnaires were screened for missing data, outliers and coding errors. Participants answered at least 83% of the questions (mean = 99.24%, standard deviation (SD) = 3.23). Descriptive statistics for participant demographics were reported, expressed as raw numbers and percentages. An orthogonal principal component analysis with varimax rotation (MATLAB factoran function) was utilised to assess the factor loadings of the questionnaire items for the four dimensions of the TpB and the covariate, knowledge. Cronbach's alpha was used to determine the internal reliability of items relating to each of the five factors. Furthermore, linear mixed-effects modelling was employed to study the influence of the TpB factors on intended antibiotic use behaviour. The moderation from knowledge on the link between attitude/belief and intended behaviour was modelled as an interaction term between knowledge and attitude/belief. The fixed effects of the model included the interaction between attitudes and beliefs and knowledge, PBC, social norms, age, gender, education, whether the participants had children, were health trained, had a health worker in the family, frequency of antibiotic consumption, financial security, and most recent antibiotic consumption. The random effect was the participants. Results Participant demographics are reported in Table 2. The mean score for social desirability (range 0 to 6) was 2.69 (± 1.16). Of the 80 (21%) participants who were excluded due to a high social desirability score (5 or above), 65% (n = 52) were female and 55% (n = 44) had a bachelor degree qualification or higher. 59% (n = 47) of these respondents were aged between 18 and 44 years, with the 18-24 year category the largest (n = 25). Whilst the majority of excluded respondents were not personally trained in a health-related field (68%, n = 54), 58% (n = 46) had a family member or friend with a health-related occupation. 55% (n = 44) of excluded respondents had not taken an antibiotic within the past year. Of the remaining 293 participants whose data were included, the majority (74%, n = 217) identified themselves as infrequent antibiotic users, consuming antibiotics once a year or less, and less than a third of respondents (30%, n = 87) had consumed antibiotics within the past 6 months. Consistent with previous research, although 83% (n = 242) of participants correctly identified that antibiotics should be used for the treatment of bacterial infections, 25% (n = 74) of these respondents also incorrectly indicated that they work for viral infections and/or fungal infections. Factor loadings of the questionnaire items for the three variables of the TpB, the outcome variable (behaviour), and the covariate, knowledge, are reported in Table 3. All five variables encompassed four items, with the exception of social norms, which included two items, yielding a final 18-item questionnaire.
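As an illustration of how this analysis sequence (social-desirability screening, factor analysis with varimax rotation, internal reliability, and the mixed-effects model) could be reproduced, the following is a hedged Python sketch using common open-source equivalents of the MATLAB routines the authors name. All column names (e.g. 'social_desirability', 'att1'..'att4', 'participant_id') and the data file are hypothetical.

# Sketch of the analysis pipeline described above, using open-source
# equivalents of the MATLAB routines named by the authors. All column
# names and the input file are hypothetical.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
import statsmodels.formula.api as smf

df = pd.read_csv("auq_responses.csv")  # hypothetical data file

# 1. Exclude respondents scoring 5 or more on the six Marlowe-Crowne items.
df = df[df["social_desirability"] < 5]

# 2. Factor analysis with varimax rotation (MATLAB 'factoran' equivalent)
#    over the construct items, extracting the five factors of interest.
items = [c for c in df.columns
         if c.startswith(("att", "norm", "pbc", "know", "beh"))]
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(df[items])
print(fa.loadings_)  # item-to-factor loadings (cf. Table 3)

# 3. Internal reliability per construct (Cronbach's alpha), e.g. attitudes:
alpha, ci = pg.cronbach_alpha(data=df[["att1", "att2", "att3", "att4"]])

# 4. Linear mixed-effects model: attitude x knowledge interaction plus the
#    other fixed effects; participant as the random effect (note: with one
#    row per participant this behaves much like an ordinary regression).
model = smf.mixedlm(
    "behaviour ~ attitude * knowledge + pbc + norms + age + gender"
    " + education + children + health_trained + health_worker_family"
    " + ab_frequency + financial_security + last_ab_use",
    data=df, groups=df["participant_id"],
).fit()
print(model.summary())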
A linear mixed-effects model was run with the following fixed effects: the interaction between attitudes and beliefs and knowledge; PBC; social norms; age; gender; education; whether the participants had children, were health trained, had a health worker in the family; frequency of antibiotic consumption; financial security; and most recent antibiotic consumption. The random effect was the participants. Fixed effects coefficients can be found in Table 4. For this model the Akaike Information Criterion (AIC) was 1033.5 and the Bayesian Information Criterion (BIC) was 1088.7. The ordinary R-squared was 0.7071 and the adjusted R-squared was 0.6945, indicating that this model explains around 70% of the variance in self-reported antibiotic misuse. The proportion of variance explained by the model decreases when knowledge is not included (ordinary R-squared: 0.6950; adjusted R-squared: 0.6820). The modest decrease could be explained by the other fixed effects partially capturing the variance that was explained by knowledge, indicating some overlap between these constructs, as illustrated by the factor loadings in the factor analysis. The fixed effect variables PBC (β = −.22, p = 0.001), social norms (β = .24, p = 0.047), the interaction between attitudes and beliefs and knowledge (β = .09, p < 0.001), and the presence of a healthcare worker in the family (β = .35, p = 0.039) were all significant predictors of antibiotic use behaviour. All other predictors tested did not produce a significant relationship with the outcome variable. An alternative model was tested with the same variables except for the moderator knowledge; its AIC and BIC were slightly higher (1040.9 and 1096.1 respectively) and it explained the smaller proportion of variance reported above. Scores for rational antibiotic use were calculated using the factor loading coefficients from the questionnaire items which loaded onto the construct of behaviour. Calculations were made as follows: 0.561 × (Behaviour Item 5) + 0.83 × (Behaviour Item 6) − 0.544 × (Behaviour Item 13) + 0.602 × (Behaviour Item 14). Scores were subsequently normalised so that 0 was the minimum score and 10 the maximum score. High scores reflect rational antibiotic use, whilst low scores reflect less rational behaviour. Figure 2 outlines scores of rational antibiotic use for all participants. Discussion The aim of the present study was to develop and investigate a novel quantitative measure, modelled on the TpB. The study sought to assess the factors influencing community antibiotic use and misuse, including: TpB variables (attitudes and beliefs, subjective norm and PBC); knowledge; and key demographic characteristics (such as age, gender, education level, financial status, the presence of offspring and personal health-related field training). The factor analysis identified items corresponding to the three variables of the TpB, the outcome variable (behaviour), and the covariate knowledge. The selected items demonstrated good psychometric properties in terms of internal reliability and convergent and discriminant validity. The internal reliability values are particularly encouraging considering the small number of items and the fact that, when Cronbach's alpha is used as a measure of internal consistency, the greater the number of items in the pool, the greater the chance of obtaining a high value indicating internal consistency [51].
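Returning to the behaviour-score computation reported in the Results above, the published loading coefficients translate directly into a few lines of code. The sketch below applies them and rescales to the stated 0-10 range; deriving the rescaling constants from the sample itself is our assumption, since the authors do not state them.

import numpy as np

# Weighted behaviour score using the factor-loading coefficients reported
# above for Behaviour Items 5, 6, 13 and 14.
def raw_behaviour_score(item5, item6, item13, item14):
    return 0.561 * item5 + 0.83 * item6 - 0.544 * item13 + 0.602 * item14

def normalise_scores(raw_scores):
    # Rescale so the minimum observed score maps to 0 and the maximum to 10.
    # (The rescaling constants are not published; computing them from the
    # sample is our assumption.)
    raw = np.asarray(raw_scores, dtype=float)
    return 10.0 * (raw - raw.min()) / (raw.max() - raw.min())

# Example with hypothetical 5-point Likert responses for three respondents:
raw = [raw_behaviour_score(5, 5, 1, 5),   # most rational profile
       raw_behaviour_score(3, 3, 3, 3),
       raw_behaviour_score(1, 1, 5, 1)]   # least rational profile
print(normalise_scores(raw))  # high scores reflect rational antibiotic use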
A linear mixed-effects analysis revealed that intended antibiotic use behaviour can be significantly explained by each of the TpB variables (PBC, social norms, and attitudes and beliefs moderated by knowledge) and that the TpB constructs predicted 70% of the variance in antibiotic use and misuse. This amount of predicted variance exceeds that of past literature using the TpB model to predict health-related behaviours [39][40][41]48], and supports the use of the TpB model in this context. The presence of a healthcare worker in the family was also a significant predictor of antibiotic use behaviour. Contrary to previous research, demographic variables such as level of education did not significantly predict intention to seek and use antibiotics in this study [52]. To our knowledge, this is the first sufficiently validated measure which assesses factors influencing antibiotic use and misuse within a general population, and the first application of the TpB to the prediction of antibiotic use behaviour. The measure provides an opportunity for targeted intervention programs to reduce antibiotic misuse in the general community, and may inform public policy decisions. For example, while O'Neill has observed that greater accessibility to antibiotics is associated with an increase in indiscriminate use, there are limited empirical studies exploring this relationship [3]. Our finding that PBC is associated with antibiotic use behaviours confirms the observations of O'Neill [3]. Our study also contributes to an understanding of the role of knowledge, and hence the value of public educational programs, on antibiotic use intentions. Previous research into this relationship is largely contradictory; whilst some studies indicate a relationship between lesser antibiotic knowledge and more indiscriminate antibiotic use [13,25], others relate greater knowledge to more indiscriminate use [53], and some do not identify a relationship between the two at all [54][55][56]. In our study we observed an interaction effect between knowledge and attitudes/beliefs. This finding may provide evidence as to why information-intensive or education-driven interventions alone are not entirely efficacious and do not maintain long-lasting results [57,58], and why a multi-factorial approach, targeting the range of motivating factors which contribute to antibiotic use (i.e. attitudes and beliefs, behavioural control and knowledge), is likely required. As indicated by Edgar, Boyd and Palamè, behaviour change is unlikely unless motivating factors, values and subjective norm cumulatively encourage that change [57]. Consistent with the TpB, subjective norm contributed to the prediction of antibiotic use behaviours, suggesting that antibiotic use behaviours are influenced by peers, family and community/cultural factors. This is a complex relationship, given that in our study we found that the presence of family or friends working in a health-related field is associated with indiscriminate antibiotic use. Scaioli et al. indicated a similar finding, whereby those with a family member working in a health-related field were more likely to use non-prescribed antibiotics and keep left-over antibiotics [56]. It is likely that this relationship is associated with access to antibiotics (PBC) and requires further investigation. The remaining demographic variables did not significantly influence self-reported antibiotic-use behaviour.
These results are not entirely surprising, given that the current literature is contradictory when examining the relationship between demographic variables and antibiotic use. Conflicting findings in the context of antibiotic-use behaviour are apparent for education level [53,59,60]; income and socioeconomic status [29,53,61]; gender [53,59]; and age [62,63]. Although the reason for this inconsistency is currently unclear, it may be hypothesised that the differing geographic locations, healthcare regulations and policies of the countries where these studies are based are contributing factors, although future research is needed to investigate this. The present study, while providing a novel and important lens through which to examine the AMR dilemma, has several limitations. First, the confusing results associated with the contribution of subjective norms require deeper investigation. In this study the subjective norm construct contained only two items and had the weakest internal consistency. Further research is required to develop additional items targeting subjective norm, and to enable analysis of the relationship between PBC and subjective norm. Secondly, the sample size and representation are modest. This might explain why there was not a greater relationship between demographic variables and predictors of antibiotic use behaviours. A replication of this study with a larger and more representative sample would add to the value of the AUQ. Related to this, the questionnaire was distributed to an Australian sample and may not generalise to other nations or cultures. Further research is needed to determine whether the relationships identified in this study are replicated in other international samples. Finally, the utility of the study in informing intervention programs needs to be tested. This relates in particular to the verification of antibiotic use intent, and actual antibiotic use behaviours. Application of the AUQ to a population cohort, and an informed intervention based on the identified drivers of antibiotic use behaviours, is required. Conclusion This study successfully developed and validated an original, theory-driven tool which assesses factors influencing community antibiotic use and misuse. Notwithstanding the above-mentioned limitations, the research highlights the pervasive influence that people-driven factors have upon antibiotic-use behaviours, likely contributing to the growth of AMR on a widespread scale. Furthermore, these findings have implications for the development of sustainable, multi-dimensional interventions that reflect the multitude of factors influencing antibiotic misuse. While AMR is a multifactorial problem requiring intervention at many levels of policy, drug discovery, and molecular biology, the role of end-point users (consumers) is a vital component of the worldwide effort to address AMR.
v3-fos-license
2018-11-01T20:38:12.828Z
2018-10-16T00:00:00.000
53523497
{ "extfieldsofstudy": [ "Medicine", "Business" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1751-7915.13326", "pdf_hash": "73bea51100e9ee61ecaaeda69e1fe9ced4c16157", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41538", "s2fieldsofstudy": [ "Business", "Biology", "Environmental Science" ], "sha1": "73bea51100e9ee61ecaaeda69e1fe9ced4c16157", "year": 2018 }
pes2o/s2orc
Synthetic microbiology as a source of new enterprises and job creation: a Mediterranean perspective The European biotech industry is proving to be robust against the challenges arising during the last years. Despite geopolitical complexity, regulatory uncertainty and the growing potential of the Asian bio-based industry, European biotechs are still growing in number and revenues, and even SMEs have been able to 'stay in the course' by developing new business models and keeping R&D as a pillar of competitiveness. The UK and Germany host the most powerful hubs of biotech companies. In 2017, nearly 30% of all of Europe's venture capital went to UK-based biotechs, whereas the German biotech hub led the league in terms of the number of dedicated biotechnology firms, with more than 500 SMEs (Biotechnology Report 2017). Roughly half of the European biotech companies are settled in these major hubs, with Switzerland and Sweden being also remarkable hotspots for the biotechnology-based industry. Although it is yet to be improved, the scenario in Southern Europe is far from discouraging. French companies such as GenSight and Pharnext (both developing innovative treatments for ophthalmic and neurology diseases) are flagship examples of top-ranking European biotechs in terms of capitalization and revenues (Van Beneden, 2018). Beyond particular cases, biotechnology hubs are quickly developing in France, Italy and Spain (Fig. 1A). In fact, more than 120 companies are settled in Madrid, 141 in Lombardy and more than 150 in the biotechnology hubs of Paris and Barcelona. While Northern and Central Europe biotechs largely focus on drug discovery and manufacturing of biomaterials, there is a growing number of Southern Europe companies with business models centred in the development of new products for the agro-food sector, and in the so-called 'platform technologies' (i.e. DNA sequencing, multiomics or bioinformatic data mining; Allansdottir et al., 2002). The rise of industrial biotechnology in Southern Europe is also measurable by the increasing number of R+D projects and funds granted by the EC throughout the last decade (Fig. 1B, C respectively). This growth is connected with the conceptual and technical progresses that have taken place in biological engineering in the last years. Indeed, it would be wrong to merely link the success of biotechnological companies with the public and private funds gathered, particularly in the case of synthetic biology. In fact, the standardization of biotechnology is one of the main goals of synthetic biology, which aims at the design of biological systems from an engineering perspective (Khalil and Collins, 2010). If achieved, biological standardization is expected to boost the development of biotechnological products and processes, by improving the predictability and robustness of their underlying biological circuits. However, there are important cultural and technical issues that certainly hamper the development of synthetic biology in Europe. One of those is a lack of trust in the analogy between biology and engineering that lies at the very core of synthetic biology. Indeed, the field has classically identified cells as living machines, but there are solid reasons to cast doubts on the exactness of such a metaphor (Porcar and Peretó, 2016).
It has to be stressed that successes in most of the biotechnological projects considered as synthetic biology have in fact been the consequence of trial-and-error strategies rather than the result of a generalized standardization in biology. The difficulties biological standardization faces are the intraspecific variation in heterologous protein production (i.e. variation depending on the strain of E. coli being transformed); the cell-to-cell variation in output signal intensity or the non-orthogonal effects of simple biological circuits on each other, as we have previously reported (Vilanova et al., 2015). All these limitations can be summarized in a simple way: true standards in synthetic microbiology do not exist to this date. Both the potential and limitations of standardization in SB are exemplified by the international Genetically Engineered Machine (iGEM) competition, in which students worldwide present synthetic biology projects based on organisms engineered from a toolbox of BioBricks™. However, BioBricks™ have limitations as universal building blocks in SB (Vilanova and Porcar, 2014; Galdzicki et al., 2014). A clear definition of the notion of standard in biology, with all its limitations and possibilities, is thus still much needed, and it would certainly contribute towards a richer landscape of synthetic biology enterprises, particularly in Southern Europe. A new paradigm of biological standardization is thus central for synthetic biology in Europe to consolidate. In this scenario, one of us (MP) is coordinating the EU-funded CSA on synthetic biology BIOROBOOST, whose main challenge is to translate a core concept of engineering disciplines, standardization, into the biological realm. The proposal encompasses research groups and institutions worldwide as well as enterprises (two of which are from Southern Europe) working on metrology, cloning techniques, metabolic engineering, genome reduction, etc. BIOROBOOST aims at further developing standards in biology in a holistic, systematic way, from the biological parts to the human procedures and techniques. The project will also define a limited set of specific chassis, adapted to particular industrial and ecological niches, such as thermophilic or halophilic environments, and will gain insight into cultural (lab-to-lab procedure variation) aspects. Indeed, standardization is not only about biological parts, but also about the way techniques, protocols and even companies' 'cultures' are developed.
The 'cultural Achilles heel' of standardization in biology has been very poorly explored to date, but is brightly exemplified by Nobel Prize laureate Murray Gell-Mann, who stated that 'a scientist would rather use someone else's toothbrush than another scientist's nomenclature'. This failing must be rectified as soon as possible since reluctance to accept standards is a veritable obstacle for their implementation. In summary, Mediterranean Europe holds great promise as a new pole for synthetic biology, but this potential will only be met provided that the necessary conditions for it to flourish are present. The main condition is continued public and private support for enterprises and academia involved in central issues, such as biological standardization, that will determine the fate of synthetic biology. Conflict of interest None declared.
v3-fos-license
2022-09-30T15:21:26.812Z
2022-09-26T00:00:00.000
252604083
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4425/13/10/1726/pdf?version=1664186098", "pdf_hash": "341cc421c5fa88808583bf1fa0951a9a52ef3c47", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41539", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "1adfa76dceab2b7353ce784e161e285b3becf29c", "year": 2022 }
pes2o/s2orc
Gene Panel Sequencing Identifies a Novel RYR1 p.Ser2300Pro Variant as Candidate for Malignant Hyperthermia with Multi-Minicore Myopathy Malignant hyperthermia (MH), a rare autosomal dominant pharmacogenetic disorder of skeletal muscle calcium regulation, is triggered by sevoflurane in susceptible individuals. We report a Korean patient having MH with multi-minicore myopathy, functionally supported by RYR1-mediated intracellular Ca2+ release testing in B lymphocytes. A 14-year-old boy was admitted for the evaluation of progressive torticollis accompanied by cervicothoracic scoliosis. During the preoperative drape of the patient for the release of the sternocleidomastoid muscle under general anesthesia, his wrist and ankle were observed to have severe flexion contracture. The body temperature was 37.1 °C. To treat MH, the patient was administered a bolus of dantrolene intravenously (1.5 mg/kg) and sodium bicarbonate. After a few minutes, muscle rigidity, tachycardia, and EtCO2 all resolved. Next-generation panel sequencing for hereditary myopathy identified a novel RYR1 heterozygous missense variant (NM_000540.2: c.6898T > C; p.Ser2300Pro), which mapped to the MH2 domain of the protein, a hot spot for MH mutations. Ex vivo RYR1-mediated intracellular Ca2+ release testing in B lymphocytes showed hypersensitive Ca2+ responses to isoflurane and caffeine, resulting in an abnormal Ca2+ release only in the proband, not in his family members. Our findings expand the clinical and pathological spectra of information associated with MH with multi-minicore myopathy. Introduction Malignant hyperthermia (MH, OMIM #145600), a rare autosomal dominant pharmacogenetic disorder of skeletal muscle calcium regulation, is triggered by sevoflurane in susceptible individuals [1,2]. Triggering substances can cause the uncontrolled release of calcium from the sarcoplasmic reticulum. They can promote the entry of extracellular calcium into the myoplasm, causing the contracture of skeletal muscles, glycogenolysis, and increased cellular metabolism, resulting in the production of heat and excess lactate. During an episode of MH, clinical manifestations occur as a result of the unregulated accumulation of myoplasmic calcium, which can lead to sustained muscular contraction and breakdown (rhabdomyolysis), cellular hypermetabolism, anaerobic metabolism, acidosis, and their sequelae [3]. Without proper and prompt treatment with dantrolene sodium, mortality is extremely high [4,5]. The diagnosis of MH is established with in vitro muscle contracture testing, by measuring the contracture responses of biopsied muscle samples to graded concentrations of caffeine or halothane [6]. The diagnosis of MH can also be established by identifying a pathogenic variant in the RYR1, CACNA1S, or STAC3 genes in molecular genetic testing [7]. The RYR1 gene encodes the ryanodine receptor, the skeletal muscle Ca2+ release channel, which is activated by the voltage-gated dihydropyridine receptor. The CACNA1S gene encodes the voltage-gated calcium channel α-subunit Cav1.1 and has an important role in Ca2+-mediated excitation-contraction coupling [8]. Mutations in the RYR1 and CACNA1S genes account for 50 to 70% of MH cases [7]. However, according to the European Malignant Hyperthermia Group (https://emhg.org/genetics/, accessed on 16 January 2022), currently only 48 reported RYR1 mutations and two CACNA1S mutations have proven pathogenic according to those stringent criteria [9,10].
To date, 16 cases of MH with RYR1 mutations in 13 unrelated Korean families have been reported; however, a functional assay for the analysis of variants of uncertain significance in RYR1 was not performed [11][12][13][14]. We report a Korean patient with MH and multi-minicore myopathy, functionally supported by ex vivo RYR1-mediated intracellular Ca2+ release testing in B lymphocytes. Muscle Biopsy After washing and draping the proximal 1/3 of the anterolateral tibia, a local anesthetic (lidocaine 1%) was injected into the skin area, taking care not to infiltrate into the muscle. A skin incision of about 3 cm was made with a pointed scalpel blade, and the skin and subcutaneous fat tissue on both sides of the incision were retracted. The fascia of the tibialis anterior (TA) muscle was incised, and 1 cm × 0.5 cm of the TA muscle was biopsied using Metzenbaum scissors. After irrigation, the skin was sutured appropriately, and a dressing was applied. The fresh TA muscle specimen was freeze-fixed using isopentane and liquid nitrogen within 30 min. Succinate Dehydrogenase Staining The tibialis anterior muscle was placed in a 30% sucrose solution and embedded in liquid nitrogen-cooled isopentane. Frozen sections (8-µm thick) were incubated in 0.2 M sodium phosphate-buffered solution (pH 7.6) containing 0.6 mM nitro blue tetrazolium and 50 mM sodium succinate (Sigma-Aldrich, St. Louis, MO, USA) for 20 min at 37 °C. The slides were washed with diH2O and mounted with aqueous mounting media. Gene Panel Sequencing To determine the potential genetic cause of the suspected MH in our proband, his genomic DNA was analyzed by gene panel sequencing using a Celemics G-Mendeliome Hereditary Myopathy Panel (Celemics, Seoul, Korea) (Table S1 in Supplementary Materials). Paired-end (PE) sequencing was conducted using a NextSeq500 instrument (Illumina, San Diego, CA, USA) with a high output flow cell and 300 PE cycles (150 × 2) at the Green Cross Genome (Yongin, Korea) to detect the variant, given the suspicion of a hereditary myopathy. Base-calling, alignment, variant calling, annotation, and quality control reporting were performed using the Genome Analysis Tool Kit best-practice pipeline workflow for germline short variant discovery (https://gatk.broadinstitute.org/hc/en-us/, accessed on 13 May 2021). DNA sequencing reads were aligned to the human genome reference assembly GRCh38 (hg38) using the Burrows-Wheeler Aligner (BWA). Gene panel sequencing generated a yield of 226,451,035 target reads in the proband's sample, with sequence quality estimated along all sequences. The mean read depth was 195x. The percentage of bases above a read depth of 30x was 99.8%. The interpretation of sequence variants was manually reviewed by medical laboratory geneticists according to the standards and guidelines from the Joint Consensus Recommendation of the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) for classifying pathogenic variants [15]. In particular, the pathogenic effect of missense variants was estimated using in silico prediction tools. Sanger Sequencing The presence of the RYR1 variant was confirmed with bidirectional Sanger sequencing using the primer pair 5′-aggtctcaagctcctgttca-3′ and 5′-tcgagggaggtgtgtgac-3′ on a 3730xl DNA Analyzer (Applied Biosystems, Foster City, CA, USA). Segregation analysis was also performed to determine carrier status for family members using Sanger sequencing.
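The coverage statistics quoted here (mean read depth 195x; 99.8% of bases at 30x or more) are the kind of QC numbers that can be recomputed from a per-base depth table such as the output of samtools depth. The following Python sketch assumes a three-column depth file; it mirrors the quoted statistics but is not the authors' actual pipeline.

# Recompute panel coverage QC metrics (mean depth, % bases >= 30x) from a
# per-base depth file, e.g. produced by `samtools depth -a panel.bam`.
# This mirrors the statistics quoted above but is not the authors' pipeline.
import sys

def coverage_qc(depth_file, threshold=30):
    total_bases = 0
    depth_sum = 0
    bases_at_threshold = 0
    with open(depth_file) as fh:
        for line in fh:
            # samtools depth output: chromosome, 1-based position, depth
            _, _, depth = line.rstrip("\n").split("\t")
            d = int(depth)
            total_bases += 1
            depth_sum += d
            if d >= threshold:
                bases_at_threshold += 1
    mean_depth = depth_sum / total_bases
    pct = 100.0 * bases_at_threshold / total_bases
    return mean_depth, pct

if __name__ == "__main__":
    mean_depth, pct30 = coverage_qc(sys.argv[1])
    print(f"mean depth: {mean_depth:.0f}x; bases >= 30x: {pct30:.1f}%")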
Ex Vivo RYR1-Mediated Intracellular Ca2+ Release Testing in B Lymphocytes A volume of 10 mL of fresh peripheral blood was drawn from the proband and his family members. Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Hypaque density gradient centrifugation. For infection with Epstein-Barr virus, PBMCs were exposed to supernatants of the B95.8 cell line in the presence of interleukin (IL)-6 and cyclosporin A, according to standard procedures. Cells were cultured in RPMI medium supplemented with 2 mM L-glutamine, 10% fetal bovine serum (FBS), and 100 units of streptomycin and penicillin. EBV-immortalized B cells were observed after 2 to 3 weeks. The cells were then passaged in standard RPMI 1640 medium containing 20% fetal bovine serum, without any supplement change, every 3 to 4 days [17,18]. The measurement of the RyR1-mediated intracellular Ca2+ ([Ca2+]i) release response to isoflurane and caffeine can differentiate between MH-susceptible individuals and normal controls [19]. Sarcoplasmic reticulum (SR) Ca2+ release was measured by a previously described method, with modifications [21]. The kinetics of Ca2+ release was monitored under the standard condition at 25 °C in a medium containing 20 mM MOPS-Tris (pH 6.8), 5 mM MgCl2, 5 mM sodium oxalate and 20 nM Fluo-2. The final concentration of SR vesicles was maintained at 200 µg/mL for all experiments. Fluorescence was recorded in a 1 cm cuvette with continuous magnetic stirring, using a Photon Technology International (PTI) spectrofluorometer. Simultaneous recordings were obtained at 0.85 Hz, and data were collected and analyzed with the PTI computer interface. Case Presentation A 14-year-old boy was admitted to Jeonbuk National University Hospital (Jeonju, Korea) for the evaluation and orthopedic surgical management of progressive torticollis accompanied by cervicothoracic scoliosis. He was the first child of healthy, non-consanguineous Korean parents. There was no personal or family history of problems with anesthesia, intolerance to exercise or heat, neuromuscular disorders, or drug allergies. On physical examination, his head was tilted toward the left, with shoulder elevation, facial asymmetry, sternocleidomastoid muscle (SCM) tightness (Figure 1a,b), and uncorrected muscular torticollis. Although elbow flexion and extension and ankle dorsiflexion were graded 4+ on manual muscle testing, there were no gait abnormalities or sensory problems. Computed tomography (CT) and magnetic resonance imaging (MRI) showed no abnormal findings in the spine and scapula (Figure 1c,d). The patient was considered to have neglected congenital muscular torticollis. We planned surgical intervention for the release of the SCM muscle under general anesthesia. The preoperative evaluation showed no abnormal findings. The patient was induced with thiopental sodium. General anesthesia was maintained with sevoflurane. During the preoperative drape of the patient, his wrist and ankle were observed to have severe flexion contracture. The EtCO2 increased to 74 mm Hg at that time. The heart rate was 160 beats/min. Arterial blood gas (ABG) analysis showed lactic acidosis (lactate = 4.1, pH = 7.16) with hypercapnia (pCO2 = 73 mm Hg). The body temperature was 37.1 °C. Based on the patient's symptoms, we suspected MH. Thus, we did not perform surgery and discontinued the sevoflurane. For the treatment of MH, the patient was administered a bolus of dantrolene intravenously (IV) (1.5 mg/kg) and sodium bicarbonate.
After a few minutes, muscle rigidity, tachycardia, and EtCO2 all resolved. Laboratory tests showed increased creatine kinase (CK) (17,481 IU/L), lactate dehydrogenase (LD) (1020 IU/L), and aldolase (89.3 U/L) levels. Although generalized weakness and muscle soreness were observed for up to two days afterward, the patient made a full recovery and was discharged to his home. Based on the MH clinical grading scale [22], the proband scored 48 points: generalized muscle rigidity, 15 points; muscle breakdown with creatine kinase (CK) > 10,000 units/L, 15 points; respiratory acidosis with end-tidal CO2 > 55 mmHg and PaCO2 > 60 mmHg, 15 points; and cardiac involvement with sinus or ventricular tachycardia, 3 points. Thus, the MH rank of the proband was 5, with a "very likely" probability. Results To verify the association of MH with myopathy, electrodiagnostic testing was performed on the proband. Motor and sensory nerve conduction velocities were normal in the extremities.
However, the electromyographic study showed abnormal spontaneous activity in all extremities, and the motor unit analysis showed low amplitude and short duration in all extremities. To confirm MH-related myopathy, a muscle biopsy was performed under local anesthesia. Succinate dehydrogenase (SDH) staining of the muscle biopsy from the proband (II-1 in Figure 2a) demonstrated 5 or more eccentric multi-minicores in several fibers of various sizes and numbers, showing foci of decreased or absent enzymatic activity (Figure 2b). After excluding variants with a population allele frequency >0.001 in gnomAD, heterozygous missense variants of three different genes were identified as candidate genetic causes of hereditary myopathy by gene panel sequencing in the proband. Among them, the RYR1 heterozygous missense variant c.6898T > C/p.Ser2300Pro (reference transcript ID of RYR1: NM_000540.2) was the best candidate as the cause of autosomal dominant MH. Because the proband's parents and his younger brother presented with no clinical symptoms associated with MH, genetic counseling and segregation analysis were performed to identify the genetic cause of MH. As a result, Sanger sequencing confirmed the genetic origin of the RYR1 variant as de novo (Figure 2c). In addition, the GAA and GNE heterozygous variants were excluded because their clinical features and inheritance patterns did not match the patient (Table 1). Paternity and kinship analysis was conducted using a short tandem repeat (STR) multiplex assay (AmpFLSTR Identifiler; Applied Biosystems), and STR analysis confirmed the biological association between the proband and his parents. Multiple lines of computational evidence support the deleterious effect of this rare RYR1 variant: it was predicted to be disease-causing by MutationTaster and REVEL (score of 0.725), and possibly damaging according to PolyPhen-2 (score of 0.586). In addition, the Ser2300 residue was conserved across vertebrate species, with the exception of the Canis, Bos, Monodelphis, Oryzias, Tetraodon, and Takifugu genera (Figure 2d). The predicted conservation scores of this rare variant were phyloP100 with a value of 0.67, SiPhy 29 way with a value of 8.435, and GERP with a value of 4.11 (deleterious cutoffs: PhyloP > 1.6; SiPhy > 12.17; GERP > 2). The variant was absent from general population and Korean ethnic population databases. Thus, the novel RYR1 p.Ser2300Pro variant was presumptively classified as PM2 (absent from controls in the Exome Sequencing Project, 1000 Genomes or ExAC) by the original ACMG-AMP criteria; however, PP3 (multiple lines of computational evidence support a deleterious effect on the gene or gene product) was not applied, because of the conflicting results of the in silico computational analysis.
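The deleteriousness cutoffs quoted in parentheses make the conservation assessment mechanical, as the toy Python check below shows: plugging in the reported scores reproduces the mixed in silico picture that prevented PP3 from being applied. The dictionary structure is ours, not from the paper.

# Toy check of the conservation-score cutoffs quoted above
# (deleterious if PhyloP > 1.6, SiPhy > 12.17, GERP > 2).
CUTOFFS = {"phyloP100": 1.6, "SiPhy": 12.17, "GERP": 2.0}

def conservation_calls(scores):
    """Return a per-tool deleterious/benign call for a variant."""
    return {tool: ("deleterious" if scores[tool] > cutoff else "benign")
            for tool, cutoff in CUTOFFS.items()}

# Reported scores for RYR1 p.Ser2300Pro:
calls = conservation_calls({"phyloP100": 0.67, "SiPhy": 8.435, "GERP": 4.11})
print(calls)
# {'phyloP100': 'benign', 'SiPhy': 'benign', 'GERP': 'deleterious'}
# The tools disagree, which is why PP3 (consistent computational support)
# could not be applied despite the MutationTaster/REVEL/PolyPhen-2 predictions.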
To prove the functional effect of this variant, RYR1-mediated [Ca2+]i release testing in B lymphocytes was performed. B lymphocytes isolated from the proband showed hypersensitive Ca2+ responses to isoflurane and caffeine, resulting in an abnormal Ca2+ release only in the proband, but not in his family members. As a result, both independent ex vivo studies show the release of Ca2+ in response to the RYR1 agonist (Figure 3). The ex vivo functional study using B lymphocytes supports the damaging effect of this rare RYR1 variant, classified as PS3_Supporting by the modified ACMG-AMP criteria suggested for autosomal dominantly inherited RYR1/MH [23]. Therefore, the RYR1 p.Ser2300Pro variant seems the most likely genetic cause of the clinical manifestations associated with susceptibility to MH in the proband. Discussion MH is a rare anesthetic emergency. It has been estimated to occur in between 1:10,000 and 1:150,000 general anesthetics [24].
One reason that diagnosis may be delayed is if the anesthetist incorrectly assumes that a history of uneventful anesthesia precludes the possibility that the patient is at risk of developing MH [25]. Any of the potent inhalational anesthetics has the potential to trigger an MH reaction; indeed, the number of cases triggered by each of the inhalational anesthetics reflects the overall usage of that particular agent [5]. Furthermore, the patient and their family should be informed about the suspected diagnosis of MH before discharge from the hospital. They should be specifically advised to warn all contactable blood relatives of the patient about the risk of MH and the need to mention this should any member of the family require admission to hospital. Each member of the family should be advised that this information applies to them until it is proved otherwise using definitive diagnostic tests [25]. This study showed some differences in genetic and pathological aspects compared to previous studies. RyR1 mediates the release of Ca2+ from intracellular pools in response to nerve stimulation and plays a crucial role in excitation-contraction coupling [26]. Classically, three hot spots are considered in the RYR1 sequence: the MH1 and MH2 domains in the large hydrophilic domain, and MH3 in the C-terminal hydrophobic domain. Mutations in the recessive central core and multi-minicore myopathies are more extensively distributed along the RYR1 sequence, whereas most heterozygous dominant mutations in central core myopathy map to the C-terminal domain [27]. However, in this study, the novel p.Ser2300Pro variant substituted a conserved serine residue that mapped to the MH2 domain of the protein, a hot spot for MH mutations [28,29], despite being a heterozygous dominant mutation. The ex vivo functional study using B lymphocytes, supporting the deleterious effect of this rare RYR1 variant, contributes to an appropriate classification of variant pathogenicity as PS3_Supporting under the modified ACMG-AMP criteria, upgraded from PM2 under the original ACMG-AMP criteria. Furthermore, although MH has previously been associated with multi-minicore disease, the cases identified as multi-minicore myopathy were exclusively linked to recessive mutations [30][31][32][33] and, recently, to patients with fiber-type disproportion as their only pathological feature [34]. Interestingly, multi-minicore myopathy was observed in muscle biopsies by SDH histochemical staining in our dominant RYR1-related case. It is possible that this pathologic finding may change to a central core pattern in adulthood, because a previous report showed pathological findings that changed from multi-minicore to central core with age in RYR1-related myopathy [31]. Currently, there are two diagnostic approaches for patients potentially at increased risk of developing an MH reaction: molecular genetic testing and ex vivo muscle contracture testing, the latter comprising the in vitro contracture test (IVCT) [35] and the caffeine-halothane contracture test (CHCT) [36]. The contracture test measures the contracture response of excised skeletal muscle strips to caffeine and halothane and is regarded as the "gold standard" diagnostic test for MH susceptibility. The IVCT has a sensitivity of 100% and a specificity of 94%, whereas the CHCT has a sensitivity of 97% and a specificity of 78% [37]. The disadvantages of contracture testing are the need for specialized testing centers with trained personnel, its invasiveness, and its high cost [38].
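The sensitivity and specificity figures quoted above can be turned into post-test probabilities once a pre-test probability is assumed. The sketch below applies Bayes' rule; the 50% pre-test probability (e.g. a first-degree relative of a proband) is our illustrative assumption, not a figure from the paper.

# Post-test probability of MH susceptibility from the contracture-test
# performance figures quoted above, via Bayes' rule. The 0.5 pre-test
# probability is an assumed illustration value, not taken from the paper.
def post_test_probability(sensitivity, specificity, pre_test):
    true_pos = sensitivity * pre_test
    false_pos = (1 - specificity) * (1 - pre_test)
    return true_pos / (true_pos + false_pos)

for name, sens, spec in [("IVCT", 1.00, 0.94), ("CHCT", 0.97, 0.78)]:
    p = post_test_probability(sens, spec, pre_test=0.5)
    print(f"{name}: positive result -> {p:.1%} probability of susceptibility")
# IVCT: ~94.3%; CHCT: ~81.5%. The lower CHCT specificity costs certainty.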
The ryanodine receptor on B lymphocytes is identical to RyR1, which controls the calcium channel in skeletal muscle. When exposed to 4-chloro-m-cresol (4CmC), B lymphocyte ryanodine receptors can cause an acute increase in intracellular calcium [39]. Significantly higher intracellular calcium in B lymphocytes from MH-susceptible versus MH-normal examinees has been reported when caffeine and 4CmC were introduced [40]. In our study, Ca2+ responses to isoflurane and caffeine in B lymphocytes showed significant differences between the proband carrying the RYR1 p.Ser2300Pro variant (MH-susceptible) and his family members (MH-normal). Furthermore, molecular diagnosis revealed the deleterious effect of this rare RYR1 variant, classified as PS3_Supporting by the modified ACMG-AMP criteria. These results suggest that enhanced Ca2+ responses are associated with mutations in the RYR1 gene in MH-susceptible individuals. Although criteria for MH diagnosis by this method have not been established and further comparative studies are required, RYR1-mediated intracellular Ca2+ release testing in B lymphocytes may hold promise as an adjunct to in vitro muscle biopsy testing.

On the other hand, RYR1 and CACNA1S, the two major MH-causative genes, are large genes containing numerous exons. Mutations in these two genes account for 50-70% of known MH cases. Comprehensive analysis by traditional Sanger sequencing is challenging because it is costly, time-consuming, and labor-intensive, without even considering other possible genes in the genome [7,9,10]. Next-generation sequencing (NGS) analysis can help delineate the genetic diagnosis of MH and several MH-susceptible clinical presentations [41,42]. Moreover, the use of NGS in unselected cohorts is an important tool for understanding the prevalence and penetrance of MH susceptibility, a critical challenge in the field [43,44].

Conclusions

In conclusion, we reported a novel heterozygous RYR1 p.Ser2300Pro variant, classified as PS3_Supporting by the modified ACMG-AMP criteria, leading to MH accompanied by multi-minicore myopathy. Our findings expand the clinical and pathological spectra of information associated with MH with multi-minicore myopathy. Ex vivo functional studies such as RYR1-mediated intracellular Ca2+ release testing in B lymphocytes may support the damaging effect of rare RYR1 variants.
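To make the evidence-assignment logic used in this report concrete, here is a minimal, hypothetical sketch of how population data, in silico predictions and an ex vivo functional assay can be folded into a list of ACMG-AMP evidence tags. The function name and the simplified decision logic are ours, not the formal classifier applied in the study.

```python
# Hypothetical sketch: assigning ACMG-AMP evidence tags for a variant from
# population data and an ex vivo functional assay, as described in the text.
# Tag names follow ACMG-AMP conventions; the decision logic is simplified.

def collect_evidence(absent_from_controls: bool,
                     in_silico_concordant: bool,
                     functional_assay_abnormal: bool) -> list[str]:
    evidence = []
    if absent_from_controls:
        evidence.append("PM2")            # absent from population databases
    if in_silico_concordant:
        evidence.append("PP3")            # concordant computational predictions
    if functional_assay_abnormal:
        # Modified criteria for dominant RYR1/MH apply PS3 at supporting strength
        evidence.append("PS3_Supporting")
    return evidence

# RYR1 p.Ser2300Pro as reported: absent from controls, conflicting in silico
# predictions, abnormal Ca2+ release in proband B lymphocytes.
print(collect_evidence(True, False, True))   # -> ['PM2', 'PS3_Supporting']
```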
Improved Self-cleaning Properties of an Efficient and Easy to Scale up TiO2 Thin Films Prepared by Adsorptive Self-Assembly

Transparent titania coatings have self-cleaning and anti-reflection (AR) properties that are of great importance for minimizing the soiling effect on photovoltaic modules. In this work, TiO2 nanocolloids prepared by the polyol reduction method were successfully used as thin coating films on borosilicate glass substrates via an adsorptive self-assembly process. The nanocolloids were characterized by transmission electron microscopy and x-ray diffraction. The average particle size was around 2.6 nm. The films, which have an average thickness of 76.2 nm and a refractive index of 1.51, showed distinctive anti-soiling properties in a desert environment. The film surface topography, uniformity, wettability, thickness and refractive index were characterized using x-ray diffraction, atomic force microscopy, scanning electron microscopy, water contact angle measurements and ellipsometry. The self-cleaning properties were investigated by optical microscopy and UV-Vis spectroscopy. The optical images show a 56% reduction of the dust deposition rate on the coated surfaces compared with bare glass substrates after 7 days of soiling. The transmission optical spectra of these films, collected at normal incidence, show high anti-reflection properties, with the coated substrates having a transmission loss of less than 6% compared to bare clean glass.

The self-cleaning action of TiO2 films involves two processes: first, the decomposition of organic dirt via a photocatalytic process in the presence of ultraviolet light, and second, the spreading of water across the whole surface instead of beading up, owing to the hydrophilicity of TiO2, hence rinsing away the dust. In addition to its hydrophilic properties and high photoreactivity, titania also exhibits long-term stability, good mechanical and chemical properties, high thermal resistance, and low toxicity and cost 11. The functional characteristics of titania are related to its crystal structure and morphology, which both depend on the preparation method. In order to reduce the accumulation of contaminants on PV surfaces, self-cleaning properties should be combined with the efficient photocatalytic activity of titania. Both self-cleaning and AR properties depend on the surface nano-texture and roughness 10. The latter influences the optical properties of the thin film and depends on the preparation procedure as well 12. Since this application would be of interest for industrial scalability of the procedure as well as cost efficiency, focus was placed on utilizing a simple, controllable, cost-effective and easy-to-scale-up coating method, namely adsorptive self-assembly with thermal annealing of the film, first developed by Xi et al. 8. This low-temperature synthesis method is also suitable for preparing many other coating layers. In this work, TiO2 nanocolloids were prepared via a modified polyol reduction method and used as a self-cleaning coating for glass substrates. These coatings were characterized by x-ray diffraction, atomic force microscopy, scanning electron microscopy, ellipsometry, optical microscopy, UV-Vis spectroscopy and contact angle measurements. Moreover, the effect of soiling on the optical transmission properties of the coated glass was investigated, and the effectiveness of the anti-soiling properties was evaluated and compared to uncoated glass surfaces.

Experimental

TiO2 nanocolloid synthesis. TiO2 nanocolloids were prepared by the polyol reduction method [13][14][15].
First, 125 mL of tetraethylene glycol (TEG) (Sigma-Aldrich, ≥99%) was measured and poured into a three-neck flask. Then, 2.7 g of titanium(IV) oxysulfate TiOSO4 (Sigma-Aldrich, 99%) precursor salt was measured, introduced into the three-neck flask and stirred for 30 min to dissolve in the TEG at room temperature. TEG is used as a solvent and as a reducing agent for the precursor salt 15,16. To control the particle size of the synthesized nanoparticles, 2.9 g of sodium hydroxide pellets was dissolved in 5 mL of deionized water and the solution was gradually added to the mixture using a syringe. The mixture was mechanically stirred and heated at a rate of 6 °C min−1 from room temperature to 165 °C under reflux for 3 hours. At this stage, the TiO2 nanocolloids were synthesized (Fig. 1). These colloids were used as is for the coating of substrates as per Fig. 2. In order to obtain powder nanoparticles for x-ray diffraction analysis, the prepared colloidal suspension containing the residual polyol and the nanoparticles was left to cool to room temperature and then centrifuged at a speed of 5000 rpm for 20 min, followed by washing with pure ethanol. The centrifugation and ethanol-washing procedure was repeated five times. Finally, the obtained powders were dried in an oven (Thermolyne, Thermo Scientific) overnight at 50 °C.

Preparation of coating films. Borosilicate plate glass slides (Chemglass CG-1904-36) of 25 mm × 10 mm × 2 mm (width × length × thickness) were used as work pieces. Before deposition, the glass substrate samples were cleaned with ethanol and rinsed with deionized water. Each substrate sample was then soaked for 30 minutes in Piranha solution, prepared by mixing sulfuric acid (H2SO4, concentration of 5-6%, Merck) with hydrogen peroxide (H2O2, concentration of 30%, Merck) in a volume ratio of 2:1. Later, the samples were thoroughly rinsed with deionized water and left to dry in the oven at 70 °C for another 30 minutes. The dried glasses were then dipped in the nanocolloids for 2 hours (Fig. 2) at 20 °C and at a relative humidity of around 30%. The samples were then placed in an oven (Thermolyne, Thermo Scientific) at a temperature of 400 °C under air for 2 hours, following the procedure depicted in reference 8. Each sample was prepared in triplicate to assure reproducibility of the experimental results.

Characterization techniques. Transmission electron microscopy (TEM) analysis was carried out using an FEI TALOS X operated at 200 kV. The TEM specimens were prepared by sonicating the as-prepared TiO2 catalyst powder in ethanol. One drop of the solution was then placed onto a 200 mesh TEM copper grid coated with a lacey carbon support film (Ted Pella) and dried in air. ImageJ software was used to determine the particle size distribution. X-ray diffraction (XRD) patterns of the synthesized nanoparticles and thin films were acquired using a Rigaku Ultima IV X-ray diffractometer equipped with a fixed monochromator for data collection. The XRD was operated at 40 kV and 40 mA with divergence and scattering slits of 2/3 degree, a divergence height limiting slit of 10 mm and a receiving slit of 0.3 mm. Continuous scans of the samples were carried out over a 2θ range of 10-90° with a 0.02 degree step size and a 1 degree/minute data collection rate. The crystallite size of the TiO2 nanoparticles was estimated from XRD line broadening using the Scherrer equation 17:
D = Kλ / (FWHM × cos θ)

where D is the average dimension of the crystallites; K is the Scherrer constant, an arbitrary value that falls in the range 0.87-1.0 (usually assumed to be 0.9); λ is the wavelength of the X-rays; and FWHM is the pure diffraction broadening of the peak at half height, due to the crystallite dimensions, located at 2θ.

Scanning electron microscopy (SEM) images were obtained using an SEM Model JCM-6000PLUS NeoScope to observe the morphology of the TiO2 coating layers at different positions of the glass cross sections. SEM/SEI topographical images were obtained for the three glass samples at magnifications of 500X and 200X and a resolution of 256 × 192 pixels, at an accelerating voltage of 10 kV, an energy range of 0-20 keV and high vacuum mode. AFM characterization was carried out using a Dimension Icon model AFM with a NanoScope V Controller (Bruker AXS, USA) operating in PeakForce mode. All measurements were made under ambient conditions using NSG30 silicon tapping mode probes (NT-MDT, Russia). Optical transmittance spectra of the coated samples were recorded in the 200-1000 nm range using a Jenway-67 Series spectrophotometer. Optical analysis of the coated surfaces was performed using an Olympus (IX73) optical microscope with a 40X objective lens. The surface wettability was evaluated by measuring the contact angle of deionized water droplets deposited on the film surface under ambient conditions using a Rame-Hart instrument, with three replicates. The acquired images were processed with Drop Image software to obtain the average contact angles. The thin film thickness and refractive index of all samples were measured using a J.A. Woollam WVASE ellipsometer. Each sample was tested at two ellipsometer angles (65° and 70°) with a light wavelength scan between 300 and 1000 nm. The ellipsometer measures the changes in the light polarization state and generates a graph of the changes in the ellipsometric angles (∆, ψ) as a function of wavelength. In the analyses of the ellipsometric data, the samples were treated as composed of a TiO2 thin film on a thick SiO2 substrate. For simplicity, a basic Cauchy model for transparent films was used to fit the data, and the average thickness and refractive index of the TiO2 layer were calculated numerically from the ψ and ∆ functions 18.

Results and Discussion

Figure 3 shows bright-field TEM and high-resolution TEM images of the TiO2 colloidal particles, where the black localized features correspond to the TiO2 nanoparticles. The nanoparticles were found to be mostly uniform and spherical in shape, with a few non-uniform particles most probably resulting from agglomeration. The size of the nanoparticles ranges from 2 to 5 nm, with an average particle size estimated to be around 2.61 nm. This average size value is well corroborated by the crystallite size estimated by XRD analysis, as shown in Table 1.

The X-ray diffraction patterns of the synthesized titania nanoparticles and the deposited TiO2 thin film are shown in Fig. 4a and b, respectively. The peak details are summarized in Table 1. The experimental XRD pattern is in agreement with JCPDS card no. 21-1272 (anatase TiO2) and with the XRD pattern of TiO2 nanoparticles reported in the literature 19,20. The 2θ peaks located at around 25° and 48° confirm the anatase TiO2 structure 19,20. The high intensity of the XRD peaks reflects that the formed nanoparticles are crystalline, and the broad diffraction peaks indicate very small crystallite sizes.
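As a worked illustration of the Scherrer estimate used above, the sketch below computes a crystallite size for the anatase reflection near 2θ = 25°. The Cu Kα wavelength and the FWHM value are assumptions chosen for illustration, not values taken from Table 1.

```python
import math

# Scherrer estimate: D = K * lambda / (FWHM * cos(theta)).
# Assumed inputs: Cu K-alpha radiation and an illustrative peak width.
K = 0.9                      # Scherrer constant (dimensionless)
wavelength_nm = 0.15406      # Cu K-alpha wavelength in nm (assumed source)
two_theta_deg = 25.3         # anatase (101) reflection, near the reported peak
fwhm_deg = 3.0               # illustrative FWHM (degrees 2-theta)

theta = math.radians(two_theta_deg / 2.0)          # Bragg angle in radians
beta = math.radians(fwhm_deg)                      # FWHM in radians
D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"Estimated crystallite size: {D_nm:.2f} nm")   # ~2.7 nm for these inputs
```

With a ~3° peak width this yields a crystallite size of about 2.7 nm, consistent with the ~2.6 nm average particle size reported from TEM.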
Figure 5 shows a top-view AFM image of the TiO2 thin film on the glass substrate. The image shows a dense and compact surface morphology. The surface roughness of the coated surface was measured to be 4.32 nm². Figure 6(a) and (b) show representative SEM images of the uncoated and coated samples after soiling (dust accumulation) for 7 days, respectively. It is clearly seen that the dust particles deposited on the uncoated sample surface (Fig. 6a) are dense and agglomerated, while those settled on the coated glass sample surface (Fig. 6b) are scarce and dispersed. Figure 6c and d show optical images, taken with a 40X objective lens, of the uncoated and coated samples, respectively. The titania coating deposited using TiO2 colloidal nanoparticles shows less accumulation of dust particles compared with the uncoated glass sample. Moreover, Fig. 7 shows the rate of dust deposition per surface area of glass substrate after the samples were left in the field for 7 days of soiling. The dust deposition rate was obtained by normalizing the amount of deposited dust to the substrate surface area and the exposure time. The values show a significant decrease in the dust deposition rate (56% reduction) on the coated sample compared with the uncoated glass samples.

Table 2 shows that the measured film thickness for one process cycle of coating is 76.2 nm. This value is in good agreement with the film thickness obtained by Xi et al. 8, who first proposed this technique for self-cleaning applications; the thickness they reported for one process cycle was 70 ± 4 nm 8. The process cycle was defined as dipping of the substrate in the colloidal solution for a soaking period of 2 h, followed by subsequent heating at 400 °C for another 2 h. If another cycle was required to form another coating layer, the coated sample would be dipped again in the colloidal solution for another 2 h and heated again in air for 2 h, and so on, layer by layer. As mentioned earlier, this technique is simple, cost-effective, straightforward and does not involve any sophisticated experimental equipment. As a matter of fact, Xi et al. 8 reported that they could successfully scale up the coating films to larger areas (125 mm × 125 mm × 2 mm) due to the simplicity of this coating method.

Table 1. XRD characteristics of TiO2 colloids prepared by the polyol reduction method.

The measured contact angles of the deposited films show medium hydrophilicity, and the values are in total agreement with what was reported earlier for anatase TiO2 thin films deposited on glass substrates, as summarized in Table 3 [21][22][23]. The spread of water is an important factor for self-cleaning as well as for manual cleaning with water as an external cleaning source (Fig. 8).

Table 3. Summary of the measured contact angles of water on TiO2 films produced by various methods.

It has been reported that the contact angles of films deposited on glass substrates depend on the characteristics of the nanocolloids 10. Salvaggio et al. 10 reported that increasing the acid concentration from 0.1 to 0.5 N (HNO3) during TiO2 nanoparticle synthesis led to a decrease in particle size from 31 to 20 nm. This decrease in nanoparticle size triggered an increase in surface roughness, which in turn increased the contact angle from 44 to 60°. Besides, the increase in contact angle can be attributed to the formation of a rough micro/nano morphological pattern, thereby reducing the surface energy of the particles 24.
In order to investigate the effect of sunlight exposure on the hydrophilic property of the coated samples, the contact angle was measured after the samples had been left to the soiling process for a week. The value of the contact angle was found to be around 41°. This slight reduction in the contact angle (which was measured to be 43° before light exposure) is expected 25,26: the effect of sunlight has been reported earlier and was found to decrease the contact angle over time 25,26. On the other hand, when the samples were stored in dark conditions for several months, the contact angle measured afterwards was found to increase up to 59°. This surface wettability conversion of TiO2 has been extensively reported as well [27][28][29]. Literature surveys have shown that the contact angle of TiO2, for water and other liquids, decreases (the surface becomes more hydrophilic) during UV irradiation, with a dependence on the intensity and duration of irradiation. Conversely, the hydrophilic surface can revert to a hydrophobic one (with higher contact angles) when kept in dark conditions for a period of time 28,29. The mechanism of this hydrophilic conversion of the titania surface was presented by Wang et al. 28 and Sakai et al. 30, who postulated that upon UV irradiation the produced electrons and holes are trapped by the surface and by O2− ions, producing Ti3+ and oxygen vacancies, respectively. This results in adsorption of water molecules at the defect sites, forming a hydrophilic dominant surface.

To investigate the effect of the coating on the optical transmission of the glass substrate, UV-Vis spectra were collected for the coated and uncoated glass surfaces before and after soiling for 7 days. The spectra are shown in Fig. 9. The transmission of the clean glass had a maximum value of 91.75%, within the range of values (90-92%) reported in many studies 31,32. To investigate the effect of the coating in mitigating dust accumulation on the coated substrates, the samples were left at the Solar Test Facility located in Doha, the State of Qatar, for 7 days. As shown in Fig. 9(b), the coated sample shows an average transmission reduction of only 6% in the visible region (400-800 nm) compared with the clean uncoated surface. It has been reported in the literature that increased surface roughness is responsible for the slight loss in optical transmission in the visible range 33.

The enhanced performance of the coated sample is due to several factors. First, the synthesized material is TiO2 anatase, which is well known for its high photocatalytic activity. The self-cleaning process using TiO2 films involves two stages, the first being the splitting of organic dirt via a photocatalytic process in the presence of ultraviolet light. This stage is very critical, as it reduces the presence of the sticky material that attracts the deposition of dust and other particulate material on the surfaces. The second factor is the hydrophilicity of the TiO2 films. Coating films for solar panels in desert and arid regions are preferably hydrophilic, since scheduled cleaning with water from time to time is essential to eliminate dirt from the surface; the water spreads across the whole surface instead of beading up, and hence rinses away the dust. The low refractive index enhances transmittance, which is one of the main factors for film coatings, particularly those for solar panels 13,32. Table 2 shows the measured refractive index of 1.51.
The antireflection properties, which depend on the interference of light reflected from the air-coating and coating-substrate interfaces, were also examined. For an ideal homogeneous single coating with a refractive index between that of air and the substrate, the antireflection coating should satisfy the following conditions: the optical thickness of the coating should be λ/4, where λ is the wavelength of the incident light; and n_c = (n_a × n_s)^0.5, where n_c, n_a, and n_s are the refractive indices of the coating, air, and substrate, respectively 34. Taking into consideration that n_a = 1 for air and n_s = 1.52 for the glass substrate, n_c must be 1.23 to achieve zero reflection. Since this value is lower than that of any homogeneous dielectric material, AR coatings always adopt 2- or 3-dimensional porous structures to meet the requirement for a very low average refractive index 32,34.

Conclusions

Photocatalytic TiO2 coatings are known for their excellent self-cleaning behavior and anti-reflection properties, where the very thin water layer formed on the hydrophilic surface can easily wash off dirt particles. In the present work, we reported a cost-effective and simple method for preparing optically transparent, water-wettable, photocatalytic TiO2 coatings on glass substrates for self-cleaning applications. TiO2 nanocolloid particles were successfully synthesized via the polyol reduction method and subsequently used as coatings on borosilicate glass substrates via an adsorptive self-assembly process. Our films were characterized by atomic force microscopy, scanning electron microscopy, ellipsometry and water contact angle measurements. The self-cleaning capability of the films was investigated after several days of dust accumulation by optical microscopy and UV-Vis spectroscopy. Our results showed that the coated glass samples have a transmission loss of less than 6% along with a 56% reduction in dust deposition rate compared with bare uncoated glass, paving the way to their application on PV panels and light-weight window and/or door polycarbonates for excellent self-cleaning applications.
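As a short numerical check of the quarter-wave conditions discussed above, the sketch below evaluates the ideal single-layer coating index for glass and relates the measured film (n = 1.51, 76.2 nm) to the wavelength at which it acts as a quarter-wave layer. The 550 nm design wavelength is an illustrative assumption; the calculation itself is standard thin-film optics, not code from the paper.

```python
import math

# Ideal single-layer antireflection (AR) coating on glass:
#   n_c = sqrt(n_a * n_s)  and  optical thickness n_c * t = lambda / 4
n_air, n_glass = 1.0, 1.52
n_ideal = math.sqrt(n_air * n_glass)
print(f"Ideal coating index: {n_ideal:.3f}")          # ~1.233, as in the text

# Physical quarter-wave thickness for the ideal index at 550 nm (assumed)
wavelength = 550.0
print(f"Quarter-wave thickness: {wavelength / (4 * n_ideal):.1f} nm")

# Conversely, the measured film (n = 1.51, t = 76.2 nm) is a quarter-wave
# layer at the wavelength lambda = 4 * n * t:
n_meas, t_meas = 1.51, 76.2
print(f"Measured film is quarter-wave at: {4 * n_meas * t_meas:.0f} nm")
```

This places the measured film's quarter-wave point near 460 nm, in the blue end of the visible range, which is consistent with the modest transmission loss observed across 400-800 nm.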
A panel of protein kinase high expression is associated with postoperative recurrence in cholangiocarcinoma

Background Cancer recurrence is one of the most concerning clinical problems for cholangiocarcinoma (CCA) patients after treatment. However, predictive factors for Opisthorchis viverrini (OV)-associated CCA recurrence have not been well elucidated. In the present study, we aimed to investigate the correlation of twelve targeted protein kinases with CCA recurrence. Methods Twelve protein kinases, epidermal growth factor receptor (EGFR), human epidermal growth factor receptor 2, 3, 4 (HER2, HER3, HER4), vascular endothelial growth factor receptor 3 (VEGFR3), vascular endothelial growth factor-C (VEGF-C), erythropoietin-producing hepatocellular carcinoma receptor type-A3 (EphA3), EphrinA1, phospho-serine/threonine kinase 1 (p-Akt1), serine/threonine kinase 1 (Akt1), beta-catenin and protein Wnt5a (Wnt5a), were examined using immunohistochemistry. Pre-operative serum tumor markers, CA19-9 and CEA, were also investigated. Results Among the twelve protein kinases, EGFR, HER4, and EphA3 were associated with tumor recurrence status, recurrence-free survival (RFS) and overall survival (OS). Multivariate Cox regression demonstrated that EGFR, HER4, EphA3 or the panel of high expression of these proteins was an independent prognostic factor for tumor recurrence. The combination of high expression of these proteins with a high level of CA19-9 could improve the predictive ability for tumor recurrence. Moreover, the patients were stratified more accurately when analyzed using the combination of high expression of these proteins with primary tumor (T) or lymph node metastasis (N) status. Conclusion EGFR, HER4, EphA3 or the panel of high expression of these proteins is an independent prognostic factor for post-operative CCA recurrence.

Background

Cholangiocarcinoma (CCA) is a malignant tumor of the bile duct epithelium with a very high incidence in Thailand, particularly in the northeastern region, where Opisthorchis viverrini (OV) infection is reported as the major risk factor for CCA development [1]. CCA is usually asymptomatic in its early stages, and most patients are diagnosed when the disease has become advanced, resulting in poor outcomes [2]. Moreover, recurrence after treatment is a significant problem for many patients with cancer and contributes to their poor prognosis [3]. In CCA, high recurrence rates have been reported in many studies [4,5]. A previous study reported that most CCA patients developed recurrence within 2 years after surgery, with a recurrence rate of 62.2% [5]. Recently, a recurrence rate of 80% was reported in patients with the mass-forming type of intrahepatic CCA; the 1-, 2-, and 3-year RFS rates were very low, at 16.2%, 5.4%, and 2.7%, respectively. However, the association between RFS and clinicopathological data was not significant [4]. Thus, effective prognostic biomarkers are required to assess the outcome of CCA patients as well as the probability of recurrence after treatment. Several markers have been reported as predictors of tumor behavior; they can be used for disease management, including as indicators of cancer progression and relapse. Serum tumor markers are well-established tools for tumor monitoring and have been reported to predict tumor recurrence in many types of cancer [6,7].
However, molecular biomarkers are widely studied because they can be used not only for predicting tumor progression or recurrence, but also as drug targets for cancer treatment. Our group previously reported the alteration of protein kinase expression in CCA. We found that many protein kinases were upregulated in CCA tissue and cell lines, including receptor tyrosine kinases of the epidermal growth factor receptor (EGFR) family, the vascular endothelial growth factor receptor (VEGFR) family and the erythropoietin-producing hepatocellular carcinoma (Eph) receptor family, as well as many downstream kinases such as serine/threonine kinase or protein kinase B (Akt) and components of the Wnt/beta-catenin signaling pathway [8]. EGFR expression has been evaluated in CCA and associated with poor prognosis of CCA patients [9]. Furthermore, our group also reported that high expression of VEGFR3, EphA3 and their ligands was correlated with CCA metastasis [10]. The role of protein kinases in the PI3K/Akt signaling pathway has also been studied in CCA: high expression of proteins in this pathway was mostly associated with worse clinical outcomes, and targeting this pathway using NVP-BEZ235 could inhibit tumor growth and metastasis through reduced protein kinase activation [11]. The association of the Wnt/beta-catenin signaling pathway with CCA progression has also been reported: alteration of Wnt proteins was associated with poor prognosis of CCA patients, and inhibition of beta-catenin expression could inhibit CCA cell growth [12].

Large-scale multi-omics approaches have also been employed in many studies in order to understand carcinogenesis as well as disease progression. In 2015, a study reported the genomic alterations characterizing biliary tract cancer (BTC) patients. EGFR family genes, including EGFR, ERBB2 (HER2) and ERBB3 (HER3), harbored the most frequent activating alterations in gallbladder cancer, while EPHA2 mutations were found frequently in intrahepatic CCA (iCCA) [13]. ERBB2 amplification was reported in 3.9-8.5% of CCAs. This was more frequent in fluke-associated CCA, accounting for 10.4% of cases compared with 2.7% of fluke-negative CCA, resulting in elevated ERBB2 gene expression in fluke-associated compared with fluke-negative cases. In addition, upregulation of AKT1 and WNT5B was also reported [14]. Single-nucleotide variations (SNVs) and insertion-deletions (indels) were found in the ERBB3 gene in BTCs (5%); these mutations were significantly enriched in extrahepatic CCA (eCCA) [15]. Recently, Nepal et al. reported that mutations of the ERBB4 gene were also found in iCCA. In addition, pathway dysregulation in each subgroup of patients was explored: patients with KRAS mutations were enriched for immune-related pathways and the ErbB and VEGF pathways, whereas the WNT pathway was enriched in patients with TP53 gene mutations [16].

Since protein kinases play an important role in CCA progression and are involved in the poor prognosis of CCA patients, we hypothesized in the current study that the alteration of these protein kinases, including the EGFR family, VEGFR3 and its ligand, an Eph receptor and its ligand, Akt1 and its activated form, Wnt, and beta-catenin, may be used as predictive markers for post-operative CCA recurrence. Therefore, twelve protein kinases were examined using immunohistochemistry and analyzed against CCA recurrence status, recurrence location, recurrence-free survival (RFS) and overall survival (OS).
Patient selection criteria and follow-up

OV-associated cholangiocarcinoma (CCA) patients who underwent surgery at Srinagarind Hospital, Khon Kaen University, Khon Kaen, Thailand between February 2007 and December 2016 were retrospectively studied. In order to avoid the effect of neoadjuvant therapy on protein expression, patients were excluded if they received either radiotherapy or chemotherapy before the operation. Tissue samples were obtained from CCA patients and kept in the BioBank of the Cholangiocarcinoma Research Institute. The clinical information assessed in all CCA patients included sex, age, tumor location, histology, size of primary tumor (T stage), lymph node metastasis status (N stage), distant metastasis status (M stage), and TNM staging. In addition, the tumor markers carbohydrate antigen 19-9 (CA19-9) and carcinoembryonic antigen (CEA) were examined in pre-operative serum. For recurrence monitoring, all CCA patients were followed up every 3 months during the first year after surgery and every 6 months thereafter. Post-operative recurrence was defined as the development of a new tumor confirmed by computed tomography (CT)/magnetic resonance imaging (MRI). The interval from the date of operation to the date of recurrence or to the last follow-up was defined as recurrence-free survival (RFS), and the interval from the date of operation to the date of death or to the last follow-up was defined as overall survival (OS). Early recurrence was defined as development of a new tumor within 1 year after surgery, while late recurrence was defined as development after 1 year. This study was approved by the Human Research Ethics Committees, Khon Kaen University, Thailand (HE611412).

Immunohistochemical staining (IHC)

A CCA tissue microarray (TMA) was prepared from two independent punctures from each patient and cut into 4 μm sections. The expression of each protein was investigated using IHC. Briefly, the sections were deparaffinized with xylene and rehydrated stepwise with 100, 90, 80 and 70% ethanol. Microwave heating was used for antigen retrieval for 10 min. The tissue sections were then incubated with 0.3% hydrogen peroxide followed by 10% skim milk for 30 min each, in order to block endogenous peroxidase activity and nonspecific binding, respectively. After washing, the sections were incubated with primary antibodies at room temperature for 1 h, followed by 4 °C overnight. Excess antibodies were washed off three times using phosphate buffered saline (PBS) with 0.1% Tween 20 followed by PBS, 5 min each. The sections were then incubated with HRP-conjugated secondary antibodies for 1 h, and excess antibodies were again washed off using PBS with 0.1% Tween 20 followed by PBS, 5 min each. A 3,3'-diaminobenzidine tetrahydrochloride (DAB) substrate kit (Vector Laboratories, Inc., CA) was used to develop the signal. The tissues were then counterstained with hematoxylin for 2 min. After washing, the tissue sections were dehydrated stepwise with 70, 80, 90, 100% ethanol and xylene, mounted with Permount, and finally observed under light microscopy.

Immunohistochemical (IHC) scoring

The expression of each protein was scored based on intensity and frequency, the latter being the proportion of positively stained cells. The intensity of protein expression was classified into four levels: 0 = negative, 1 = weak, 2 = moderate, and 3 = strong staining.
The proportion of positively stained cells was assessed semi-quantitatively and classified as: 0 = negative (0%), 1 = 1-25%, 2 = 26-50%, and 3 = more than 50% positive staining. The grading score was calculated by multiplying intensity by frequency, giving a minimum score of 0 and a maximum score of 9. The grading score of each patient was calculated as the average value of the two independent punctures. Finally, the median value across all cases was used as the cut-off point: patients with a grading score lower than the median were classified as the low expression group, and those with a score equal to or higher than the median as the high expression group. For proteins with a median equal to zero, patients with a grading score of zero were classified as the negative group, while those with a grading score above zero were classified as the positive group.

Statistical analysis

The Statistical Package for the Social Sciences (SPSS software v.25) was used to analyze the data in this study. The chi-square test was used to analyze the correlation of protein kinase expression with recurrence status and clinicopathological characteristics of CCA patients. Differences in IHC score and tumor marker levels by recurrence and recurrence location were analyzed using the Kruskal-Wallis test and Mann-Whitney U-test. Kaplan-Meier (log-rank) analysis was used to analyze RFS and OS. The predictive ability of protein kinases for RFS and OS was analyzed by Cox proportional hazards regression. Statistical significance was considered at p-values less than 0.05.

Patient characteristics

A total of 190 CCA patients (35% female and 65% male) were recruited in the current study. The median age was 61 years (range 39 to 82). 55% of patients were classified as intrahepatic CCA cases while 45% were extrahepatic CCA cases. 43% of patients were characterized as papillary type and 57% as other types. Size of primary tumor (T) was also classified: 57% of patients were T stage I and II, whereas 43% were T stage III and IV. Of the 190 patients, lymph node (N) and distant (M) metastasis were found in 55% and 6%, respectively. TNM staging was also characterized according to size of primary tumor, lymph node and distant metastasis status: 40% of patients were stage I and II and 60% were stage III and IV. Recurrence after surgery was detected in 31% (Fig. 1) (Table S1). Among patients with recurrence, 53% were classified as early recurrence while 47% were late recurrence. The median follow-up was 16, 28, and 13 months for the no recurrence, late recurrence and early recurrence groups, respectively.

The correlation of protein kinases with post-operative recurrence and clinicopathological characteristics

In the present study, 12 protein kinases including EGFR, HER2, HER3, HER4, VEGFR3, VEGF-C, EphA3, EphrinA1, p-Akt1, Akt1, beta-catenin and Wnt5a were examined by IHC in CCA tissues obtained from 190 cases. The expression of each protein was defined as high and low expression, or positive and negative. The expression in individual patients is shown in Figs. 1 and 2. The expression of all proteins was analyzed against post-operative recurrence, including early and late recurrence, in order to identify proteins that can be used for the prediction of tumor recurrence. In addition, the expression of beta-catenin was examined in different cellular compartments; positive expression of beta-catenin in the cytoplasm, membrane and nucleus was found in 17, 8 and 2% of cases, respectively.

Among the 12 protein kinases, the expression of EGFR, HER4, and EphA3 was significantly associated with early recurrence (p = 0.038, p = 0.033, p = 0.008; Table 2), while HER2 and p-Akt1 were significantly correlated with late recurrence (p = 0.035, p = 0.029; Table 2). In contrast, there was no correlation between HER3, VEGFR3, VEGF-C, EphrinA1, Akt1, beta-catenin or Wnt5a and post-operative recurrence (Table 2). The IHC scores of EGFR, HER2, HER4, EphA3 and p-Akt1 were also compared between patients with and without recurrence. The IHC score of EGFR was significantly different in patients with early recurrence compared with late or no recurrence (p = 0.029, p = 0.024; Fig. 3a). The IHC scores of HER2 and p-Akt1 were significantly higher in patients with late recurrence compared with no recurrence (p = 0.002, p = 0.013; Fig. 3a), while the IHC scores of HER4 and EphA3 were significantly higher in patients with early recurrence compared with no recurrence (p = 0.003, p = 0.004; Fig. 3a). On the contrary, there were no differences in the IHC scores of HER3, VEGFR3, VEGF-C, EphrinA1, Akt1, beta-catenin and Wnt5a (Fig. S1 and S2). The IHC scores of these proteins were also analyzed by recurrence location. The expression level of p-Akt1 was significantly higher in patients with distant recurrence or combined locoregional and distant recurrence compared with locoregional recurrence alone (p = 0.004; Fig. 3b), while there was no statistical difference for EGFR, HER2, HER4 and EphA3 (Fig. 3b). The expression levels of EGFR, HER2, HER4, EphA3 and p-Akt1 were also analyzed against clinicopathological characteristics. Our findings showed a significant correlation only between expression of HER4 and lymph node metastasis (p = 0.045; Table 3).

The correlation of tumor marker levels with post-operative recurrence

Since tumor markers are also used to monitor patients after treatment, CA19-9 and CEA levels were analyzed against tumor recurrence in the present study. The results revealed that the level of CA19-9 was significantly higher in early recurrence compared with no recurrence (p = 0.017) (Fig. 4a), whereas there was no difference in CEA levels between patients with and without recurrence (Fig. 4a). In addition, the levels of CA19-9 and CEA were also analyzed by recurrence location. Both markers tended to increase in distant recurrence or combined locoregional and distant recurrence compared with locoregional recurrence alone; however, this did not reach statistical significance in this study (Fig. 4b).

The above results demonstrate that the expression of EGFR, HER2, HER4, EphA3 and p-Akt1 was significantly associated with post-operative recurrence. Thus, the prognostic value of these proteins was assessed by Kaplan-Meier analysis. Patients with high expression of EGFR, HER4 or EphA3 had significantly shorter RFS (Fig. 6) and OS (p = 0.016, p = 0.025, p = 0.018; Fig. 6) compared with patients with low expression. However, no significance was found for HER2 and p-Akt1 (Fig. 6). Because the expression levels of EGFR, HER4 and EphA3 were highly correlated with each other, and each was associated with patient prognosis, the combination of these proteins was also analyzed against patient prognosis. High expression of the protein pairs EGFR and HER4, EGFR and EphA3, and HER4 and EphA3 was significantly associated with shorter RFS (p = 0.001, p = 0.008, p = 4.0 × 10−4; Fig. 7). High expression of EGFR and HER4, and of HER4 and EphA3, was also associated with shorter OS (p = 0.043, p = 0.002; Fig. 7). In addition, patients with high expression of two or three of these proteins had significantly shorter RFS (p = 3.5 × 10−4; Fig. 8) and OS (p = 0.012; Fig. 8). The level of the tumor marker CA19-9 also correlated with tumor relapse; thus, the prognostic efficiency of combining protein kinase expression with tumor marker level was also explored. This combination was significantly associated with shorter RFS (p = 1.5 × 10−4; Fig. 8) and OS (p = 0.008; Fig. 8).

Fig. 8. Kaplan-Meier analysis for RFS and OS according to the combined expression of three protein kinases (EGFR, HER4 and EphA3). Upper and lower panels demonstrate the prognostic significance of the combined expression of the three protein kinases, alone or together with the CA19-9 level, on RFS and OS, respectively. "0-1 marker high" represents patients with all markers low or one marker high; "2-3 markers high" represents patients with at least two markers high; "others" represents three groups of patients (0-1 marker high and CA19-9 low, 0-1 marker high and CA19-9 high, or 2-3 markers high and CA19-9 low). A p-value less than 0.05 was considered statistically significant.

Independent prognostic value of EGFR, HER4 and EphA3

In order to investigate whether EGFR, HER4 and EphA3 could be used as prognostic factors independent of clinicopathological characteristics, Cox regression analysis was used. The univariate results for factors predicting RFS and OS are shown in Table 5. Multivariate Cox regression for RFS and OS was analyzed using the different models summarized in Table 6 and Table 7. The results demonstrated that EGFR, HER4 and EphA3 were independent prognostic factors for RFS (HR: 1.542, p = 0.006; HR: 1.388, p = 0.042; HR: 1.469, p = 0.001; Table 6). EGFR and EphA3 were also independent prognostic factors for OS (HR: 1.450, p = 0.019; HR: 1.372, p = 0.040; Table 7). The combination of high expression of two or three markers, or of two or three markers with a high level of CA19-9, could be used to improve the predictive ability for RFS (HR: 1.528, p = 0.008; HR: 2.080, p = 0.004; Table 6). Moreover, the patients were stratified more accurately when analyzed using the combination of protein kinase expression and primary tumor (T) or lymph node metastasis (N) status. Patients with high T stage and high expression of two or three markers, or high expression of two or three markers with a high level of CA19-9, had shorter RFS compared with other groups (p = 2.1 × 10−9, p = 6.9 × 10−9; Fig. 9). Similarly, patients with lymph node metastasis and high expression of two or three markers, or high expression of two or three markers with a high level of CA19-9, had shorter RFS compared with other groups (p = 9.0 × 10−7, p = 3.8 × 10−5; Fig. 9).

Fig. 9. Kaplan-Meier analysis for RFS according to the combination of three protein kinase expression (EGFR, HER4 and EphA3) and clinicopathological features. "0-1 marker high" represents patients with all markers low or one marker high; "2-3 markers high" represents patients with at least two markers high; "others" represents three groups of patients (0-1 marker high and CA19-9 low, 0-1 marker high and CA19-9 high, or 2-3 markers high and CA19-9 low). T represents the primary tumor stage; N represents lymph node metastasis status (N0: no lymph node metastasis, N1: lymph node metastasis). A p-value less than 0.05 was considered statistically significant.

Conclusion

Our results demonstrate that elevated expression of EGFR, HER4, and EphA3 is correlated with OV-associated CCA recurrence. Moreover, the panel of high expression of EGFR, HER4, and EphA3 can be used as a prognostic factor for CCA recurrence, especially when combined with CA19-9 or with the clinicopathological features of primary tumor (T) or lymph node metastasis (N) status, and may be used as a guideline for clinical intervention in order to improve patient survival.
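To make the scoring scheme and the marker-panel rule described in the Methods concrete, the sketch below reproduces the grading-score arithmetic, the median dichotomization, and the "2-3 markers high" panel rule on invented example data. All patient values and helper names are hypothetical.

```python
import statistics

# IHC grading score = intensity (0-3) x frequency (0-3), averaged over the
# two punctures per patient; patients are dichotomized at the cohort median.
def grading_score(punctures):
    # each puncture is an (intensity, frequency) pair
    return statistics.mean(i * f for i, f in punctures)

cohort = {
    "pt1": [(3, 3), (2, 3)],   # score (9 + 6) / 2 = 7.5
    "pt2": [(1, 1), (0, 0)],   # score 0.5
    "pt3": [(2, 2), (2, 3)],   # score 5.0
}
scores = {pid: grading_score(p) for pid, p in cohort.items()}
median = statistics.median(scores.values())
high = {pid for pid, s in scores.items() if s >= median}
print(f"median = {median}, high-expression patients = {sorted(high)}")

# Panel rule from the paper: "2-3 markers high" means at least two of
# EGFR / HER4 / EphA3 are in the high-expression group for that patient.
def panel_high(marker_status):       # e.g. {"EGFR": True, "HER4": False, ...}
    return sum(marker_status.values()) >= 2

print(panel_high({"EGFR": True, "HER4": True, "EphA3": False}))  # True
```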
Phytobezoar causing intestinal obstruction in a neonate: A case report

Case presentation: We report a case of intestinal obstruction in a 2-day-old neonate with no specific radiological features pointing to any common etiology. On exploratory laparotomy, a swollen raisin was found impacted in the ileum, causing intestinal obstruction. The history taken in retrospect revealed that the elder sibling had witnessed her father perform a traditional ritual of putting a drop of honey into the mouth of the newborn, and she imitated the same with a raisin, which led to the obstruction.

INTRODUCTION

Neonatal intestinal obstruction is commonly caused by small bowel atresia, intestinal malrotation, Hirschsprung's disease, or meconium ileus, with subtle differences in clinical and radiological findings. Foreign bodies in the intestinal lumen of newborns are very uncommon. In older children, most foreign bodies are swallowed accidentally [1]. In most instances, a foreign body does not cause complete obstruction, and once it has passed through the esophagus, it will traverse the entire gastrointestinal tract [2]. However, there is almost no information about intestinal foreign bodies in newborns [3]. Intestinal bezoars are a rare cause of intestinal obstruction in children, and even rarer in newborns. Bezoar as the etiology of intestinal obstruction is not usually considered in the neonatal period and early infancy. Herein, we report a case of intestinal obstruction in a neonate caused by a raisin impacted in the distal ileum.

CASE REPORT

A term male baby, with a birth weight of 3.2 kg, was born by normal vaginal delivery to a gravida-3 mother. Feeds were started and the baby was well until about 30 hours of life, when the child developed vomiting that was initially non-bilious and later turned bilious. The child was reviewed by a Pediatric Surgeon at another center, where he was admitted and started on IV antibiotics and IV fluids. An upper GI contrast study was done, which showed dilated proximal small bowel loops and absence of distal bowel gas. The patient was referred to our center; on arrival, the patient's heart rate was 140 beats per minute and respiratory rate was 42 breaths per minute. Oxygen saturation was 97% on ambient air, and the nasogastric output was bilious. The abdomen was distended but soft, the hernial sites were normal, and the anal opening was normally located. On auscultation, bowel sounds were exaggerated. A supine radiograph of the abdomen showed dilated central small bowel loops but no air shadows in the pelvis (Fig. 1). Ultrasound of the abdomen showed dilated gas-filled bowel loops, and Doppler showed normal orientation of the superior mesenteric vessels. Laboratory examination showed a normal CBC and serum electrolytes. A diagnosis of intestinal obstruction was made; however, the clinical and radiological features did not point to any likely etiology. As the patient had acute intestinal obstruction, he was taken up for emergency exploratory laparotomy. The intraoperative findings were dilated small bowel loops with an intraluminal soft, globular foreign body impacted in the distal ileum (Fig. 2). The bowel was opened longitudinally over the foreign body, which was found to be a raisin that had imbibed fluid, swollen up, and completely obstructed the narrow lumen of the terminal ileum (Fig. 3). The raisin was removed and the enterotomy was closed.
On re-inquiry, it was found that the elder sister, who was five years old, had playfully put a raisin into the mouth of the newborn, which went unnoticed by the parents. Postoperative recovery was uneventful, and the patient was discharged in good clinical condition. The child was initially on 3-monthly follow-up for one year. At the last follow-up, he was 2 years old, asymptomatic, and had normal developmental milestones.

DISCUSSION

Our patient developed features of intestinal obstruction on the second day of life, though the baby had passed meconium spontaneously on the first day of life and initially tolerated breastfeeding, thus minimizing the possibility of congenital causes of neonatal intestinal obstruction. The abdominal radiograph showed dilated central small bowel loops with an absence of gas shadows in the pelvis, indicating small bowel obstruction. Malrotation was also ruled out in our patient based on the history, x-ray, and Doppler findings. The operative findings were surprising and brought forth an unusual cause of neonatal intestinal obstruction. The newborn Hindu ritual of putting a drop of honey and ghee (clarified butter) into the mouth is known as "Jatakarma" [4]. The elder sibling tried to imitate the father with a raisin, and it is likely that the small raisin was swallowed easily by the neonate without choking. During its transit through the gut, the raisin imbibed fluid and swelled up, causing intestinal obstruction.

The occurrence of a swallowed foreign body in a neonate is rare, with only a few cases reported. If the incident has not been witnessed and the ingested object is radiolucent, the diagnosis of foreign body ingestion can be very tricky in neonates [5]. Patients and their families are rarely aware of swallowed foreign bodies, which can cause complete intestinal obstruction or even intestinal perforation [4]. The literature on ingested foreign bodies in neonates is limited to case reports, which are tabulated in Table 1. The most commonly reported esophageal foreign body in a neonate is a swallowed endotracheal tube (ETT) [5]. Other reported ingested foreign bodies include stones, a button, a nail, a thumbtack, a marble, a bean, etc., as listed in Table 1 [3], [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. Sharma et al. [9] in 1993 reported a case of a cotton ball phytobezoar in a neonate, in which the neonate accidentally swallowed a cotton ball being used to administer sweetened water as part of a ritual in north India.

Bezoars are rare in neonates. Based on their composition, they are classified into phytobezoars (concretions of vegetable matter), trichobezoars (hair), lactobezoars (concentrated milk formulas), pharmacobezoars (drugs), and food bolus bezoars. Phytobezoars are concretions of indigestible fibers derived from ingested vegetables and fruits. They have been ascribed to the ingestion of mainly persimmon, coconut fibers, celery, the skin and stems of grapes, prunes, raisins, leek, mallow, and wild beets [22]. The mechanism of phytobezoar formation from plant substances is probably mechanical and depends upon the insoluble and indigestible fiber content [22]. They are more common in adults and rarely reported in the pediatric age group [23]. The most common site of formation is the stomach, and it is not unusual to find parts of a phytobezoar in the small bowel [24]. Primary small bowel phytobezoars are rare and are almost always obstructive.
They usually become impacted in the narrowest portion of the small bowel, the commonest site being the terminal ileum followed by the jejunum [25]. In our patient, the raisin had gradually swollen and become impacted in the terminal ileum, causing the obstruction, which could be discovered only after an enterotomy. To conclude, rare causes of neonatal intestinal obstruction need to be considered when the clinicoradiological features do not point to a specific etiology. Knowledge of local traditional practices and rituals is at times the most important pointer towards the etiology of a clinical condition. The basic clinical skill of history taking remains important despite the availability of advanced radiological investigations.
2-aminoimidazoles potentiate ß-lactam antimicrobial activity against Mycobacterium tuberculosis by reducing ß-lactamase secretion and increasing cell envelope permeability

There is an urgent need to develop new drug treatment strategies to control the global spread of drug-sensitive and multidrug-resistant Mycobacterium tuberculosis (M. tuberculosis). The ß-lactam class of antibiotics is among the safest and most widely prescribed antibiotics, but ß-lactams are not effective against M. tuberculosis due to intrinsic resistance. This study shows that 2-aminoimidazole (2-AI)-based small molecules potentiate ß-lactam antibiotics against M. tuberculosis. Active 2-AI compounds significantly reduced the minimal inhibitory and bactericidal concentrations of ß-lactams by increasing M. tuberculosis cell envelope permeability and decreasing protein secretion, including that of ß-lactamase. Metabolic labeling and transcriptional profiling experiments revealed that 2-AI compounds impair mycolic acid biosynthesis, export and linkage to the mycobacterial envelope, counteracting an important defense mechanism that reduces permeability to external agents. Additionally, other important constituents of the M. tuberculosis outer membrane, including sulfolipid-1 and polyacyltrehalose, were also less abundant in 2-AI-treated bacilli. As a consequence of 2-AI treatment, M. tuberculosis displayed increased sensitivity to SDS, increased permeability to nucleic acid staining dyes, and rapid binding of cell wall targeting antibiotics. Transcriptional profiling analysis further confirmed that 2-AI induces transcriptional regulators associated with cell envelope stress. 2-AI based small molecules potentiate the antimicrobial activity of ß-lactams by a mechanism that is distinct from specific inhibitors of ß-lactamase activity and therefore may have value as an adjunctive anti-TB treatment.

Introduction

The ongoing global spread of tuberculosis (TB) is due in part to the lack of new and more effective antimicrobial drugs to treat drug-sensitive and multidrug-resistant (MDR) strains of Mycobacterium tuberculosis (M. tuberculosis) [1]. In 2015 alone, an estimated 10.4 million people developed TB, resulting in 1.4 million deaths. Moreover, the incidence of new MDR-TB cases continues to increase and was estimated at 480,000 [2]. Treating drug-sensitive TB is challenging and requires a minimum six-month course of combination antimicrobial drug therapy consisting of the first-line drugs isoniazid, rifampicin, ethambutol, and pyrazinamide, which results in undesirable side effects in some patients [3]. Moreover, treatment of MDR-TB is considerably more difficult and expensive, requiring stronger and potentially more toxic drug combination therapy lasting approximately 2 years [2]. In short, the difficulty in controlling TB partially stems from the prolonged treatment required by classical chemotherapy and the low treatment success rate in patients infected with MDR strains of M. tuberculosis. Unfortunately, the current pipeline of new anti-TB drugs for the treatment of resistant infections has unproven efficacy and undesirable side effects [4,5]. The 2-aminoimidazole (2-AI) class of small molecules is derived from the marine sponge metabolites oroidin and bromoageliferin [6]. Importantly, treatment with 2-AI reversed isoniazid tolerance of attached M. tuberculosis communities and significantly reduced the number of viable bacilli when combined with isoniazid in an in vitro model of non-replicating persistence [7,8].
These observations suggested that combining 2-AI compounds and conventional antibiotics can be a viable option to overcome M. tuberculosis drug-tolerance or resistance, as shown for other clinically important Gram-positive and Gram-negative bacteria [6,9,10]. For example, 2-AI derivatives were shown to revert oxacillin resistance in methicillin-resistant Staphylococcus aureus (MRSA) [11], and to suppress PmrAB-mediated colistin resistance of drug-resistant Acinetobacter (A.) baumannii [12][13][14]. Altogether, these reports suggest that 2-AI compounds have promising potential as an adjunctive therapy when combined with antibiotics to treat drug-tolerant or drug-resistant bacteria.

Since their introduction, ß-lactam antibiotics have proven to be safe and effective at controlling a variety of bacterial infections [15,16]. However, ß-lactams are not currently used to treat TB, due to the intrinsic resistance exhibited by M. tuberculosis. The inherent resistance of M. tuberculosis to ß-lactams is mainly attributed to two mechanisms: a) inactivation of the antibiotics by blaC-encoded ß-lactamase and b) low permeability of the mycobacterial cell envelope limiting the diffusion of antibiotics such as ß-lactams [16][17][18][19][20][21][22][23]. As occurs in Gram-negative bacteria, mycobacteria have an outer cell membrane [24][25][26], a major permeability barrier against ß-lactams targeting penicillin binding proteins (PBPs) that reside in the periplasmic compartment [27]. The inner leaflet of the mycobacterial outer membrane is composed of mycolic acids, long fatty acids approximately 90 carbons in length, that are covalently bound to arabinogalactan and tightly packed together, effectively blocking the diffusion of hydrophilic molecules. The outer leaflet of the M. tuberculosis outer membrane is enriched with non-covalently bound lipids such as trehalose dimycolate (TDM) and phthiocerol dimycoserosates (PDIMs) [24,28]. Together, this outer membrane serves as a low fluidity and low permeability barrier to antibiotics. Since ß-lactams are currently only considered in the treatment of drug-resistant TB, any strategy that circumvents M. tuberculosis ß-lactam resistance may provide new opportunities to utilize this class of drugs to treat both drug-susceptible and drug-resistant strains of M. tuberculosis [29,30]. Indeed, there is renewed interest in repurposing ß-lactams to treat TB in combination with ß-lactamase inhibitors [22,31,32], a strategy supported by recent human clinical trials [33].

This study investigated the use of 2-AI compounds to potentiate ß-lactams against M. tuberculosis, and the mechanisms by which these compounds work. It was hypothesized that 2-AI compounds would interfere with mechanisms conferring M. tuberculosis intrinsic ß-lactam resistance. Herein it is reported that 2-AI compounds lower MIC values and improve the bactericidal activity of ß-lactams against M. tuberculosis. 2-AI compounds reduce M. tuberculosis ß-lactamase activity by altering secretion of the enzyme rather than by directly inhibiting the enzymatic activity as in the case of the classic ß-lactamase inhibitor clavulanic acid. Mechanistic studies revealed that 2-AI treatment alters M. tuberculosis cell envelope composition, leading to increased permeability and thus increased binding of cell wall targeting antibiotics.
Taken together, these data demonstrate that 2-AI compounds potentiate ß-lactam antibiotics through a novel mechanism, which may be further exploited in the development of adjunctive anti-TB therapy against drug-sensitive and drug-resistant M. tuberculosis.

Materials and methods

2-AI compounds

Structures and synthesis of 2B8 (compound 2 in reference [7]) and RA11 were previously disclosed (S1 Fig). Compounds were dissolved at 100 mM as their HCl salts in molecular biology grade DMSO (Sigma-Aldrich, USA) and stored at -80˚C until use.

Broth microdilution method for MIC determination for ß-lactams with or without 2-AI compounds

Determination of ß-lactam MICs against mycobacteria was carried out by using a broth microdilution method with alamarBlue® (Invitrogen, Carlsbad, CA, USA) as previously described [35]. Briefly, in 96-well flat-bottomed cell culture plates (Thermo-Fisher, Waltham, MA, USA), ß-lactams were serially two-fold diluted in 7H9 media starting from the following concentrations: 1024 mg/L (ampicillin, oxacillin, carbenicillin, penicillin V and amoxicillin), 512 mg/L (cefotaxime and ceftazidime), 256 mg/L (cefoxitin), and 16 mg/L (meropenem). M. smegmatis and M. tuberculosis H37Rv grown in 7H9 media to an OD600 of 0.4 to 0.6 were further diluted 1:20 and inoculated into wells containing ß-lactams. The final volume of each well was 200 μL. Plates were incubated under stationary conditions at 37˚C. After 48 h (for M. smegmatis) or 5 days (for M. tuberculosis), 20 μL alamarBlue® was added to each well and incubated for an additional 6 h (for M. smegmatis) or 24 to 48 h (for M. tuberculosis). MIC was determined as the lowest drug concentration that prevented color change from blue to purple or pink. This was confirmed as the MIC95 when measured by fluorescence [36]. Briefly, fluorescence was recorded at 560 nm (excitation)/590 nm (emission), and % inhibition of reduction was calculated as follows: % inhibition = 100 × [1 − (sample fluorescence intensity − negative control fluorescence intensity) / (positive control fluorescence intensity − negative control fluorescence intensity)], where the positive control is M. tuberculosis without drugs and the negative control is media. All ß-lactams were purchased from Sigma-Aldrich except for carbenicillin and meropenem (Gold Biotechnology, St. Louis, MO, USA). For MIC determination when ß-lactams were combined with 2-AI compounds, the broth microdilution MIC assay was performed in the presence of two-fold diluted RA11 or 2B8 ranging between 7.8125 and 250 μM. One column of the plate contained only 2-AI compounds without any ß-lactams to determine the MIC of 2-AI compounds against mycobacteria. MICs were determined as above, and the MIC of ß-lactams alone was divided by the MIC of ß-lactams combined with 2-AI compounds to calculate the fold-reduction of MIC resulting from 2-AI treatment.

Evaluation of bactericidal activity of ß-lactams against M. tuberculosis

M. tuberculosis H37Rv was incubated at 37˚C for five days in 96-well flat-bottomed cell culture plates containing different concentrations of ß-lactams alone, or with 2B8 or potassium clavulanate (Sigma-Aldrich, USA). 2B8 was added at 31.25, 62.5, and 125 μM and clavulanate was added at 8 mg/L. Tested concentrations for each ß-lactam were as follows: carbenicillin (2 and 32 mg/L), amoxicillin (2 and 32 mg/L), ceftazidime (1 and 16 mg/L), and meropenem (0.03125 and 0.5 mg/L). After 5 days of incubation, cultures were serially diluted in sterile PBS and plated on Middlebrook 7H11 agar (BD, USA) with glycerol, OADC and 8 mg/L cycloheximide (Gold Biotechnology, USA).
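As a minimal illustration of the alamarBlue® MIC bookkeeping described above, the sketch below (Python, not the authors' code) computes % inhibition of reduction from plate-reader fluorescence, derives an MIC95, and reports the fold-reduction obtained with a 2-AI compound. All fluorescence values and concentrations are invented, and treating the MIC95 as the lowest concentration giving at least 95% inhibition is an assumption of this sketch rather than a statement of the authors' exact cutoff logic.

import numpy as np

def percent_inhibition(sample, positive, negative):
    # % inhibition of alamarBlue reduction, as reconstructed above:
    # 100 * (1 - (sample - negative) / (positive - negative))
    return 100.0 * (1.0 - (np.asarray(sample, float) - negative) / (positive - negative))

def mic95(concentrations, fluorescence, positive, negative):
    # Assumption of this sketch: MIC95 = lowest concentration with >= 95% inhibition.
    inhib = percent_inhibition(fluorescence, positive, negative)
    hits = [c for c, i in zip(concentrations, inhib) if i >= 95.0]
    return min(hits) if hits else None

conc = [1024, 512, 256, 128, 64, 32, 16, 8]               # two-fold series, mg/L
alone = [900, 950, 1100, 5200, 9400, 9800, 9900, 9950]    # hypothetical readings
with_2ai = [850, 880, 900, 920, 940, 1000, 5600, 9900]    # hypothetical readings
pos, neg = 10000.0, 800.0   # growth control and media-only control

mic_a = mic95(conc, alone, pos, neg)
mic_c = mic95(conc, with_2ai, pos, neg)
print(f"MIC95 alone: {mic_a} mg/L; with 2-AI: {mic_c} mg/L; "
      f"fold reduction: {mic_a // mic_c}")

The same division of the MIC alone by the MIC in combination is how the fold-reductions reported in the Results below would be obtained from any pair of dilution series.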
The number of CFUs was determined by counting visible colonies after three or four weeks of incubation at 37˚C. Bactericidal activity (%) was calculated as follows: 100 − 100 × (treatment group CFUs / control CFUs). For this calculation, control CFUs were obtained from non-treated (for the ß-lactams only group), 2B8 only (for the 2B8/ß-lactams combination group), and clavulanate only (for the clavulanate/ß-lactams combination group) treated samples.

Collection of M. tuberculosis culture filtrate protein (CFP)

To obtain M. tuberculosis H37Rv CFP, cultures were grown to an OD600 of 0.4 to 0.6 in glycerol-alanine-salts (GAS) media containing 0.03% Bacto Casitone (Difco, Franklin Lakes, NJ, USA), 0.005% ferric ammonium citrate (Sigma-Aldrich, USA), 0.4% dibasic potassium phosphate, 0.2% citric acid, 0.1% L-alanine, 0.12% magnesium chloride hexahydrate, 0.06% potassium sulfate, 0.2% ammonium chloride, 0.18% sodium hydroxide, and 1% glycerol (all purchased from Sigma-Aldrich, USA). Subsequently, cultures were centrifuged (1,700 × g) for 10 min and supernatants were harvested. Collected supernatant was filtered through a 0.45 μm syringe filter (Millipore, USA) to obtain CFP. Total protein concentration present in CFP was determined using the BCA assay (Pierce, Waltham, MA, USA) following the manufacturer's instructions. CFP was also obtained from 2-AI compound treated M. tuberculosis. Briefly, after the initial culture in GAS media, cells were washed twice with sterile PBS and reconstituted in GAS media to an OD600 of 0.4. Cultures were treated with 2-AI compounds (RA11 or 2B8) or clavulanate (8 mg/L) and incubated at 37˚C for 24 h, and CFP was harvested as described above.

ß-lactamase activity assay

ß-lactamase activity was evaluated with the colorimetric kit from Biovision (Milpitas, CA, USA). Briefly, a total of 50 μL of sample was transferred to a 96-well flat-bottomed cell culture plate and nitrocefin included in the kit was added to a final concentration of 20 μM. Immediately following the addition of nitrocefin to samples, absorbance at 490 nm was monitored every five min for 2 h at 37˚C using a Synergy 2 multi-mode plate reader (BioTek, Winooski, VT, USA) to obtain a nitrocefin hydrolysis curve. A standard curve was derived from known amounts of hydrolyzed nitrocefin provided in the assay kit. Total nitrocefin hydrolyzed (nM) per min was calculated from the standard curve. Data were normalized to CFUs or to nM nitrocefin hydrolyzed/min/mg of protein.

SDS sensitivity assay

M. tuberculosis H37Rv was grown to an OD600 of 0.4 to 0.6 in 7H9 media with OADC supplement and 0.05% Tween 80, then treated with 125 μM 2B8 for 24 h. After treatment, bacterial pellets were washed twice with sterile PBS and reconstituted with PBS to an OD600 of 0.1. Sodium dodecyl sulfate (SDS, Cayman Chemical, Ann Arbor, MI, USA) was added to the cultures to achieve final concentrations of 0.005 and 0.05%. Cultures were plated on 7H11 agar plates at 0, 1, 2, 3 and 4 h post addition of SDS. After three or four weeks of incubation at 37˚C, CFUs were enumerated and percent survival through 4 h was calculated for each sample relative to starting CFUs at the 0 h time-point.

Evaluation of cell envelope permeability and cell membrane integrity using BOCILLIN®, BODIPY® FL vancomycin and propidium iodide

M. tuberculosis H37Rv was grown to an OD600 of 0.4 to 0.6 in 7H9 media supplemented with OADC and 0.05% Tween 80, then diluted to an OD600 of 0.1 in the same media prior to use.
Diluted cultures were treated with 62.5 and 125 μM 2-AI compounds (RA11 or 2B8), 0.05% SDS, or 20 μM meropenem in combination with 100 μM clavulanate (MCA) [40]. After treating for 30 min, 120 min, or 24 h at 37˚C while shaking, cultures were aliquoted into 5 mL polystyrene tubes and stained with 1 mg/L BODIPY® FL vancomycin, 10 mg/L BODIPY®-tagged penicillin V (BOCILLIN®) or 15 μM propidium iodide (PI) (Life Technologies, Carlsbad, CA, USA) for 30 min at 37˚C in the dark. Cells were pelleted by centrifuging (1,700 × g) for 5 min to remove remaining free dye and washed with sterile PBS two times (PI-stained samples were not washed, but directly fixed as described below after removing the supernatant). After these washes, PBS was removed and cell pellets were fixed with 4% paraformaldehyde (VWR, Radnor, PA, USA) in PBS for 15 min. After fixation, bacterial cells were analyzed by flow cytometry using an LSRII flow cytometer (BD, USA). The cytometer was adjusted as follows: forward scatter (FSC) and side scatter (SSC) were set to logarithmic scale, the threshold was set at 2000 for FSC and SSC, acquisition was set to low (<1000 events/sec), and 10,000 to 50,000 events were collected for each sample. Fluorescence of BODIPY®-labeled antibiotics and PI was excited with the 488 nm blue laser and emission detected with the 530/30 nm and 610LP filters, respectively. Data were analyzed using Kaluza 1.3 software (Beckman Coulter, Brea, CA, USA). For competitive inhibition of BODIPY® FL vancomycin binding to M. tuberculosis, unlabeled vancomycin (Gold Biotechnology, USA) at 50×, 100×, and 500× the amount of BODIPY® FL vancomycin (1 mg/L) was added to 2B8 treated samples prior to the addition of BODIPY® FL vancomycin. Fixed, BODIPY® FL vancomycin stained bacteria were also analyzed under a microscope equipped with an X-cite 120 fluorescence illuminator (Excelitas Technologies, Waltham, MA, USA).

RNA isolation and next generation sequencing

For RNA isolation, M. tuberculosis H37Rv was cultured in 7H9 media to an OD600 of 0.4 and then treated with 125 μM 2B8 for 2 and 24 h. After treatment, mycobacterial RNA was extracted using trizol/chloroform (Sigma-Aldrich, USA) as previously described with minor modifications [41]. Briefly, cells were resuspended in trizol and disrupted using zirconia beads (Biospec, Bartlesville, OK, USA) by beating six times for 30 sec, with cooling on ice for one min in between. Beads were removed and chloroform was added to the trizol (0.2:1, v/v). Samples were vortexed and centrifuged to extract solubilized RNA in the aqueous phase. To precipitate RNA, molecular grade 100% ethanol (Sigma-Aldrich, USA) was added to the aqueous phase and incubated at -80˚C overnight. The RNA was pelleted by centrifugation (17,000 × g) at 4˚C for 15 min, followed by washing with 75% ethanol. DNA contamination was removed by treating with 10 μL DNase I (New England Biolabs, Ipswich, MA, USA) at 37˚C for 30 min. Then 100 μL of acid phenol (Sigma-Aldrich, USA) was added to the samples, which were vortexed. After centrifugation (17,000 × g) for one min, the aqueous phase was collected and transferred to clean RNase-free tubes, followed by the addition of 33 μL sodium acetate (Sigma-Aldrich, USA) and 250 μL of 100% ethanol. Samples were gently mixed by inverting and placed at -80˚C overnight. Samples were centrifuged (17,000 × g) for 15 min at 4˚C to collect RNA pellets. Pellets were further washed with 80% ethanol.
After ethanol removal, pellets were air-dried at room temperature for five min and reconstituted in 15 μL of RNase-free water (Corning, USA). Isolated total RNA samples were prepared for next generation sequencing (NGS) using the Illumina Stranded Total RNA Library Prep Kit (Illumina, San Diego, CA, USA) with the Epicentre Ribo-Zero Gram-positive bacteria ribosomal RNA depletion kit (Illumina, USA). After validation and quantitation, the libraries were pooled for multiplexed sequencing. The pool was loaded on one lane of a HiSeq Rapid Run flow cell (v1) and sequenced in a 1×50 bp single end (SE50) format using Rapid SBS reagents. Base calling was performed by Illumina Real Time Analysis (RTA, Illumina, USA) v1.18.61, and the output of RTA was de-multiplexed and converted to FastQ format with Illumina Bcl2fastq v1.8.4 (Illumina, USA). To compensate for the low number of reads obtained for one sample, MiSeq (SE50) sequencing was performed with this one library. Reads from both the HiSeq and MiSeq runs were combined for that particular sample. Sequencing data were analyzed following methods similar to those previously described [42]. Briefly, raw reads were subjected to trimming of low-quality bases and removal of adapter sequences using Trimmomatic (v0.32) [43] with a 4 bp sliding window, cutting when the read quality was below 15 (using the Phred33 quality scoring system) or the read length was less than 36 bp. Trimmed reads were then aligned to the M. tuberculosis H37Rv genome (assembly 19595v2) using Bowtie (v1.0.0) [44] with the -S option to produce SAM files as output. Alignment quality control was performed using the HTSeq-qa function within the HTSeq software package (v0.6.1) [45]. Further graphical quality control analyses were performed using the Qualimap software suite (v2.0) [46]. Sequencing depth was calculated using SAMtools (v1.2) [47]. Aligned reads were then counted per gene feature in the M. tuberculosis H37Rv genome using the HTSeq software suite (v0.6.1). Differential gene expression was calculated by normalizing the data utilizing the trimmed mean of M-values normalization method [48] and filtering out genes that had <23 counts per million (CPM) within the edgeR package (v3.0.8) in R (v2.15.3) [49]. The transcriptional profiling data have been submitted to the NCBI GEO database (accession no. GSE95773).

Statistical analysis

Statistical analyses were carried out using one-way ANOVA with Tukey's post hoc test using GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA). P values less than 0.05 were considered significant. For RNA transcriptome data, statistical analysis was performed in R Studio (ver. 0.98.1091) using the exact test with a negative binomial distribution for each set of conditions and testing for differential gene expression [50] using edgeR (v3.0.8). Differentially expressed genes were determined to be statistically significant based on a q < 0.05 and >1.5-fold differential regulation. Magnitude amplitude (MA) plots were generated by modifying a function within the edgeR package (v3.0.8).

Results

2B8 treated M. tuberculosis fails to grow in the presence of carbenicillin

In untreated M. tuberculosis, the presence or absence of carbenicillin had no effect on growth (Fig 1A and 1B). In contrast, when M. tuberculosis was treated with 125 μM 2B8, the lead 2-AI compound, the presence of carbenicillin abrogated growth (Fig 1D), whereas growth of 2-AI treated cultures was observed when carbenicillin was excluded from the media (Fig 1C).
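The differential-expression criteria used in the sequencing analysis above were implemented by the authors with edgeR in R; the Python sketch below only illustrates the selection arithmetic (a counts-per-million floor, then q < 0.05 and >1.5-fold change). Counts, library sizes, and q-values are invented; in practice edgeR's exact test supplies the q-values, and the ".any" reading of the CPM floor is an assumption about the filter's exact form.

import numpy as np

counts = np.array([
    [520, 610,  40,  35],   # gene A: treated rep1, rep2, control rep1, rep2
    [  5,   8,   6,   7],   # gene B: too few reads to pass the CPM floor
    [300, 280, 290, 310],   # gene C: expressed but unchanged
], dtype=float)
lib_sizes = np.array([1.2e7, 1.0e7, 1.1e7, 1.3e7])  # total mapped reads (hypothetical)

cpm = 1e6 * counts / lib_sizes
keep = (cpm >= 23).any(axis=1)   # one reading of the "<23 CPM" filter in the text

fold = counts[:, :2].mean(axis=1) / counts[:, 2:].mean(axis=1)
q = np.array([0.001, 0.90, 0.80])  # placeholder q-values

de = keep & (q < 0.05) & ((fold > 1.5) | (fold < 1 / 1.5))
for i in np.where(de)[0]:
    print(f"gene row {i}: fold change {fold[i]:.1f}, q = {q[i]:.3g}")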
2-AI treatment reduces the MIC of ß-lactams against mycobacteria

Based on the observation that 2B8 treated M. tuberculosis failed to grow in the presence of carbenicillin, it was hypothesized that 2-AI compounds could potentiate ß-lactams against mycobacteria. Therefore, the MICs of multiple ß-lactams against M. tuberculosis H37Rv and M. smegmatis were evaluated in the presence or absence of 2-AI compounds. Compared to the use of ß-lactams alone, combination with 2B8 significantly reduced MIC95 values against both M. smegmatis and M. tuberculosis (Table 1). For M. smegmatis, 2B8 reduced the MICs of the four ß-lactams tested (carbenicillin, amoxicillin, ceftazidime, and meropenem) at 25% of the 2B8 MIC (10 μM). MIC reduction was highest when 2B8 was used at 50% of its MIC (20 μM): 128-fold for cefotaxime and 32-fold for the other tested drugs. For M. tuberculosis, reduction in ß-lactam MICs became evident at a 2B8 concentration of 12.5% of its MIC (31.25 μM). When used at 50% of its MIC (125 μM), 2B8 reduced the MICs of the five tested ß-lactams at least 32-fold, with the highest fold-reductions of 128-fold observed for carbenicillin and meropenem. RA11, a 2B8 derivative differing only in its alkyl side chains [7], was also tested against M. tuberculosis. The addition of RA11 also resulted in a reduction in ß-lactam MICs, but to a lesser degree than that for 2B8. For example, the amoxicillin MIC was reduced 64-fold by 2B8 while RA11 only reduced the MIC 16-fold.

2-AI improves the bactericidal effect of ß-lactams

Since 2-AI compounds reduced the MICs of ß-lactams against M. tuberculosis, it was hypothesized that these compounds may augment bactericidal effects. This set of experiments focused on 2B8 because it showed a superior effect over RA11 in the MIC assays. It should be noted that at a high concentration (125 μM), 2B8 treated M. tuberculosis had impaired growth compared to non-treated culture (S2 Fig). Thus, the bactericidal activity was calculated from non-treated (for the ß-lactams only group), 2B8 only treated (2B8/ß-lactams combination group), and clavulanate only treated (clavulanate/ß-lactams combination group) cultures. For the four ß-lactams tested, co-treatment for five days with 2B8 led to a significant increase in bactericidal activity compared to ß-lactams alone. For carbenicillin (2 mg/L), amoxicillin (2 mg/L), and ceftazidime (1 mg/L), a dose-dependent effect was observed with increasing concentrations of 2B8 (Fig 2). The combination of ß-lactams with clavulanate, a widely used ß-lactamase inhibitor, was also evaluated. As expected, improved bactericidal activity was observed when clavulanate was combined with all tested ß-lactams, as depicted in Fig 2.

2-AI treated M. tuberculosis cultures have reduced ß-lactamase activity

An important factor contributing to M. tuberculosis intrinsic ß-lactam resistance is the synthesis and secretion of ß-lactamase [20,51,52]. Indeed, the combination of meropenem and a ß-lactamase inhibitor such as clavulanate (with or without amoxicillin) has been demonstrated to be effective [22,33]. Thus, the possibility that 2-AI compounds potentiate ß-lactams by reducing ß-lactamase activity was investigated. To evaluate whether the compounds have a direct effect on the enzyme's activity, 2-AI compounds were added to either purified Bacillus (B.) cereus ß-lactamase (provided in the colorimetric kit) or M. tuberculosis CFP, a rich source of mycobacteria-specific ß-lactamase [53], and the enzymatic activity was evaluated using the nitrocefin hydrolysis assay.
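The nitrocefin hydrolysis assay just described reduces to a slope calculation against a standard curve. A minimal sketch of that arithmetic follows (Python, with invented readings; the standard-curve values and normalization denominators are hypothetical):

import numpy as np

t_min = np.arange(0.0, 60.0, 5.0)        # absorbance read every 5 min
a490 = 0.05 + 0.004 * t_min              # hypothetical hydrolysis curve

std_nm = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])   # hydrolyzed nitrocefin, nM
std_a490 = np.array([0.05, 0.17, 0.29, 0.53, 1.01])      # hypothetical standards

abs_per_nm = np.polyfit(std_nm, std_a490, 1)[0]   # standard-curve slope
abs_per_min = np.polyfit(t_min, a490, 1)[0]       # hydrolysis-curve slope
rate_nm_per_min = abs_per_min / abs_per_nm

cfu = 2.0e7          # hypothetical viable count in the assay
protein_mg = 0.35    # hypothetical CFP protein in the assay
print(f"{rate_nm_per_min:.2f} nM nitrocefin/min")
print(f"{rate_nm_per_min / (cfu / 1e7):.2f} nM/min per 1e7 CFU")
print(f"{rate_nm_per_min / protein_mg:.2f} nM/min per mg protein")

The choice of denominator matters for the interpretation below: normalizing to CFUs asks how much active enzyme each culture released, while normalizing to secreted protein asks whether the enzyme itself is less active per unit of protein.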
Under these experimental conditions, 2-AI compounds did not directly inhibit ß-lactamase activity (Fig 3A and 3B). As expected, however, clavulanate efficiently inhibited ß-lactamase activity from M. tuberculosis CFP (Fig 3B). Alternatively, ß-lactamase activity was measured in CFP obtained from 2-AI treated M. tuberculosis cultures. M. tuberculosis cultures treated with RA11 or 2B8 showed a dose-dependent decrease in ß-lactamase activity after normalization of the data to CFUs (Fig 4A). Consistent with results obtained from the assays described above, 2B8 more effectively reduced ß-lactamase activity than RA11. Reduced ß-lactamase activity in 2-AI treated cultures correlated with a lower total protein concentration present in the CFP of these cultures (Fig 4B). In fact, no differences were observed between control and 2-AI treated cultures when ß-lactamase activity was normalized to protein concentration (Fig 4C). Clavulanate treatment also effectively decreased ß-lactamase activity when data were normalized to viable CFUs (Fig 4A), but did not have any effect on overall protein concentration in the CFP (Fig 4B). Therefore, clavulanate treatment still decreased ß-lactamase activity in the sample even when the data were normalized by total protein concentration (Fig 4C).

2-AI treatment alters M. tuberculosis cell envelope lipid composition

Intrinsic resistance of mycobacteria to ß-lactams has also been attributed to several unique features, including the low permeability of the lipid-rich cell envelope [54]. It was posited that 2-AI treatment could increase the susceptibility to ß-lactams by altering the mycobacterial cell envelope composition and increasing permeability to this class of antibiotics. To investigate if 2-AI compounds impair mycobacterial cell envelope lipid synthesis or composition, metabolic labeling of mycobacterial lipids was performed with radiolabeled acetate or propionate, and relative lipid abundance was analyzed by TLC and autoradiogram. After 24 h of 2-AI treatment, total radioactive counts from treated samples showed a dose-dependent decrease, implying reduced biosynthesis of cell envelope extractable lipids (data not shown). Thus, TLC loading was normalized to total radioactive counts so that every sample would have equal amounts of labeled total lipids. From TLC analysis of 14C acetate labelled extractable lipids (Fig 5A), it was observed that 2-AI treatment led to decreased TDM biosynthesis while accumulating its precursor trehalose monomycolate (TMM). Also, significantly less mycolic acid methyl esters (MAMEs) were extracted from 2-AI treated samples (Fig 5B). Consistent with the MIC assay results, 2B8 treatment resulted in a more dramatic effect than RA11 treatment. Importantly, the biosynthesis of total mycolic acids (determined as the sum of MAMEs from cell-bound and extractable lipids) was reduced in 2-AI treated bacilli (Fig 5C). In the TLC analysis of 14C propionate labelled extractable lipids, a significant decrease in sulfolipid-1 (SL-1) and polyacyltrehalose (PAT) biosynthesis in 2B8 treated cultures was observed (Fig 5D). However, diacyltrehalose (DAT), a precursor of PAT, accumulated with 2B8 treatment. Again, 2B8 treatment affected 14C propionate labelled lipids more pronouncedly than RA11.

2-AI treated M. tuberculosis becomes hypersensitive to SDS

The lipid-rich cell envelope of M.
tuberculosis is known to serve as an impermeable barrier to various chemicals and antibiotics [21]. Based on the alteration in cell envelope lipid composition after 2-AI treatment, it was hypothesized that 2-AI treated M. tuberculosis would become hypersensitive to membrane-targeting agents such as the detergent SDS. After 24 h treatment with 125 μM 2B8, M. tuberculosis cultures were exposed to 0.005% or 0.05% SDS. Following SDS exposure, CFUs were enumerated every hour for 4 h. As shown in Fig 6A, significant survival differences were observed between non-treated and 2B8-treated cultures exposed to 0.005% SDS (left panel). The viability of 2B8 treated cultures exposed to SDS dropped more rapidly and to a greater extent than non-treated cultures. As expected, when exposed to SDS at 0.05%, the viability of all the samples declined throughout the course of exposure. However, the decline was more rapid and pronounced in 2B8 treated M. tuberculosis cultures (Fig 6A, right panel).

2-AI treatment increased M. tuberculosis permeability to multiple nucleic acid staining dyes

Accumulation of nucleic acid staining dyes such as EtBr and Sytox Orange has been used to evaluate M. tuberculosis cell envelope permeability [38,39,55]. Based on the altered cell envelope lipid composition and increased sensitivity to SDS, it was hypothesized that 2-AI treatment increased M. tuberculosis cell envelope permeability. EtBr or Sytox Orange was applied simultaneously with 2-AI compounds, and the kinetics of the fluorescence signal due to dye accumulation was monitored over time. M. tuberculosis treatment with 125 μM 2B8 resulted in a time-dependent increase in EtBr fluorescent signal that was significantly higher than the untreated control (Fig 6B). This increase was not seen in RA11 treated cultures. Net dye accumulation determined upon completion of the assay was also significantly higher in 2B8 treated cultures (S4A Fig). Reserpine, an inhibitor of the EtBr efflux pump [56], also increased EtBr accumulation within M. tuberculosis; however, this increase was not statistically significant (Fig 6B). The same trend was observed using Sytox Orange, as both the time-dependent increase and net dye accumulation were significantly elevated in 2B8, but not in RA11 treated cultures (Fig 6C and S4B Fig). In agreement with a recent report [39], thioridazine (an additional positive control) also increased Sytox Orange accumulation (Fig 6C). Flow cytometry was performed to further evaluate the permeability of 2-AI treated M. tuberculosis to a third nucleic acid staining dye, PI. Despite sharing a similar chemical structure with EtBr, PI is a better indicator of plasma membrane integrity, due to its additional positive group [57]. In contrast to the above results with EtBr (Fig 6B) and Sytox Orange (Fig 6C), M. tuberculosis treatment with 2-AI compounds did not acutely increase permeability to PI when evaluated at 30 min and 120 min post-exposure (Fig 6D). However, approximately 30% of M. tuberculosis became permeable to PI, a classical marker of cell death, upon prolonged exposure to 2B8 for 24 h. Again, M. tuberculosis membrane disruption was greater for 2B8 than RA11 (Fig 6D). Finally, the two positive controls, SDS and MCA, showed potent M. tuberculosis cell membrane disrupting capacity, albeit with different kinetics. As expected, a detergent like SDS acutely lysed bacteria within 30 min (Fig 6D), while the effect of MCA only became significant after 24 h. As with the ß-lactams (Table 1), the MIC of vancomycin (S1 Table) and penicillin V against M.
tuberculosis was also lower after exposure to 2-AI compounds. There was a significant increase in binding of both fluorescent penicillin V (Fig 7A and S5 Fig) and vancomycin (Fig 7B and S5 Fig) to M. tuberculosis compared to controls after 30 min treatment with 2B8. For both fluorescent antibiotics, binding to treated M. tuberculosis increased proportionally to treatment duration. Treatment with RA11 also increased antibiotic binding to M. tuberculosis, but the extent was significantly less compared to 2B8. The binding of fluorescent vancomycin to M. tuberculosis increased when treated with MCA (Fig 7B), as expected considering meropenem's inhibition of the mycobacterial D,D-carboxypeptidase that cleaves vancomycin's target, the D-Ala-D-Ala peptide motif [40]. However, this effect was delayed and only became evident after 24 h treatment. Moreover, despite being a potent disruptor of the cell membrane (Fig 7A and 7B), SDS only minimally increased binding of either BODIPY® FL vancomycin or BOCILLIN® to M. tuberculosis, suggesting that direct membrane disruption does not underpin activity. The specificity of BODIPY® FL vancomycin's staining was confirmed by a competitive inhibition assay and fluorescent microscopy. Increasing amounts of unlabeled vancomycin competitively inhibited binding of BODIPY® FL vancomycin to 2B8 treated M. tuberculosis (S6 Fig). These data confirm that the fluorescent drug binds specifically to its cognate target, rather than non-specifically through the BODIPY® moiety. Furthermore, in accordance with previous publications [58,59], punctate staining predominantly at the mycobacterial poles was observed when M. tuberculosis was stained with BODIPY® FL vancomycin (Fig 7C). Consistent with the flow cytometry results, 2B8 treatment increased the number of stained bacteria and the fluorescence intensity versus the DMSO-treated control (Fig 7C). BOCILLIN®'s fluorescence was not bright enough for microscopy (data not shown).

M. tuberculosis transcriptional responses to 2B8

To further characterize the impact of 2-AI compounds on M. tuberculosis physiology, transcriptional profiling of M. tuberculosis exposed to 2B8 for 2 and 24 hours was performed. M. tuberculosis H37Rv was treated with 125 μM 2B8, and following 2 or 24 h of treatment, RNA was isolated and transcriptional profiles were determined by RNA-sequencing (RNA-seq). Upregulated or downregulated genes (>1.5-fold with a q < 0.05) were identified after 2 h (S3 Table) or 24 h (S4 Table) post-treatment. To identify genes with both early and sustained differential gene expression, the gene lists at 2 and 24 h (S5 Table) were compared for common differentially regulated genes (Fig 8A). At both time points, 124 genes were induced and 77 genes were repressed (>1.5-fold, q < 0.05).

[Displaced Fig 6 caption fragment: C) 2B8 significantly increased accumulation of Sytox Orange. D) As indicated by increased positive staining with PI analyzed by flow cytometry, SDS acutely (30 min) permeabilized the M. tuberculosis cell membrane, whereas 2B8 and RA11 treatment only compromised membrane integrity past 2 h; disruption was more pronounced with 2B8 than RA11 (24 h), and MCA treatment also resulted in increased PI staining after 2 and 24 h of treatment. *p<0.05, **p<0.01, ***p<0.001 by ANOVA (for the EtBr and Sytox Orange accumulation assays, statistical significance is marked at the 90 min time-point). Experiments were carried out three separate times and representative data are shown. https://doi.org/10.1371/journal.pone.0180925.g006]

Genes encoding several transcriptional regulators, including the alternative sigma factors sigB, sigE and sigK and the response regulator mprA, were induced (1.5 to 1.8-fold at 2 h, 2 to 3.7-fold at 24 h) (Fig 8B). 2B8 treatment enhanced expression of SigK regulated genes including mpt83, dipZ, mpt70, and rv0449c [60], supporting that the SigK regulon was induced in a sustained manner by 2B8 (Fig 8B). Other genes of interest that were strongly induced at both time-points include: prpCD, which is proposed to play a role in propionate detoxification [61], and rv3160c and rv3161c, a putative dioxygenase and its regulator, previously shown to be strongly regulated by triclosan and suggested to be involved in degradation of arenes [62]. Finally, it was noted that mmpL8 (2 and 24 h) and mmpL10 (24 h), involved in SL-1 and acyl-trehalose export, respectively [63,64], were induced by 2B8 treatment (Fig 8B). The downregulated genes at both time-points show a strong signature for inhibition of genes associated with mycolic acid biosynthesis (Fig 8B), including fas, fabD, acpM, kasAB, pks13 and fadD32. This result is consistent with 2-AI-mediated modulation of mycolic acid biosynthesis determined by the metabolic labeling experiment (Fig 5A and 5B). Multiple genes involved in peptidoglycan biosynthesis, such as the mur family, were repressed in response to treatment. However, genes involved in protein secretion were not generally modulated by 2B8. The blaC gene and other recently identified genes [65,66] encoding ß-lactamases were also unaffected (Fig 8B). Finally, one of the most significantly downregulated genes was rv0280, which encodes a member of the PE/PPE family with no known function to date, but previously reported to be downregulated in a phoP mutant strain [67].

Discussion

While investigating the mechanism of action of 2-AI compounds against non-replicating mycobacteria in vitro, these compounds were serendipitously observed to potentiate ß-lactam antibiotics against M. tuberculosis. Specifically, 2-AI treated M. tuberculosis failed to grow on agar containing carbenicillin. 2-AI compounds were subsequently shown to effectively reduce MICs and bactericidal concentrations of multiple ß-lactams against M. tuberculosis. It is noteworthy that the effect of 2-AI was not limited to a specific class of ß-lactams. Indeed, 2-AI's effect was observed (albeit to a different degree) across all ß-lactams tested, including a carboxypenicillin (carbenicillin), an aminopenicillin (amoxicillin), a third generation cephalosporin (ceftazidime), and a carbapenem (meropenem). In essence, 2-AI compounds effectively nullified M. tuberculosis intrinsic ß-lactam resistance. Based upon this observation, the goals of this study were to determine if 2-AI compounds circumvent M. tuberculosis intrinsic resistance to ß-lactams and, if so, to elucidate the mechanism of action. It is reported herein that selected 2-AI compounds were effective in reducing ß-lactam MICs against M. tuberculosis while also improving their bactericidal activity. Mechanistic studies have revealed that 2-AI compounds achieve this response by at least two distinct mechanisms: a) increasing M. tuberculosis cell envelope permeability and accessibility of cell wall targeting drugs and b) reducing M. tuberculosis ß-lactamase secretion.
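As an aside on the transcriptional profiling above: the "common differentially regulated genes" comparison of Fig 8A is, computationally, a set intersection between the 2 h and 24 h gene lists. The toy sketch below uses a handful of gene names taken from the text purely for illustration; the real lists are in S3-S5 Tables.

up_2h = {"sigB", "sigE", "sigK", "mprA", "prpC", "prpD", "mmpL8"}
up_24h = {"sigB", "sigE", "sigK", "mprA", "mmpL8", "mmpL10", "rv3160c"}
down_2h = {"fas", "fabD", "acpM", "kasA", "kasB", "pks13", "fadD32"}
down_24h = {"fas", "fabD", "acpM", "kasA", "pks13", "rv0280"}

print("induced at both time points:", sorted(up_2h & up_24h))
print("repressed at both time points:", sorted(down_2h & down_24h))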
Several lines of evidence indicate that a major mechanism contributing to ß-lactam potentiation by 2-AI compounds is the alteration of the mycobacterial cell envelope and an increase in the accessibility of ß-lactams to their target [12,13,16,21,54,55]. Indeed, as early as 30 min post-treatment with 2-AI compounds, increased M. tuberculosis permeability to a fluorescent ß-lactam or glycopeptide antibiotic, such as penicillin V and vancomycin, respectively, was observed. In order to reach their targets located within the periplasmic space delimited between the outer and inner cell membrane, these antibiotics first have to diffuse through the outer cell membrane [19,24,68,69]. It is well documented in both mycobacteria and Gram-negative bacteria that the presence of an outer cell membrane acts as a permeability barrier to limit the diffusion of hydrophilic molecules [55,[70][71][72][73]. Specifically, in mycobacteria, the outer cell membrane impermeability is further enforced by the presence of mycolic acids and complex lipids such as TDM and PDIM in the inner and outer leaflets [24].

[Displaced Fig 8 caption fragment: genes below the fold-change cutoff and statistical significance (q<0.05) were considered no change (1 on the fold-change scale, white color). *Hypothetical protein, **Conserved hypothetical protein, ***Upregulated in response to triclosan. https://doi.org/10.1371/journal.pone.0180925.g008]

The fact that 2B8-treated M. tuberculosis was readily stained with a bulky antibiotic (such as BODIPY® FL vancomycin) within 30 min, in contrast to the multiple hours usually required to stain mycobacteria [58,74], strongly suggests that 2B8 increases the mycobacterial outer cell membrane permeability. This hypothesis was further supported by the acute (90 min) increased permeability of 2B8-treated M. tuberculosis to the nucleic acid staining dyes EtBr and Sytox Orange, which are routinely used as indicators of mycobacterial cell envelope integrity [54,55]. The ability to increase the outer membrane permeability could potentially be attributed to the 2-AI compounds' alkyl chain(s) and could explain the different structure-function relationships of several related 2-AI compounds sharing the same 2-aminoimidazole polar head group but derivatized with distinct apolar alkyl chains. As suggested by the results with fluorescent antibiotics and nucleic acid staining dyes, 2B8's branched, albeit short, alkyl chain induces a superior effect on the mycobacterial outer cell membrane permeability compared to RA11's straight alkyl chain containing 11 carbons. Derivatization of the 2-aminoimidazole group with a straight 13-carbon alkyl chain (as in another 2B8 derivative, RA13) completely abrogated the compound's activity in the mycobacterial biofilm dispersion assay [7], as well as in some of the assays performed herein (data not shown). This dependence on a critical chain length and structure of the hydrophobic tail could explain why SDS, with a straight 12-carbon acyl chain, failed to dramatically potentiate ß-lactams (S2 Table) or increase binding of fluorescent antibiotics to M. tuberculosis despite being a potent inner cell membrane disruptor (as determined by PI staining). Meanwhile, mycobacterial inner cell membrane disruption with 2B8 was only observed for a small fraction of bacilli after 24 h. In contrast, Stowe et al. previously reported acute and dramatic A. baumannii lysis in the presence of reverse-amide 2-AIs [12].
Beyond the role played by their different hydrophobic tails, the opposing outcomes induced by 2-AI compounds or SDS could also be determined or modulated by their respective polar head groups and charges, which remains to be investigated. Collectively, these results indicate that ß-lactam potentiation by 2-AI compounds is uncoupled from direct inner cell membrane disruption.

Further evidence that 2-AI compounds affect the mycobacterial cell envelope was obtained from metabolic labeling experiments evaluating M. tuberculosis cell envelope lipid composition, as well as from transcriptional responses to 2B8. 2-AI treated M. tuberculosis had a dramatic decrease in mycolic acids covalently esterified to the cell wall (MAMEs), likely resulting from a combination of: a) reduced biosynthesis, b) a defect in TMM export, and c) decreased crosslinking to arabinose residues in the mycolyl-arabinogalactan-peptidoglycan (mAGP) complex. Genes encoding FAS-I (which synthesizes mycolic acid's α-alkyl chain), FAS-II (which synthesizes mycolic acid's meromycolate chain) and pks13 (which condenses both chains to form mycolic acids) [75,76] were among the most significantly down-regulated genes at 2 h post-treatment. Furthermore, mycolic acid export was also compromised by 2B8 treatment, as suggested by the inverse relationship between high TMM (precursor) and low TDM (end product) levels. However, this defect was not a consequence of reduced mmpL3 transcription, recently identified to encode a TMM transporter [77]. Finally, by down-regulating expression of two members of the mycolyl-transferase complex (fbpA and fbpB), which catalyze mycolic acid-arabinose linkage [78], 2B8 could further compound the deficit of mycolic acid-dependent fortification of the mycobacterial cell wall core. Trehalose-based lipids such as SL-1 and PAT were also less abundant in 2-AI treated bacilli, while PAT's precursor, DAT, accumulated. Again, this deficit was not due to decreased transcription of their cognate transporters. In fact, transcription of mmpL8 and mmpL10, encoding the SL-1 [63] and DAT [64] transporters, respectively, was upregulated. The fact that 2-AI treatment leads to mycobacterial accumulation of unexported lipid precursors without decreased transporter transcription suggests that the defect lies elsewhere, perhaps at the level of the proton motive force (PMF) generation required for MmpL-catalyzed transport of related lipids [79]. This line of research is currently being evaluated. Finally, it was recently shown that methyl-branched lipid biosynthesis acts as a propionate sink to limit its toxicity [61,80,81]. Interestingly, both prpC and prpD of the methyl-citrate cycle, which plays a role in detoxifying propionate, were induced 17- and 34-fold, respectively, following 2 h of 2B8 treatment. This strong induction suggests that 2B8 may result in propionate toxicity, possibly as part of changes to the cell envelope. Altogether, beyond the acute increased outer cell membrane permeability discussed above, alterations in the biosynthesis and/or export of M. tuberculosis cell envelope lipids likely contribute to the ß-lactam potentiation and enhanced SDS sensitivity induced by 2-AI compounds.

Bacteria respond to cell envelope stress by modulating their transcriptome through the activation of transcription regulators. Consistent with this, at 2 h after treatment, 2B8 induced several transcriptional regulators including the alternative sigma factors sigB, sigE, and sigK and the response regulator mprA.
Notably, the two-component regulatory system MprAB regulates sigB and sigE [82] in response to envelope stresses, such as SDS or Triton X-100 treatment, which is consistent with 2-AI promoting envelope stress and stimulating the MprAB regulatory network. Both SigE and SigK belong to the family of extracytoplasmic-function sigma factors (ECF), kept inactive/transcriptionally silent by remaining tethered to a transmembrane protein, an anti-sigma factor [83][84][85]. In the presence of extracellular stress, proteolytic cleavage of the anti-sigma factor releases the cognate sigma factor to become transcriptionally active and regulate the expression of multiple genes. Particularly interesting was the upregulation of sigK and SigK regulated genes such as mpt70, mpt83, dipZ, and rv0449c [60], supporting that the SigK regulon is induced in a sustained manner by 2B8. Unfortunately, not much is known about the biological role of the SigK regulon, besides the fact that mpt70/83 encode highly immunogenic proteins and that their expression levels significantly differ between members of the M. tuberculosis complex [60,86]. Two redox-sensitive cysteine residues were recently suggested to regulate the transcriptional activity of SigK by altering the interaction with RskA, the cognate anti-sigma factor [87]. This is a recurring mechanism regulating other ECF such as SigF and SigL [88,89]; however, upregulation of these sigma factors or their regulons in response to 2B8 was not observed. Thus, the mechanism leading to the specific upregulation of the SigK regulon by 2B8 is still unclear.

The M. tuberculosis blaC gene encodes a highly active class A ß-lactamase, which significantly contributes to M. tuberculosis intrinsic ß-lactam resistance [20,51]. Therefore, studies have attempted to circumvent this resistance by combining ß-lactams with a ß-lactamase inhibitor such as clavulanate. This combination has indeed proven to be promising against M. tuberculosis [22,32,90]. Thus, it was important to determine if 2-AI compounds potentiate ß-lactams by interfering with any aspect of M. tuberculosis ß-lactamase function. 2-AI compounds did not directly inhibit ß-lactamase activity like the classical inhibitor clavulanate, supporting our recently published results obtained with additional compounds derived from the 2-AI scaffold [91]. This result was expected considering that 2-AI compounds similarly potentiated ß-lactams regardless of their respective susceptibility to ß-lactamase degradation, suggesting that ß-lactamase inhibition was not the mechanism driving potentiation. Comparable MIC fold-reductions were observed for carbenicillin/amoxicillin/penicillin V (early generation ß-lactams highly susceptible to ß-lactamase) and meropenem (a carbapenem with superior resistance to ß-lactamase). Furthermore, ß-lactam potentiation by 2B8 was not due to downregulation of genes encoding ß-lactamases, such as blaC (rv2068c), rv0406, rv2421c, rv2422, and rv3677, or those encoding the twin-arginine protein translocating system (tat), responsible for BlaC secretion [92]. However, reduced ß-lactamase activity in the CFP of 2B8 treated M. tuberculosis correlated with reduced protein concentration in this fraction. As 2B8 did not alter the transcription of genes encoding other major M.
tuberculosis protein secretion systems (namely SecA, SecA2 or type VII), it is currently being explored whether these compounds affect other parameters, such as the mycobacterial bioenergetics required for protein secretion and lipid export, as described above. Thus, it is concluded that reduced ß-lactamase secretion could contribute in part to ß-lactam potentiation by 2-AI compounds. Other aspects of M. tuberculosis intrinsic resistance to ß-lactams that could be affected by 2-AI treatment but were not directly evaluated include the role of efflux pumps, peptidoglycan structure, and the PBPs or L,D-transpeptidases involved in peptidoglycan crosslinking [16,93]. However, through transcriptional profiling it was possible to rule out decreased gene expression of efflux pumps, PBPs or L,D-transpeptidases as a mechanism explaining ß-lactam potentiation by 2-AI compounds. In fact, 2B8 induced rv1218c and rv1258c, previously shown to be specifically involved in ß-lactam resistance [56,94], as well as other efflux pumps associated with resistance to bedaquiline and clofazimine (mmpL5) [95]. Nevertheless, it cannot be excluded that 2-AI compounds have an indirect effect on ß-lactam efflux pumps, via alteration of mycobacterial bioenergetics as discussed above. Finally, as suggested by lower expression levels for mur genes, 2B8 could potentially be reducing the amount of uncrosslinked peptidoglycan precursors, the substrates for PBPs and L,D-transpeptidases. Ultimately, in the face of dwindling amounts of peptidoglycan precursors, PBP and L,D-transpeptidase inhibition by ß-lactams would have additive catastrophic effects.

Taken together, the data suggest that 2-AI compounds potentiate ß-lactams by affecting M. tuberculosis cell envelope integrity and ß-lactamase secretion. Evidently, a limitation in this study was the high concentration of 2-AI compounds required for most of the assays, including ß-lactam potentiation, which started to occur at 31.25 μM. Through medicinal chemistry, a second generation of 2-AI compounds is being developed with similar activity against M. tuberculosis at lower concentrations. Importantly, the findings in this study are a proof of concept of the possibility of potentiating ß-lactams with 2-AI compounds as a novel therapeutic regimen for drug-susceptible and multi-drug resistant TB.

[S2 Fig caption: M. tuberculosis H37Rv was plated on 7H11 agar after five days of culture with or without clavulanate or increasing concentrations of 2B8, and CFUs were enumerated three weeks later. Compared to control, CFUs from cultures containing 2B8 were significantly lower, suggesting that 2B8 by itself affects normal growth of M. tuberculosis. In contrast, clavulanate did not affect bacterial growth.]
v3-fos-license
2019-05-04T13:04:39.390Z
2019-04-01T00:00:00.000
143434836
{ "extfieldsofstudy": [ "Medicine", "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://asa.scitation.org/doi/pdf/10.1121/1.5093546", "pdf_hash": "1b75c3db7669d1c4eb7e4d67eb867f88c2124816", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41548", "s2fieldsofstudy": [ "Physics", "Psychology" ], "sha1": "5ad19a851d2d636c5c743f27dc9f5dd55de139df", "year": 2019 }
pes2o/s2orc
Noise edge pitch and models of pitch perception

Monaural noise edge pitch (NEP) is evoked by a broadband noise with a sharp falling edge in the power spectrum. The pitch is heard near the spectral edge frequency but shifted slightly into the frequency region of the noise. Thus, the pitch of a lowpass (LP) noise is matched by a pure tone typically 2%–5% below the edge, whereas the pitch of highpass (HP) noise is matched a comparable amount above the edge. Musically trained listeners can recognize musical intervals between NEPs. The pitches can be understood from a temporal pattern-matching model of pitch perception based on the peaks of a simplified autocorrelation function. The pitch shifts arise from limits on the autocorrelation window duration. An alternative place-theory approach explains the pitch shifts as the result of lateral inhibition. Psychophysical experiments using edge frequencies of 100 Hz and below find that LP-noise pitches exist but HP-noise pitches do not. The result is consistent with a temporal analysis in tonotopic regions outside the noise band. LP and HP experiments with high-frequency edges find that pitch tends to disappear as the edge frequency approaches 5000 Hz, as expected from a timing theory, though exceptional listeners can go an octave higher. © 2019 Acoustical Society of America. https://doi.org/10.1121/1.5093546 [JJL] Pages: 1993-2008

I. INTRODUCTION

In 1963, Békésy observed that an octave band of noise (400-800 Hz) produces two pitch sensations, one near each frequency edge of the noise spectral band. Békésy attributed the pitch sensations to lateral inhibition at the edges and compared them to Mach bands in vision. Small and Daniloff (1967) simplified Békésy's experiment by using lowpass (LP) noise bands and highpass (HP) noise bands. Their listeners adjusted the edge frequency of a noise band to produce a pitch an octave higher or lower than the pitch of a noise band with a standard edge frequency. Fastl (1971) performed pitch matching experiments in which the pitches elicited by LP and HP noise bands with sharp edges were matched by adjusting the frequency of a sine tone.
In all of these early studies, the average matching frequencies were reported to be the same as the edge frequencies (e.g., Zwicker and Fastl, 1999), but later studies, as described below, showed pitch shifts where the matching frequencies deviate systematically from the edge frequencies.

In their work on binaural edge pitch, Klein and Hartmann (1981) recorded pitch matches for diotic LP and HP noise bands with sharp edges as a diotic analog to binaural edge pitch. Their pitch matching data consistently revealed pitch shifts away from the edge and into the noise. Thus, the pitch of a LP noise was found to be slightly below the edge frequency, and the pitch of a HP noise was found to be slightly above the edge frequency. Klein and Hartmann generated their stimuli digitally in the frequency domain, leading to spectral edges with a 30-dB discontinuity at the edge frequency. These edges were much sharper than were available with the analog filters used by previous studies. For example, the filters used by Small and Daniloff (1967) had slopes of 35 dB/octave, and those used by Fastl were 120 dB/octave. Digital noise generation was able to reveal pitch shifts for several reasons. First, the sharp edges removed the uncertainty intrinsic to analog filtering about how the edge frequency, f_e, should be defined. Second, the sharp edges led to relatively stronger pitch sensations that allowed for more precise matching. The observed pitch shifts were 5%-10% of the edge frequency, f_e, in the range 200 Hz < f_e < 400 Hz, and the shift percentages became smaller with increasing edge frequency. These observed shifts afford an opportunity for experimental tests of pitch perception models. Pitch shifts as reported in this paper indicate that a temporal theory, implemented here by an analytic model, is required for low edge frequencies, while a place model, as envisioned by Békésy, likely applies for high edge frequencies.

B. Plan of the paper

In Sec. II, an autocorrelation-based model of pitch is presented. The model predicts pitches based on an apparent periodicity determined by the pattern of lag times of the peaks of the autocorrelation function (ACF). Because of the unusual structure of its ACF, the noise edge stimulus is an especially powerful test for pitch models based on neural timing. A sinc function approximation to the ACF is used to predict pitches. Model predictions are compared with the data of Klein and Hartmann (1981) for both LP and HP noise. Despite its simplicity, the sinc-autocorrelation function (sinc-ACF) model successfully reproduces major features of the data. Success depends on incorporating multiple peaks of the ACF in the pitch computation. In Sec. III, a competing, place-based, lateral-inhibition model is presented using physiological and psychophysical parameters. This place model is almost as successful in matching the 1981 data. In Sec. IV, new experimental data for LP and HP noise test the low-edge-frequency limit for edge pitch. The relative weakness of pitch in the HP case is attributed to the restricted tonotopic region for temporal coding of the low-frequency edge. In Sec. V, pitch-interval identification data show that the edge pitch qualifies as a true musical pitch. Section VI presents experimental data for LP and HP noise with high edge frequencies, testing the upper limits of edge-pitch perception. In Sec. VII, edge pitch is related to more general models and experiments. Finally, Sec. VIII is a summary.
The mathematical foundation for the sinc-ACF model is given in Appendixes A and B.

II. AUTOCORRELATION MODEL

As a consequence of phase-locking in auditory nerve fibers, the temporal pattern of neural spikes is highly correlated with the stimulus waveform, after taking into account cochlear filtering and auditory transduction. As per the models by Licklider (1959), Meddis and Hewitt (1991a,b), and Patterson et al. (2000), the temporal character of our model is represented by an ACF, a representation that highlights the periodic character, or approximate periodic character, of waveforms that lead to pitches. The model estimates pitches using an algorithm based on the lags of the peaks in the ACF. Noise edge pitch (NEP) offers a particularly interesting pattern of peaks.

A. Autocorrelation and pitch

Because the noise stimuli of interest are broadband, a physiologically detailed model might begin by dividing the noise spectrum into auditory filter bands. Cochlear auditory filtering might be followed by half-wave rectification and compression, known to apply to the auditory periphery. Then, autocorrelation may be calculated within each band. Subsequently, ACFs for the separate auditory bands may be summed to generate a "summary autocorrelogram" (Meddis and Hewitt, 1991a,b) or "population interval distribution" (Cariani and Delgutte, 1996a,b). Although we have investigated models like that in the past (Hartmann et al., 2015), the model in this report is much simpler. Specifically, it makes a linear approximation for the periphery. With that approximation, a summary autocorrelogram, summed over contiguous, rectangular bands, is mathematically the same as the ACF for the broadband stimulus as explored here. The initial modeling in this section also assumes that the noise stimulus has no intrinsic variability. The noise spectrum is approximated by its long-term average.

B. The sinc-autocorrelation model

The long-term average power spectrum for a noise edge stimulus can be represented as a rectangle. For the LP NEP, the rectangle extends from zero to the edge frequency f_e. For the HP NEP, the rectangle extends from f_e to infinity. This report first treats the LP stimulus in detail and then shows how a simple modification applies to the HP condition.

LP noise

The ACF is the inverse Fourier transform of the power spectrum. Because the long-term average power spectrum of a LP noise with mean power P is either zero or constant at P/(2π f_e), the ACF is a sinc function of lag s,

a_LP(s) = sin(2π f_e s) / (2π f_e s),    (1)

for P = 1. This function can be thought of as an approximation to the all-order interspike-interval histogram. The function is shown in Fig. 1 for an edge frequency of 200 Hz. We assume that the pitch of the LP NEP is determined by the values of lag s where a_LP(s) has peaks. The prediction of an edge pitch from the peaks in the sinc-ACF is derived in Appendix A. To a good approximation, the first peak of a_LP(s) (after the zeroth peak at the origin) occurs at f_e s = 5/4 cycles. To an even better approximation, all the other positive peaks are separated from the first by integer multiples of the period. Therefore, the nth peak occurs very near the lag value of

s_n = (n + 1/4)/f_e,  n = 1, 2, 3, …, N.    (2)

This approximation is tested in Appendix B. As noted by Cariani (2004), each of these peaks is a potential temporal pitch cue, but because of the finite auditory integration time, only the first N of them are important to the pitch sensation. The value of N is a critical matter addressed in this paper.
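As a quick numerical check of Eq. (2), not taken from the paper, the small Python sketch below locates the maxima of the sinc ACF on a dense lag grid and compares them to (n + 1/4)/f_e:

import numpy as np

fe = 200.0  # edge frequency in Hz, as in Fig. 1

def a_lp(s):
    # a_LP(s) = sin(2*pi*fe*s) / (2*pi*fe*s), the sinc ACF of Eq. (1)
    x = 2 * np.pi * fe * s
    return np.sin(x) / x

for n in range(1, 6):
    approx = (n + 0.25) / fe                   # Eq. (2) approximation
    s = np.linspace(approx - 0.2 / fe, approx + 0.2 / fe, 200001)
    exact = s[np.argmax(a_lp(s))]              # true local maximum on the grid
    print(f"n={n}: fe*s approx {fe*approx:.4f}, exact {fe*exact:.4f}")

The exact peaks fall slightly below (n + 1/4)/f_e (e.g., 1.2295 rather than 1.2500 for n = 1), which is the sense in which Eq. (2) is an approximation, tested more carefully in the paper's Appendix B.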
If only the first peak is included (N = 1), then [from Eq. (2)] the pitch cue is τ_1 = 1.25/f_e, and the predicted pitch becomes p = 1/τ_1 = f_e/1.25. Thus, the ratio of pitch to the edge frequency is p/f_e = 1/1.25 = 0.8, i.e., the pitch is predicted to be 20% below the edge frequency. That prediction gets one thing right: for a LP noise, the perceived pitch is lower than the edge frequency. However, this predicted pitch shift away from the edge is too large by a factor of about 5. The experimental pitch match is usually lower than f_e by much less than 20%, typically near 4%. Apparently the first peak in the ACF is an inadequate cue for the pitch shift for NEP. The problem is solved by incorporating more peaks.

The model which relates the multiple peaks of the ACF to a pitch prediction is described in Appendix A. There, it becomes evident that including more peaks (larger N) in the computation of pitch maintains the sign of the predicted shift but reduces its magnitude. It is reasonable to suppose that N is limited by a temporal window, defined by a maximum lag time τ_max, over which peaks can be obtained. Because N is approximately equal to f_e τ_max, the number of important peaks decreases as the edge frequency decreases. It will be seen that this effect predicts that the pitch shift percentage should increase with decreasing edge frequency, if it is assumed that τ_max is relatively insensitive to edge frequency or tonotopic region. Appendix A shows that the ratio of predicted pitch to edge frequency for LP noise is given by

p/f_e = 1/{1 + [ln(N) + γ + 1/(2N)]/(4N)},    (3)

where γ is the Euler-Mascheroni constant, γ ≈ 0.57722. This formula should hold good in the case that N is not too small. Also in that case, N ≈ f_e τ_max, and p/f_e can be computed making that substitution for N in Eq. (3). However, N is an integer, and it is only reasonable to take the integer part of the continuous function, i.e., N = INT(f_e τ_max). That was assumed for the computation of the final pitch predictions shown in Fig. 2.

HP noise

Appendix A also shows that the corresponding prediction for HP noise is

p/f_e = 1/{1 − [ln(N) + γ + 1/(2N)]/(4N)}.    (4)

Two predictions follow from the equations above: first, the pitch of HP NEP should be above the edge frequency just as the pitch of LP NEP should be below. Second, the percentage shift for HP NEP [Eq. (4)] should be somewhat larger in magnitude than the percentage shift for LP NEP [Eq. (3)]. The latter follows because

|1/(1 − x) − 1| > |1/(1 + x) − 1|,    (5)

for x small and positive. The temporal model introduced here for LP and HP noise bands and based on a limited number of peaks of the sinc function will be called the "sinc-ACF model."
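Equations (3) and (4) can be evaluated directly. The sketch below is a minimal implementation (function and variable names are ours, not from the original study) that computes the predicted pitch-to-edge ratio for LP and HP noise, given an edge frequency and a lag window τ_max.

import numpy as np

GAMMA = 0.57722   # Euler-Mascheroni constant

def pitch_ratio(fe, tau_max, highpass=False):
    # Predicted pitch / edge frequency from the sinc-ACF model.
    # fe: edge frequency (Hz); tau_max: lag window (s).
    # highpass=False -> Eq. (3); highpass=True -> Eq. (4).
    N = int(fe * tau_max)                 # N = INT(fe * tau_max)
    if N < 1:
        return float('nan')               # no ACF peak in the window
    shift = (np.log(N) + GAMMA + 1.0 / (2 * N)) / (4 * N)
    return 1.0 / (1.0 - shift) if highpass else 1.0 / (1.0 + shift)

for fe in (200.0, 400.0, 1000.0, 2500.0):
    print(f"fe = {fe:6.0f} Hz: p/fe = "
          f"{pitch_ratio(fe, 30e-3):.4f} (LP), "
          f"{pitch_ratio(fe, 30e-3, highpass=True):.4f} (HP)")

With a 30-ms window this gives, for example, p/f_e ≈ 0.91 (LP) and 1.11 (HP) at 200 Hz, with both ratios approaching 1 as the edge frequency increases, consistent with the trends in Fig. 2.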
C. Edge pitch listening experiment

In Fig. 2, the predictions from the sinc-ACF model are compared to the pitch matches by three listeners in the experiment by Klein and Hartmann (1981). The details of the experiment can be found in the original publication. Briefly, there were three male listeners, G, M, and W (ages 22, 22, and 41 yr), with normal hearing. They used a sine tone with adjustable frequency and level to match the pitch of noise bands with sharp edges (30-dB discontinuity). The 12-bit noise stimuli were made by adding 251 equally spaced sine components of equal amplitude and random phase. Stimuli were presented through headphones at 60 dBA sound pressure level (SPL) in a sound-treated room.¹

Figure 2 indicates notable qualitative agreement among the three listeners, at least above 200 Hz, where there was no exception to the rule that the mean matching frequencies were shifted into the region of spectral power for both HP and LP noise. The sinc-ACF model is consistent with that rule.

FIG. 2. Matching data of Klein and Hartmann (1981), filled symbols for HP noise and open symbols for LP noise. Each diamond symbol shows the mean of at least four matches. Error bars are two standard deviations in overall length. Most error bars are smaller than the points, but error bars become larger below 200 Hz, where matching became difficult and data may be less reliable. Lines marked with circles, triangles, and squares indicate the predictions of the sinc-ACF model from Eqs. (3) and (4), for a 30-ms temporal window so that the number of peaks is N = 5 (LP) or 6 (HP). The mathematics of Appendix A puts those two predictions between 1/f_e and the first peak of the sinc-ACF at τ_1 = (1 ± 1/4)/f_e. Open diamond symbols show integer multiples of τ̂. The insets show the rectangular model power spectra.

D. Comparing the temporal model and experiment

Predictions from the sinc-ACF model for LP and HP NEP are shown in Fig. 2 for three values of the maximum lag, τ_max = 15, 30, and 60 ms. Figure 2 shows that the pitch shifts are larger as a percentage of the edge frequency for lower edge frequencies, though the experimental shifts below 200 Hz are uncertain. The model calculations agree: the predicted pitch shifts increase with decreasing edge frequency. The shifts increase because, with a given time window for autocorrelation (e.g., τ_max = 30 ms), fewer autocorrelation peaks are in a window of given duration when the edge frequency is low. Figure 2 also shows that the pitch shifts are larger in magnitude for HP edges than for LP edges. As noted in Eq. (5), this is also a feature of the sinc-ACF model.

Although the model predictions in Fig. 2 are for fixed lag windows τ_max, a careful comparison between matches and models indicates that the best-fitting lag window increases in duration with decreasing edge frequency for all the listeners. The effect is evident also in Fig. 3(a), where the matching data are averaged over the three listeners. Whereas matches to edge frequencies near 2000 Hz seem to agree with the model for τ_max = 15 ms, matches to edge frequencies below about 600 Hz agree better if τ_max = 30 or 60 ms. This observation is consistent with other evidence that auditory integration times become longer as the frequency range decreases (Moore, 1982; Bernstein and Oxenham, 2005; de Cheveigné and Pressnitzer, 2006).

The apparent dependence of window duration on edge frequency needs to be understood in context. Window durations depend on the characteristic frequencies of neural channels. The channels that are important for a given edge frequency f_e are those with characteristic frequencies in the neighborhood of f_e. According to Eqs. (3) and (4), the predictions in Fig. 3 depend only on the maximum lag τ_max through the parameter N ≈ f_e τ_max. We used a two-parameter model to optimize the dependence of τ_max on edge frequency to fit the data in Fig. 3(a) from 200 to 2500 Hz. The best-fitting parameters and root-mean-square (RMS) errors are given in Table I(a). It is evident that the best fit requires a longer window at low edge frequencies (τ_200) than at high (τ_2500), by a factor between 2 and 4.
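The two-parameter optimization can be sketched as follows. The exact parameterization behind Table I is not reproduced here; purely for illustration, the sketch assumes a lag window that varies log-linearly between endpoint values τ_200 and τ_2500 and fits those endpoints by grid search. The arrays edge_freqs and mean_ratios are placeholders for the averaged data of Fig. 3(a), not the actual measurements, and pitch_ratio is the function of Eqs. (3) and (4).

import numpy as np

def pitch_ratio(fe, tau_max, highpass=False):
    # Eqs. (3) and (4): predicted pitch / edge frequency.
    gamma = 0.57722
    N = int(fe * tau_max)
    if N < 1:
        return np.nan
    s = (np.log(N) + gamma + 1.0 / (2 * N)) / (4 * N)
    return 1.0 / (1.0 - s) if highpass else 1.0 / (1.0 + s)

def window(fe, tau_200, tau_2500):
    # Assumed log-linear interpolation of the window between 200 and 2500 Hz.
    t = np.log(fe / 200.0) / np.log(2500.0 / 200.0)
    return tau_200 ** (1.0 - t) * tau_2500 ** t

# Placeholder LP data: edge frequencies (Hz) and mean matched ratios p/fe.
edge_freqs  = np.array([200.0, 400.0, 800.0, 1600.0, 2500.0])
mean_ratios = np.array([0.93, 0.95, 0.965, 0.975, 0.98])   # illustrative only

best = (np.inf, None, None)
for t200 in np.linspace(20e-3, 80e-3, 61):       # candidate windows at 200 Hz
    for t2500 in np.linspace(5e-3, 40e-3, 71):   # candidate windows at 2500 Hz
        pred = np.array([pitch_ratio(f, window(f, t200, t2500))
                         for f in edge_freqs])
        rms = np.sqrt(np.mean((pred - mean_ratios) ** 2))
        if rms < best[0]:
            best = (rms, t200, t2500)

print(f"RMS = {best[0]:.4f} at tau_200 = {1e3 * best[1]:.0f} ms, "
      f"tau_2500 = {1e3 * best[2]:.0f} ms")

Any smooth monotonic parameterization of τ_max(f_e) would serve the same purpose; the essential point is that the best-fitting window lengthens as the edge frequency decreases.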
The window durations in Table I(a) can be compared with the window durations suggested by Moore (1982) for a pitch model based on interspike intervals (first-order histogram): a minimum duration of 0.5/f_c and a maximum of 15/f_c, where f_c is a characteristic frequency for the tonotopic region. The maximum duration, arguably applicable to our all-order distribution, is 75 ms and 6 ms for 200 Hz and 2500 Hz, respectively. However, unlike the Moore formulas, our optimum window duration does not scale with edge frequency; the factor between 2 and 4 is much less than 12.5.

As evident in Eq. (1), the ACF oscillates with an approximate spacing given by the reciprocal of the edge frequency. Therefore, a pitch model that identifies pitch perception with the spacing of the oscillations of the ACF must fail in view of the observed pitch shifts. Reasoning like that caused Klein and Hartmann (1981) to abandon temporal models for edge pitch. By contrast, the autocorrelation model of Secs. II A and II B above uses the actual lag values of the peaks and not their regular spacing. Similarly, Yost et al. (1996, 1998) and Patterson et al. (2000) accounted for the pitch of iterated rippled noise in terms of the lag value of the first peak of the ACF. However, modeling NEP requires more than just the first peak because the first peak alone predicts a pitch shift that is too large.

III. LATERAL INHIBITION PLACE MODELS

Lateral inhibition is a neural phenomenon known to occur in the visual system (Hartline et al., 1956). Lateral inhibition has been a hypothetical element in model auditory systems (Békésy, 1963), and masking data have been interpreted in terms of it (Carterette et al., 1970a,b). The lateral inhibition concept can account for NEPs in a very natural way because it enhances contrast at edges. Plausible quantitative models can be expected to predict the pitch shifts into the noise, as observed experimentally, because lateral inhibition causes the peaks to be on the large-excitation side of the edge. The purpose of this section is to examine the predictions of plausible lateral inhibition models and compare the predicted pitch shifts with observed shifts for NEP.

Lateral inhibition models are tonotopic, ultimately related back to the displacement pattern on the basilar membrane.² Because of the approximately logarithmic nature of the human tonotopic axis, it is natural to choose shifts of the excitation pattern on a logarithmic scale. For example, Shamma (1985) used 1/3 octave. Such a shift would lead to a flat-line prediction for p/f_e on a logarithmic plot like Fig. 3(b). It would not capture the tendency for shifts to increase as edge frequency decreases. The model considered here relates frequency to basilar membrane place through the Greenwood formula (Greenwood, 1961),

f = A[exp(a z) − 1],    (6)

where f is the frequency in Hertz, z is the tonotopic coordinate in mm measured from the apex, and A is a scale constant (approximately 165 Hz for the human cochlea). Parameter a is 0.14 mm⁻¹ for human cochleas. The Greenwood formula, based on psychoacoustical masking experiments, shows a low-frequency compression of the tonotopic coordinate when plotted as a function of the logarithm of the best frequency, i.e., changing the best frequency by a semitone corresponds to a smaller change in tonotopic coordinate (Δz) at low frequency than at high. Such low-frequency compression of the tonotopic scale is characteristic of auditory filter models of the periphery, e.g., Zwicker (Bark scale; 1961) and Glasberg and Moore (Cam scale; 1990).
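For concreteness, the mapping of Eq. (6) and its low-frequency compression can be illustrated with a short sketch. The sketch assumes A = 165.4 Hz, a commonly quoted Greenwood value for the human cochlea, and it previews the place-model procedure described next by applying a fixed peak displacement; the 0.3-mm value is an arbitrary illustration, not the fitted parameter of Table I.

import numpy as np

A_HZ = 165.4    # assumed Greenwood scale constant for humans, Hz
ALPHA = 0.14    # 1/mm, as in Eq. (6)

def place_from_freq(f):
    # Tonotopic coordinate z (mm from the apex) for frequency f (Hz).
    return np.log(f / A_HZ + 1.0) / ALPHA

def freq_from_place(z):
    # Inverse of Eq. (6): frequency (Hz) at tonotopic coordinate z (mm).
    return A_HZ * (np.exp(ALPHA * z) - 1.0)

def place_model_pitch(fe, shift_mm, highpass):
    # Displace the excitation peak by a fixed distance along the cochlea:
    # basal (positive) for HP edges, apical (negative) for LP edges.
    dz = shift_mm if highpass else -shift_mm
    return freq_from_place(place_from_freq(fe) + dz)

# Low-frequency compression: a semitone spans less distance at low frequency.
for f in (200.0, 2500.0):
    dz = place_from_freq(f * 2 ** (1 / 12)) - place_from_freq(f)
    print(f"one semitone at {f:6.0f} Hz spans {dz:.3f} mm")

# Pitch shifts produced by an illustrative 0.3-mm peak displacement.
for fe in (200.0, 1000.0, 2500.0):
    lp = place_model_pitch(fe, 0.3, highpass=False) / fe
    hp = place_model_pitch(fe, 0.3, highpass=True) / fe
    print(f"fe = {fe:6.0f} Hz: p/fe = {lp:.4f} (LP), {hp:.4f} (HP)")

A constant displacement in millimeters thus yields relative shifts that grow modestly toward low frequencies, the behavior described in the text.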
Our procedure for computing the peak caused by lateral inhibition first uses the Greenwood formula to convert the edge frequency to a tonotopic place z. Then it calculates the place of peak excitation z′ by applying a constant shift in millimeters, and finally it reconverts place z′ to a frequency to represent the pitch p. In order to model edge enhancement, the peak shift must be positive for HP edges and negative for LP edges. The comparison is shown in Fig. 3, where the data to be modeled are the same in Figs. 3(a) and 3(b). Figure 3(b) shows that the shift calculated from the lateral inhibition model has the right sign and approximately the right shape to agree with the data. Because of the low-frequency compression of the tonotopic scale, the magnitude of the predicted shift is larger for lower frequencies than for high, in agreement with pitch matching data. However, the compression is only modest, and the curvature of the predicted shift functions is smaller for the lateral inhibition model [Fig. 3(b)] than for the temporal model [Fig. 3(a)].

The lateral inhibition model was tested quantitatively against the data in Figs. 2 and 3 in the frequency range from 200 to 2500 Hz (actually 197 to 2438 Hz). Tables I(a) and I(b) show the results of parametric best fits to the experimental matches for the temporal and place models. Overall, the RMS fitting errors are somewhat smaller for the temporal model, and the cause of this difference is the relatively flat nature of the predictions by the place model.

The rectangular bandwidth scale of Glasberg and Moore (1990) becomes an alternative tonotopic scale if the bands are stacked in order of center frequency. A fit based on this scale [Table I(c), Eq. (7)] was also computed, but the fit is still slightly inferior to the temporal model.

Finally, the frequency-dependent predictions shown in Figs. 3(a) and 3(b) were replaced by the best straight lines, separately for LP and HP conditions. The straight lines correspond to a temporal model in which the duration of the lag window (τ_max) depends on frequency range in such a way that the number of autocorrelation peaks (N) within the window is always the same, independent of edge frequency; therefore, τ_max ∝ 1/f_e. The straight lines also correspond to a place model in which the displacement of the excitation peak caused by lateral inhibition is a constant fraction of the edge frequency, as in the appendix of Shamma (1985). Table I(d) shows that RMS errors are largest for these straight-line fits, indicating that the frequency dependences introduced into the temporal and place models make useful (though small) contributions.

All of the models compared in Table I have two adjustable parameters, which makes the comparisons fair. However, the place models from Tables I(b) and I(c) both require parameters to relate place to frequency, determined originally from other experiments. The temporal model is more economical in that respect. The sign of the pitch shift results from the logic of lateral inhibition and the fitting procedure for the place model, but it emerges automatically from the temporal model.

Small and Daniloff (1967) reported that listeners were unable to do octave matching for HP noise with edge frequencies less than 610 Hz, but octave matches could be made for LP noise with an edge as low as 145 Hz. Similarly, Fastl (1971) found that highpassed noise with an edge frequency below 500 Hz did not produce a pitch.
Fastl and Stoll (1979) asked listeners to rate the pitch strength of 12 different stimulus types, including LP and HP noises with edges (±192 dB/octave) at 125, 250, and 500 Hz. Their listeners found LP edge pitches to be stronger and HP edge pitches sometimes to be inaudible. Also, the data from Klein and Hartmann tended to show more variability for HP noise than for LP noise as the edge frequency decreased. By contrast, with window durations as long as 60 ms, the sinc-ACF model does not immediately suggest any particular difficulty for the HP condition. Low-edge-frequency experiments were done using our sharp edges to test the low-edge-frequency limit and compare with the previous experiments. The limit for HP noise was of particular interest.

IV. LOW-EDGE-FREQUENCY EXPERIMENTS

A. HP noise

Procedure

The HP noise bands were computed in the frequency domain with a sharp edge at f_e at the low-frequency end, where the spectral discontinuity, as presented to the listeners, was more than 40 dB. Above f_e the noise was equal-amplitude, random-phase, extending to 20 000 Hz, except that the amplitude decreased linearly by 20 dB between 16 000 and 20 000 Hz to avoid a sharp edge at very high frequency. Because the noise bands were very wide for all edge frequencies, the noise power was essentially constant. Noise bands were generated with 16-bit precision at a sample rate of 100 000 samples per second. They were presented diotically to listeners in a double-walled sound room through Sennheiser HD600 headphones (Wedemark, Germany). The level was 65 dBA. Noise stimuli were 520 ms in duration with 20-ms raised-cosine onsets and offsets. A noise interval was followed by a 400-ms silent interval³ and then by a 500-ms sine tone with a frequency that could be adjusted by the listener to match the pitch of the noise edge. The entire audible range of frequencies was available for the matching tone through a combination of push-button range switching and a ten-turn potentiometer on the response box. The control voltage from the response box was read by a 12-bit analog-to-digital converter, and the matching tone was then generated digitally so that the matching frequency was known precisely. The relationship between the potentiometer setting and the tone frequency was randomly offset at the start of each trial. The level of the matching tone was also adjustable by the listener, and the matching tone could be muted with the press of a button. The cycle of target noise and matching tone repeated indefinitely until the listener was satisfied with the match. After making a match, the listener received feedback, including the matching frequency, the edge frequency, and the percentage difference between the two. Listeners were told to expect matches to be close to the edge, but not necessarily identical to the edge. (See footnote 8 in Sec. VIII.)

Our experiment looked for pitches far below the limits found by Fastl (1971) and Small and Daniloff (1967). It used eight edge frequencies: 50, 70, 100, 150, 200, 280, 400, and 560 Hz. The different edge frequencies were presented in random order in an experiment run. As a control experiment, listeners adjusted the frequencies of sine tones to match the pitches of eight low-frequency sine tones, ranging from 40 to 500 Hz. These were presented at a nominal level of 70 dB SPL, chosen in view of the elevated hearing threshold at 40 Hz.

Listeners

There were six male listeners in the matching experiments overall. Listeners A, B, I, S, and Z were between the ages of 20 and 25 yr.
They were accepted as listeners based on their ability in a high-frequency sine-sine pitch matching test. Listeners A, I, and S were able to match sine tones with a standard deviation of less than 10 cents (0.6%) at least up to 13 kHz. Listener B's standard deviation was less than 10 cents between 350 and 9000 Hz. Listener Z was tested only up to 8 kHz, where his standard deviation was 10 cents. Listener W′ was the same as listener W (data in Fig. 2) but tested 37 yr later; he participated only in low-frequency experiments. All listeners were amateur musicians except for listener A, who was a professional. The young listeners were all students at Michigan State University; they signed a consent form approved by the University Institutional Review Board (IRB).

Results

Four of the listeners participated in the low-edge-frequency HP experiments. The matching data are shown in Fig. 4. Small numbers near the horizontal axis show the total number of trials. Matching ratios are shown by circles. Matching ratios one octave (occasionally, but rarely, two octaves) higher than plotted are shown by upward-pointing, filled triangles. For instance, an upward triangle at f_e = 70 Hz plotted near −300 cents represents a matching tone that was about 900 cents above 70 Hz (900 − 1200 = −300). Downward-pointing triangles indicate matches that were an octave lower than plotted. Solid lines show the predictions of the sinc-autocorrelation peak model for different window durations. The model is the same as for Figs. 2 and 3, but the plots look different because the model was evaluated at a fine mesh of points for Fig. 4. Comparison with the predictions confirms the observation made in Sec. II that the optimum window duration is relatively long for edge frequencies below 600 Hz. Most of the matching frequencies in Fig. 4 agree best with the model with a 60-ms window.

All listeners made consistent matches for edge frequencies above 200 Hz, with negligible octave errors and most of the matches within a semitone of the target. Listeners S and W′ matched consistently down to 150 Hz. Consistent matches were almost all above the edge frequency, as expected from the sinc-autocorrelation peak model. Listeners found the pitches evoked by those edges to be salient. By contrast, for edge frequencies below 150 Hz, there was no evidence for salient pitches. There were frequent octave errors, and many matches were well away from the target. Matches showed no consistent pattern except possibly for listener S. However, the consistency seen for listener S does not seem to indicate actual edge pitch perception below 150 Hz.⁴ We conclude that the low-frequency limit for highpassed edge pitch is between 100 and 150 Hz, though values that low were not achieved by all listeners.

B. LP noise

Low-edge-frequency experiments were done using LP noise. The same edge frequencies and protocol from the HP experiments of Sec. IV A above were used. The levels were the same, too, except that the listener had the option of requesting a level increase for the lowest edge frequencies, particularly 50 Hz. Again, listeners A, B, S, and W′ participated, and results are shown in Fig. 5. The data show that matches could be made reliably down to 50 Hz, the lowest edge frequency tested. Although the matching variance grew with decreasing edge frequency, the growth may relate more to low-frequency loudness and pitch acuity in general than to the strength of edge pitch.
As expected for lowpassed noise, most (89%) of the pitch shifts were negative, though the percentage fell to 81% for listener A.

C. Discussion

A comparison of the HP and LP experiments shows that the LP edge pitch could be heard for edge frequencies at least an octave lower than the limit for the HP edge pitch. The difference can be understood within a temporal model by considering the amount of tonotopic axis available to represent the timing for these two noise types. The difference is inconsistent with a place model that incorporates auditory filters with the usual tuning asymmetry.

FIG. 4. (Color online) Probing the low-frequency limit with HP noise. The ratios of matching frequency to edge frequency are expressed in cents and shown by circles. Matches that were an octave (occasionally two octaves) higher (lower) than plotted here are shown by an upward (downward) triangle. Small numbers indicate the number of matches (trials) for each frequency. The shaded region near zero cents is centered on the average match (six trials) for sine tones in the control experiment, and the width of the region is two standard deviations. Solid lines give the prediction of the sinc-autocorrelation model for temporal windows of 15, 30, and 60 ms. At these low frequencies there are few sinc-function peaks in the temporal window, and the predicted plots show a succession of plateaus as new peaks enter the window. The plateaus appear because the plots use a fine mesh of points.

Temporal model

For LP noise with a low edge frequency, the entire tonotopic axis with best frequencies (BF) above the edge experiences no on-frequency excitation. Instead, neurons with high BF mainly experience the excitation that is near the edge, especially because their low-frequency slopes are relatively shallow. Calculations with a gammatone filter model show an ACF with peaks at lag values determined by the edge frequency, independent of BF when BF is greater than the edge frequency. A strong ACF over a major part of the tonotopic axis can be expected to lead to a strong pitch, in agreement with experiment. At the same time, a LP noise with a low edge frequency has relatively few components in its spectrum, and that leads to a rough-sounding temporal envelope, making the matching experience unpleasant and somewhat difficult.

For HP noise with a low edge frequency, the only neurons free of on-frequency excitation are in the region of the tonotopic axis with even lower BF. Because of the sharp cutoff of their high-frequency tails, these neurons experience only a little excitation having a temporal structure determined by the edge frequency. Therefore, HP edge pitch can be expected to disappear for low edge frequency, as observed experimentally.

The above explanation for the differences between LP and HP noise at low edge frequency retains the temporal character of our autocorrelation model but augments it with place considerations to obtain a qualitative understanding of the strength of the temporal information that is available. Because excitation pattern models are well developed, it would be possible to make quantitative predictions for the relative strengths of edge pitches with different edge frequencies. Such calculations are beyond the scope of the present paper.
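The gammatone observation above can be reproduced with a simple frequency-domain sketch. The sketch is a simplification added here for illustration: instead of a full time-domain gammatone filter bank, it weights the flat LP noise spectrum by a fourth-order gammatone-like power response, with rough placeholder bandwidths rather than calibrated ERBs, and obtains each channel's ACF from the Wiener-Khinchin theorem.

import numpy as np

F_MAX = 20000.0   # top of the modeled spectrum, Hz
DF = 0.5          # spectral resolution, Hz

def channel_acf(fe, bf, bw):
    # Channel power spectrum: flat LP noise (edge at fe) weighted by a
    # fourth-order gammatone-like power response centered on bf.
    f = np.arange(0.0, F_MAX + DF, DF)
    power = (f <= fe).astype(float)
    gain2 = (1.0 + ((f - bf) / bw) ** 2) ** -4.0
    # Wiener-Khinchin: the ACF is the inverse transform of the power spectrum.
    acf = np.fft.irfft(power * gain2)
    acf /= acf[0]                       # normalize to 1 at zero lag
    dt = 1.0 / (2.0 * f[-1])            # lag step implied by irfft
    return np.arange(acf.size) * dt, acf

fe = 200.0                              # LP edge frequency, Hz
for bf in (500.0, 1000.0, 2000.0):      # BFs well above the edge
    tau, acf = channel_acf(fe, bf, bw=0.25 * bf)   # placeholder bandwidths
    i = np.where((acf[1:-1] > acf[:-2]) & (acf[1:-1] > acf[2:]))[0][0] + 1
    print(f"BF = {bf:6.0f} Hz: first channel-ACF peak at fe*tau = "
          f"{fe * tau[i]:.3f}")

If the peak lags were set by the channel BF, they would scale as 1/BF (f_e τ of 0.4, 0.2, and 0.1 here); instead they stay near the value set by the 200-Hz edge, in line with the gammatone calculations cited above.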
Place model

For LP noise with low edge frequency, the edge of the excitation pattern is broad because neurons with BF near, and slightly above, the edge are excited by noise components that are below the neuron BF. For HP noise with low edge frequency, the edge of the excitation pattern is sharp because neurons with BF just below the edge are inefficiently excited by noise above their BF. Therefore, edge pitch is predicted to be stronger for HP noise, contrary to experiment.

V. INTERVAL IDENTIFICATION

Musical pitch is recognized if a listener can identify melodies without rhythm or can adjust or identify musical intervals (Houtsma and Goldstein, 1972; Plack and Oxenham, 2005; Moore and Ernst, 2012; Oxenham et al., 2011; Gockel and Carlyon, 2016). To determine whether NEP qualifies as a musical pitch, interval identification experiments were inserted into the schedule of pitch matching runs for high and low edge frequency. Listeners made open-set identifications of melodic intervals to verify that the pitch elicited by noise bands with a sharp edge qualifies as a musical pitch. Because the musical nature of the edge pitch itself was in question, there was no special concern with frequency range, and edge frequencies ranged from 600 to 2400 Hz.

A. Intervals: LP noise

In the LP experiment, the intervals and noise edge frequencies (Hz) were these: octave (600, 1200), fifth (1600, 2400), fourth (1200, 1600), and major third (1600, 2000). These four intervals were always melodic and ascending. For each experimental trial, an interval was randomly chosen from the four and presented to the listener four times, a standard cycle. After the cycle the listener could either identify the interval or request a repetition of the cycle. Listeners were familiar with musical intervals, but they did not know which intervals were in the test. Stimuli were generated according to the procedure described in Sec. IV A. Stimuli were again 520 ms in duration with 20-ms raised-cosine onsets and offsets. A pause of 400 ms separated the two noises of an interval. Results of the experiment were as follows:

• Listener A immediately identified all four intervals without waiting for the four intervals of a cycle to complete.
• Listener I correctly identified the major third and the fourth after one cycle. He required two cycles to correctly identify the fifth and misidentified the octave as a perfect fifth.
• Listener S correctly identified three intervals but called the major third a minor third. When then presented with a minor third (2000, 2400 Hz), the listener responded, "major third." Upon further testing with major thirds (3200, 4000) and (3600, 4500) and a minor third (3000, 3600), the listener made one error, calling (3200, 4000) a minor third.
• Listener W′ correctly identified all four intervals but required two cycles to identify the major third and the fourth. Listener W′ designed the experiment and knew which four intervals were in the set, but not the order of presentation.
• Listener Z correctly identified all four intervals after hearing one standard cycle for each.

B. Intervals: HP noise

In the HP experiment, the intervals and edge frequencies were the same as those in the LP experiment. Another interval, a minor third (2000, 2400), was added to the standard set.

• Listener A correctly identified four intervals, usually before the completion of a cycle, but he misidentified the octave, insisting that it was a minor 7th! Such a misidentification might have been predicted: the octave interval was made with relatively low edge frequencies, 600 and 1200 Hz, where the pitch shift gradient is large and negative (Fig. 2).
The pitch of the 600-Hz edge is expected to be increased more than the pitch of the 1200-Hz edge, leading to a compression of the perceived interval.⁵
• Listener S correctly identified all five intervals, always at the end of a single standard cycle. He made no mistakes in seven random trials.

C. Discussion: Interval identification

The results of the LP and HP interval identification experiments, as summarized in Table II, indicate that noises with a sharp spectral edge elicit a musical pitch. Although some listeners made some mistakes and others required more than a single cycle, the difficulties appear to represent only isolated cases, with possible additional confusion from the pitch shifts. The edge-pitch noise stimuli are clearly capable of generating a musical pitch. Isochronous melodies made with edge pitches have been recognized by audiences at conferences (e.g., Hartmann et al., 2015). This positive result is hardly surprising: Akeroyd et al. (2001) found that binaural analogs of monaural edge pitches lead to musical pitch sensations, and binaural edge pitches are more challenging to listeners than the monaural NEPs investigated here.
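The compression of listener A's octave can be checked against the sinc-ACF model. The sketch below (illustrative; it restates the pitch-ratio function of Eqs. (3) and (4)) computes the interval between the pitches predicted for the two HP edges and expresses it in cents.

import numpy as np

def pitch_ratio(fe, tau_max, highpass):
    # Eqs. (3) and (4): predicted pitch / edge frequency (sinc-ACF model).
    gamma = 0.57722
    N = int(fe * tau_max)
    if N < 1:
        return np.nan
    s = (np.log(N) + gamma + 1.0 / (2 * N)) / (4 * N)
    return 1.0 / (1.0 - s) if highpass else 1.0 / (1.0 + s)

def predicted_interval_cents(fe_low, fe_high, tau_max, highpass):
    # Interval between the edge pitches of two noise bands, in cents.
    p1 = fe_low * pitch_ratio(fe_low, tau_max, highpass)
    p2 = fe_high * pitch_ratio(fe_high, tau_max, highpass)
    return 1200.0 * np.log2(p2 / p1)

# The octave (600, 1200 Hz) heard by listener A as a smaller interval:
for tau_max in (15e-3, 30e-3):
    cents = predicted_interval_cents(600.0, 1200.0, tau_max, highpass=True)
    print(f"tau_max = {1e3 * tau_max:.0f} ms: predicted interval "
          f"= {cents:.0f} cents (octave = 1200, minor 7th = 1000)")

For a 15- or 30-ms window the predicted interval is about 1145-1165 cents, compressed relative to a true octave though not all the way to the minor 7th reported by the listener; footnote 5 makes the same point in terms of frequency ratios.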
VI. HIGH-EDGE-FREQUENCY EXPERIMENTS

If the pitch of a noise band with a sharp spectral edge is the result of a temporal process, as conjectured in Sec. II, then the pitch sensation requires the synchrony of neural firing. Neural synchrony, at all levels of the auditory system, is known to decrease dramatically with increasing frequency, though the frequency at which synchrony is no longer operative is a subject of ongoing debate. Whereas frequency difference limen data from Moore and Ernst (2012) suggested a limit from 8 to 10 kHz, scaling arguments (Joris and Verschooten, 2013) and cochlear measurements (Verschooten et al., 2018) suggest a limit no higher than a few kilohertz. At the level of the auditory nerve, it is common to set an upper limit near 5000 Hz based on Johnson's data on cat (Johnson, 1980). To determine whether edge pitch persists at high frequencies, we performed pitch matching tests for LP and HP noise with high edge frequencies. As a control experiment, we performed similar tests with sine tone targets.

A. LP matches

Pitch matches to LP noises are shown in two ways in Figs. 6 and 7: (1) The circles show the ratios of matching frequencies to edge frequencies, on a scale of cents, when the ratios were in the range −400 to 400 cents. Errors identified as octave discrepancies are shown by filled triangles. An upward-pointing triangle indicates a match one octave above the plotted symbol. A downward-pointing triangle indicates a match one octave (occasionally two octaves) below the plotted symbol. Some matches did not fall on the plot, even allowing for octave discrepancies; these are shown by circles on dashed lines above (match too high) and below (match too low) the plot. (2) The average value and standard deviation of the difference between the matching frequency and the edge frequency is shown as a percentage of the edge frequency. Matches with octave discrepancy assignment were not included in the averaging; matches in the one-octave range from −600 to +600 cents discrepancy were included. The numbers of matches included in the average are shown by small numbers in the upper half plane. The averages themselves, followed by the standard deviations, are in the lower half plane. The shaded region indicates the matches in a control experiment where the listener matched a sine tone target with a sine tone probe. The center of the shaded region indicates the mean, and the width of the region is two standard deviations overall.

Figure 6 shows that for listeners A and Z, matches became highly unstable as the edge frequency approached 5000 Hz. The same was true for listener I, but his data are not shown because his matches at 2000 Hz were not consistent enough to make a strong contrast with matches at higher edge frequencies such as 5000 Hz. For listener A, the standard deviation was less than a semitone for the eight edges below 4.5 kHz and greater than a semitone for the six highest edge frequencies, 4.5 kHz and above. For listener Z, the standard deviation was greater than a semitone for the two edges above 4.5 kHz.

Listener S (Fig. 7) was an exception. Experiments with listener S began in the same range as for the other listeners, 2-8 kHz. With time, it became evident that this listener could make successful matches for edge frequencies well above 5 kHz.⁶ Therefore, listener S was retained for another four months, and experiments were restarted using the range from 2 to 16 kHz. Figure 7 shows that the matching standard deviation was less than a semitone for the 14 edges below 9 kHz and was greater than or equal to a semitone for the 6 edges at 9 kHz and above.

FIG. 6. (Color online) The ratios of matching frequency to edge frequency for listeners A and Z with LP noise are shown by circles; some are slightly displaced horizontally for clarity. Matches that were an octave higher than plotted here are shown by an upward triangle; matches that were an octave lower are shown by a downward triangle. Small numbers in the lower rows show the mean and standard deviation of the percentage shift. Small numbers in the upper row indicate the number of matches (trials) included in the average for each frequency. Matches with discrepancies greater than ±400 cents are plotted above/below the main graph. The dashed line is the prediction of the sinc-autocorrelation model for a lag window of 15 ms; a longer window, such as 30 ms, would fit better for listener A. The shaded region near zero cents is centered on the matches for sine tones, and the width of the region is two standard deviations.

B. HP matches

The pitch matches by listeners A and B for HP noise are shown in Fig. 8. Similar to the matches for LP noise by listeners A and Z (Fig. 6), the scatter among the matches increases near 5000 Hz. For listener A, the standard deviation was less than one semitone for the six edges below 5 kHz and greater than one semitone for the six edges at 5 kHz and above. For listener B, the standard deviation was less than one semitone for the two edges below 3 kHz and greater than one semitone for the ten edges at 3 kHz and above.

The exceptional listener, listener S, was extensively tested with HP noise with high-frequency edges. His results are shown in Fig. 9. Listener S is indeed extraordinary. First, in the control experiment with sine tone matching, his standard deviation for 16-kHz tones was less than 15 cents; only at 17 kHz did his standard deviation (86 cents) approach the 105-cent difference between 16 and 17 kHz. For edge pitches, the standard deviation was less than one semitone for the 13 edges below 10 kHz and greater than 1 semitone for 5 of the 6 edges at or above 10 kHz. It appears that a 12-kHz edge was not distinguishable from a 10-kHz edge. It is very unlikely that the extraordinary performance of listener S was the result of an experimental artifact.
The noise bands with spectral edges and the matching sine tones were generated by different electronic systems, and it is hard to see how a stimulus generation artifact common to the noise and tone could serve as a cue. Further, the matching ability for very high frequencies was resistant to the introduction of LP masking noise (100-2000 Hz, 50 dBA). Experiment runs including very high-frequency edges always also included medium-frequency edges (2-4 kHz); therefore, the listener was required to make large leaps in pitch range throughout.

It seemed possible that the matches by listener S were facilitated by the procedure whereby experimental runs either presented all HP noises or presented all LP noises. To test this idea, listener S did 26 runs in which each run had 4 HP and 4 LP noises, randomly ordered. Edge frequencies ranged from 400 to 12 000 Hz. The results showed that high-frequency edge pitch matching accuracy was not adversely affected by this mixing procedure. Standard deviations for edges at 6, 7, and 9 kHz were less than 1%. The standard deviation at 10 kHz was only 1.2%, but the standard deviation at 12 kHz was again large.

Clearly, the data from listener S do not agree with a model which posits a temporal origin for edge pitch with a consequent limitation to 5 kHz. There are several possible explanations: perhaps listener S has an auditory system that preserves neural timing up to frequencies an octave higher than in other humans (and in cats; Johnson, 1980). Alternatively, listener S found some other process, presumably based on place mechanisms, for matching pitches. Unlike other listeners, listener S may be responsive to a tonotopic excitation pattern sharpened by an abrupt disappearance of inhibition at a high-frequency edge.

C. LP and HP NEP

Two of the listeners in the high-frequency experiments (listeners A and S) participated in both the LP and the HP versions testing the high-frequency limit for NEP. Identical edge frequencies were used for both versions, enabling a paired perceptual comparison between LP and HP NEP. The relevant data are the standard deviations appearing in Figs. 6(a), 7, 8(a), and 9 for these listeners. For listener A, the standard deviation was smaller for HP NEP than for LP NEP for 8 out of 11 edge frequencies, and the 3 exceptions were all for f_e > 5000 Hz, where matching was difficult for listener A. Similarly, for listener S, the standard deviation was smaller for HP NEP than for LP NEP for 15 out of 18 edge frequencies, and the 3 exceptions were all for f_e > 11 000 Hz. Listener S remarked informally that he found HP NEP easier to hear, even though HP NEP was presented with low-frequency masking noise.

Temporal model

A straightforward explanation for the advantage of HP noise over LP noise for high f_e is the flip side of the explanation of the advantage of LP noise for low f_e. A HP noise with an edge frequency of 2 kHz or above leaves much of the pitch-critical apical region of the cochlea unexcited by low-frequency components. Further, calculations with a gammatone filter bank show that this region is excited by the remote high-frequency components. The size of the ACF oscillations decreases as the best frequency becomes ever smaller than the edge frequency but, because it is normalized, the ACF does not disappear, and it retains the periodicity expected from the sinc-autocorrelation model for BF at least several octaves below the edge frequency.
For a LP noise with high edge frequency, the corresponding (basal) region for spectrally remote excitation is smaller. Therefore, a temporal model predicts that HP noise leads to a stronger pitch sensation, in agreement with experiment.

Place model

For LP noise with high edge frequency, the edge of the excitation pattern is broad because neurons with BF near the edge are excited by noise below their BF. For HP noise with high edge frequency, the edge of the excitation pattern is sharp because neurons with BF near the edge are excited by noise above their BF. Therefore, the usual asymmetry of auditory tuning predicts that edge pitch should be stronger for HP noise, again in agreement with experiment. For very high edge frequency, the place model has an advantage over the temporal model because it does not require neural synchrony.

VII. DISCUSSION

The edge pitch stimuli are of special interest for temporal models of pitch because of the pattern of peaks in their ACFs. The pattern can be viewed with reference to a periodic stimulus, for which the ACF exhibits a set of regularly occurring major and minor peaks. The major peaks are found at time delays (lags) corresponding to integer multiples of the fundamental period (n = 0, 1, 2, …). Patterns of minor peaks in the ACF depend on the amplitudes of harmonics in the stimulus. By contrast, the peaks of the ACF for aperiodic noise with a sharp spectral edge are displaced from integer multiples by 1/4 unit. Therefore, the edge pitch stimuli represent a temporal analog to the "pitch shift" stimuli (e.g., Schouten et al., 1962) in that the predicted pitch appears as a best-fitting parameter in a model that is ideal for a periodic stimulus.

Early temporal models for pitch (Meddis and Hewitt, 1991a,b; Cariani and Delgutte, 1996a,b) estimated pitch by finding the first major peak in the ACF, or summary autocorrelation function (SACF), but more robust estimations are obtained from more recent models (Cariani, 2004; Bidelman and Heinz, 2011) that incorporate multiple peaks. Using patterns of multiple peaks can reduce octave errors and make successful pitch predictions for a wider range of stimuli. The edge pitch matching data also require multiple autocorrelation peaks (Fig. 1). Edge pitch experiments can reveal the number of peaks that contribute to pitch perception as a function of frequency range.

The more recent models also incorporate more realistic physiology. In order to handle pitch multiplicity (hearing out multiple pitches from double vowels, musical dyads, and triads), the autocorrelation process needs to be preceded by cochlear filtering and rectifying neural transduction. Bandpass filtering and half-wave rectification of a temporally detailed waveform avoid cancellations between peaks and valleys within the SACF. The noise band with a sharp spectral edge, as treated in the present paper, is simpler, and useful predictions can be made with an autocorrelation model based only on the average stimulus power spectrum. The only oscillations in the ACF are from the edge itself, and they provide similar information in every off-frequency tonotopic region. For spectral-edge stimuli, the ACF is simple enough that an elementary model, accepting all the peaks without regard for their heights, can be used.

Inspired by Békésy's 1963 report of pitches at both edges of a noise band, Small and Daniloff (1967) studied the pitches of LP and HP noise.
They initially considered a sine-tone pitch matching experiment, similar to the experiment reported here, but gave it up as "time consuming and difficult for subjects." Instead, they asked subjects to adjust a filter to produce an edge pitch that was an octave above or below a standard noise having an edge. Despite the difference in methods and the very shallow slopes used by Small and Daniloff (1967; ±35 dB/octave), there are a number of parallels between their results and ours. Their subjects were unable to hear reliable edge pitches for HP noise with edge frequencies (f_e) below 610 Hz, but subjects had no trouble with LP noise having much lower edge frequencies. That result resembles our experience with low f_e, though our HP limit was about two octaves lower than 610 Hz, probably because of our sharper edge. Small and Daniloff (1967) had ten listeners, and five of them were able to attempt octave matches above a 9620-Hz HP edge. Three of them attempted octave matches above a 9660-Hz LP edge. Evidently they, too, had some listeners, like our listener S, who were capable of hearing edge pitch well above the 5-kHz limit expected for neural timing.⁷ Apparently they, too, found HP noise easier than LP noise for high edge frequencies.

For both LP and HP noise, Small and Daniloff (1967) found that attempts to match an octave above a standard were too high (matching edge frequency more than a factor of 2 greater than the standard edge frequency) and attempts to match an octave below a standard were too low. Given the increasing magnitudes of the relative pitch shifts observed in our experiments as the frequency decreases, and the signs of those pitch shifts, we would have predicted their results for HP noise but the opposite of their results for LP noise. Possibly the comparison is frustrated by the well-known octave enlargement (Ward, 1954).

Pitches evoked by sharp spectral edges have been studied through pitch matching experiments on periodic complex tones with many strong harmonics (Martens, 1981; Kohlrausch and Houtsma, 1992). Such complex tones include multiple pitch cues: the low pitches of the complex and the pitches of resolved or partially resolved components. The stimulus is more complicated than noise bands. Kohlrausch and Houtsma (1992) found that pitch matching variance monotonically decreased as the fundamental frequency of the complex decreased, increasing the spectral density. By extrapolation, one might expect the smallest variance for noise bands, the ultimate in spectral density. These authors also found that the pitch of a lowpassed complex tone with a 2000-Hz edge was usually matched by a sine tone above 2000 Hz; this is also the prediction of a periodicity analysis of the waveform (Hartmann, 1998). The stark contradiction between that observation and the unambiguous evidence that edge pitches occur below the edge frequency for LP noise, as well as the complexity of the complex tone stimulus, discourages attempts to unify these two edge pitch effects.

VIII. SUMMARY

Open-set melodic interval identification experiments show that NEPs qualify as musical pitches for both LP and HP noise, although there are pitch shifts away from the edge frequency (f_e). The pitch shifts found by Klein and Hartmann (1981) were similarly found in the experiments reported here using higher-quality digital stimulus generation. Specifically, the pitch of a LP noise is below f_e, and the pitch of a HP noise is above f_e.
These pitch shifts were helpful data in evaluating two models of edge pitch perception: a temporal model and a place model.

A. Temporal model

The temporal model of NEP hypothesized that pitch is determined by a characteristic autocorrelation lag, which is a mean of the weighted lags of the peaks of the broadband noise ACF. Parsimony was served by approximating the ACF by a sinc function, corresponding to a rectangular power spectrum. This model makes several predictions in agreement with experiments: (1) The predicted sign of the pitch shift agrees both for LP and for HP noise. (2) The pitch shift magnitude is larger, as a percentage of f_e, for lower f_e. (3) The pitch shift magnitude is larger for HP than for LP. (4) The pitch prediction as a function of f_e has about the right curvature. (5) The window duration, which is the adjustable parameter in the temporal model, is within a reasonable range. (6) The optimum window duration increases with decreasing f_e, as expected.⁸

B. Place model

An alternative model for NEP is based on a place theory of pitch perception in which the edge of the excitation pattern is sharpened by lateral inhibition. In this model, tonotopically ordered neurons at some level of the auditory system are inhibited by excitation of neighboring neurons with higher and lower characteristic frequencies. At an edge, the primary excitation of neighbors beyond the edge disappears, and so does their ability to inhibit excitation. The result is an enhancement of excitation of neurons near the edge that do receive primary excitation. The enhancement therefore occurs at places having characteristic frequencies below the edge frequency for lowpassed noise and above the edge frequency for HP noise, in agreement with experimental pitch shifts.

Predicting the pitch shift requires an estimate of the displacement of the peak of the enhancement away from the edge. If the displacement is a constant fraction of the edge frequency (Shamma, 1985), the predicted pitch shift would be a flat line on a pitch shift plot such as Fig. 3(b). If the displacement is related to neural coordinates as initially established in the cochlea (e.g., Greenwood, 1961), the relative displacement increases for decreasing frequency, a behavior that is consistent with the experimental observations of edge pitch shifts as shown in Fig. 3(b).

C. Critical experiments

Experiments using low and high edge frequencies were done to try to distinguish between the temporal and place models.

Low edge frequency

The experiments of Sec. IV showed that as the edge frequency decreased below 150 Hz, the pitch persisted for a LP noise but disappeared for HP noise. As argued in Sec. IV C, this result is consistent with the timing model but not with a place model that assumes asymmetrical auditory filters.

High edge frequency

The experiments of Sec. VI showed that for high edge frequencies, the pitch was stronger for HP noise than for LP noise. As argued in Sec. VI C, this result is consistent with both timing and place models. As the edge frequency approached 5000 Hz, the pitch disappeared for most listeners, but for at least one listener a pitch persisted up to an edge frequency of 10 000 Hz. The latter result argues against a timing model. The temporal model for NEP requires that neural synchrony be maintained in the frequency region of the edge. If usable neural synchrony disappears as the stimulus frequency passes 5 kHz, then the edge pitch ought to disappear as well.
Shamma's 1985 lateral inhibition model also requires synchrony. Experiments with both LP and HP noise found that edge-pitch matching deteriorated considerably above 5 kHz for listeners A, B, I, and Z. However, listener S made matches with a standard deviation of less than a semitone for an 8-kHz LP edge and also for an 11-kHz HP edge. Further, although the matches by the other listeners may have been inaccurate for edge frequencies near 5 kHz, their data do not support the conclusion that the edge pitch disappeared entirely. Either temporal synchrony is, at least weakly, maintained at high frequencies in human listeners, or listeners are able to exploit place of encoding to hear an edge pitch. Place of excitation, as enhanced by lateral inhibition at an edge, might provide some high-frequency information for all listeners and lots of information for listener S.

APPENDIX A

The pitch of the NEP is assumed to correspond to a characteristic lag τ̂ derived from the lags τ_n of the first N peaks of the ACF. A first approach chooses τ̂ so that its integer multiples best fit the peak lags, minimizing the error

E_1 = Σ_{n=1}^{N} (τ_n − n τ̂)².    (A1)

The best estimate of τ̂ is then

τ̂ = Σ_{n=1}^{N} n τ_n / Σ_{n=1}^{N} n².    (A2)

This equation makes intuitive sense if we imagine that τ_n ≈ n/p, in which case we obtain the consistent result

τ̂ ≈ 1/p.    (A3)

However, the calculation of τ̂ from Eq. (A2) is counterintuitive in that this equation gives increasing weight to peaks of longer lag (i.e., larger n). Therefore, we abandon the minimization of E_1 in Eq. (A1). Instead, we imagine that the lag of each peak beyond the first represents the period of a subharmonic. In the context where the peaks are approximately equally spaced, the nth peak points to a frequency that is the reciprocal of τ_n/n. Thinking this way, we choose the best τ̂ to minimize the error E_2,

E_2 = Σ_{n=1}^{N} (τ_n/n − τ̂)².    (A4)

The best estimate of τ̂ then becomes

τ̂ = (1/N) Σ_{n=1}^{N} τ_n/n.    (A5)

LP noise

For the LP sinc-ACF, the lag values of the peaks are given by the approximation in Eq. (2), τ_n = (n + 1/4)/f_e. This approximation is tested in Appendix B. Therefore,

τ̂ = (1/f_e)[1 + (1/(4N)) Σ_{n=1}^{N} 1/n].    (A6)

The sum Σ 1/n is called a "harmonic series" by mathematicians (because of the wavelengths of successive harmonics in a periodic complex tone), and it is approximately equal to ln(N) + γ + 1/(2N), where γ is the Euler-Mascheroni constant, γ ≈ 0.57722. The reciprocal of τ̂ is the predicted value of pitch p. Therefore, the ratio of the pitch to the edge frequency for LP noise is

p/f_e = 1/{1 + [ln(N) + γ + 1/(2N)]/(4N)},    (A7)

which is Eq. (3) in the text.

HP noise

The ACF for a rectangular HP noise band, a_HP, can be computed by starting with the ACF for an infinite band, having power at all frequencies from zero out to infinity (a_∞), and then subtracting the ACF for a LP rectangular band,

a_HP(τ) = a_∞(τ) − a_LP(τ) = a_∞(τ) − sin(2πf_e τ)/(2πf_e τ).    (A8)

Function a_∞ is a delta function, and its area can be chosen to fulfill the usual normalization requirement for ACFs, a_HP(0) = 1. Because of the minus sign in Eq. (A8), a_HP(τ) has peaks where a_LP(τ) has valleys; the two functions differ by half a period. For the sinc-ACF, the peaks of a_HP(τ) occur at τ_n = (n − 1/4)/f_e, which is just the same as Eq. (2) except for the sign. It follows that the pitch prediction for HP NEP is the same as Eq. (3) [or Eq. (A7)] for LP NEP except for a sign, namely

p/f_e = 1/{1 − [ln(N) + γ + 1/(2N)]/(4N)},    (A9)

which is Eq. (4) in the text.

Because it is calculated from the components that are missing from the spectrum instead of the components that are present in the spectrum, the ACF for HP noise, a_HP, is less well defined than the ACF for LP noise. The rest of this Appendix is a test to examine a_HP, especially the peak locations, with a numerical example.
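Such a numerical test can be sketched in a few lines. The sketch below (illustrative code added here, assuming numpy; it is not the original analysis code, and the 16-20-kHz attenuation is taken as linear in dB) synthesizes an equal-amplitude, random-phase HP band, computes its normalized autocorrelation, and compares the envelope peaks with τ_n = (n − 1/4)/f_e.

import numpy as np

rng = np.random.default_rng(0)
fs, dur, fe = 100_000, 0.5, 500.0     # sample rate (Hz), duration (s), edge (Hz)
n = int(fs * dur)
f = np.fft.rfftfreq(n, 1.0 / fs)

# Equal-amplitude, random-phase components from fe to 20 kHz, attenuated
# by 20 dB (taken as linear in dB) between 16 and 20 kHz.
amp = ((f >= fe) & (f <= 20_000)).astype(float)
taper = np.clip((20_000 - f) / 4_000, 0.0, 1.0)
amp *= np.where(f > 16_000, 10.0 ** (-(1.0 - taper)), 1.0)
spectrum = amp * np.exp(2j * np.pi * rng.random(f.size))
x = np.fft.irfft(spectrum, n)

# Normalized autocorrelation via the Wiener-Khinchin theorem.
acf = np.fft.irfft(np.abs(np.fft.rfft(x)) ** 2)
acf /= acf[0]

# Compare envelope peaks with the (n - 1/4)/fe approximation.
for k in range(1, 6):
    t_pred = (k - 0.25) / fe
    i = int(round(t_pred * fs))
    j = i - 20 + np.argmax(acf[i - 20:i + 21])   # local search near prediction
    print(f"n={k}: predicted {1e3 * t_pred:.2f} ms, "
          f"peak found at {1e3 * j / fs:.2f} ms, acf = {acf[j]:+.4f}")

The first peak shows the small discrepancy visible in Fig. 10, later peaks land essentially on the predicted lags, and the small peak amplitudes illustrate the normalization point made in the text.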
Figure 10 is the normalized autocorrelation for a noise with equal-amplitude, random-phase components from 500 to 20 000 Hz. Like the experimental stimuli, the spectrum was attenuated between 16 and 20 kHz, the duration was 500 ms, and the sample rate was 100 000 samples per second. Figure 10 shows that the peaks of the envelope agree reasonably with the n − 1/4 approximation noted just below Eq. (A8), and that was an essential reason for the test. However, the actual values of autocorrelation are small because the function is normalized by all the components in the band. Restricting the band to the region where neural synchrony occurs, or to a single auditory filter, would increase the size of the normalized function.

FIG. 10. (Color online) Autocorrelation for a HP noise band with equal-amplitude, random-phase components from 500 to 20 000 Hz. It is normalized to a value of 1.0 (well off the plot) for a lag of zero. The vertical lines are drawn at lag values given by τ_n = (n − 1/4)/f_e, f_e = 500 Hz, for 1 ≤ n ≤ 15, as a test of the n − 1/4 approximation for the lag values of the peaks. Close inspection reveals a discrepancy for the leftmost peak, but discrepancies for the other peaks are too small to be seen. Diamonds indicate integer multiples of the period 1/f_e. They are one-quarter period away from the vertical lines.

¹ The 1981 experiment used only three different digital sound files with edge frequencies in three ranges. Finer gradation of edge frequencies was obtained by changing the sample rate. Because the edge frequencies from different ranges overlapped, small shifts attributable to details of the sound files themselves were observed, but the data from different ranges are combined in Figs. 2 and 3 for simplicity. The different ranges can be seen in the 1981 publication.

² Lateral inhibition, if it exists anywhere in the auditory system, might occur at any level of the auditory neural pathway. An inhibition model based on the Greenwood formula for the cochlea makes the assumption that tonotopy at the site of the putative lateral inhibition follows the tonotopy in the cochlea. Therefore, place shifts at the relevant site can be represented by displacements along the cochlear partition as measured in millimeters.

³ The experiments by Klein and Hartmann (1981) maintained a broadband noise throughout an experimental trial, including the nominally "silent" intervals. It was thought that including a noise background (without an edge) along with the matching sine tone would benefit the listener by making the matching interval sound more like the target interval. Subsequent experience with edge pitch matching indicated no perceived benefit from the noise background, and none was used in the current experiments.

⁴ The consistency in matches for listener S can be understood as a proclivity to match indistinguishable edges by sine tones near 50 or 60 Hz. For an edge at 100 Hz, 13 of 16 matches were closely spaced and too low (average −884 cents), pointing to 60.0 Hz. At 70 Hz, 11 of 16 matches were too low by an average of −333 cents, and four others were an octave above that; these matches pointed to 57.8 Hz. At 50 Hz, 10 of 16 matches were too high by an average of 198 cents, and the other 6 matches were an octave above that; these matches pointed to 56.0 Hz. Therefore, the preponderance of matches (means 60.0, 57.8, and 56.0 Hz) were made at about the same frequency for the low edge-frequency range, independent of what the edge frequency actually was.
⁵ The autocorrelation model for a HP noise predicts that the pitch for a 1200-Hz edge should be higher than for a 600-Hz edge by a factor of 1.96 (30-ms window) or 1.94 (15-ms window) instead of 2.00. Both ratios are considerably higher than the equal-tempered minor 7th at 1.78. If the perceptual octave above 600 Hz is enlarged by 1% per McKinney and Delgutte (1999), i.e., to 1212 Hz, a minor 7th could correspond to a slightly larger ratio (1.8), but this ratio is still much smaller than 1.94.

⁶ Data from the last 12 runs of this initial series showed 11 of 12 matches at f_e = 6 kHz within ±50 cents of the edge (12 of 12 with octave correction). At 7 kHz, 11 of 12 matches were within 100 cents of the edge frequency (with one octave correction). At 8 kHz, 11 of 12 matches were within 150 cents of 8 kHz.

⁷ In connection with unexpected results, it may be worth noting the caveat about octave switching in the filter as suggested by Small and Daniloff (1967).

⁸ The experiment protocol included trial-by-trial feedback to the listeners, intended to maintain listener interest. The listeners knew that some pitch shift was an expected result, which would discourage them from adjusting their responses to reduce the shift. Also, the listeners were so skilled at the matching task, and the experiments continued for so many trials, that we think it unlikely that the feedback had a significant effect on the data. Nevertheless, some bias may have occurred, especially because most of the experiment runs were blocked: LP noise or HP noise. If it had occurred, bias would likely have reduced the measured pitch shift. Such reduction would have had the effect of artificially increasing the estimates of the autocorrelation window duration.

FIG. 11. (Color online) Sinc function for a LP edge at f_e = 100 Hz. Two vertical lines are drawn for each peak, the shorter at the exact peak of the sinc function, the longer at the approximate lag value (n + 1/4)/f_e. The long and short vertical lines become indistinguishable as either the lag increases or the frequency increases (or both).
Pneumococcal Antibody Concentrations and Carriage of Pneumococci more than 3 Years after Infant Immunization with a Pneumococcal Conjugate Vaccine

Background

A 9-valent pneumococcal conjugate vaccine (PCV-9), given in a 3-dose schedule, protected Gambian children against pneumococcal disease and reduced nasopharyngeal carriage of pneumococci of vaccine serotypes. We have studied the effect of a booster or delayed primary dose of 7-valent conjugate vaccine (PCV-7) on antibody and nasopharyngeal carriage of pneumococci 3–4 years after primary vaccination.

Methodology/Principal Findings

We recruited a subsample of children who had received 3 doses of either PCV-9 or placebo (controls) into this follow-up study. Pre- and post-PCV-7 pneumococcal antibody concentrations to the 9 serotypes in PCV-9 and nasopharyngeal carriage of pneumococci were determined before and at intervals up to 18 months post-PCV-7. We enrolled 282 children at a median age of 45 months (range, 38–52 months); 138 had received 3 doses of PCV-9 in infancy and 144 were controls. Before receiving PCV-7, a high proportion of children had antibody concentrations >0.35 µg/mL to most of the serotypes in PCV-9 (an average of 75% in the PCV-9 group and 66% in the control group, respectively). The geometric mean antibody concentrations in the vaccinated group were significantly higher compared to controls for serotypes 6B, 14, and 23F. Antibody concentrations were significantly increased to serotypes in the PCV-7 vaccine both 6–8 weeks and 16–18 months after PCV-7. Antibodies to serotypes 6B, 9V and 23F were higher in the PCV-9 group than in the control group 6–8 weeks after PCV-7, but only the 6B difference was sustained at 16–18 months. There was no significant difference in nasopharyngeal carriage between the two groups.

Conclusions/Significance

Pneumococcal antibody concentrations in Gambian children were high 34–48 months after a 3-dose primary infant vaccination series of PCV-9 for serotypes other than serotypes 1 and 18C, and were significantly higher than in control children for 3 of the 9 serotypes. Antibody concentrations increased after PCV-7 and remained raised for at least 18 months.

Introduction

Streptococcus pneumoniae (the pneumococcus) is estimated to cause nearly one million childhood deaths each year [1]. Most of these deaths occur in developing countries, where the pneumococcus is the most frequent cause of childhood pneumonia and where mortality from pneumococcal meningitis is high (around 50%), with many survivors left with severe neurologic disabilities [2,3]. There is a high burden of pneumococcal disease in The Gambia [4,5], where the pneumococcus is the most prevalent bacterial pathogen isolated from children with pneumonia and is responsible for about 50% of cases of pyogenic meningitis [3,4,6]. About 40% of the serogroups responsible for invasive disease in young children in The Gambia are covered by the 7-valent pneumococcal conjugate vaccine (Prevenar®, Pfizer) and about 80% by the 9-valent pneumococcal conjugate vaccine used in trials in The Gambia and South Africa [4,5,7,8]. Pneumococcal conjugate vaccines prevent invasive pneumococcal diseases (IPD) both directly and indirectly by reducing transmission [9,10].
The 9-valent pneumococcal conjugate vaccine (PCV-9), given in a 3-dose schedule beginning at 6 weeks of age with a minimum of 4-week intervals between doses, induced protective levels of anti-pneumococcal antibodies [11] and provided protection against IPD, pneumonia and all-cause mortality in Gambian children up to the end of follow-up at age 30 months [12]. Antibody concentrations with conjugate vaccines decline after primary vaccination. The rate of decline and the persistence of immunologic memory are important parameters in determining the potential need and time for booster vaccination [13]. Gambian children who received primary vaccination with 2 or 3 doses of a 5-valent PCV in infancy showed immunologic memory at 24 months of age [14], but there are few data on declines in antibody concentration or on the persistence of immunologic memory beyond this period in children in developing countries. The currently recommended regimen for PCV in the United States is to follow primary immunization at 2, 4 and 6 months of age with a booster dose in the second year of life [15]. The high prevalence of nasopharyngeal carriage in developing countries such as The Gambia could provide natural boosting, such that the kinetics of the antibody response to PCV could differ from that seen in developed countries and make a booster dose unnecessary, with important cost savings for countries with limited resources. To inform international policy on whether there is a need for booster immunization in low-income countries, more information is needed on the longevity of the antibody response following primary immunization in settings where pneumococcal carriage and diseases are common. We have, therefore, investigated the persistence of pneumococcal antibodies more than 3 years after primary vaccination in early infancy in children who had previously participated in the Gambian Pneumococcal Vaccine Trial (PVT) [12].

Setting and recruitment of study participants

The subjects who participated in this study had previously taken part in a double-blind, placebo-controlled, individually randomized trial of PCV-9 that took place in The Gambia between 2000 and 2004 [12]. This trial enrolled 17,437 children, who received three doses of either PCV-9 (vaccinated group) or placebo (control group). The primary immunization schedule adopted for this trial was vaccination at 6, 10 and 14 weeks of age, but due to the rural setting, the median age at receipt of the first dose of vaccine or placebo was 11 weeks (inter-quartile range [IQR] 8-16 weeks) and for the third dose it was 24 weeks (IQR 19-32 weeks) [12]. After the trial results were reviewed, Wyeth Vaccines kindly donated 7-valent PCV (PCV-7) for all children in the study area in the age cohort that would have been eligible to participate in the trial. A week-long vaccination campaign with PCV-7 was organized by the Gambian Ministry of Health in Upper and Central River Regions in June 2005 for all children aged 2-4 years, and approximately 27,000 children (an estimated 87% of the eligible total) were vaccinated with one dose, which served as a booster dose for children who had previously received PCV-9 and as delayed primary immunization for the control group. A subset of participants from the vaccinated group was selected for the current evaluation of antibody persistence and response to booster vaccination with PCV-7, and the impact of delayed primary immunization with PCV-7 was studied in a subset of children in the previous control group.
Nasopharyngeal carriage of pneumococci was studied in both groups. A list was generated of subjects aged 3-4 years who had received three doses of PCV-9 or placebo during the PCV-9 trial and who lived near one of a selected number of health centres to allow for ease of follow-up. Children for participation in the follow-up study were recruited sequentially from this list until the required sample size had been reached. The study participants were not age or sex matched. Separate consent for participation of children selected for this extended study was obtained from parents/guardians of the participants before enrollment. The field, clinical and laboratory investigators were blind to the group of the study participants. The study was approved by the Joint Gambian Government/MRC Ethics Committee.

Enrolment, follow-up and sampling

At enrolment into the follow-up study in late May 2005, a 3 ml venous blood sample and a nasopharyngeal swab were collected. The calcium alginate fibre tip of the applicator swab was cut off, placed in a container of skim milk-tryptone-glucose-glycerin (STGG) transport medium, and held frozen at −70°C until analyzed. Serum was separated and kept frozen at −20°C until analysis. A full dose of PCV-7 (Pfizer) was given during the campaign from 8-15 June 2005. Study participants were seen at 6-8 weeks, 5 months and 16-18 months following booster/delayed primary PCV-7 vaccination, when further samples were taken for determination of antibody concentrations (at 6-8 weeks and 16-18 months) and carriage of pneumococci (all visits) (Figure 1).

Microbiology

Vials containing a tip of a nasopharyngeal swab (NPS) in transport medium were thawed to room temperature and 100 µl of the sample were diluted ten-fold with sterile Tryptone Soya Broth (TSB). 100 µl of broth were plated onto a selective gentamicin blood agar plate, which was incubated for 18-24 hours at 35°C in an atmosphere containing 5% carbon dioxide. Aliquots of isolates in 15% glycerol broth were kept frozen at −70°C for future analysis. Identification of pneumococci was based on cultural morphology, susceptibility to ethylhydrocupreine hydrochloride (optochin) and sodium deoxycholate (bile solubility). Serotyping was performed with capsular and factor-typing sera (Statens Serum Institut, Copenhagen, Denmark), using a latex agglutination assay as described previously [16]. Isolates with equivocal results were confirmed by the Quellung reaction. S. pneumoniae (ATCC 49619) was used as a quality control strain. All laboratory assays were done blinded to the child's study group.

Statistical analyses

To give a study with 90% power (alpha 0.05) to detect a 30% reduction in carriage of pneumococci of vaccine serotype in infants who had received PCV-9 in infancy at the time of booster immunization, assuming an NP carriage rate of pneumococci of vaccine serotype of 75% based on earlier small studies in non-vaccinated children [14,17], a sample size of 110 children per group was needed. This was increased to allow for losses to follow-up. Differences in antibody concentrations between the groups were determined by comparing the geometric means, as well as the proportions of children with an antibody concentration of ≥0.35 µg/mL, the concentration considered to be protective against IPD [18]. Geometric means of the antibody concentrations in the two groups were compared by the two-sample t-test. For categorical variables, groups were compared using the Chi-squared test or, for small numbers, Fisher's exact test.
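As an illustration of the sample-size calculation just described, the following sketch (not the authors' code; it assumes scipy, and the exact figure depends on the formula variant used) applies a standard two-proportion formula:

```python
# Approximate per-group sample size for 90% power, two-sided alpha = 0.05,
# to detect a 30% relative reduction in vaccine-serotype carriage from an
# assumed 75% baseline (i.e., 75% vs 52.5%). Illustrative only.
from math import sqrt
from scipy.stats import norm

alpha, power = 0.05, 0.90
p1 = 0.75                  # assumed carriage among controls
p2 = p1 * (1.0 - 0.30)     # 30% relative reduction -> 0.525
z_a = norm.ppf(1.0 - alpha / 2.0)
z_b = norm.ppf(power)
p_bar = (p1 + p2) / 2.0

n = (z_a * sqrt(2.0 * p_bar * (1.0 - p_bar))
     + z_b * sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2 / (p1 - p2) ** 2
print(round(n))  # ~94; continuity-corrected variants give ~103, close to
                 # the 110 per group reported before the loss allowance
```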
Holm's method was used to adjust the P-values for multiple comparisons between groups [19].

Results

We recruited 284 participants and obtained adequate blood samples from 282, including 138 previously vaccinated with PCV-9 and 144 controls. The mean age at first visit for the follow-up study was 45 months. Age, gender and area of domicile were similar between the two study groups (data not shown). At the final visit 16-18 months after PCV-7, 113 (82%) and 116 (81%) children from the vaccinated and control groups respectively were evaluated. There was no significant difference in the number of children who dropped out of the study between the groups, neither was there any difference in the reasons for failure to participate further in the study (Figure 1). The period that had elapsed between the last dose of PCV-9 or placebo and receipt of PCV-7 ranged from 34 to 48 months.

Anti-pneumococcal antibody concentrations before PCV-7 vaccination

The baseline antibody concentrations before booster/delayed primary vaccination with PCV-7 were high for most of the PCV-9 serotypes, although lower than those reported 4 weeks after administration of the third dose of PCV-9 in the original trial [11] (Table 1). The average (range) of the proportions of children who had protective antibody concentrations (≥0.35 µg/mL) for each serotype at baseline was 75.0% (47.4-99.3%) and 65.9% (36.8-96.5%) for the vaccinated and control children respectively (Table 2). There was considerable variation in the mean antibody concentration seen for individual polysaccharides (Table 1, Fig. 2), with the lowest concentration (and lowest proportion having protective antibody concentrations) being found for antibodies to serotype 1 in both groups. Serotype 6B, 14 and 23F antibody concentrations were significantly higher in previously vaccinated than in control children (Table 1).

Antibody response to booster/delayed primary vaccination with PCV-7

The proportions of children who achieved antibody concentrations ≥0.35 µg/mL were high 6-8 weeks after booster/delayed primary vaccination with PCV-7, ranging from 97-100% for individual PCV-7 serotypes (Table 2). There were no significant differences in the proportions with "protective" antibody concentrations to polysaccharides in PCV-7 (4, 6B, 9V, 14, 18C, 19F and 23F) between groups at 6-8 weeks or at 16-18 months post-vaccination, after accounting for multiple significance tests. However, the geometric mean antibody concentration in the PCV-9 group was higher than in the control group for serotypes 6B, 9V and 23F at 6-8 weeks, but only for serotype 6B at 16-18 months after boosting (Fig. 2 and Table 1). Fig. 2 shows the kinetics of the antibody response following booster/delayed primary vaccination with PCV-7. For the PCV-7 serotypes, antibody concentrations increased markedly at 6-8 weeks after vaccination with PCV-7 (Visit 2) but then fell sharply in both groups, though remaining higher than pre-PCV-7 levels at 16-18 months post-vaccination. For serotypes 1 and 5, which are in PCV-9 but not PCV-7, GM concentrations at 6-8 weeks were no higher than pre-boost in the PCV-9 group, but had increased at 16-18 months post-boost; concentrations in the control group increased over time.

Nasopharyngeal carriage

The proportions of study children who carried pneumococci of each of the PCV-7 serotypes, and serotypes 1 and 5, are presented in Table 3.
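The serotype-by-serotype comparisons reported above follow the procedure set out under Statistical analyses. As a hedged illustration only (not the authors' code; the concentration values below are simulated placeholders, and scipy and statsmodels are assumed), the comparison of geometric means with Holm's adjustment can be sketched as:

```python
# Sketch of two-sample t-tests on log-transformed antibody concentrations
# (i.e., geometric-mean comparisons), with Holm's adjustment across the
# nine serotypes. Data are hypothetical placeholders, not study data.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
serotypes = ["1", "4", "5", "6B", "9V", "14", "18C", "19F", "23F"]
pvals = []
for st in serotypes:
    vacc = rng.lognormal(0.2, 1.0, 138)   # hypothetical µg/mL, PCV-9 group
    ctrl = rng.lognormal(0.0, 1.0, 144)   # hypothetical µg/mL, controls
    _, p = ttest_ind(np.log(vacc), np.log(ctrl))
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for st, p, pa, r in zip(serotypes, pvals, p_adj, reject):
    print(f"{st}: p={p:.3f}, Holm-adjusted={pa:.3f}, significant={r}")
```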
The frequency of nasopharyngeal carriage of each individual serotype was too low to allow meaningful individual comparisons, and no child carried types 1 or 5 at any time point post-vaccination. Thus, further comparisons were done for carriage of any PCV-9 serotype, PCV-9-related types, and non-PCV-9 serotypes between the groups. No significant differences were found in the proportions of nasopharyngeal carriers in these categories between the two groups at any time point. The tendency for children in the vaccinated group to carry more non-vaccine serotypes (39%) than children in the control group (28%) remained at 6-8 weeks post-vaccination but not at 5 months post-PCV-7 or later.

Table 1. Geometric means (95% CI) of IgG antibody concentrations (µg/mL) before and after boosting with PCV-7 in the group that previously had placebo (Control) or PCV-9 (Vaccinated).

Table 3. Nasopharyngeal carriage of Streptococcus pneumoniae before and after vaccination with a single dose of PCV-7.

Discussion

A substantial proportion of the children in the Gambian pneumococcal vaccine trial had antibody concentrations considered to be protective against invasive pneumococcal diseases for most of the serotypes investigated at the age of 3-5 years, regardless of whether or not they had received PCV-9, although proportions were a little higher overall for children who had previously been vaccinated with PCV-9 and substantially higher for some serotypes. The relatively high concentrations of anti-pneumococcal antibodies observed in our study population are similar to those demonstrated previously in PCV-vaccinated Gambian children [11,20,21]. The significant differences in antibody concentrations prior to vaccination with PCV-7 for serotypes 6B, 14, and 23F between the groups indicate longevity of some vaccine-induced antibodies. This may be a result of boosting by natural colonization and occult pneumococcal infection [22]. However, it is of concern that antibody concentrations to serotype 1 polysaccharide were the lowest among those measured, as pneumococci belonging to this serotype are a frequent cause of IPD in The Gambia and in other developing countries. The high antibody concentrations in the control group for serotypes 4, 9V, 14 and 19F probably reflect past carriage of these organisms. High antibody concentrations to serotype 5, which was not commonly found in carriage studies of infants in the PVT [23] nor in this follow-up study, may reflect past short-duration carriage, which is less likely to be detected in cross-sectional studies, and/or past occult or symptomatic invasive infection. Previous studies of infants and toddlers in The Gambia have found carriage of serotypes 19F, 6B, 23F and 9V to be the most common among the vaccine serotypes [23]. Similar vaccine serotypes were found to be the most common in another carriage study that cut across all age groups, including adults, prior to national routine PCV-7 vaccination in the Western Region of The Gambia [24]. A high proportion of children with antibody concentrations of ≥0.2 µg/mL 5 years after primary vaccination with PCV-9 was described for serotypes 4, 6B, 9V, 18C, and 23F among HIV-uninfected South African children [25]; in our study, proportions were lower for serotypes 1, 4, 18C and 23F than for other PCV-9 serotypes. These data support the existence of differential host and/or environmental factors influencing the responses to the various serotypes contained within pneumococcal conjugate vaccines.
A strong response to PCV-7 at the age of 3-5 years was seen in children in both groups. The proportion of children with antibody concentrations considered to be protective against invasive pneumococcal disease more than 3 years after primary immunization ranged from 55 to 99% in the PCV-9 group and 47 to 96% in controls, but the booster/delayed primary vaccination increased this proportion to >90% for all PCV-7 serotypes in both groups for up to 18 months, except for serotype 23F in controls, which had fallen to 85% at 18 months post-PCV-7. These increases in the antibody concentration following booster/delayed primary PCV-7 contribute to consideration of whether or not to include a booster dose in the PCV vaccination schedule in countries where carriage of pneumococci is common. Such a review of vaccination schedules would require a study with more frequent assessment of the kinetics of antibody concentrations to fully assess a potential anamnestic response among previously vaccinated children. We have shown recently that in resource-poor settings, administration of a booster dose of pneumococcal polysaccharide vaccine (PPV-23) following primary immunization with one or two doses of PCV-7 diminished the differences in initial antibody responses and might lower the cost [26]. Further study of optimal pneumococcal vaccination schedules would be helpful [27,28]. The effect of a single dose of PCV-7 on antibody concentrations in the control group also supports the WHO recommendation that when PCV is first introduced into routine childhood immunization programmes, a single catch-up dose of PCV-7 may be given to previously unvaccinated children aged 12-24 months and to children aged 2-5 years who are at high risk [27]. No significant differences were found in the proportions of children carrying pneumococci between the children who had received booster/delayed primary vaccination and the control group at any time point. Following booster or delayed primary immunization with PCV-7, there was a small decline in the prevalence of carriage of pneumococci of vaccine serotype, as might have been expected, which was not seen for non-vaccine serotypes. The initial sample size calculation for this study was based on the assumption that the overall carriage rate of pneumococci of vaccine serotype in children in the control group would be 75%. Because this figure was only 17%, there was minimal power to detect significant differences between groups. The increase in prevalence of carriage of pneumococci of non-vaccine type among children who had received PCV-9 in infancy, previously reported [23], persisted to the age of 3-4 years and up to 6 weeks after the booster vaccination, but this difference was minimal at and after 5 months post-PCV-7 vaccination. Our study was not designed to assess correlation between antibody levels and carriage. Another report showed high levels of functional antibody after post-primary PPV-23 vaccination without impact on carriage, although there had appeared to be an effect of the number of doses of conjugate vaccine received on carriage at age 9 months [29]. An earlier study of primary conjugate pneumococcal vaccination had found that higher IgG concentrations led to a decreasing probability of a new acquisition of pneumococcal carriage of the corresponding serotype, and achieved statistical significance for serotypes 14 and 19F [30].
Among adults, a pneumococcal anticapsular IgG concentration of 5 µg/mL has been shown to correlate with protection against carriage of serotype 14 [31]. The prevalence of carriage before PCV-7 vaccination was lower than has been reported previously in other parts of The Gambia, where overall carriage across a community including adults was 72%; it was 97% among children <1 year old and 93% among babies aged <1 month [24,32]. The lower than expected carriage of pneumococci before PCV-7 vaccination was surprising and may be related in part to temporal trends in pneumococcal carriage, but also to the increasing age of the study population. In conclusion, there were significantly higher antibody concentrations to 3 of the 9 serotypes in vaccinated children compared to controls approximately 3 years after primary vaccination with PCV-9, and antibody levels in PCV-9 recipients and controls were increased by PCV-7. Carriage of vaccine serotypes was low in both groups, and we could not adequately assess the effect of PCV-7 on this endpoint.
Pilot Research as Advocacy: The Case of Sayana Press in Kinshasa, Democratic Republic of the Congo

The pilot study obtained Ministry of Health approval to allow medical and nursing students to provide the injectable contraceptive Sayana Press and other methods in the community, paving the way for other task-shifting pilots, including self-injection of Sayana Press with supervision by the students as well as injection by community health workers.

BACKGROUND

The purpose of this article is to highlight the potential of pilot research studies to achieve advocacy objectives. Although the concept is not new, there is little in the published literature to indicate its use as a best practice in international family planning. Research is usually viewed as a means of generating relevant data on the topic, but this case study describes a pilot study that served as the catalyst to achieving change in regulations governing family planning service delivery in the Democratic Republic of the Congo (DRC). The pilot study used medical and nursing students to provide the injectable contraceptive Sayana Press (as well as other methods) as a means to increase family planning uptake and the modern contraceptive prevalence rate. Ultimately, the pilot study was designed to pave the way toward subsequent authorization for community health workers to provide Sayana Press.

Advocacy addresses different audiences at different levels. At the global level, family planning advocacy aims to increase investments from multilateral and bilateral donors as well as the private sector in a particular area (e.g., family planning in general or for a specific issue such as contraceptive security). Global advocacy efforts also aim to set health and development goals to which countries will aspire, as in the process surrounding the Sustainable Development Goals. At national and subnational levels, advocacy is frequently used to increase government commitment toward family planning objectives, such as a budget line item for contraceptive procurement or removal of tariffs on the import of contraceptives. It may also be directed to changes in policy or regulations that directly affect the delivery of family planning services, including task shifting to enable lower-level health workers such as nurses to perform clinical procedures previously reserved for physicians. 1

The global health community increasingly considers advocacy an essential tool to influence financial and political decisions that support access to and use of voluntary high-quality family planning services. Family planning advocacy toolkits present guidelines for developing communication strategies and materials designed to influence policy decisions, including developing an advocacy strategy; engaging policy makers, health sector leaders, community leaders, and the private sector; working with the news media; and other resources. 2 Best practices indicate the need to present reliable data that frame the issue in terms consistent with national priorities, while presenting the material in simple, easy-to-comprehend formats. Advocacy generally employs a combination of evidence and emotional triggers. Advocates seek to gather and analyze existing data (e.g., Demographic and Health Surveys, Multiple Indicator Cluster Surveys, and other country-level studies) to inform their strategies, rather than generating their own data. 2 Information alone, however, rarely achieves an advocacy objective.
Qualitative research and the stories of those most affected by a specific policy or programmatic barrier usually complement quantitative data. Well-known and highly respected personalities can bring attention to an issue and deliver messages to a larger and wider audience (e.g., Angelina Jolie as a U.N. Ambassador). According to a 2006 survey in sub-Saharan Africa, respondents trusted faith-based organizations more than they trusted their own national governments; religious leaders are therefore uniquely positioned to reach both men and women to promote family planning and healthy reproductive behaviors. 3 Despite available guidance on how to use family planning advocacy to achieve objectives, there is limited documentation on the results of these efforts. According to Smith and colleagues, 4 no studies have specifically investigated decision makers' views on and use of family planning research and advocacy.

THE ADVOCACY OBJECTIVE OF THE DRC PILOT STUDY

In early 2011, Sayana Press emerged as a promising means of increasing access to modern contraception at the community level in developing countries. 5 Although its formulation (104 mg of depot medroxyprogesterone acetate per 0.65 mL dose) is similar to Depo-Provera, it contains a lower dose and is administered subcutaneously using a single-use syringe with a short needle called the Uniject system, which can be administered by trained community health workers and clients. A World Health Organization (WHO) consultation in 2009 approved the use of injections by community health workers, even before Sayana Press became available, 6 and successful pilots using Depo-Provera have been reported from other countries. [7][8][9] Studies in Senegal and Uganda have explored the acceptability and feasibility of introducing Sayana Press using community health workers, 10 and a study in Ethiopia explored attitudes toward self-injection. 11

In the DRC, a regulation limiting the provision of injections to only physicians and nurses represented a major barrier to community-level delivery of Sayana Press. An important exception, however, provided an open door to test an innovative approach: medical and nursing students are allowed to give injections if supervised by a clinical instructor. Thus, in the case study presented here, the advocacy objective was to obtain approval from the DRC Ministry of Health to distribute Sayana Press, a new method that was not currently part of the approved method mix, at the community level using medical and nursing students, as a first step toward subsequent testing and eventual authorization for community health workers to provide this method. Not only would this mechanism contribute to increasing access to this new method in the short term, it would also give future doctors and nurses a solid foundation in contraceptive technology and service delivery.
PLANNING PHASE

Before the pilot began, a series of key activities paved the way to making it a reality: (1) a commitment to community-based distribution of contraception in the national strategic plan for family planning, (2) increased awareness of Sayana Press, (3) support of key decision makers, and (4) legal authorization to distribute Sayana Press.

Commitment to Community-Based Distribution

The Multisectoral Strategic Plan for Family Planning in the DRC: 2014-2020 identified community-based distribution as a key strategy for the country to accelerate achievement of its objective of 19% modern contraceptive prevalence by 2020. This call for expansion of community-based distribution by the family planning stakeholder community was an important first step leading to the pilot.

Increased Awareness of Sayana Press

In the months leading up to the pilot, Tulane University, the organization responsible for its implementation, sought opportunities to increase awareness of this new contraceptive method and the studies taking place in other sub-Saharan African countries. At the Third National Conference on Repositioning Family Planning in the DRC in December 2014, the researchers, who also participated in organizing the conference, seized the opportunity to widely diffuse information about Sayana Press and present the pilot experiences in other countries to the community of family planning stakeholders in the DRC. A physician from Senegal leading the Sayana Press initiative in that country gave an overview of the new contraceptive method at one of the early plenary sessions, and a UNFPA consultant working in Burkina Faso led a more clinically oriented session on the method. Seminars were organized in parallel with the conference for the local obstetrics and gynecology, nursing, and midwives societies to further disseminate information about this new method, including the ease of application by non-clinically trained personnel.

Support of Key Decision Makers

Because of the regulation that only physicians and nurses can give injections in the DRC, it was unclear whether the Ministry of Health would give its approval to pilot test the use of medical and nursing students to give injections at the community level. It was therefore essential to enlist the support of the Ministry of Health, and in particular 2 departments (Directions) that had jurisdiction over the organizations involved in the pilot: the 10th Direction (10ème Direction), which oversees the National Program of Reproductive Health (Programme National de la Santé de la Reproduction, or PNSR), and the 6th Direction (6ème Direction), which oversees the training institutes for nursing throughout the country. The researchers obtained agreement from the director of the 10th Direction that he would chair a meeting of key stakeholders in January 2015. The objectives of the meeting were to (1) present the implementation and study design plans for the pilot introduction of Sayana Press; (2) solicit feedback from stakeholders; (3) encourage an open exchange of opinions on the benefits and challenges of this approach; and (4) obtain buy-in among family planning stakeholders for the pilot. Organizing the pilot as a research study that would assess the benefits and limitations of the approach enabled the decision makers to authorize this innovative approach to service delivery, but on a limited scale; further expansion of the approach would depend on the results of the research.
Stakeholders were supportive overall and provided valuable feedback and opinions; however, they called for several changes to the plans, including the inclusion of both urban and rural health zones to make the results more generalizable for subsequent replication. At a follow-up meeting in February 2015, the director of the 10th Direction approved the research pilot. Given that the research team intended to work through local medical and nursing schools, another key decision maker enlisted for support was the 6th Direction, which oversees nursing training institutes throughout the country. The proposed pilot was expected to appeal to the 6th Direction in several ways. Medical and nursing training institutes traditionally use a curriculum that focuses primarily on clinical care in hospitals and health facilities. The proposed activity would provide students with the experience of working at the community level, thus preparing them for a broader array of tasks in the future. Moreover, it would put students in direct contact with clients and enhance their skills in both counseling and service provision. A local NGO, Association de Santé et Dévéloppement (Association for Health and Development), was hired to implement the pilot and entered into discussions with the director of the 6th Direction. The initial inquiries met with considerable enthusiasm, for the reasons noted above. The director's support was so enthusiastic that he offered the NGO affordable office space to oversee the initiative in the same building.

Legal Authorization to Distribute Sayana Press

At a roundtable for government, donors, and partner organizations in December 2014 (in conjunction with the Third National Conference on Repositioning Family Planning in the DRC), the minister of health publicly announced a 1-year approval (also called a waiver) to allow the distribution of Sayana Press in the DRC. In that same month, the 3rd Direction (responsible for pharmaceutical products) issued the marketing authorization (known as AMM, l'autorisation de mise sur le marché) for a 12-month period. With this authorization in place, Pfizer donated 60,000 doses of Sayana Press in March 2015 for the pilot.

IMPLEMENTATION

In early 2015, 10 medical and nursing training institutes were selected to participate in the pilot. Each one nominated a member of its clinical faculty to serve as a focal point to supervise the students involved in the pilot. Members of the PNSR and several family planning implementing organizations developed the training curriculum and materials. In April and May 2015, 135 medical and nursing students received 7 days of training on multiple aspects of service delivery: contraceptive technology, management of side effects, eligibility and delivery of 4 methods (condoms, pills, CycleBeads, and Sayana Press), and procedures for referring interested clients to a nearby health center for clinical methods (e.g., intrauterine devices and implants). The students also participated in a 1-day field practicum, in which they gave family planning counseling, screened clients for eligibility, provided the 4 contraceptive methods to interested clients, and made referrals in a real-life community setting. 12 The pilot officially began in July 2015.
The delivery of contraceptive methods took several forms: (1) campaign days, in which a group of approximately 15 to 20 medical and nursing students provided counseling and contraception to women from the community who had been informed of the opportunity to get free contraceptive services on a specific day; (2) house-to-house visits to counsel women and couples on the use of family planning (with delivery of methods to interested, eligible women); and (3) distribution of contraception on campuses or other sites in the community. 12 The medical and nursing students were referred to as community-based distribution agents (distributeurs à base communautaire, or DBC). When given the choice of 4 methods available on-site and others available through referral to a nearby health facility, approximately one-quarter of clients chose Sayana Press on-site. During the implementation of the pilot and related research, the researchers regularly updated the directors of the 6th and 10th Directions, but did not involve them directly in the routine operations of the pilot.

The evaluation included quantitative surveys of Sayana Press acceptors and of the medical and nursing students who participated, to assess their experience as community-based distributors. The qualitative component consisted of in-depth interviews with 29 key informants: Ministry of Health personnel in decision-making positions, the chief medical officers for selected health zones, nurses in fixed facilities, and staff from the organizations that implemented the pilot. Key findings from the quantitative surveys and qualitative in-depth interviews are summarized below. Full results from the quantitative surveys will be published separately.

Acceptors of Sayana Press

Among all Sayana Press acceptors, 51.6% had never used contraception, including traditional methods. Overall, their experience with Sayana Press was positive; 87.4% encountered no problems. Just over half (58.5%) felt some pain at the time of the injection, but only 9.7% reported pain afterward and 3.4% had side effects. Among acceptors who attended their follow-up appointment 3 months after the first injection, 92.3% received a second injection. The large majority was satisfied with the counseling and services received from the medical and nursing students.

Medical and Nursing Students

Six months after implementation began, 92% of students were still participating in the project. Of these, 46.8% were medical students and 53.2% were nursing students. The median age was 22 years old, and most of the students (71.8%) were women. More than 90% reported that the community was favorable toward their services. The vast majority expressed satisfaction in serving as community-based distributors, and more than 95% would recommend it to others. Their primary complaint was lack of remuneration, followed by insufficient supervision and contraceptive stock-outs.

Key Informants

Overall, key informants in decision-making positions (Ministry of Health personnel, chief medical officers for health zones, nurses, and staff from implementing organizations) responded positively to the pilot study and the strategy of using medical and nursing students as community-based distributors. They had not heard of opposition specifically directed toward Sayana Press or the pilot introduction, although there was a low level of opposition to family planning in general. Key informants stressed the need for careful training of community-based distributors. Zonal health authorities were also amenable to the community-based distribution method and were unaware of opposition at the community level.
All favored expansion to other health zones, especially those that are heavily populated. The key informants cited several challenges: scheduling conflicts between the students' academic program and the pilot, recurrent issues with contraceptive resupply of the community-based distributors, incomplete reporting of service statistics by the students on the distribution of products, and uneven responsiveness of the focal points in different training institutes. Despite these challenges, the staff involved in implementing the pilot were uniformly supportive of this method of using students to distribute contraception at the community level and encouraged its expansion to other training schools and other provinces.

DISSEMINATION OF THE FINDINGS

In December 2015, the research team held a 1-day dissemination event at a hotel in Kinshasa with more than 80 participants. The audience for this event included the primary family planning stakeholders: representatives from the PNSR and the Programme National de Santé de l'Adolescent (National Program for Adolescent Health), other Ministry of Health authorities, family planning implementing organizations, military and police, faith-based organizations, donors (e.g., U.S. Agency for International Development [USAID], UNFPA, WHO), and university researchers, among others. The moderator was a well-known and highly respected figure in the local family planning community, which enabled the director of the 10th Direction to focus on presentations and provide commentary during the event. The program covered a series of topics: Sayana Press as a new method, details about the pilot implementation process, testimonials of focal points (supervisors) from several training institutes, and testimonials of 4 participating medical and nursing students. In later sessions, the research team presented highlights from the surveys of acceptors (on the day of the injection and 3 months later), the survey of students participating in the pilot, and a summary of the key informant interviews. From the tone of the discussion, the majority of the audience seemed amenable to the use of medical and nursing students to deliver Sayana Press.

A highlight of the advocacy process was the final session of the dissemination event on next steps, led by the director of the 10th Direction. Rather than having the director or research team outline possible next steps, the director encouraged stakeholders to recommend possible variations for further testing. The audience collectively volunteered 17 approaches, for example: replicating the model in other provinces, using a similar approach in military and police health zones, having community-based workers (who receive short-term training to perform a specific task) deliver Sayana Press at the community level, piloting self-injection of Sayana Press, and conducting a similar pilot introduction of Implanon NXT (a contraceptive implant preloaded in a disposable applicator) by medical and nursing students, among others. Participants publicly endorsed the use of students as distributors of Sayana Press at the community level and called for the piloting of additional approaches that were very progressive by local standards. The director of the 10th Direction implicitly endorsed the pilot by encouraging the audience to recommend related pilots. Moreover, the final report of the research results for the pilot was issued with his signature.
Figure 1 summarizes the sequence of steps that led to the achievement of the advocacy objective: the DRC government approval of community-based provision of Sayana Press by medical and nursing students. Figure 2 illustrates how this type of policy change influences access to contraception and increases contraceptive uptake.

Figure 2, in outline: the Sayana Press pilot leads to training of additional medical and nursing students and, in turn, to increased access to and use of Sayana Press. Sources of validation for policy changes: pilot study report; Secretary General for Health authorization of community-based distribution by medical and nursing students in subsequent pilots; number of students recruited and trained; PMA2020 data. Abbreviation: PMA2020, Performance Monitoring and Accountability 2020.

RAPID DIFFUSION, REPLICATION, AND TESTING

In early 2015, before the pilot, Sayana Press was relatively unknown, and there was no precedent for having medical and nursing students give injections at the community level. In less than a year, the approach gained legitimacy and acceptance. Both the 6th and 10th Directions were anxious to know what plans were under way to expand the use of students as community-based providers of Sayana Press (and other contraceptive methods) and when the next round of pilot introductions would begin. Within 12 months of the results dissemination, multiple activities were under way that built on the original pilot:

Institutionalizing the use of medical and nursing students within the 6th Direction. A private donor came forward to fund the institutionalization of the approach through the 6th Direction, which will involve developing a more comprehensive module on contraceptive technology as part of preservice training and making community-level service provision a routine part of the students' training and part of the health information system.

Replicating the approach in another province. In October 2016, 119 nursing students received training as community-based distributors in Matadi, provincial capital of Kongo Central, and began providing Sayana Press, Implanon NXT, pills, male condoms, and CycleBeads at the community level.

Recruiting similar cadres of workers to distribute Sayana Press. Several organizations funded by USAID recruited students.

Training medical and nursing students to deliver an expanded package of services. A major donor came forward with additional funding to test the effectiveness of students in the provision of integrated maternal and child health and family planning services for first-time mothers ages 15 to 24. This gender-transformative project also incorporates the fathers of the babies as part of the population that would benefit from the intervention.

There were other positive outcomes from the Sayana Press pilot. The inclusion of the larger family planning stakeholder community in the initial deliberations over the pilot engendered support for and use of the final results. The positive findings from the pilot encouraged 2 major contraceptive donors, USAID and UNFPA, to procure larger quantities of the product to respond to the potential large demand for Sayana Press generated through other projects. The 2 social marketing projects based in Kinshasa also intensified their promotion of Sayana Press following the pilot.
KEY SUCCESS FACTORS

There is nothing novel in the concept of doing local research on issues that have been researched elsewhere as a means of obtaining local buy-in for innovative approaches. What is remarkable in this particular pilot is how fast the change took place. Although one cannot say with certainty what triggered the rapid change, several factors appear to have played a role.

First, the environment was ripe for innovation in the area of family planning. Since 2012, the DRC government has shown increasing political will toward family planning. 13 The Prime Minister's Office has repeatedly linked the demographic dividend to the country's aspirations to be an emerging nation by 2030. The international donor community has reacted very favorably, both in terms of additional financial support to family planning initiatives and visibility in international fora (e.g., the invitation of the prime minister to address the closing plenary session at the 2016 International Conference on Family Planning in Nusa Dua, Indonesia). 14 The DRC has often lagged in development initiatives, as reflected by its high maternal and infant mortality rates. Yet in family planning, the DRC is emerging as a regional leader. The momentum around family planning in the DRC created an environment that was ripe for another progressive step: authorization of the distribution of Sayana Press at the community level.

Second, the research component of the pilot allowed for experimentation with the approach on a limited basis without requiring a large-scale policy change. Policy makers could reduce their political liability by withholding authorization on a larger scale, pending results of the pilot. If successful, they had evidence with which to support the expansion of the approach beyond the pilot sites. If unsuccessful, they could withhold approval, either entirely or pending modifications to the design.

LESSONS LEARNED

Advocacy efforts require tailoring to specific countries because of differences in political, social, legal, and economic contexts. However, certain lessons from this experience in Kinshasa are likely applicable to other advocacy efforts:

1. There was clear political commitment to family planning and to community-based distribution, as reflected in the National Multisectoral Strategic Plan for Family Planning: 2014-2020, which called for community-based distribution as a means to increase contraceptive access and thus increase the modern contraceptive prevalence rate to 19% by 2020.

3. Relevant decision makers were identified and enlisted from the start, not only to participate but to take a lead role in shaping the design of the research pilot.

4. The involvement of family planning stakeholders (including policy makers) in developing consensus on the design contributed to the success of this pilot and opened doors to next steps.

5. The research team cleared the necessary legal hurdles (obtaining authorization for the entry of Sayana Press into the local pharmaceutical market) with the support of local officials.

6. The pilot involved 3 Directions within the Ministry of Health, all of whom played a key role in its success.

7. The design lent itself to replication to other provinces and institutionalization within the Ministry of Health. 15

FINAL REFLECTIONS

We acknowledge that advocacy has limitations. It relies on a range of expertise to inform objectives and to implement policies and programs.
Rarely do we have a counterfactual of what would have happened in the absence of the advocacy initiative. Moreover, serendipitous events can occur that either facilitate or hinder an advocacy effort. As a result, evaluation of the role of advocacy in improving health conditions may not be definitive. For example, Figure 2 points to plausible pathways by which advocacy influences behavioral outcomes among the target population, but it does not demonstrate cause and effect. Curiously, the strength of this pilot was not in the precise findings it obtained but rather in the process used in designing, implementing, researching, and disseminating the results publicly to a large group of relevant stakeholders. This being said, it is essential that the research methodology used to support advocacy objectives be of the highest quality, and that results, both positive and negative, be disseminated.
Use of long-acting reversible contraception among adolescents and young women in Kenya

The Kenya Demographic and Health Survey (KDHS 2014) revealed changing patterns in the contraceptive use of young women aged 15–24, shifting from injectable methods to implants. Long-acting reversible contraception (LARC) is user-friendly, long-term, and more effective than other modern methods. It could be a game-changer in dealing with unintended pregnancies and herald a new chapter in the reproductive health and rights of young women. This study determined the factors associated with LARC use among adolescent girls and young women to expand the evidence of its potential as the most effective method of reducing unwanted pregnancies among the cohort. This study analysed secondary data from KDHS 2014 using binary logistic regression. The findings showed a rise in LARC use (18%), with identified predictors of reduced odds being aged 15–19 [OR = 0.735, 95% CI = 0.549–0.984], residence (rural) [OR = 0.674, CI = 0.525–0.865], religion (Protestant/other Christian) [OR = 0.377, CI = 0.168–0.842], married [OR = 0.746, CI = 0.592–0.940], and region (high contraception) [OR = 0.773, CI = 0.626–0.955], while the number of living children showed increased odds for 1–2 children [OR = 17.624, CI = 9.482–32.756] and 3+ children [OR = 23.531, CI = 11.751–47.119]. This study established the rising popularity of LARC and identified factors that can be addressed to promote it. Its increased uptake could help Kenya achieve the International Conference on Population and Development 25's first and second commitments on teenage pregnancies and maternal and new-born health, thus promoting the health, wellbeing, educational goals, and rights of this critical cohort. This study can guide the accelerated efforts needed in Kenya's march towards the five zeros of unmet need for contraception, teenage pregnancies, unsafe abortions, preventable maternal deaths, and preventable neonatal/infant deaths.

Introduction

In Kenya, teenage pregnancies account for 18% of school dropouts and deaths among adolescents aged 16-18. These pregnancies are mostly unintended and are associated with additional negative health outcomes such as sexually transmitted infections/HIV/AIDS, unsafe abortions, miscarriages, and complications during birth that can leave victims with lifelong health challenges [1]. Adolescent girls aged below 19 constitute 20% of the patients who undergo post-abortion services in Kenyan health facilities, as well as 50% of those admitted with severe complications [2].

There are approximately 5 million adolescent girls and young women aged 15-24 in Kenya, which is more than 10% of the latest population figure of 47.6 million [3]. Therefore, it is imperative to acknowledge, understand, and respond to their reproductive health needs to minimise unintended pregnancies that compromise quality health services and sometimes lead to unsafe abortions and deaths from pregnancy and childbirth complications, especially for adolescents whose bodies have not matured [4].
Reducing teenage pregnancies from 18% to 12% by 2020 and 10% by 2025 was a Family Planning 2020 (FP2020) commitment for Kenya in 2017, but the targets have changed following commitments made at the International Conference on Population and Development 25 (ICPD25), Nairobi Summit, held in 2019. The first ICPD25 commitment is to eliminate adolescent pregnancies in an effort to achieve universal health coverage for quality reproductive health services for adolescents and youth by 2030, and the second is to eliminate preventable maternal and new-born deaths [5]. The issue is also addressed in Sustainable Development Goal number 3, target 3.2.

In the last two decades, Kenya has made remarkable progress in its uptake of contraception and has reduced the unmet need for contraception among all women of reproductive age. However, the pace has not been as fast for adolescents and young women, and inequalities are evident [6]. Concerted efforts can help reduce this gap [7] and increase universal health coverage's family planning index from 70% to almost 100% [8].

Long-acting reversible contraception (LARC), which refers to implants and intra-uterine devices (IUDs), was not previously encouraged among adolescents and young women, and only accounted for 2% of use in 2008/09. However, by 2014, the uptake of implants had risen almost tenfold and had replaced the contraceptive pill and condoms to become the second most popular contraceptive (after injections) for this segment of women in Kenya [9,10]. LARC was recommended by the World Health Organization as being safe and suitable for adolescent girls and young women, including nulliparous girls [11], and this was among the factors that contributed to its increased usage.

The Health Act (2017) reinstated the right of every Kenyan woman to safe, effective, acceptable, and affordable contraception services [12], while many recent family planning policy documents have emphasised the prioritisation of LARC because it is longer acting, safer, more convenient, and highly effective [13]. Moreover, FP2020 issued a global statement promoting the expansion of method mix for adolescent girls and young women by including LARC [14].

The 2014 Kenya Demographic and Health Survey (KDHS) reported increased sexual activity among adolescent girls and young women without the use of effective contraception, and that 90% of sexually active adolescents and young women may end up with unintended pregnancies within a year of having unprotected sex [15]. Indeed, KDHS 2014 reported that 15% to 40% of adolescents aged 17-19, who should ideally be in school, have started to bear children. This results in increased adverse social consequences, as teenage mothers are more likely to drop out of school and lose out on education, career advancement, and social status [6]. LARC can help these young women avoid or delay pregnancies and reduce the incidence of maternal mortality [4], and thus aid the progress towards achieving the ICPD25 commitments 1 and 2 for Kenya.
LARC is the most effective contraceptive method (99% effective); when used correctly, it is reported to be 100 times more successful in the first year than the injection or contraceptive pill combined, and thus to reduce the risk of unwanted pregnancy by half [13]. Evidence has shown that more than 60% of adolescents and young women would readily utilise it if they were given comprehensive counselling by health providers [13]. A recent study in the USA provided evidence that the increased uptake of LARC has drastically reduced unintended pregnancies and abortions [16].

LARC regulates fertility for three to five years and can reduce rapid repeat pregnancies, which adolescents are at higher risk of; it can also deal with the challenge of the incorrect and inconsistent use of contraceptives, which is the major cause of unintended pregnancies [17]. LARC can reduce discontinuation, which is common among adolescent girls and young women, because it is long-term and poses no adherence challenges. It also has high user satisfaction, is not user-dependent, and comes with some non-contraceptive benefits, such as reducing menstrual pain/endometriosis and anaemia by raising haemoglobin levels [18]. Providing more information on LARC may promote its use among sexually active unmarried young women and consequently reduce unintended pregnancies [19].

Despite the numerous benefits of LARC, young women aged under 25 mostly use short-acting contraception such as pills, condoms, and injections, because service providers offer these methods under what can be called provider bias [20]. Provider bias refers to providers withholding contraceptive information or services regarding some methods, for reasons unrelated to the medical condition of the client and against ethical guidelines, thus creating a barrier to informed choice and a wider method mix. It is based on concerns about the suitability of a method due to the age, marital status, and parity of a client, and is more pronounced for adolescents and young women [21]. Provider bias should be eliminated to reduce the unmet need for contraception and to meet the reproductive goals of contraceptive clients. Health workers should provide all available information and counselling so that adolescent girls have the freedom to choose LARC if they wish, thereby enhancing client satisfaction and method continuation [22].

Adolescent girls also harbour some misconceptions about LARC regarding its short- and long-term side effects, which they believe could negatively affect their fertility in the future. One of the misconceptions associated with IUDs is an increased risk of pelvic inflammatory disease in nulliparous users, but no evidence supports this claim [23]. Other concerns include fear of side effects such as weight gain and changes in the menstrual cycle, as well as a delayed return to fertility upon removal of the methods, more so for implants [24]. These misconceptions can be addressed with the provision of comprehensive information and counselling on LARC.
Much literature exists on modern contraceptive use, but there is a dearth of the same for adolescent girls and young women in Kenya, especially for LARC methods, which have only recently become popular among the group aged 15-24. Performance Monitoring and Accountability 2020 provides periodic briefs on modern contraceptive prevalence in adolescent girls and young women, but the factors that underlie the figures are not well understood [25,26]. Studies of the determinants of LARC use among this sub-population are thus needed, both to establish the barriers that potential users face and to inform the design of targeted programmes that can improve the accessibility, availability, and acceptability of the methods among these young women [27].

Kenya's current Family Planning Policy, articulated in the Costed Implementation Plan (CIP) 2017-2020, advocates for the use of modern methods and LARC because of their efficacy, convenience, ease of use, continuation rates, and long-term nature. The policy recognises that sexually active young women aged 15-24 use less effective methods of contraception because they have little information on LARC. The CIP has the target of increasing the use of modern methods among this age group by 10% for unmarried women (from 49.3%) and married women (from 36.8%) [28]. In light of this, this study seeks to answer three critical questions regarding assessment and policy concerns: (1) What are the proportions of adolescent girls and young women using LARC? (2) What factors influence the choice of LARC methods? (3) Are there differences between adolescent girls and young women from lower and higher socio-economic strata? The main objective was to determine the factors associated with LARC use among adolescents and young women aged 15-24 to expand the evidence for LARC's potential as the most effective method of reducing unwanted pregnancies among this vulnerable cohort.

Ethical statement

Specific ethical approval is not required for secondary analysis of DHS data, but permission to use the data was obtained from ICF Macro. The secondary analysis was done under the original consent provided by participants.
Data sources

This study used national, secondary data from the KDHS 2014. The KDHS is a national cross-sectional survey that monitors population and health indicators such as household characteristics, fertility, and maternal and child health, and is conducted every five years by the Government of Kenya. Information was collected using three questionnaires: one for households, one for women aged 15-49, and one for men aged 15-54. The KDHS 2014 is the fifth demographic health survey in Kenya. It covered 36,430 households, from which a total of 31,079 women were interviewed, including 11,555 women aged 15-24. Data were extracted for this group, and it emerged that 8,560 of these women reported not using any method of contraception at the time of the survey; thus, they were excluded from the sample. From the remaining 2,995, another 13 did not use a modern method of contraception and were also excluded. The selected sample of 2,982 comprised women aged 15-24 who reported current use of any modern method of contraception at the time of the survey, regardless of their marital status. The inclusion criterion was the use of modern contraception, while the non-use of contraception/modern methods resulted in exclusion. Specifically, data were obtained from the contraceptive calendar contained in the questionnaire for all interviewed women aged 15-24, within the individual women recode files.

KDHS sampling

The KDHS 2014 used a stratified sample drawn from a national master sample, NASSEP V. This contained 5,360 clusters, split into four equal sub-samples, drawn from 96,251 enumeration areas that were then split further into households spread across the 47 counties in the country. The counties were stratified into urban and rural strata. The KDHS 2014 sample targeted 40,300 households from 1,612 clusters around the country. More details on the KDHS 2014 process of sampling, data collection, and analysis, as well as the variables for which data were collected, are available online [6].

Data variables

The dependent/outcome variable in the study was the current method of contraception (V312), which was coded as 1 if using LARC (IUD or implants) and 0 if using another modern method (contraceptive pill, male/female condom, injection, female sterilisation, periodic abstinence, or withdrawal method). It was obtained from two KDHS questions: 'Are you currently doing something or using any method to delay or avoid getting pregnant?' and 'Which method are you using?'
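As an aside for readers who may wish to reproduce this selection and coding, a minimal, hypothetical sketch in Python/pandas follows (the original analysis used SPSS). The file name, and all numeric codes except the standard DHS codes for IUD (2) and implants (11), are assumptions that should be verified against the KDHS 2014 recode manual.

```python
import pandas as pd

# Hypothetical sketch of the sample selection and outcome coding described
# above, using DHS individual recode (IR) variable names.
women = pd.read_stata("KEIR7XFL.DTA", convert_categoricals=False)  # hypothetical file name

# Keep women aged 15-24 (v012 = age in completed years).
young = women[women["v012"].between(15, 24)]

# Exclude non-users (v312 == 0 means "not currently using").
users = young[young["v312"] != 0]

LARC_CODES = {2, 11}                   # 2 = IUD, 11 = implants (standard DHS coding)
OTHER_MODERN = {1, 3, 5, 6, 8, 9, 14}  # illustrative codes for the comparison methods

# Keep users of the methods listed above and flag LARC as the binary outcome.
sample = users[users["v312"].isin(LARC_CODES | OTHER_MODERN)].copy()
sample["larc"] = sample["v312"].isin(LARC_CODES).astype(int)  # 1 = LARC, 0 = other modern
```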
Independent variables were selected from household- and female-level characteristics. The household-level variables were wealth status (V190), residence (V025), and region (V024), while the female-level variables were age (V013), education (V106), marital status (V501), religion (V130), number of living children (V218), and desire for children in the future (V605). Most of the variables were recoded to suit the focus of the study. Education was recoded into none/primary and secondary/higher categories, while wealth status was classified into three tertiles (lower, middle, and higher) to represent the KDHS categories of poorest/poorer, middle, and richer/richest, respectively. Region was presented as eight individual regions coded 0-7, and was also reclassified into two groups labelled high contraceptive use (Central region, Nairobi, and Eastern region) and low contraceptive use (the other five regions) to take into account the targeted family planning interventions based on regional contraceptive prevalence. Marital status was recoded from the various categories into married or not married, desire for children in the future was coded as wanting or not wanting children, and the number of living children was categorised as none, 1-2 children, and 3+ children. Religion was coded as none/other, Catholic, Protestant/other Christian, and Muslim. Age and residence were retained in their original categories. The demographic health survey contains much information on the demand side of family planning, but little information is available on the supply side. Therefore, no variables were available in the data set to assess supply.

Data analysis

The first step was to extract a dataset of women aged 15-24 from the larger dataset of all interviewed women aged 15-49 and then to determine the frequencies of the contraceptive methods used by the women at the time of the survey for inclusion in or exclusion from the sample. Next, this study profiled young women based on the characteristics of the selected independent variables using cross-classification analysis. Bivariate analysis was conducted to establish the differentials in the use of LARC based on the different independent variables, and Pearson's chi-squared test (χ²) was used to determine the statistical significance of the selected variables against the use of LARC. The confidence level was set at 95% and significance at p < 0.05. All data were weighted using the recommended DHS weighting for individual women's data, obtained as weight = v005/1,000,000.

The dependent variable had two categories (LARC and other modern methods), and a regression model was selected to determine whether the independent variables had any effect on the current contraceptive method using binary logistic regression. The outcome of interest for this study was LARC usage; hence, it was the reference category in the regression analysis. SPSS Version 22 was used to analyse the data.
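Continuing the hypothetical sketch above, the weighting and modelling steps could be re-implemented in Python/statsmodels roughly as follows (the original analysis was run in SPSS 22; the recoded predictor column names are placeholders, and a survey-design-aware package would additionally account for clustering and stratification).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# DHS individual sample weight, as described above: weight = v005/1,000,000.
sample["weight"] = sample["v005"] / 1_000_000

# Placeholder names for the recoded predictors from the Data variables section.
predictors = ["age_group", "residence", "religion", "married",
              "children_cat", "region_group"]
X = sm.add_constant(pd.get_dummies(sample[predictors], drop_first=True).astype(float))

# Weighted binary logistic regression; freq_weights approximates the SPSS weighting.
fit = sm.GLM(sample["larc"], X,
             family=sm.families.Binomial(),
             freq_weights=sample["weight"]).fit()

print(np.exp(fit.params))  # exponentiated coefficients = odds ratios, as in Tables 3-4
```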
Characteristics of the study population

The first stage of the analysis profiled the study group against the selected variable characteristics, and the results are presented in Table 1. Analysis of the background characteristics revealed that the majority (80.3%) of this study's population were aged 20-24. For the education variable, the number of those who had either no education or primary education was almost equal to those who had secondary education. A similar picture emerged for residence, with 51.6% of the population living rurally. For wealth, the largest share were from higher-wealth households (49.1%), while women from low-contraception regions were a majority (58.4%). In terms of individual regions, Rift Valley, the Eastern region, and Nairobi had the most female users of modern contraception at 26.1%, 15.1%, and 15.0%, respectively.

The majority of women were married or living with a partner (65.2%), and 70.7% had one or two children, while 9.4% had at least three children. Those with no children constituted 19.9%, and the majority (62.2%) did not want more children in the future. For religion, the Protestant/other Christian category was the majority at 73.4%.

Differentials in LARC use

Cross-tabulations were performed to show any statistical associations between the variables under study and the two categories of LARC and other modern methods. The results are presented in Table 2.

Age showed a strongly significant association with LARC use. LARC use among adolescents (aged 15-19) was low at 12.5%, as expected, while other modern methods accounted for 87.5%. Similarly, the majority (80.7%) of those aged 20-24 used other modern methods, while LARC took a 19.3% share.

Education exhibited a significant relationship with LARC use. LARC users with no education or primary education accounted for 19.6%, versus 80.4% for other modern method users, while LARC users with secondary education accounted for 16.4%, versus 83.6% for other modern method users.

Residence showed a statistically significant association with LARC use, at 20.1% among urban dwellers and 16.0% among rural dwellers. Urban other-modern-method users accounted for 79.9%, while rural users accounted for 84.0%.

Wealth showed no significant association with LARC use. The distribution of LARC users by wealth status was 17.0% for the lower, 18.7% for the middle, and 18.3% for the higher wealth categories. Thus, the proportion of LARC users was about the same across the different wealth tertiles. For the other modern method users, 83.0% were from the lower wealth category, while the middle and higher wealth users had almost equal shares of 81.3% and 81.7%, respectively.

Region had a significant influence on LARC use at the individual region level and at the aggregated level. There was more LARC use in the low-contraception regions (19.4%) than in the high-contraception regions (16.1%). For LARC use at the individual region level, the Coast, Western region, and Nyanza accounted for 28.5%, 27.7%, and 25.2%, respectively. The North-Eastern region had the fewest LARC users in terms of actual numbers.

A significant association was also established between LARC use and religion. Those with no religion/other religion and those who were Muslim constituted the largest proportions of LARC users at 33.3% each, and Protestants and other Christians the smallest at 17.0%. Protestants/other Christians and Catholics led the usage of other modern methods at over 80.0% each, while women with no religion and those who were Muslim accounted for 67.0% each.
Marital status was a significant factor in the use of LARC: 19.7% of married women used LARC and 80.3% used other modern methods. In the unmarried category, 14.8% used LARC, while 85.2% used other modern methods. There was thus slightly higher LARC use among married adolescent girls and young women.

The number of living children showed a significant association with LARC use. LARC use was negligible (1.9%) among those with no children, 98.1% of whom used other modern methods. LARC use among women who had 1-2 children was 21.2%, against the use of other modern methods at 78.8%. For those with 3+ children, LARC use accounted for 28.1%, while other modern methods accounted for 71.9%.

Statistical significance was also seen in the desire for more children, as 15.2% of those who wanted more children were LARC users, while 84.8% used other modern methods.

Determinants of LARC use

During this stage of the analysis, all the variables were fitted into the regression model with LARC as the reference category for contraceptive use, the dependent variable. Reference categories for the independent variables were indicated for each variable. In the first model, region was fitted in individual categories together with the variable of religion. Table 3 presents the results of Model 1.

From the regression results in the first model, five variables showed significant associations with LARC use: age, residence, religion, marital status, and number of living children. For the age variable, adolescent girls aged 15-19 had reduced odds of using LARC and were 27% less likely to use it than women aged 20-24 [OR = 0.735, CI = 0.549-0.984]. Residence showed a very strongly negative, significant association, as rural young women were 33% less likely to use LARC against modern methods than their urban counterparts [OR = 0.674, CI = 0.525-0.865]. Religion was another significant predictor of LARC use: Protestant and other Christian women were about 63% less likely to use LARC than those with no religion/other religion [OR = 0.377, CI = 0.168-0.842].

Marital status had a moderate negative influence on LARC use. Married young women were 26% less likely to use LARC than their counterparts who were not married or living with a partner [OR = 0.746, CI = 0.592-0.940]. The number of living children showed the strongest positive relationship among all the independent variables, revealing that the more children a woman has, the more likely she is to choose LARC. Young women with up to 2 living children were about 18 times more likely to choose LARC than those who had no living children [OR = 17.624, CI = 9.482-32.756]. Affirming this relationship, those with at least 3 living children were shown to be 26 times more likely to use LARC than their counterparts with no living children [OR = 25.531, CI = 11.751-47.119].

A second regression model was fitted in which religion was omitted and region was fitted as two aggregated categories (labelled high contraceptive and low contraceptive). The results of Model 2 are presented in Table 4.

The results show that four variables (residence, region, marital status, and the number of living children) emerged as predictors of LARC use. Residence had a significant influence on LARC use: rural women were 38% less likely to use it than their urban counterparts [OR = 0.625, CI = 0.496-0.789]. Women from high contraceptive regions were also found to be 23% less likely to do so than those from low contraceptive regions [OR = 0.773, CI = 0.626-0.955]. Marital status likewise exhibited a predictive influence on LARC use, with married women being 27% less likely to use LARC than their unmarried counterparts [OR = 0.738, CI = 0.589-0.923]. The number of living children again emerged as a strong predictor of LARC use. Women with 1-2 children were 17 times more likely to use LARC than those with no children [OR = 17.197, CI = 9.274-31.887], while women with 3+ children were 26 times more likely to use it than nulliparous women [OR = 25.767, CI = 12.967-51.201].
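As a note on interpretation, the "X% less likely" phrasing used above converts an odds ratio below 1 into a percentage reduction via (1 - OR) x 100; strictly speaking, this is a reduction in the odds of using LARC rather than in the probability. A trivial sketch using three of the odds ratios reported above:

```python
# Converting reported odds ratios into the "X% less likely" phrasing: (1 - OR) * 100.
reported = {
    "aged 15-19 (Model 1)": 0.735,
    "rural residence (Model 2)": 0.625,
    "high-contraception region (Model 2)": 0.773,
}
for label, odds_ratio in reported.items():
    print(f"{label}: odds reduced by {(1 - odds_ratio) * 100:.1f}%")
# Prints 26.5%, 37.5%, and 22.7%, i.e., the 27%, 38%, and 23% quoted above.
```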
Discussion

This study focused on the use of LARC in comparison to other modern methods of contraception. The results showed that 18% of adolescents and young women used LARC. The results are in agreement with previous studies, as LARC use among the study group only began to increase in the last decade. It was not previously encouraged for this age group, as they had not attained their desired family size. However, LARC has recently been recommended and promoted for adolescents and women with no children, hence the increase in usage [29-31].

Factors that showed statistical significance at the cross-tabulation level were age, residence, region (individually and grouped), religion, marital status, the number of living children, and desire for children. Wealth, education, individual regions, and desire for children in the future did not exhibit any influence in the regression analysis, while the predictors of LARC use that emerged were age, residence, region (aggregated), religion, marital status, and number of living children.

For age, as expected, there was more LARC usage among those aged 20-24. This group is more mature and more exposed to sexual relationships, as they could be entering long-term relationships or marriage, since the mean age at first marriage is 20 in Kenya [6]. Age had a significant and direct impact on LARC use because of the varied perceptions of its suitability for women aged 15-24, the indirect effects of exposure and access to contraception information, and LARC's links with a woman's parity and family planning intentions. It is an explanatory variable for contraceptive use in Kenya [9,27].

By residence, more LARC users were found in urban areas, suggesting greater access and exposure to LARC methods. This variable emerged as a strong (negative) predictor of LARC use for rural areas and points to the continued challenge of the availability of LARC services there, especially for the study group. LARC depends on provider skills, as it involves insertions and removals, and these results may point to a shortage of skilled providers in rural areas. On the other hand, provider bias against young women could exist, whereby LARC information and methods are not provided to them because of their age. A previous national study in Kenya [32] similarly found the probability of using long-term methods to be higher in urban areas, and the results of the present study suggest that inequalities in service provision remain. Recent studies have also found a higher prevalence of LARC use in urban areas in Eastern Africa [27].
Region also showed a significant influence on LARC use at the aggregated level, but not at the individual level. When individual regions were regressed together with religion, the effects of region were overshadowed, but when religion was controlled for, region was significant at the grouped level of high and low contraception. It was interesting to see that there was more LARC use in regions with low contraception, which might suggest the success of the LARC promotional campaigns of the last decade spearheaded by the government, especially the family planning programme and the launch of FP2020. As a result, there has been increased usage of modern methods in Kenya, especially in areas where contraception is lower, thus increasing the use of LARC and producing more equity in terms of access [32]. A previous study [33], by contrast, found the use of long-term methods to be more prevalent in regions of high contraception and identified regional differentials.

Religion emerged as an explanatory variable for LARC use. Indeed, religion is deeply rooted in Kenya, and different beliefs and practices influence the choice of contraception for some women. Other studies have documented the influence of Christianity and specifically the reduced odds of LARC use among Protestant Christians [34].

In the bivariate results on marital status, there was more LARC usage among married women, who were obviously more exposed to the risk of pregnancy and were faced with the choice of either spacing or limiting children. In the regression analysis, by contrast, married women were less likely to use LARC. This was expected, since married women may intend to start bearing children soon and thus might avoid LARC, which is long-term and could delay fertility; instead, they may choose short-acting methods that suit their pregnancy intentions.

However, there was greater use of other modern methods among unmarried women, suggesting an increase in sexual relationships and in the risk of unintended pregnancies among this group. This translates into a potential for greater LARC use among this group as they continue to move away from short-term methods. Marital status was shown to exert a negative influence on LARC use, in congruence with other studies of young women [35].

For the number of living children, there was, as expected, more LARC use among those with children, especially those with at least three children. This study also found more LARC use among those who did not want more children (limiters), suggesting that young women are using methods that meet their reproductive health goals. Women have around four children on average in Kenya, which this study's findings corroborate [6]. The number of living children emerged as having a strong positive influence on LARC use. The fact that the odds of using LARC increased with the number of children may also suggest the successful integration of family planning and maternal-child health programmes, for when women use these services, they are exposed to more LARC information and services and can consider meeting their demand for spacing or limiting children. Recent studies in Kenya have documented the success of such integration efforts [36,37].

The fact that wealth showed no significant effect on LARC use at any level of the analysis might suggest success in the supply of contraception, as driven by the FP2020 campaign alongside the Ministry of Health, such that issues of cost have been eliminated and LARC is freely available to all potential users [38]. Previous studies in Kenya had found wealth to positively influence contraception use [33,39].
Education showed no significance in the regression analysis, which is in agreement with other recent studies in Kenya [40]. It appears that the better information and exposure to contraceptive services associated with increased education have been overshadowed by improved information, access, and availability for all potential users in the promotion of LARC methods. These findings suggest a waning influence of the socio-economic factors of education and wealth on LARC use for these young women.

The rise in the use of implants may be attributed to the synergy of efforts by the government, through the Ministry of Health, and other partners to promote the use of LARC in Kenya from around 2010, and more so after the development of the CIP 2012-2016 [41]. The Implant Access programme led by Marie Stopes [42], the Tupange programme funded by the Bill and Melinda Gates Foundation [43], the Tunza Clinic programme by Population Services International [44], and FP2020 have each worked to generate demand for modern contraception in the areas that were lagging in prevalence. These efforts resulted in the supply of more than 1.8 million implants over about five years and a huge increase in their uptake [42,45]. The National Council for Population and Development also relaunched advocacy for family planning by emphasising the small-family norm in 2011 and the Population Policy for National Development in 2012. To meet the demand, there is a need to improve the environment for commodity availability and supply. The supply chain was shifted from the Ministry of Health to the Kenya Medical Supplies Agency, and private sector and non-governmental organisation commodity suppliers were used to cater to adolescents who receive supplies from pharmacies. Commodity budgets were also increased and ring-fenced [41].

To address the reproductive health needs of youth, interventions targeting underserved populations such as adolescents and youth were designed, and youth-friendly services were established in health facilities. Community distribution was revitalised through community health workers to widen the programme's reach. The National Adolescent Sexual and Reproductive Health Policy was developed by the Ministry of Health in 2015 to give direction to adolescent-targeted programme efforts [41,46].
Implications for Kenya

The findings of this study hold some policy and programmatic implications. In general, it seems that adolescent girls and young women are realising the benefits of LARC, which spells some success for youth-targeted programmes. However, over 80% of those who did not want children in the future were not using LARC, and they should be targeted, as LARC provides highly effective protection against unintended pregnancies. More adolescent- and youth-targeted programmes should be designed to encourage its uptake among non-users. Such programmes can hopefully reduce the rates of unwanted pregnancy by reducing the non-use, incorrect use, and inconsistent use of contraception that are common in this group. The rising demand for LARC among adolescent girls and young women suggests the need for enhanced counselling to heighten their knowledge of the methods and to manage their expectations so as to reduce premature discontinuation. Counselling to reduce misconceptions and fears over the side effects of LARC should be a key part of targeted reproductive health programmes. Providers also need to emphasise to these young clients, especially those without children, that the methods are reversible and that fertility returns soon after removal.

Regular and refresher training is critical for LARC service providers in terms of insertion and removal, to improve services in rural areas and to ensure equitable access and availability within urban areas. Training should address possible provider bias by emphasising that LARC methods are not only suitable but recommended for young, unmarried, and nulliparous women.

The increased LARC use seen in previously low-contraceptive regions calls both for sustained LARC promotion in those regions and for renewed campaigns in regions of higher contraceptive use to protect the earlier gains in LARC use.

Conclusion

This study establishes that LARC use is rising among adolescent girls and young women. There is therefore potential to increase its uptake by addressing the predictors of its use identified herein: age, residence, type of contraceptive region, religion, marital status, and the number of living children. Barriers based on these factors should be addressed, and investments in quality family planning services should be made, so that the high rates of unintended pregnancies may be eliminated and adolescent girls and young women can have control over their reproductive and life goals. More knowledge of LARC and its benefits is therefore needed among this critical segment of the population.

Twenty-five years of the ICPD's programme of action and eight years of FP2020 have brought about many achievements in terms of increased modern contraceptive uptake in Kenya, but the positive results are only just emerging for adolescent girls and young women. Against the backdrop of the ICPD25 commitments and the final years of FP2020, accelerated efforts are needed in Kenya's march towards the five zeros of unmet need for contraception, teenage pregnancies, unsafe abortions, preventable maternal deaths, and preventable neonatal/infant deaths. LARC can contribute to these goals for adolescents and young women and thus reduce the adverse social effects of unintended pregnancies, such as lost schooling. LARC should be promoted as a pathway towards better reproductive health for adolescent girls and young women, whose numbers and unmet need for contraception can push the momentum for LARC use and modern contraception in Kenya.
Conceptualization and Validation of the Occupation Insecurity Scale (OCIS): Measuring Employees' Occupation Insecurity Due to Automation

Increased use and implementation of automation, accelerated by the COVID-19 pandemic, gives rise to a new phenomenon: occupation insecurity. In this paper, we conceptualize and define occupation insecurity, as well as develop an Occupation Insecurity Scale (OCIS) to measure it. From focus groups, subject-matter expert interviews, and a quantitative pilot study, two dimensions emerged: global occupation insecurity, which refers to employees' fear that their occupations might disappear, and content occupation insecurity, which addresses employees' concern that (the tasks of) their occupations might significantly change due to automation. In a survey study sampling 1373 UK employees, psychometric properties of OCIS were examined in terms of reliability, construct validity, measurement invariance (across gender, age, and occupational position), convergent and divergent validity (with job and career insecurity), external discriminant validity (with organizational future time perspective), external validity (by comparing theoretically secure vs. insecure groups), and external and incremental validity (by examining burnout and work engagement as potential outcomes of occupation insecurity). Overall, OCIS shows good results in terms of reliability and validity. Therefore, OCIS offers an avenue to measure and address occupation insecurity before it can impact employee wellbeing and organizational performance.

Introduction

The impact of automation on occupations is significant and widespread. According to a widely cited paper by Frey and Osborne [1], 47% of the total U.S. employment is at high risk of becoming automated within the next one to two decades. Similarly, McKinsey [2] estimated that technology will replace 30% or more of the tasks in 60% of all jobs. For Belgium, Cedefop estimates that 40% of employees will need to acquire new skills and competencies in order to continue working or to switch to a new occupation. These predictions suggest that a large portion of the workforce will be impacted by automation in the near future.

In the context of workplace transformation, previous research has primarily focused on job insecurity, which refers to the subjective fear of losing one's current job (quantitative job insecurity) or valued job characteristics (qualitative job insecurity), such as insurance benefits or paid leave [3,4]. In this paper, we argue that the transformations brought about by automation and digitalization refer to a broader phenomenon, which we term 'occupation insecurity.'

In job insecurity research, a 'job' is defined as work one is paid for at a specific organization (Job, n.d., Merriam-Webster dictionary) [5]. A job includes a certain set of tasks and responsibilities at a specific place of work; the term 'job' is therefore context specific. In contrast, an 'occupation' is defined as the profession an individual has been trained in and identifies with [6]. It is a generalized term that covers jobs with similar characteristics. As such, 'occupation' is the umbrella term for the job, employment, or business with which an individual earns money, and it thereby defines one's role in society. For example, working as an administrative employee at a specific company is a job. One can switch jobs and move to a different organization.
However, as more and more administrative tasks become automated, the occupation of administrative worker is increasingly disappearing, potentially forcing individuals to learn a different occupation.

Especially in the context of automation, a wider problem than individual job loss emerges: the disappearance of certain occupations, even as new occupations may arise. As Frey and Osborne [1] point out, people working in certain occupations will be more likely to lose their jobs than others. In a 2021 survey conducted by PwC among 32,500 workers, 39% believed their job would become obsolete within five years, and 60% were worried that automation is putting many jobs at risk. This is one example in which reference is made to 'jobs' while in fact the term 'occupation' would be more accurate. By making occupations obsolete, automation threatens the people working within those occupations with job loss. When speaking about reskilling and upskilling, the underlying meaning is for people to learn a new occupation or to expand their existing skillset to be able to work in a different occupation. Therefore, the threat of automation is wider than the threat of job loss: if someone loses a specific job, the individual can seek a new job elsewhere. If, however, the occupation is disappearing, this threatens the whole livelihood of that person and may create fear in individuals regarding whether or not they will be able to cope with the required changes. We term this phenomenon 'occupation insecurity' and, in this paper, apply the term 'occupation' to refer to one's trained profession.

Occupation insecurity is brought about by labor reallocation due to automation. We therefore define it as people's fears about the future of their occupations due to technological advancements. Occupation insecurity thus refers to uncertainty about the future of one's current profession due to newly automated processes or because other people have better technological knowledge than oneself [7-9].

While the world of work has always evolved (e.g., the Industrial Revolution), the pace of change is now faster than ever [10]. A major contributor has been digital transformation, creating jobs that did not exist a decade ago (e.g., app developer or cloud computing specialist) while reducing the number of jobs in other occupations (e.g., manufacturing). It is not only low-skilled jobs that will be lost: algorithms for 'Big Data' are now rapidly displacing employees performing non-routine cognitive tasks, such as accounting and paralegal jobs [11]. Regardless of whether one takes an optimistic or pessimistic view on the future of work, the trend of whole occupations increasingly transforming cannot be stopped, likely increasing people's perception of occupation insecurity.

In order to be able to conduct research on this phenomenon, it is necessary that (a) the phenomenon is conceptualized and clearly defined and (b) a measurement tool is developed and empirically validated. Therefore, the aims of this present research are:

1. Formulating a conceptualization and definition of occupation insecurity.
2. Based on this conceptualization, developing a novel and psychometrically sound questionnaire to assess occupation insecurity, called the Occupation Insecurity Scale (OCIS).
3. Identifying the prevalence of occupation insecurity.
The OCIS scale will allow researchers to establish a nomological network of the concept of occupation insecurity and will provide practitioners, as well as organizations, with an assessment tool for implementing appropriate interventions to support the working population. In terms of the consequences of occupation insecurity on an individual level, related research on job insecurity suggests that the perceived threat of occupation insecurity may take a toll on one's health, alter behaviour, and affect attitudes [12-14]. On an organizational level, research has demonstrated that many innovative technological implementations fail due to a lack of acceptance by employees, resulting in huge financial losses for companies every year [15]. Thus, the present research is intended to contribute to fostering successful human-computer interaction over replacement, as well as to implementing effective public policy measures. An OCIS scale is required to be able to assess and examine the impact of occupation insecurity on both individual (i.e., burnout) and organizational outcomes (i.e., work engagement).

Occupation Insecurity in Contrast to Other Constructs

Even though there are numerous measures available to tap into different forms of insecurity-related concepts, to the best of our knowledge, there is no scale measuring the concept of occupation insecurity. One well-researched concept is job insecurity, defined as the 'perceived threat of job loss and the worries related to that threat' [3]. Job insecurity can be distinguished into the perceived threat of losing one's job (quantitative job insecurity) and the prospect of potentially losing valued job aspects (qualitative job insecurity) [4]. In contrast to job insecurity, employees experiencing occupation insecurity perceive a much darker picture: they feel that their skills might become obsolete and that, in the near future, their occupations might not exist at all. For example, if food servers lose their jobs at a specific restaurant, they can apply for other jobs as food servers at different restaurants. However, if restaurants become more automated and much fewer food servers are needed, the food servers may be forced to learn a new occupation.

Occupation insecurity is also distinct from other existing insecurity-related concepts. Career insecurity addresses the concern that nowadays few people will have a job for life, and its focus is on one's individual career development [16]. The person may not perceive a threat to the occupation as such but may be uncertain about how to succeed in it. More recent research on the topic has distinguished career insecurity into a larger set of eight dimensions, including career insecurity about unemployment and contractual employment conditions [17]. None of the dimensions, however, directly relates to automation. Employment insecurity refers to the likelihood of being able to remain in paid employment in the present labor market [12]. This concept describes limitations in mobility due to the availability or lack of larger-size enterprises in a country's economy. In the psychological literature, however, this idea is better captured under perceived employability, which concerns the individual's likelihood of obtaining and retaining a job in the internal or the external labor market [18]. Thus, employability refers to individuals' perception of whether they are able to keep or obtain a job in the present labor market.
Therefore, perceived employability applies to a broader context, while occupation insecurity specifically targets automation. Lastly, the concept of technostress has triggered a lot of research, and Tarafdar et al. [19] described it as follows:

Technostress is a problem of adaptation that an individual experiences when he or she is unable to cope with, or get used to, ICTs. In the organizational context, technostress is caused by individuals' attempts and struggles to deal with constantly evolving ICTs and the changing physical, social, and cognitive requirements related to their use. (p. 304)

Technostress is divided into several dimensions: techno-overload, techno-invasion, techno-complexity, techno-uncertainty, and techno-insecurity [19,20]. The two dimensions that appear somewhat similar to occupation insecurity are techno-uncertainty and techno-insecurity. Items measuring 'techno-uncertainty' ask participants whether they work in a context of continuous technological changes and upgrades. Thus, the items are rather descriptive in terms of what is happening in the immediate surroundings rather than asking about worries related to those changes. The 'techno-insecurity' sub-scale consists of five items that ask for three different kinds of information: one item asks about the threat to job security due to new technologies; another asks whether constant updates of skills are required to avoid being replaced by automation; and three items ask about knowledge sharing with co-workers or feeling threatened by co-workers with better technological skills. With the first item, techno-insecurity taps into perceived job insecurity, not a threat to the overarching occupation. The second item specifically refers to updating skills, and the three remaining items tap into relations with co-workers, which is not part of the hypothesized occupation insecurity construct. It can hence be concluded that techno-insecurity, though in some aspects similar to occupation insecurity, does not measure the same construct. Taken together, none of these insecurity-related concepts measures whether people perceive or fear that their whole occupation is at risk of disappearing or significantly changing, which we coin in this article as 'occupation insecurity'.

Characteristics of Occupation Insecurity

Parallels can be drawn between the characteristics of occupation insecurity and how job insecurity is described in the literature. Firstly, occupation insecurity involves a subjective perception [21]. The same objective situation, for example the automation of a work environment, can be interpreted differently by different individuals. It can trigger insecure feelings in some, even when, objectively, the individual would have no reason to be worried. Conversely, others may feel confident about the continuity of their profession while the profession is in fact disappearing [1]. In job insecurity research, it has been found that people's subjective perception tends to align with the objective context [22]. This is a potential characteristic of occupation insecurity that will be examined in the present research.

Secondly, occupation insecurity refers to a threat to the continued existence of people's occupations. Insecure people might experience a discrepancy between the preferred and the experienced level of security [3]. This characteristic also contains the element of involuntariness: it is not by choice that people are faced with occupation insecurity.
Individuals who voluntarily choose to leave their profession do not experience a discrepancy between the preferred and perceived state.

Thirdly, as with job insecurity, occupation insecurity is about uncertainty regarding a future situation [23]. At present, the occupation still exists; it is the anticipation of change, and the uncertainty regarding what this change will bring, that has an impact on the individual perception.

Fourthly, people who experience occupation insecurity are concerned about their whole occupation disappearing, about losing important features of their current occupation, or both. In a new or significantly different occupation, workers will take on different and/or new tasks and responsibilities that they will have to adapt to. Consequently, people will have to retrain or upskill to stay relevant in the future job market. This is a major difference from job insecurity, where people can look for a similar position ('job') elsewhere without necessarily having to switch occupations or expand their skill set.

Based on this fourth characteristic, the focus group results, and the distinction between qualitative and quantitative job insecurity in the literature [4], we hypothesize that occupation insecurity consists of two sub-dimensions: global and content occupation insecurity. Global occupation insecurity is analogous to quantitative job insecurity and refers to people's fear of their whole occupation disappearing. People who experience quantitative job insecurity are concerned about losing their job [3], while people who experience global occupation insecurity are afraid that their whole occupation may disappear. People who experience content occupation insecurity are worried that the tasks and responsibilities needed to perform their occupation may be significantly changing. This idea is derived from the concept of qualitative job insecurity, according to which individuals are concerned about losing valued job characteristics [4]. The major differences between occupation and job insecurity are that (1) occupation insecurity is broader, encompassing the whole profession rather than just the individual job, and (2) occupation insecurity is specifically related to automation.

Previous Research Attempts

One attempt at measuring a concept similar to occupation insecurity was made by Brougham and Haar in 2018 [24]. Specifically, the authors intended to find a measure to capture "the extent to which an employee views the likelihood of Smart Technology, Artificial Intelligence, Robotics and Algorithms [STARA] impacting on their future career prospects". Accordingly, the authors named their concept 'STARA awareness'. However, several aspects of the development of this new concept and its measurement tool could be improved. Firstly, empirical research methods such as a literature review, expert interviews, focus groups, or similar methods could have been employed. Secondly, the STARA awareness scale is based solely on the job insecurity scale developed by Armstrong-Stassen [25]. In order to ensure that the items capture all aspects of STARA awareness, it would have been useful to develop new items or to base the items on various scales that measure more than the job insecurity concept. A close inspection of the items suggests that the STARA awareness scale is likely to measure job insecurity rather than STARA as a distinct construct. For example, one item reads "I am personally worried about my future in my organisation due to STARA replacing employees."
Being afraid of losing one's job in a specific organization is defined as job insecurity [26]. Thirdly, the authors do not claim to have adequately validated their scale. In their own words, they tested "STARA awareness to determine whether employees perceive it as a threat to their job/career" (p. 240). In their paper, the authors establish the reliability, but not the validity, of their scale, a failure for which psychological research in general has been heavily criticized in the recent literature [27]. Overall, Brougham and Haar [24] position STARA awareness within the career planning literature, which also differs from the contribution occupation insecurity seeks to make.

Another measure that by virtue of its name could appear similar to occupation insecurity is the artificial intelligence anxiety scale (AIAS) [28]. AIAS consists of four dimensions: learning, sociotechnical blindness, artificial intelligence (AI) configuration, and job replacement. The first three dimensions tap into aspects different from occupation insecurity: the learning dimension comprises questions that ask participants to rate the extent to which learning about AI creates anxiety for them; the sociotechnical blindness dimension addresses the potential dangers of the misuse of AI; and items of the AI configuration dimension ask participants to indicate whether they find AI intimidating or scary. The final dimension, job replacement, consists of six items. Two of those items ask participants whether they are concerned that humanity might become too dependent on AI and lose its own reasoning skills. One item asks whether individuals are afraid that AI could make society lazier. Another item asks about the possibility of AI replacing humans. Only two items in this dimension ask whether participants are afraid that AI will take away jobs. Therefore, the job replacement dimension of AIAS differs from OCIS for two main reasons: one, the dimension covers various facets, only one of which deals with AI taking away jobs, and two, the emphasis is on 'job' replacement; the items do not cover the possibility of the whole occupation disappearing or significantly changing. In contrast to the STARA awareness scale and AIAS, we seek to empirically validate OCIS to demonstrate its distinct properties compared to other insecurity-related concepts.

Despite methodological shortcomings, Brougham and Haar's [24] findings indicated that greater technology awareness is negatively related to organizational commitment and career satisfaction and positively related to turnover intentions, cynicism, and depression. Those preliminary findings suggest that employees are aware of technology impacting their jobs and that they feel insecure about those changes, with potentially significant implications for the workplace.

Objectives of the Present Study

The three aims of this study relate to its eight objectives in the following ways: Aims 1 and 2, to refine the definition, to develop items covering the occupation insecurity concept, and to test the psychometric properties of the OCIS scale, are addressed in Objectives 1 to 7, and Aim 3, to examine the prevalence of occupation insecurity, is tested in Objective 8. Specifically, the objectives and their respective hypotheses are as follows:

Objective 1 (Dimensionality)

One key characteristic of occupation insecurity is people's concern about their occupation disappearing, about the content of their current occupation significantly changing, or both.
Initially, an open mind was kept to allow all kinds of possible dimensions to emerge. Based on the first results from the focus groups, the hypothesis was developed that the occupation insecurity concept would consist of two separate dimensions, namely global and content occupation insecurity. This would be aligned with the job insecurity literature, in which the job insecurity concept is distinguished into quantitative and qualitative job insecurity [4]. Therefore, we hypothesize:

Hypothesis 1: Occupation insecurity consists of two distinct dimensions (i.e., global and content occupation insecurity).

Objective 2 (Reliability)

As the next step, we examine whether the two dimensions of OCIS are reliable:

Hypothesis 2: The two sub-dimensions of OCIS, i.e., global and content occupation insecurity, are reliable.

Objective 3 (Measurement Invariance)

To examine measurement invariance, we chose three demographic variables commonly applied to stratify a sample [29] that are also related to the scale's future practical use in a variety of contexts and across various sample groups:

Hypothesis 3: The measurement properties of the scale are invariant across various demographic groups, i.e., gender, age, and occupational position.
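Although the analytic details follow later in the paper, Hypotheses 1-3 map onto standard confirmatory factor analysis (CFA) machinery. Purely as an illustrative sketch, not as our actual analysis pipeline: the item names g1-g4 and c1-c4 and the data file are hypothetical placeholders, and semopy is merely one Python option for structural equation modelling. A crude configural check for Hypothesis 3 can be obtained by refitting the same model per group (formal invariance testing would constrain loadings and intercepts stepwise across groups):

```python
import pandas as pd
import semopy

# Two-factor CFA for Hypothesis 1; g1..g4 and c1..c4 are hypothetical item columns.
MODEL_DESC = """
global_oi  =~ g1 + g2 + g3 + g4
content_oi =~ c1 + c2 + c3 + c4
global_oi ~~ content_oi
"""

df = pd.read_csv("ocis_survey.csv")  # hypothetical data file

model = semopy.Model(MODEL_DESC)
model.fit(df)
print(semopy.calc_stats(model))  # fit indices (CFI, TLI, RMSEA, ...) for the 2-factor model

# Rough configural check for Hypothesis 3: does the same structure hold per group?
for gender, subsample in df.groupby("gender"):
    m = semopy.Model(MODEL_DESC)
    m.fit(subsample)
    print(gender, semopy.calc_stats(m)[["CFI", "RMSEA"]])
```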
Objective 4 (Divergent and Convergent Validity)

In order to establish the validity of OCIS, it is important to examine it in relation to other insecurity concepts: first and foremost job insecurity, due to their shared characteristics, and in addition career insecurity, because, similar to OCIS, that concept is also future-oriented:

Hypothesis 4: The two dimensions of OCIS (i.e., global and content occupation insecurity) can be distinguished from the two dimensions of job insecurity (i.e., quantitative and qualitative) and career insecurity (Hypothesis 4a; divergent validity), yet OCIS will also be correlated with those constructs (Hypothesis 4b; convergent validity).

Objective 5 (External Discriminant Validity)

External discriminant validity is established when OCIS has a low or null correlation with a dissimilar and distinct, yet related, construct. To test this, we chose organizational future time perspective [29], because we have no theoretical reason to believe that the two concepts should overlap. At the same time, future time perspective and occupation insecurity are related in the sense that both ask participants about their anticipated occupational future. Therefore, if there is a low correlation between occupation insecurity and organizational future time perspective, external discriminant validity is established:

Hypothesis 5: OCIS will have a low correlation with organizational future time perspective.

Objective 6 (External Validity)

In order to establish external and incremental validity, we compare the level of occupation insecurity between employees working in theoretically secure vs. insecure occupations and examine the relationship of occupation insecurity with other theoretically relevant variables (i.e., burnout and work engagement). In order to establish which occupations are objectively considered secure and insecure, we followed the classification by Frey and Osborne [1]. For the secure group, we chose education, as this has a personal, human component that is harder to automate. For the insecure group, we selected administrative and support staff workers, who oftentimes complete repetitive tasks that can be more easily automated. By contrasting those groups, we also examine the first characteristic of occupation insecurity, namely whether the subjective perception aligns with the objective context.

Hypothesis 6: The objectively insecure group will perceive higher levels of occupation insecurity than the objectively secure group.

Objective 7 (External and Incremental Validity)

We analyze the consequences of occupation insecurity in terms of burnout and work engagement. Burnout is negative for the individual as well as the organization: there is the negative health impact on the employee, which may lead to reduced organizational commitment and performance and to time away from work [30]. Regarding work engagement, research suggests that reduced work engagement could, for example, also negatively impact performance and organizational commitment [31].

We propose the cognitive theory of stress and coping [32] to explain how occupation insecurity might impact burnout and work engagement. This theory suggests that there are two appraisal stages: in the primary appraisal stage, individuals evaluate whether a situation is stressful, and in the secondary appraisal stage, they evaluate whether they can cope with it. If individuals perceive that the situation is stressful and that they cannot cope with it, this will lead to negative outcomes, such as higher psychological strain [33]. Applied to occupation insecurity, the stress coping theory suggests that the evaluation of the occupation as insecure (primary appraisal) would produce perceptions of lack of control (secondary appraisal), which in turn would lead to negative outcomes, such as higher burnout and reduced work engagement.

Burnout stems from continuous stress over time and has recently been defined as:

a work-related state of exhaustion that occurs among employees, which is characterized by extreme tiredness, reduced ability to regulate cognitive and emotional processes, and mental distancing. These four core dimensions of burnout are accompanied by depressed mood as well as by non-specific psychological and psychosomatic complaints. [33] (p. 4)

If individuals are afraid about the continued existence of their occupation (global occupation insecurity) or about the changes that automation will bring to their professions (content occupation insecurity), they likely feel helpless, since the impact and pace of automation are to a large extent outside of their control. This, in turn, may place them at a higher risk of burnout. Therefore, we hypothesize:

Hypothesis 7: Global and content occupation insecurity will be positively related to burnout.

Furthermore, we expect that, through the same mechanism, occupation insecurity will have a negative impact on work engagement. Work engagement includes three dimensions [34]:

Vigor is characterized by high levels of energy and mental resilience while working, the willingness to invest effort in one's work, and persistence even in the face of difficulties. Dedication is characterized by a sense of significance, enthusiasm, inspiration, pride, and challenge . . . . The final dimension of engagement, absorption, is characterized by being fully concentrated and deeply engrossed in one's work, whereby time passes quickly and one has difficulties with detaching oneself from work. [34] (pp. 74-75)
If individuals negatively evaluate the situation surrounding occupation insecurity (primary appraisal) and perceive a lack of control to cope with it (secondary appraisal), the result is likely that it drains their energy (i.e., lower vigor), that they perceive less significance in what they are doing if it might soon be replaced by technology (i.e., lower dedication), and that their concentration level is reduced (i.e., lower absorption). Therefore, we predict that:
Hypothesis 8: Global and content occupation insecurity will be negatively related to work engagement.
Furthermore, we examine the incremental validity of global and content occupation insecurity over quantitative and qualitative job insecurity in relation to the theoretically relevant concepts above:
Hypothesis 9: After all of the variance accounted for by the two dimensions of job insecurity has been partialled out, global occupation insecurity will explain additional variance above and beyond quantitative job insecurity, and content occupation insecurity will explain additional variance above and beyond qualitative job insecurity.
Objective 8 (Prevalence of Occupation Insecurity)
Aligned with Aim 3, the overall prevalence of occupation insecurity in our sample is reported.
Part 1 Pilot: Conceptualization and Item Formulation of OCIS
Part 1 consists of three phases. In the first phase, we conducted focus groups. During this phase, an open mind was kept in order to allow all possible sub-dimensions of OCIS to emerge. This was aided by the fact that the Belgian focus groups were conducted by two researchers blind to the first hypothesis regarding the potential division of OCIS into two sub-dimensions. Following the focus groups, we developed a first, theory-driven set of items based on all relevant workplace insecurity scales identified from the literature. During the second phase, we conducted cognitive interviews with both subject-matter experts and members of the sample population, after which the items were refined. The third phase consisted of a pilot study to test the set of items, followed by further revision.
Phase 1. Focus Groups and Theory-Driven Item Generation
The purpose of phase 1 was to conceptualize occupation insecurity and develop an initial set of OCIS items. To this end, we conducted focus groups in both the UK and Belgium.
Method
In the UK, four groups of two to four people each were interviewed in February 2019. The study was approved by the Departmental Research Ethics Committee (DREC) of the Institute of Population Ageing of the University of Oxford (UK). Of the 11 participants in total, seven were staff members of the University of Oxford, and four were employed by the Oxford University Press. This sample was chosen because, while education is expected to undergo significant changes due to technological developments, experts predict that this sector will be less affected than the media industry [1]. Participants employed at the university included IT analysts and lecturers, who are generally considered less at risk of automation. At the Oxford University Press, participants worked in tax, supply chain management, and administration, all of which are generally considered more automatable. The average age was 50 years (range: 24-77); about 88% of participants were male. In Belgium, a total of three focus groups took place, also specifically targeting participants from occupations that are hypothesized to be more versus less susceptible to automation.
Ethical approval was obtained from KU Leuven (Belgium) under file number G-2019 11 1855. The group that, according to the literature, has a high probability of disappearing consisted of administrative and white-collar workers [1,35]. This group comprised seven individuals: two librarians, two administrative employees, two bookkeepers, and one shop assistant. The other group of occupations, with a hypothetically lower probability of being automated [35], also consisted of seven participants: two engineers, two IT specialists, two nurses, and one psychologist. In addition, a third focus group was conducted with participants whose occupation had already disappeared in the 1970s, namely, 10 ex-miners. In the Belgian focus groups, 75% of participants were male. In addition to the focus groups, we gathered existing, empirically validated workplace insecurity scales to use as a base for theory-driven item generation.
Procedure. The focus groups lasted about one hour, during which participants were asked to define occupation insecurity (and its components) in contrast to other forms of workplace insecurity. Furthermore, they were asked whether they believed that people (directly or indirectly, e.g., by not recommending their occupation to others) perceive occupation insecurity. They were further requested to rate the susceptibility of their occupations to automation. They were subsequently informed about experts' ratings and asked to comment. Research has indicated that more than 80% of all themes are discoverable within two to three focus groups, and that conducting between three and six focus groups is likely to uncover 90% of all emerging themes [36]. Based on those findings, we opted to conduct three focus groups in Belgium and four in the UK, after which it appeared that data saturation had indeed been reached. In order to gather a comprehensive list of existing and validated workplace insecurity scales, we conducted literature searches with respective keywords and consulted experts in the field (minimum five years since first publication in the research area).
Results
From the focus groups, the following key characteristics of occupation insecurity emerged: an element of uncertainty; worry/fear about the future; and expected changes of tasks and/or the whole occupation becoming obsolete. Based on the focus group interviews and those key characteristics, two sub-dimensions of occupation insecurity became evident: one for overall uncertainty regarding the future of occupations due to automation (dubbed global occupation insecurity) and one for uncertainty related to tasks changing due to automation (dubbed content occupation insecurity). Specifically, the following definitions emerged:
Global occupation insecurity: individuals' perceived probability and/or fear of their whole occupation disappearing.
Content occupation insecurity: individuals' perceived probability and/or fear of their occupation becoming significantly different (in terms of tasks) even if the occupation as a whole might not disappear.
In terms of existing workplace insecurity measurement tools, a total of 34 insecurity-related scales were identified (see Appendix A). Taken together, the findings from the focus groups and existing workplace insecurity scales served as a basis for the initial 26 OCIS items. In an iterative process, members of the research group, in addition to two Master's students, generated novel items based on the focus group interviews and existing scales.
Those items were subsequently discussed and further amended within the team, until all members agreed on the set of 26 items for further testing.
Phase 2. Cognitive Interviews
In Phase 2, the 26 initial OCIS items were refined with the help of three subject-matter experts (PhD holders and academics active in the field of work and organizational psychology) and two employees. Ethical approval was obtained from the Sub-Committee on Research Ethics and Safety of the Research Committee (ref. no. EC066/1920) of Lingnan University (Hong Kong). This sample was chosen to obtain insights into the clarity and content of the items from both experts and prospective participants.
Procedure. For the cognitive interviews, we applied the verbal probing technique as described by Willis [37], using both concurrent and retrospective probing. Examples of concurrent probing questions include, "What does the term 'occupation' mean to you?" (interpretation probe) or "How did you arrive at that answer?" (general probe). Retrospective probing questions included, "Does it need more instructions or an introduction in the beginning?" and "What do you think about the answer categories?" A detailed instruction sheet was written containing standardized questions based on the focus of this study. According to this procedure, the interviewer asks the survey question, followed by the participant's answer. The interviewer then asks for other, specific information relevant to the question, or to the specific answer given. This technique is a combination of scripted and spontaneous probes to allow for procedural flexibility.
Results
The interviewer made notes on the participants' feedback and comments on each item. Results of the cognitive interviews were summarized in an Excel table and discussed within the research group. The questionnaire items were modified accordingly.
Phase 3. Quantitative Pilot Study
Method
Participants and procedure. For the quantitative pilot survey, employees in Flanders (Belgium) were invited between autumn 2019 and spring 2020 to complete an online survey on occupation insecurity. The research was approved by the Social Ethics Committee (file no. G-2019 11 1855) of KU Leuven (Belgium). As part of the convenience sampling strategy, the survey was posted on Facebook, LinkedIn, and other social media. It was always emphasized that participation was voluntary and anonymous. In this way, 203 questionnaires were collected, of which 167 contained sufficient data to be included in the final analysis. Specifically, only the questionnaires completed up to and including the occupation insecurity items were included, as this was the central concept in this study.
Measures. All 26 items of the occupation insecurity questionnaire were scored on a 5-point Likert scale from (1) "strongly disagree" to (5) "strongly agree". The items were first developed in English and then translated to Dutch following the 'translation-back translation' method [38].
Results
The items were analyzed with a principal component analysis rotated by the Varimax method with Kaiser normalization. The principal component analysis showed that occupation insecurity could be divided into two factors as expected, namely, global occupation insecurity and content occupation insecurity. Following this analysis, the questionnaire was reduced to 14 items. For the final selection of items, those with weak or double (cross-) factor loadings were omitted (Appendix B). In addition, we selected items based on content to avoid duplication.
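The item-reduction analysis just described can be approximated in Python. The following is a minimal sketch, not the authors' workflow: the file name and item column names (item_01 to item_26) are hypothetical, the factor_analyzer package stands in for whatever software was actually used, and the 0.40 loading threshold is a common rule of thumb rather than a value taken from this study.

```python
# Illustrative re-implementation of the pilot item analysis: a two-component
# principal component solution with Varimax rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("pilot_survey.csv")  # hypothetical file with 26 item scores
items = df[[f"item_{i:02d}" for i in range(1, 27)]]

# method="principal" requests a principal component extraction;
# rotation="varimax" applies the orthogonal rotation reported above.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["global_OI", "content_OI"])

# Flag items with weak (<0.40) or double (cross-) loadings as removal
# candidates, mirroring the selection rule described in the text.
weak = loadings.abs().max(axis=1) < 0.40
double = (loadings.abs() > 0.40).sum(axis=1) > 1
print(loadings.round(2))
print("Candidates for removal:", list(loadings.index[weak | double]))
```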
After this quantitative pilot, both the global and content occupation insecurity subscales were further reduced to seven items each. From separate principal component analyses on the items of the shortened scales, it was concluded that all items loaded high on their respective factors. The global occupation insecurity scale had a Cronbach's alpha of 0.90, and the content occupation insecurity scale had an alpha of 0.84. Both scales were thus reliable. For global occupation insecurity, the mean was 1.77 (SD 0.65); for content occupation insecurity, the mean was somewhat higher at 2.83 (SD 0.68). Global and content occupation insecurity were positively correlated (r = 0.522, p < 0.01). This implies that individuals who are concerned about the survival of their occupation as a whole also tend to be more concerned about the survival of subjectively important occupational characteristics. Following this quantitative pilot, the items for OCIS were reassessed among the researchers one more time and amended into the final scale (see Supplementary Materials for the final scale). For global occupation insecurity, item 6 ("I am worried that my occupation will become less significant in the future with the advancement of technology.") was reworded to "I am worried that my occupation will not be needed anymore in the future due to the advancement of technology". This change was made to make the item less ambiguous, because the words "less significant" could be interpreted differently by various participants. Additionally, to ensure clarity, specific time frames were added to items 7 and 9, defining short-term as one to two years and long-term as five to ten years. Furthermore, item 12 was dropped since it was positively worded in contrast to the other remaining items and could thereby lead to confusion. The final global occupation insecurity scale thus contained six items. For the content occupation insecurity scale, a clarifying addition was made to item 22: "I will need to perform tasks in my occupation in the future, for which I am not well trained at the moment". In item 23, the word "job" was replaced with "occupational" to avoid confusion of terminology ("I am certain that my occupational responsibilities will change significantly due to technology before my retirement."). Item 20 was dropped since it was the only positively worded item, with the potential for confusion. Items 24 and 25 were also dropped since they had the lowest factor loadings. Instead, a new item was added to strengthen the training component of the scale: "I need additional training in technology in order to be able to continue working in my occupation." The final content occupation insecurity scale contained five items.
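For readers who want to reproduce the reliability figures reported above, the Cronbach's alpha formula can be computed directly. This is a minimal sketch: the file name and column names (g1-g7, c1-c7 for the seven-item pilot subscales) are hypothetical.

```python
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("pilot_survey.csv")           # hypothetical pilot data
alpha_global = cronbach_alpha(df[[f"g{i}" for i in range(1, 8)]])
alpha_content = cronbach_alpha(df[[f"c{i}" for i in range(1, 8)]])
print(f"alpha (global): {alpha_global:.2f}, alpha (content): {alpha_content:.2f}")
```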
Part 2 Main Study: Psychometric Properties of OCIS
In Part 2, the three aims, with their respective eight objectives, were addressed after collecting new data.
Participants
The data collection agency Respondi (https://www.respondi.com/EN/) was commissioned to collect survey data from employees in the UK between 1 and 20 December 2020. The Sub-Committee on Research Ethics and Safety of the Research Committee (ref. no. EC066/1920) of Lingnan University (Hong Kong) approved this study. The goal was to spread the sample across the key demographics of age, gender, and geographical region to ensure representativeness. Furthermore, Respondi was tasked with collecting an additional sample targeting a high-risk (i.e., administrative and support staff) and a low-risk (i.e., education staff) group for automation. Participants were provided with an informed consent form, in which it was emphasized that participation was voluntary, and that anonymity and confidentiality would be guaranteed. In total, 1,453 complete questionnaires were collected. Response time and straightlining behaviour (i.e., participants ticking the same answer to most statements) were checked. Based on the number of questions in the survey, it was estimated that a response time under 8 min would be unreasonably fast and indicative of careless responding. In addition, a variance below 1 across all survey items (excluding demographics) was considered straightlining behaviour. Taken together, applying these two criteria led to the removal of 80 participants.
Measures
For the following measures, all items were rated on a 5-point scale ranging from 1 = "strongly disagree" to 5 = "strongly agree".
Occupation insecurity was measured with the newly developed OCIS scale (see Supplementary Materials). The final scale consists of six items for global and five items for content occupation insecurity. A sample item for global occupation insecurity is "I am worried that my occupation will not be needed anymore in the future due to the advancement of technology". Content occupation insecurity is measured with, for example, the item: "I expect that my occupation will undergo significant changes due to technological developments". To ensure that participants were aware of the difference between an "occupation" and a "job", we added instructions with double-check questions before presenting the items (detailed instructions available in Supplementary Materials). Reliability for both sub-scales was good, with a Cronbach's alpha of 0.94 for global occupation insecurity and 0.83 for content occupation insecurity.
Quantitative job insecurity was assessed with the 4-item scale validated by Vander Elst et al. [39]. A sample item is, "Chances are, I will soon lose my job". Reliability was very good, with a Cronbach's alpha of 0.91.
Qualitative job insecurity was measured with four items validated by Fischmann et al. [40]. An example item is "I feel insecure about the characteristics and conditions of my job in the future". A Cronbach's alpha of 0.91 also indicated high reliability for this scale.
Career insecurity was assessed with the 4-item scale developed by Höge et al. [41], an example of which is: "It is difficult for me to plan my professional future". Reliability was acceptable, with a Cronbach's alpha of 0.73.
Future time perspective was measured with the three-item sub-scale "focus on opportunities" of the overall scale developed by Zacher [29], which specifically assesses future time perspective in relation to one's occupation. A sample item was, "Many opportunities await me in my occupational future." The Cronbach's alpha of this scale was 0.91.
Burnout was measured with a preliminary version of the 12-item short Burnout Assessment Tool (BAT) scale [33]. An example item is, "At work, I feel mentally exhausted." Reliability of this scale was high, with a Cronbach's alpha of 0.91.
Work engagement was assessed with the UWES-3 scale [31]. This scale contains one item each for vigor, dedication, and absorption.
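The two screening rules described above (minimum response time and straightlining) translate directly into a data-cleaning step. The following is a minimal pandas sketch; the file name and column names are hypothetical.

```python
# Remove careless respondents: response time under 8 minutes, or a variance
# below 1 across all non-demographic survey items (straightlining).
import pandas as pd

df = pd.read_csv("main_survey.csv")                     # hypothetical raw data
item_cols = [c for c in df.columns
             if c not in ("age", "gender", "position", "duration_sec")]

too_fast = df["duration_sec"] < 8 * 60                  # under 8 min in seconds
straightlining = df[item_cols].var(axis=1, ddof=1) < 1  # near-identical answers

clean = df[~(too_fast | straightlining)]
print(f"Removed {len(df) - len(clean)} of {len(df)} participants")
```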
Data Analysis
The analyses were performed in MPlus 8.8 and SPSS 28. In order to address Objective 1 and Hypothesis 1, i.e., that occupation insecurity consists of two distinct dimensions, factorial validity of OCIS was assessed using confirmatory factor analysis (CFA) with robust maximum likelihood (MLM) parameter estimation. Two models were tested by means of CFA. Model 1 was a one-factor model in which all items load on one general occupation insecurity factor. Model 2 adhered to our expectation that a two-factor model, with global and content occupation insecurity as separate factors, would fit the data better. The following goodness-of-fit indices and respective cut-offs were used to evaluate model fit: chi-square (χ2); comparative fit index (CFI) exceeding 0.90; Tucker-Lewis index (TLI) also exceeding 0.90; and root mean square error of approximation (RMSEA) less than or equal to 0.06 [42]. Additionally, the average variance extracted (AVE, adequate if > 0.50) was calculated [43]. Reliability (Objective 2, Hypothesis 2) was evaluated by assessing the internal consistency (Cronbach's alpha coefficients ≥ 0.70) and composite reliability (CR, adequate if ≥ 0.60) score of each subscale. For Objective 3 and Hypothesis 3, measurement equivalence analyses were conducted to show that the measurement properties of the scale were invariant across various demographic groups (i.e., gender, age, and occupational position). Configural invariance is the lowest level of invariance and allows one to examine whether the overall factor structure of the scale fits well for all sample groups [44]. Metric invariance is still considered weak; it indicates that each scale item loads onto the specified latent factor with similar magnitude across groups. Scalar invariance is considered strong, and it tests whether item intercepts are equivalent across groups. Convergent and divergent validity vis-à-vis (quantitative and qualitative) job and career insecurity (Objective 4, Hypotheses 4a and 4b) was established using CFA. To this end, two models were tested: Model 3 had all five factors loading onto one overall factor, whereas Model 4 was a five-factor model aligned with our expectations. The same goodness-of-fit indices were used as for the first two models to establish factorial validity. In order to demonstrate external discriminant validity by analyzing the relationship of OCIS with organizational future time perspective (Objective 5), the correlation between the two scales was examined. Additionally, it was established whether the square root of the AVE of global and content occupation insecurity, respectively, is greater than the individual correlation between those constructs and future time perspective. For Objective 6 and Hypothesis 6, t-tests were conducted to analyze external validity by comparing the level of occupation insecurity between employees working in theoretically secure vs. insecure occupations. External and incremental validity (Objective 7) were analyzed by addressing the respective hypotheses: Hypothesis 7 expected a positive relationship between OCIS and burnout, and Hypothesis 8 predicted a negative relationship with work engagement. These hypotheses were analyzed with regression analyses controlling for age, gender, and occupational position (dummy-coded). Incremental validity above and beyond the variance accounted for by (quantitative and qualitative) job insecurity was analyzed using stepwise regression. In step one, the same control variables as for the previous analyses were included, followed by quantitative and qualitative job insecurity in step two. In step three, global and content occupation insecurity were added. For Objective 8, sample means and percentages were calculated to document the prevalence of occupation insecurity.
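The two-factor CFA at the heart of Hypothesis 1 can also be specified in open-source software. The sketch below uses the semopy package as an approximate analogue of the MPlus analysis (semopy's default maximum likelihood estimator rather than MPlus's robust MLM estimator); the file name and item names are hypothetical.

```python
# Illustrative two-factor CFA: six global items and five content items,
# each loading on its own latent factor. Fit indices are then inspected
# against the cut-offs named above (CFI/TLI > 0.90, RMSEA <= 0.06).
import pandas as pd
from semopy import Model, calc_stats

desc = """
global_OI  =~ g1 + g2 + g3 + g4 + g5 + g6
content_OI =~ c1 + c2 + c3 + c4 + c5
"""
data = pd.read_csv("main_survey_clean.csv")  # hypothetical cleaned data
model = Model(desc)
model.fit(data)

stats = calc_stats(model)  # returns chi2, CFI, TLI, RMSEA, among others
print(stats[["chi2", "CFI", "TLI", "RMSEA"]])
```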
Construct Validity
For construct validity, we tested the hypothesis that occupation insecurity consists of the two distinct dimensions of global and content occupation insecurity (Hypothesis 1). The results from the CFA are presented in Table 1. Model 1, in which both dimensions loaded onto one factor, did not fit the data well, with both CFI and TLI below 0.90 and RMSEA above 0.06 (see Table 2). Model 2, allowing for two factors, fit the data better, with both CFI and TLI above 0.90. However, the RMSEA value was 0.066 and thereby just above the recommended cut-off. When inspecting the modification indices, it became apparent that allowing the error terms of the two items related to training (C4 and C5 in the final scale; see Supplementary Materials) to correlate would improve model fit. Given that the content of these two items overlaps, an adjusted model (Model 2a) was also tested. Through this re-specification, the model fit improved, with all goodness-of-fit indicators showing good results. Loadings on the global factor ranged from 0.79-0.89 and on the content factor from 0.56-0.76. The two factors were correlated at 0.70. When examining the AVE, the recommended cut-off of 0.50 was exceeded for both global (AVE = 0.69) and content occupation insecurity (AVE = 0.53). Overall, Hypothesis 1, regarding the theoretically assumed distinction between global and content occupation insecurity, was confirmed.
[Note to Tables 1 and 2. χ2 = chi-square; S-Bχ2 = Satorra-Bentler scaling factor for chi-square; df = degrees of freedom; CFI = comparative fit index; TLI = Tucker-Lewis index; RMSEA = root mean square error of approximation; ∆χ2 = difference in chi-square; ∆df = difference in degrees of freedom; p = p-value; JI = quantitative and qualitative job insecurity; IO = occupation insecurity; CI = career insecurity.]
Reliability
To establish reliability, Cronbach's alpha and CR were examined. In terms of Cronbach's alpha, both sub-dimensions of OCIS exceeded the recommended cut-off of 0.70 (global occupation insecurity = 0.94 and content occupation insecurity = 0.83). For CR, results showed that both global (CR = 0.93) and content occupation insecurity (CR = 0.85) exceeded the 0.60 cut-off value. Therefore, Hypothesis 2 was confirmed.
Measurement Invariance
Based on Brown [45], using chi-square differences to determine measurement invariance is considered too conservative. Therefore, we examined the change in CFI instead: if the change in CFI is <0.01, the next higher level of invariance is supported. According to this criterion, scalar invariance was supported across gender (configural: 0.960; metric: 0.959; scalar: 0.958), age (configural: 0.958; metric: 0.957; scalar: 0.948), and occupational position (configural: 0.946; metric: 0.945; scalar: 0.944). Thus, we conclude that the measurement properties of OCIS are invariant across gender, age, and occupational position, and that Hypothesis 3 is confirmed.
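The AVE and CR values reported above follow directly from the standardized loadings. As a worked sketch, the standard formulas are shown below; the loading vectors are placeholders chosen within the reported ranges, not the exact published values, so the outputs will only approximate the reported AVE and CR.

```python
# AVE = mean of squared standardized loadings; CR = (sum of loadings)^2 /
# ((sum of loadings)^2 + sum of error variances), with error variance
# 1 - loading^2 for standardized items.
import numpy as np

def ave(loadings):
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    theta = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

global_lam = [0.79, 0.82, 0.85, 0.86, 0.88, 0.89]  # placeholders within the
content_lam = [0.56, 0.64, 0.70, 0.73, 0.76]       # reported loading ranges

print(f"global:  AVE = {ave(global_lam):.2f}, CR = {composite_reliability(global_lam):.2f}")
print(f"content: AVE = {ave(content_lam):.2f}, CR = {composite_reliability(content_lam):.2f}")
```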
Convergent and Divergent Validity
The results for convergent validity are reported in Table 3. Model 3, in which all five factors (quantitative job insecurity, qualitative job insecurity, career insecurity, global occupation insecurity, and content occupation insecurity) load onto one factor, does not fit the data well. Model 4, on the other hand, shows good model fit on all fit indices. Loadings across all factors ranged from 0.49-0.94, and correlations between factors ranged from 0.46-0.76. Therefore, Hypothesis 4a was confirmed, stating that the two dimensions of OCIS (i.e., global and content occupation insecurity) are distinct from the two dimensions of job insecurity (i.e., quantitative and qualitative) and career insecurity. Hypothesis 4b also predicted a correlation between OCIS and those constructs, which was supported by the results (see Table 3).
External Discriminant Validity
Hypothesis 5 predicted that the theoretically unrelated construct of future time perspective would have a low correlation with OCIS. Indeed, in the case of global occupation insecurity, the correlation was non-significant (r(1371) = −0.02, p = 0.50) and, in the case of content occupation insecurity, the correlation was very small (r(1371) = 0.07, p < 0.01). To evaluate whether OCIS measures global and content occupation insecurity separately from future time perspective, the guidelines proposed by Fornell and Larcker [46] were applied. According to their criterion, discriminant validity can be demonstrated when the square root of the AVE of a construct (here, global and content occupation insecurity, respectively) is greater than the correlation between that construct and the other construct under examination (here, future time perspective). For global occupation insecurity, the square root of the AVE was 0.83, which exceeded the correlation with future time perspective of −0.02. Regarding content occupation insecurity, the square root of the AVE was 0.73, which also exceeded the correlation with future time perspective of 0.07. Thus, Hypothesis 5 was confirmed.
External and Incremental Validity
We analyzed consequences of occupation insecurity in terms of burnout and work engagement. In Hypothesis 7, we anticipated that global and content occupation insecurity would be positively related to burnout. Results are summarized in Table 4. The effects of both dimensions on employee burnout were significant after controlling for occupational position, age, and gender, and after controlling for each other (third column). Interestingly, global occupation insecurity was more strongly related to burnout than content occupation insecurity when both dimensions were simultaneously entered into the analysis. Similarly, as can be seen from Table 4, global and content occupation insecurity were significant for work engagement (Hypothesis 8). Here, content occupation insecurity was no longer significantly related to work engagement after controlling for global occupation insecurity. Thus, both Hypotheses 7 and 8 were confirmed when both dimensions were analyzed separately, as hypothesized. Global occupation insecurity, however, seemed to be more important than content occupation insecurity when analyzing burnout and work engagement.
[Table 4 (excerpt). Change in R2 across the reported models: 0.07**, 0.12**, 0.13** (burnout); 0.01**, 0.03**, 0.03** (work engagement). Note: All coefficients are standardized. Results show the second step of the linear regression. OP = occupational position; OI = occupation insecurity; BO = burnout; WE = work engagement; COI = content occupation insecurity; GOI = global occupation insecurity; df = degrees of freedom; * p < 0.05; ** p < 0.01.]
In Hypothesis 9, we predicted that global and content occupation insecurity would explain additional variance in the relationship with burnout and work engagement after the variance accounted for by quantitative and qualitative job insecurity had been partialled out.
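The three-step hierarchical regression underlying the incremental validity test reported next can be sketched as follows. This is an illustrative analogue of the SPSS analysis, not the published syntax; the file and variable names are hypothetical, and demographics are assumed to be pre-coded as dummy variables.

```python
# Hierarchical (stepwise-entry) OLS regression for burnout: controls in
# step 1, job insecurity in step 2, occupation insecurity in step 3. The
# change in R-squared at step 3 is the incremental validity of interest.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("main_survey_clean.csv")  # hypothetical analysis file
steps = [
    ["age", "gender_dummy", "position_dummy"],
    ["age", "gender_dummy", "position_dummy", "quant_JI", "qual_JI"],
    ["age", "gender_dummy", "position_dummy", "quant_JI", "qual_JI",
     "global_OI", "content_OI"],
]

r2_prev = 0.0
for i, preds in enumerate(steps, start=1):
    fit = sm.OLS(df["burnout"], sm.add_constant(df[preds])).fit()
    print(f"Step {i}: R2 = {fit.rsquared:.3f}, delta R2 = {fit.rsquared - r2_prev:.3f}")
    r2_prev = fit.rsquared
```

The same loop, with "work_engagement" substituted as the outcome, reproduces the engagement side of the analysis.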
As can be seen in Table 5, global and content occupation insecurity are able to explain additional variance in burnout in step 3, above and beyond the quantitative and qualitative job insecurity included in step 2. As can be seen from the significance levels, that increment in explained variance is driven by global rather than content occupation insecurity. For work engagement, neither global nor content occupation insecurity predicted additional variance above quantitative and qualitative job insecurity.
[Table 5. Stepwise linear regression results to examine incremental validity (columns: Burnout; Work Engagement).]
Prevalence of Occupation Insecurity
For Aim 3, we analyzed the prevalence of both global and content occupation insecurity in the main UK sample (excluding the additional sample of high- vs. low-risk groups). This sample was mostly representative in terms of gender, region, and age; thus, the results give a tentative indication about the prevalence in the country. Since the younger generation was slightly underrepresented in the sample and our findings show that occupation insecurity tends to be higher among that age group, the results reported here might underestimate the true value of both global and content occupation insecurity. Table 6 contains the means, the standard deviations, and the percentages of participants who scored lower than, equal to, or higher than the mid-point of three on the final scale. A score of three could be considered "neither occupationally secure, nor occupationally insecure"; a score lower than three indicated "occupation security" and a score higher than three indicated "occupation insecurity". A total of 17.2% of participants scored higher than three on global occupation insecurity. For content occupation insecurity, about 45.3% of participants selected a score higher than three. Thus, almost half of the employees were concerned about the tasks and content of their occupations significantly changing due to automation. There was, therefore, more uncertainty about changes to the occupation than about its continued existence as such. All but three participants who experienced global occupation insecurity (>3.00) also showed content occupation insecurity (>3.00). The reverse relationship was less straightforward: some individuals experienced content occupation insecurity (>3.00), but not global occupation insecurity (<3.00).
[Table 6 note. Score < 3 refers to low perception of occupation insecurity; score = 3 refers to neither secure nor insecure; score > 3 refers to high occupation insecurity.]
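The prevalence figures above reduce to a simple classification of mean scale scores against the mid-point of three. A minimal sketch, again with hypothetical file and column names:

```python
# Compute mean scale scores per respondent and classify them relative to
# the mid-point of 3: below 3 = secure, 3 = neutral, above 3 = insecure.
import pandas as pd

df = pd.read_csv("main_survey_clean.csv")      # hypothetical analysis file
df["global_OI"] = df[[f"g{i}" for i in range(1, 7)]].mean(axis=1)
df["content_OI"] = df[[f"c{i}" for i in range(1, 6)]].mean(axis=1)

for scale in ("global_OI", "content_OI"):
    insecure = (df[scale] > 3).mean() * 100    # share above the mid-point
    print(f"{scale}: M = {df[scale].mean():.2f}, "
          f"SD = {df[scale].std(ddof=1):.2f}, insecure = {insecure:.1f}%")
```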
Discussion
In the Future of Jobs Report 2020, the World Economic Forum (WEF) shared the prediction that automation, accelerated by the COVID-19 pandemic, will significantly displace jobs. Within the next five years, the WEF expects that about 85 million jobs will be lost, while 97 million new roles may emerge. According to the report, this shift will require 50% of all employees to re- and upskill. Taken together, technological advancements and the COVID-19 pandemic are set to create a 'double-disruption' that is likely to transform jobs, tasks, and skills by as early as 2025 [10]. These changes appear to give rise to occupation insecurity as a new phenomenon. Results from this study are aligned with findings from job insecurity research, showing that insecurity in the workplace impacts burnout and work engagement, among other negative consequences [12][13][14]. For organizations, the implications include reduced employee performance as well as failures in the implementation of new technologies. Organizations need to transform in order to stay relevant in the market, yet recent research by the Boston Consulting Group (BCG) found that only about 30% of digital transformation processes are successful, citing insecurity and reluctance of employees to adopt the new technologies as a major contributing factor [47]. In order to address employees' worry about the future of their occupations due to new technologies, a formal conceptualization as well as a valid measurement tool are required. This study set out to provide both and to achieve three aims, namely, (1) to conceptualize and define occupation insecurity, (2) to develop and validate an OCIS scale to measure the phenomenon, and (3) to identify the prevalence of occupation insecurity.
Conceptualization of OCIS
The first aim of the study was to provide a comprehensive understanding and definition of occupation insecurity. In contrast to a 'job', which concerns a specific role within a certain organization, an 'occupation' is defined as the profession an individual has been trained in and identifies with [6]. 'Occupation' is the umbrella term for the job, employment, or business with which an individual earns money. In order to conceptualize occupation insecurity, focus groups and cognitive interviews were conducted with both subject-matter experts and employees. From these focus groups and interviews, the following definition of occupation insecurity emerged: occupation insecurity refers to people's fears about the future of their occupations due to technological advancements. The study further revealed that this overarching concept of occupation insecurity can be divided into two sub-dimensions: global and content occupation insecurity. Global occupation insecurity refers to people's fear of their whole occupation disappearing. This type of insecurity includes worries that the individual's entire line of work will become irrelevant and will not be needed in the future. On the other hand, content occupation insecurity addresses people's worry that their tasks and responsibilities may change significantly. This type of insecurity includes concerns that certain aspects of the individual's occupation may become automated or outsourced, leaving them with less fulfilling responsibilities, or with tasks for which they have not been adequately trained. Overall, in line with the goals of this study, a clear and concise definition of occupation insecurity as a concept is provided. Furthermore, two key sub-dimensions are identified, namely, global and content occupation insecurity, which capture specific concerns of individuals regarding their occupational future in the context of technological advancements.
Development and Psychometric Evaluation of OCIS
Our second aim was to develop and validate an occupation insecurity scale (OCIS). This scale and accompanying information can be downloaded from the website www.occupationinsecurity.com. The final OCIS measure consists of 11 items, which cover the two sub-dimensions of global (six items) and content (five items) occupation insecurity. Our predictions in terms of the validity of OCIS were mostly confirmed by the results: first, OCIS showed construct validity in terms of the two distinct sub-dimensions of global and content occupation insecurity. Second, both sub-scales had good reliability.
Third, measurement invariance was confirmed across age, gender, and occupational position. Fourth, convergent and divergent validity with career and (quantitative and qualitative) job insecurity was established. Fifth, external discriminant validity with future time perspective was established. Sixth, external validity was demonstrated, as the objectively more insecure participants perceived higher levels of occupation insecurity than the objectively secure group. Seventh, external and incremental validity were partially confirmed. As expected, OCIS had a significant positive relationship with burnout and a significant negative relationship with work engagement. The global, but not the content, dimension of the scale also showed incremental validity above and beyond job insecurity for burnout. For work engagement, neither global nor content occupation insecurity explained additional variance above and beyond quantitative and qualitative job insecurity. The conclusion that could be drawn from this result is that OCIS appears to add more explanatory power on the negative side (burnout) than on the positive side (work engagement). This finding is aligned with the Job Demands-Resources (JD-R) model [48]: research has shown that demands tend to have a greater impact on burnout than on work engagement [49]. Thus, our results can be explained by the model, though the incremental validity of OCIS remains to be further examined. Overall, our findings support OCIS as a valid and reliable measure.
Prevalence of Occupation Insecurity
Our third aim was to examine the prevalence of occupation insecurity. Since our sample was mostly representative in terms of age, gender, and geographical region of the UK, tentative conclusions can be drawn regarding the prevalence of global and content occupation insecurity in the country. Results showed that about 16.5% of employees experienced global occupation insecurity. For content occupation insecurity, the percentage was almost triple that (46.7%). The means for both global and content occupation insecurity are comparable to the means typically found in job insecurity research [50][51][52]. Specifically, quantitative job insecurity tends to produce lower means than qualitative job insecurity. Likewise, the mean for global occupation insecurity was lower than for content occupation insecurity, further supporting the notion that more employees are impacted by content than by global occupation insecurity. Experiencing global occupation insecurity was, however, more strongly associated with impaired wellbeing than experiencing content occupation insecurity. This mirrors the assumption that quantitative job insecurity is more severe in its consequences than qualitative job insecurity, as more would be lost when one becomes unemployed compared to when one becomes uncertain regarding the future of specific job characteristics [23].
Limitations and Suggestions for Future Research
For this study, we would like to point out the following limitations and suggestions for future research. As the data were cross-sectional, a longitudinal follow-up would be recommended in order to establish causal relationships. We aimed to achieve a sample as representative of the UK population as possible. Yet, since the younger generation was underrepresented while that age group tends to be most affected by occupation insecurity, the true population values are likely slightly higher than those reported here. Future efforts in gathering representative data in the UK and additional countries would be highly recommended.
Overall, the results are promising in terms of the validity of OCIS, yet this is only a first preliminary step in the validation process. More elaborate testing with additional samples is required, especially regarding the further examination of incremental validity. The scale was developed and tested in Belgium and the UK. Further validation across different countries and languages is required. In this study, we focused on burnout and work engagement as two outcomes relevant to both the individual and the organization. The relationship of OCIS with additional variables, such as job demands and resources, and the impact of personality and job performance, remains to be examined. With additional research, the long-term goal will be to establish the nomological network of OCIS. Since this study provides the respective scale to measure occupation insecurity, this tool can now be used to follow up on current events such as the pandemic and how increased usage of automation affects employees. OCIS opens up the possibility for screening and determining risk groups to support affected individuals and inform policy change. Ultimately, once OCIS has been applied to evaluate the presence and extent of occupation insecurity, interventions need to be developed and empirically validated. Given that organizations need to continue to innovate and incorporate modern technology to stay relevant, preventing occupation insecurity appears unfeasible. Yet, measures can be taken to appropriately address it to prevent negative consequences such as burnout or reduced work engagement, which will benefit both the employee and the organization. Drawing on suggestions to combat job insecurity [53], four strategies could be applied, which are all designed to increase employees' perceptions of subjective control over the situation. Research has demonstrated that experiencing control over the future of one's employment can buffer negative stress reactions [54]. The four potential strategies are: (1) allowing workers to participate in the change process and giving them a voice, (2) increasing employability, (3) enhancing justice perceptions, and (4) increasing communication. Regarding the first strategy, participative decision making has been shown to be a low-cost measure to increase health, job satisfaction, and reduce absenteeism in the light of job insecurity [55]. Similar positive effects were found when employees were allowed to directly participate in decision making processes through seminars and collaborative action plans [56]. For the second strategy, providing training and opportunities for skill development, the organization will benefit from the enhanced skill set of their employees, and it is a measure that has been effective in reducing the negative consequences of job insecurity [54]. In terms of the third strategy, increasing perceived fairness in the change and transformation process has produced numerous positive results, such as increased performance [57], affective commitment, satisfaction with the organization, and reduced turnover intention [58]. Lastly, it is highly relevant to inform workers regarding changes and automation-related needs as well as respective competences to acquire. Research has shown that clearly communicating future plans within an organization can effectively reduce feelings of insecurity. This can be achieved through open and timely communication, which leads to a greater sense of predictability and control for employees. 
Furthermore, this type of communication can also contribute to employees feeling valued and respected by management [53,59,60]. Therefore, it is recommended to research these strategies as potential interventions for occupation insecurity. Conclusions In this study, we have defined and conceptualized the novel phenomenon of occupation insecurity. Specifically, we defined occupation insecurity as people's fears about the future of their occupations due to technological advancements. We further found that occupation insecurity can be divided into global and content occupation insecurity. Global occupation insecurity is defined as individuals' perceived probability and/or fear of their whole occupation disappearing. In contrast, content occupation insecurity is defined as individuals' perceived probability and/or fear of their occupation becoming significantly different (in terms of tasks) even if the occupation as a whole might not disappear. In a further step, we developed the OCIS scale (www.occupationinsecurity.com) to enable the measurement of occupation insecurity and provided preliminary evidence for its validity. In order to enable workplace transformations while ensuring that employees successfully shift into their new roles, applying the OCIS scale and measuring employees' level of occupation insecurity is a first essential step in ensuring organizational success and individual readiness for the future world of work. Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
A possible brachiosaurid (Dinosauria, Sauropoda) from the mid-Cretaceous of northeastern China
Brachiosauridae is a lineage of titanosauriform sauropods that includes some of the most iconic non-avian dinosaurs. Undisputed brachiosaurid fossils are known from the Late Jurassic through the Early Cretaceous of North America, Africa, and Europe, but proposed occurrences outside this range have proven controversial. Despite occasional suggestions that brachiosaurids dispersed into Asia, to date no fossils have provided convincing evidence for a pan-Laurasian distribution for the clade, and the failure to discover brachiosaurid fossils in the well-sampled sauropod-bearing horizons of the Early Cretaceous of Asia has been taken to evidence their genuine absence from the continent. Here we report on an isolated sauropod maxilla from the middle Cretaceous (Albian–Cenomanian) Longjing Formation of the Yanji basin of northeast China. Although the specimen preserves limited morphological information, it exhibits axially twisted dentition, a shared derived trait otherwise known only in brachiosaurids. Referral of the specimen to the Brachiosauridae receives support from phylogenetic analysis under both equal and implied weights parsimony, providing the most convincing evidence to date that brachiosaurids dispersed into Asia at some point in their evolutionary history. Inclusion in our phylogenetic analyses of an isolated sauropod dentary from the same site, for which an association with the maxilla is possible but uncertain, does not substantively alter these results. We consider several paleobiogeographic scenarios that could account for the occurrence of a middle Cretaceous Asian brachiosaurid, including dispersal from either North America or Europe during the Early Cretaceous. The identification of a brachiosaurid in the Longshan fauna, and the paleobiogeographic histories that could account for its presence there, are hypotheses that can be tested with continued study and excavation of fossils from the Longjing Formation.
INTRODUCTION
Brachiosauridae is a clade of titanosauriform sauropods and one of the most iconic groups of non-avian dinosaurs, with well-known exemplars that include the Late Jurassic taxa Giraffatitan and Brachiosaurus. Although the in-group membership and inter-relationships of the clade remain a subject of continued debate (e.g., D'Emic, 2012; Mannion et al., 2013; Mannion, Allain & Moine, 2017; Carballido et al., 2015, 2020; D'Emic, Foreman & Jud, 2016; Royo-Torres et al., 2017), Brachiosauridae and slightly less inclusive subclades are readily diagnosed by a suite of characteristics from across the skeleton that are rare or absent among other lineages of sauropods (Wilson & Sereno, 1998; D'Emic, 2012; Mannion et al., 2013; Mannion, Allain & Moine, 2017). These include bauplan-defining traits, such as an elongate humerus that nearly equals or exceeds the length of the femur, as well as more subtle features of the skull and post-cranial skeleton, including axially twisted maxillary dentition, a small contribution of the ischium to the acetabulum, and a relatively broad proximal end of metacarpal III. In our 2016 expeditions to the Longshan Beds of the Longjing Formation in Yanji City, Jilin Province, northeastern China, we discovered a mid-Cretaceous (Albian–Cenomanian) terrestrial fauna that has produced more than two hundred vertebrate fossils, including dinosaurians, crocodyliforms, and testudines.
Sauropod dinosaurs represent the dominant group in the Longshan fauna, with more than 60 bones belonging to at least 14 individuals discovered so far. Most of these specimens were collected in the course of controlled excavations by our team, but some additional fossils were retrieved from heaps of excavated sediment from a nearby construction site. Among the latter are numerous dinosaurian and other unidentified teeth, a relatively complete crocodylian specimen, and an isolated sauropod maxilla and partial dentary. Because they were collected by our team after their exhumation, it is unknown whether the maxilla and dentary were preserved in association, and we conservatively consider them to belong to separate individuals. Whereas other sauropod fossils excavated from the Longshan beds either lack brachiosaurid synapomorphies (e.g., middle dorsal vertebrae with long, dorsoventrally short transverse processes; ratio of anteroposterior length of proximal plate of ischium to ischial proximodistal length <0.25) or bear features not known to occur in brachiosaurids (e.g., subcylindrical tooth crowns; bifurcated anterior dorsal neural spines), the isolated sauropod maxilla exhibits a mosaic combination of morphological features that suggest brachiosaurid affinities. Here we describe the morphology of the isolated maxilla, report on phylogenetic analyses that support its referral to Brachiosauridae, and discuss paleobiogeographic scenarios that could account for the occurrence of a middle Cretaceous Asian brachiosaurid. We also describe the isolated sauropod dentary from the same site, and discuss the effects that treating this specimen as belonging to the same individual as the isolated maxilla has on our results.
Material. YJDM 00008, a partial left maxilla with dentition in situ.
Locality and horizon. The Longshan locality (42°52′10.0″N, 129°29′28.1″E) is located south of Yanji City, Jilin Province (Fig. 1). The beds at the Longshan locality are part of the lower portion of the Longjing Formation, which conformably overlies the Dalazi Formation (Zhong et al., 2021). Paleontological and radiochronological data indicate an Albian to Cenomanian age for the Longjing Formation. The fossil-bearing site from which YJDM 00008 was recovered lies a short distance above a tuff layer near the base of the Longjing Formation that has recently been dated to 101.039 ± 0.061 Ma (Zhong et al., 2021); this finding is consistent with other U-Pb radiochronological dating of the uppermost part of the Dalazi Formation to 105.14 ± 0.37 Ma (Zhong et al., 2021). Thus, the Longshan section likely includes the Albian/Cenomanian boundary, and most of the Longjing Formation can be considered Cenomanian in age (Zhong et al., 2021).
Description and comparisons
Description of YJDM 00008 was facilitated by X-ray computed tomography scanning of the specimen. The scan was performed using the 450 kV industrial X-ray computed tomography scanner (developed by the Institute of High Energy Physics, Chinese Academy of Sciences (CAS)) at the Key Laboratory of Vertebrate Evolution and Human Origins, CAS. The specimen was scanned with a beam energy of 430 kV and a flux of 1.5 mA at a resolution of 160 μm per pixel, using a 360° rotation with a step size of 0.25°. A total of 1,440 projections were reconstructed in a 2,048 × 2,048 matrix of 2,048 slices using two-dimensional reconstruction software developed by the Institute of High Energy Physics, CAS (Wang et al., 2019).
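A reconstructed volume of these dimensions can be handled programmatically without loading the full stack into memory. The sketch below is an illustration only: the file name and 16-bit unsigned data type are assumptions (the text specifies only the voxel grid and resolution), and memory-mapping is one common way to work with a file of this size.

```python
# Sketch of reading the reconstructed volume as a memory-mapped array,
# assuming a contiguous 2,048 x 2,048 x 2,048 block of 16-bit integers.
import numpy as np

volume = np.memmap("YJDM00008_scan.raw", dtype=np.uint16, mode="r",
                   shape=(2048, 2048, 2048))  # (slice, row, column)

# At 160 micrometres per pixel (assuming isotropic voxels), a distance of
# n voxels along one axis corresponds to n * 0.16 mm.
slice_z = volume[1024]                         # one axial slice for inspection
print(slice_z.shape, slice_z.dtype)
```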
Data were output in the raw file format and imported into Mimics v.19.0 (Materialise, 2015, Leuven, Belgium) and Dragonfly v.2021.1.0.977 (Object Research Systems, Inc., 2021) for analysis and visualization. Raw CT scan data for YJDM 00008 are available on MorphoSource (https://www.morphosource.org/concern/media/000361358?locale=en) to qualified researchers. Axial twisting of the maxillary dentition in YJDM 00008 (see below) was visualized and measured digitally, as in D'Emic & Carrano (2020). To ensure accurate measurement of these angles for each tooth, the X, Y, and Z viewing planes were re-oriented in Dragonfly v.2021.1.0.977 so as to align with the mesiodistal, apicobasal, and labiolingual axes of the tooth, and the angle of twisting was measured across the entirety of the tooth crown (the geometry of this measurement is sketched below). This approach was also used to confirm the absence of axially twisted dentition in several other eusauropod taxa for which CT scan data were available (Bellusaurus IVPP V17768.1; an indeterminate diplodocine USNM 2672; Camarasaurus CM 11338; Euhelopus PMU 24705/1a-b; an undescribed mamenchisaurid skull IVPP V27936; Sarmientosaurus MDT-PV 2). Anatomical terminology for major components of the maxilla follows Wilson et al. (2016). We also coin a new term, internal antorbital fossa, for the fossa on the maxillary portion of the antorbital cavity that spans the medial surface of the narial process and the dorsal surface of the main body of the maxilla. Phylogenetic definitions used in this study are given in Table 1.
Table 1. Phylogenetic definitions used in this study.
Diplodocimorpha Calvo & Salgado (1995): Diplodocus, Rebbachisaurus, their most recent common ancestor, and all of its descendants (definition from Taylor & Naish, 2005).
Macronaria Wilson & Sereno (1998): the most inclusive clade that includes Saltasaurus loricatus but excludes Diplodocus longus (definition from Wilson & Sereno, 1998).
Titanosauriformes Salgado, Coria & Calvo (1997): the least inclusive clade including Brachiosaurus altithorax and Saltasaurus loricatus (definition from Salgado, Coria & Calvo, 1997).
Brachiosauridae Riggs (1904): the most inclusive clade that includes Brachiosaurus altithorax but excludes Saltasaurus loricatus (definition from Wilson & Sereno, 1998).
Somphospondyli Wilson & Sereno (1998): the most inclusive clade that includes Saltasaurus loricatus but excludes Brachiosaurus altithorax (definition from Wilson & Sereno, 1998).
Lithostrotia Upchurch, Barrett & Dodson (2004): the least inclusive clade containing Malawisaurus dixeyi and Saltasaurus loricatus (definition from Wilson & Upchurch, 2003; Upchurch, Barrett & Dodson, 2004).
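The twist-angle measurement described above can be expressed geometrically: estimate the mesiodistal axis of the crown at its base and at its apex, project both vectors into the plane perpendicular to the apicobasal axis, and take the angle between the projections. The sketch below is our reading of that procedure, not code from D'Emic & Carrano (2020); the axis vectors are made-up example values standing in for directions digitized from the CT volume.

```python
# Axial twist as the angle between mesiodistal axis vectors at crown base
# and apex, measured in the plane normal to the apicobasal axis.
import numpy as np

def axial_twist(md_base, md_apex, apicobasal):
    n = apicobasal / np.linalg.norm(apicobasal)
    # Remove the component along the apicobasal axis from each vector,
    # i.e., project both into the plane perpendicular to that axis.
    p1 = md_base - np.dot(md_base, n) * n
    p2 = md_apex - np.dot(md_apex, n) * n
    p1 /= np.linalg.norm(p1)
    p2 /= np.linalg.norm(p2)
    return np.degrees(np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0)))

# Example with hypothetical axis vectors (about 30 degrees of twist):
print(axial_twist(np.array([1.0, 0.0, 0.0]),
                  np.array([0.87, 0.5, 0.0]),
                  np.array([0.0, 0.0, 1.0])))
```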
The dorsal part of the maxilla has also suffered some taphonomic distortion, Diplodocimorpha Calvo & Salgado (1995) Diplodocus, Rebbachisaurus, their most recent common ancestor, and all of its descendents Taylor & Naish (2005) Macronaria Wilson & Sereno (1998) The most inclusive clade that includes Saltasaurus loricatus but excludes Diplodocus longus Wilson & Sereno (1998) Titanosauriformes Salgado, Coria & Calvo (1997) The least inclusive clade including Brachiosaurus altithorax and Saltasaurus loricatus Salgado, Coria & Calvo (1997) Brachiosauridae Riggs (1904) The most inclusive clade that includes Brachiosaurus altithorax but excludes Saltasaurus loricatus Wilson & Sereno (1998) Somphospondyli Wilson & Sereno (1998) The most inclusive clade that includes Saltasaurus loricatus but excludes Brachiosaurus altithorax Wilson & Sereno (1998) The least inclusive clade containing Malawisaurus dixeyi and Saltasaurus loricatus Wilson & Upchurch (2003); Upchurch, Barrett & Dodson (2004) such that this region bows outward and overhangs the lateral surface of the ventral, dentigerous portion of the maxilla. Externally, the dorsal portion of the maxilla exhibits a slight concavity, bounded anteroventrally by a crescentic rim, that demarcates the anterior end of the narial fossa. At its anterior extreme, the narial fossa is pierced by a large foramen that we interpret to be the anterior maxillary foramen. An anterolaterally positioned narial fossa is also seen in Camarasaurus (Madsen, McIntosh & Berman, 1995), Euhelopus (Poropat & Kear, 2013) and Brachiosauridae (e.g. Brachiosaurus, Carpenter & Tidwell, 1998;Giraffatitan, Janensch, 1935;Europasaurus, Sander et al., 2006;Marpmann et al., 2015;Abydosaurus, Chure et al., 2010), unlike in late-branching titanosauriforms and diplodocoids, in which the naris and the narial fossa are more posterodorsally positioned on the maxilla and located on the top of the skull (Curry Rogers & Forster, 2004;Whitlock, 2011a;Zaher et al., 2011;Tschopp & Mateus, 2017). Anterodorsally, the maxilla bears an elongate sulcus that would have accommodated the narial (=ascending) process of the premaxilla. A stout anteromedial (=premaxillary; anterodorsal) process projects from the maxilla immediately ventromedial to this sulcus. The anteromedial process articulated with the premaxilla, and in life would have received a posteromedially-directed process from the latter bone, for which it bears a groove on its dorsal surface. At the base of the anteromedial process is a semi-circular notch that corresponds to the maxillary half of the subnarial foramen , the other half of which would have been provided by a complementary notch in the premaxilla. The subnarial foramen appears to have been mediolaterally oriented and visible in lateral view, as in diplodocoids, late-branching titanosauriforms (Wilson et al., 2016), and Euhelopus (Poropat & Kear, 2013) but unlike the dorsal orientation of this foramen in neosauropods like Camarasaurus (CM 11338; Madsen, McIntosh & Berman, 1995) and Giraffatitan (Janensch, 1935;Madsen, McIntosh & Berman, 1995). Although the posterior end of the maxillary main body is incomplete, it is clear that the specimen lacks the strongly tapering, posteriorly directed jugal process of some Abbreviation: amp, anteromedial process; ?a.pal, ?articular surface for the palatine; g.pm, groove for articulation with the premaxilla ; inaof, internal antorbital fossa; jp, jugal process; rf, replacement foramen; rt, replacement teeth; sf, subnarial foramen. 
late-branching titanosauriforms (e.g., Rapetosaurus, Curry Rogers & Forster, 2004; Tapuiasaurus, Wilson et al., 2016). Instead, the specimen bears the plesiomorphically blocky posterior end of the maxilla that characterizes taxa such as Giraffatitan (MB.R.2180.2; Janensch, 1935), Euhelopus (Wilson & Upchurch, 2009; Poropat & Kear, 2013), and Sarmientosaurus (Martínez et al., 2016). Dorsally, the posterior end of the maxilla is marked by a trough, which provided entry into the dorsal alveolar canal for the maxillary vessels and dorsal alveolar nerve (White, 1958; Porter & Witmer, 2020). Posterior to the level of the last alveolus, the lateral wall of the dorsal alveolar canal has broken away, exposing the interior of the canal in lateral view. The presence of a preantorbital foramen/fenestra cannot be confirmed, as the relevant portion of the dorsal alveolar canal that would have given rise to it ventrolaterally is missing. It is noteworthy, however, that a broad, shallow fossa embays the lateral surface of the maxilla immediately ventral to the broken dorsal alveolar canal, as such a fossa is present in some taxa with well-developed preantorbital openings (e.g., Giraffatitan MB.R.2180.2; Tapuiasaurus, Wilson et al., 2016). The presence of a continuous, plate-like wall of bone along the length of the palatal shelf (preserved intact or otherwise evidenced by a broken edge) suggests that the preantorbital opening, if present, was separated from the antorbital cavity, in contrast to the condition in various diplodocids (e.g., Galeamopus, Tschopp & Mateus, 2017) and titanosaurians (e.g., Nemegtosaurus, Wilson, 2005) in which the preantorbital opening is broadly continuous with the antorbital cavity. On the medial surface of the maxilla, at the junction of its dorsal and ventral portions, is the internal antorbital fossa. This fossa is bounded ventrally by the palatal shelf and anteriorly and laterally by that portion of the maxilla that floors the narial fossa externally and gives rise to the narial process. The latter part of the maxilla is thin-walled and plate-like where it meets the palatal shelf. The anterior margin of the internal antorbital fossa extends close to the anterior one-third of the maxillary tooth row, as is also seen in Euhelopus (Wilson & Upchurch, 2009; Poropat & Kear, 2013). In Bellusaurus (Moore et al., 2018), Camarasaurus (Madsen, McIntosh & Berman, 1995), Brachiosauridae such as Brachiosaurus, Giraffatitan, and Abydosaurus (Janensch, 1935; Carpenter & Tidwell, 1998; Chure et al., 2010), and possibly Rapetosaurus (Curry Rogers & Forster, 2004), the anterior margin of the internal antorbital fossa only extends to roughly half the length of the tooth row. The medial view of the maxilla is poorly described or difficult to observe in other taxa, and thus the relative anterior extent of the internal antorbital fossa is difficult to characterize more broadly. At the posteromedial end of the palatal shelf there is a rough area, which might be the contact surface for the palatine. The CT scans show that two replacement teeth are present in each tooth socket (Figs. 5B-5D), as in Bellusaurus (Moore et al., 2018) and Brachiosaurus (D'Emic & Carrano, 2020). The younger generation of replacement teeth is distodorsal to and overlapped labially by the more mature generation. The crowns of the younger generation of replacement teeth are oriented mesioventrally (Figs. 5B-5D).
Some other neosauropods exhibit greater numbers of replacement teeth. Among macronarians, Camarasaurus and the 'Río Negro titanosaur' possess three replacement teeth per alveolus (Coria & Chiappe, 2001; D'Emic et al., 2013). This condition differs from that of Diplodocoidea, which exhibit a high tooth replacement rate and more generations of replacement teeth (e.g., five in Diplodocus; 10 in Nigersaurus) (Sereno & Wilson, 2005; D'Emic et al., 2013).

Systematic Paleontology

Material. YJDM 00006, a fragmentary right dentary.

Locality and horizon. As for YJDM 00008 (see above).

Description and comparisons

YJDM 00006 comprises a fragmentary right dentary missing much of its anterior, dentigerous ramus (Fig. 7). The preserved portion bears four alveoli and corresponding replacement foramina. The dentary bifurcates posteriorly into posterodorsal and posteroventral processes. A roughened area for reception of the surangular marks the lateral surface of the posterodorsal process. A forked posteroventral process, a feature that characterizes Tapuiasaurus (Wilson et al., 2016) and brachiosaurids other than Europasaurus (Janensch, 1935; Chure et al., 2010; Marpmann et al., 2015; D'Emic & Carrano, 2020), and that results from the development of a small accessory process on the posteroventral process, appears to be absent in YJDM 00006. However, because the posterior portion of the posteroventral process and part of its dorsal margin are missing, it remains possible that the accessory process was present but relatively posteriorly positioned.

Table 2 Replacement teeth measurements. All measurements were taken digitally in Dragonfly v.2021.1.0.977 on the oldest generation of replacement tooth within a given alveolus. Because it was not possible to observe textural differences of the enamel that distinguish the root from the crown, measurements of apicobasal crown length are necessarily approximations that may slightly overestimate this length. Rt, replacement tooth of a given alveolus. (Columns: Rt2, Rt3; rows include apicobasal crown length.)

All functional teeth are missing, but three replacement teeth are preserved in situ and visible externally. One of the replacement teeth is in the last alveolus and the other two are in the penultimate alveolus, indicating at least two generations of replacement teeth. The dentary tooth crowns are parallel-sided, taper apically, and have a D-shaped cross-section, as in YJDM 00008 and in macronarians plesiomorphically. It is not possible to discern whether the dentary teeth bore denticles. The mesiodistal diameters of the dentary tooth crowns are notably smaller than those of roughly corresponding maxillary teeth in YJDM 00008: for posterior positions in each element, the dentary tooth is half as wide as the maxillary tooth (approximately 4 mm in the last two dentary replacement teeth vs. 8.43 mm in the tenth maxillary replacement tooth). An unequally sized upper and lower dentition is a widely distributed feature among neosauropods (Chure et al., 2010; Mannion et al., 2013), including diplodocoids (e.g., Diplodocus, Holland, 1924), the late-branching brachiosaurid Abydosaurus (Chure et al., 2010), and various somphospondylans (e.g., Euhelopus, Poropat & Kear, 2013; Sarmientosaurus, Martínez et al., 2016; Nemegtosaurus, Wilson, 2005; Tapuiasaurus, Wilson et al., 2016). Thus, if the maxilla and dentary are hypothesized to belong to a single individual, then their disparate dentitions could be consistent with a wide array of phylogenetic positions within Neosauropoda.
However, this possibility should be tempered by two cautionary points. First, although an unequally sized upper and lower dentition occurs throughout Neosauropoda, this feature is nonetheless sparsely known, as it can only be confirmed in specimens with sufficient single-individual cranial material. Thus, its true distribution across Neosauropoda remains uncertain. Second, unerupted replacement teeth were still in the process of developing when the animal(s) died, and their measured mesiodistal diameters may not reflect the size ratios of the fully erupted, functional dentition, especially if the teeth being compared are measured at different stages of growth. This also bears consideration when noting that the size disparity between the maxillary and dentary replacement teeth of the Yanji cranial material is somewhat greater than that observed in various other taxa for which an association of upper and lower jaws is certain (e.g., Nemegtosaurus; Wilson, 2005). Under the assumption that the maxilla and dentary belong to a single individual, the relatively large disparity in upper and lower tooth size in the Yanji cranial material could indicate that complementary replacement teeth are imperfect size proxies for mature teeth, or that this animal had a potentially autapomorphic degree of tooth size disparity. Alternatively, the dentary may belong to a smaller-bodied individual than the one represented by the maxilla, and may represent a different taxon. The available information does not allow us to distinguish between these possibilities.

Phylogenetic materials and methods

We tested the phylogenetic affinities of the Yanji cranial material using a morphological character matrix based on that of Poropat et al. (2021). Although we conservatively consider the maxilla (YJDM 00008) and dentary (YJDM 00006) to belong to separate individuals, we tested the effect that including these two specimens together as a single operational taxonomic unit (OTU) has on the results of our phylogenetic analyses. The maxilla (YJDM 00008) could be scored for 17 (3%) of the 552 characters in the Poropat et al. (2021) matrix. Inclusion of the dentary in the same OTU allowed two additional characters to be scored: character 103, concerning the forked posteroventral process of the dentary (scored as absent), and character 107, concerning unequally sized diameters of the upper and lower dentition (scored as present). Although it is possible that a forked posteroventral process was present in the Yanji dentary (see above), we scored this feature as absent (as opposed to '?') because such a score should provide a more stringent test of the possible brachiosaurid affinities of the Yanji cranial material, given that brachiosaurids other than Europasaurus possess this process. In addition to the Yanji cranial material, we also added to the Poropat et al. (2021) matrix recently redescribed brachiosaurid cranial material from the Late Jurassic Morrison Formation of Garden Park, Colorado, USA (USNM 5730; Marsh, 1891; Carpenter & Tidwell, 1998; D'Emic & Carrano, 2020). In a series of preliminary analyses (not shown) in which USNM 5730 and the Brachiosaurus OTU of Poropat et al. (2021) were scored separately, USNM 5730 was consistently recovered as a brachiosaurid, under both equal and extended implied weights parsimony analysis, and with and without the inclusion of the Yanji cranial material.
In accord with the proposed existence of a single brachiosaurid species in the Morrison Formation (D'Emic & Carrano, 2020), we included USNM 5730 in the Brachiosaurus altithorax hypodigm for the phylogenetic analyses conducted here. The final data matrix consisted of 552 characters scored for 126 OTUs (Supplemental Information) and was subjected to both equal weights (EW) and extended implied weights (EIW) parsimony analysis (Goloboff, 2014) in TNT v.1.5 (Goloboff & Catalano, 2016). We ran two separate versions of these analyses: one in which the maxilla was the sole representative of the Yanji cranial material, and another in which the dentary was included alongside the maxilla in a single OTU. Character ordering, taxon sampling, and down-weighting of homoplasy followed Poropat et al. (2021). Eighteen characters (11, 14, 15, 27, 40, 51, 104, 122, 147, 148, 195, 205, 259, 297, 426, 435, 472, 510) were treated as ordered. Ten unstable taxa (Astrophocaudia, Australodocus, Brontomerus, Fukuititan, Fusuisaurus, Liubangosaurus, Malarguesaurus, Mongolosaurus, Ruyangosaurus, and the 'Cloverly titanosauriform') were excluded a priori from the EW parsimony analysis; two of these (Ruyangosaurus and the 'Cloverly titanosauriform') were re-instated as active taxa for the EIW analysis. In the latter analysis, we applied a concavity constant (k) of nine. For both EW and EIW analyses, we used 'New Technology' search algorithms to identify the set of most parsimonious trees (MPTs). Fifty search replications were used as a starting point for each hit, and were run until the best score was hit 10 times, using random and constraint sectorial searches under default settings, five ratchet iterations, and five rounds of tree fusing per replicate ('xmult = replications 50 hits 10 css rss ratchet 5 fuse 5'). The initial set of MPTs recovered by the analysis was subjected to an additional round of tree bisection and reconnection (TBR) branch swapping to exhaustively sample all equal-length trees. Alternative placements of the Yanji cranial material were identified using the resols command. Character support was assessed in TNT using the apo command and in Mesquite 3.61 (Maddison & Maddison, 2019).

Maxilla-only OTU. Parsimony analysis under EIW produced 2,520 trees of 115.80653 steps. The strict consensus of these trees resolves the Yanji maxilla as a brachiosaurid. Although the composition and early branching pattern of Brachiosauridae differ between the EW and EIW analyses, in both sets of MPTs the Yanji maxilla is part of a well-nested group comprising Soriatitan, Venenosaurus, Cedarosaurus, and Abydosaurus (Fig. 9).

Maxilla+dentary OTU. Inclusion of the dentary in the Yanji OTU did not affect tree length for the EW parsimony analysis, but produced many more MPTs (more than one million). Unlike in the maxilla-only analysis, the maxilla+dentary OTU is not found to be a brachiosaurid (at least not in the one million MPTs that we collected), and is instead only recovered as either a non-diplodocimorph diplodocoid or an early-branching euhelopodid. Parsimony analysis under EIW produced 1,890 trees of 115.84075 steps. The strict consensus of these trees is identical to that of the maxilla-only EIW analysis, except that the Yanji cranial material is recovered as an earlier-branching brachiosaurid, in one of three positions: as sister to Vouivria, one node stem-ward of Vouivria, or one node apical to Vouivria.
This earlier-branching position results from scoring a forked posteroventral process of the dentary as absent, as this feature is a synapomorphy of the clade that includes Brachiosaurus, Giraffatitan, and Abydosaurus.

DISCUSSION

Previous evidence for Asian brachiosaurids

Fossil evidence has occasionally been advanced to suggest the presence of brachiosaurids in the Late Jurassic or Early Cretaceous of Asia, but these hypothesized occurrences have either not held up to subsequent scrutiny, or at best provide only equivocal support for Asian brachiosaurids. Based on pre-cladistic morphological comparisons emphasizing tooth crown shape, the Late Jurassic (Oxfordian) sauropod Bellusaurus, from the Shishugou Formation of northwest China, was initially assigned to its own subfamily (Bellusaurinae) within the Brachiosauridae, then considered part of the superfamily Bothrosauropodidea (Dong, 1990). Subsequent work has failed to support brachiosaurid kinship for Bellusaurus. Although the taxon may potentially represent a neosauropod (e.g., Upchurch, Barrett & Dodson, 2004; Carballido & Sander, 2014; Moore et al., 2018, 2020; but see, e.g., Wilson & Upchurch, 2009; Mo, 2013; Mannion et al., 2019b), no analysis has ever recovered Bellusaurus as a brachiosaurid, and Bellusaurus lacks many of the synapomorphies that unite Brachiosauridae and its subclades, including twisted maxillary dentition. Similarities to Bellusaurus led Ye, Gao & Jiang (2005) to assign the Late Jurassic Daanosaurus, from the upper beds of the Shaximiao Formation, to the Brachiosauridae, within the subfamily Bellusaurinae. Daanosaurus has yet to be included in a phylogenetic analysis capable of testing its potential relationship to brachiosaurids; the only phylogenetic analysis to date to have included Daanosaurus exclusively sampled Middle-Late Jurassic Chinese sauropods, finding the taxon to be closely related to Mamenchisaurus (Li et al., 2011). The authors of this study did not report the matrix or the methods used in their analysis, and thus the character data in support of their phylogenetic conclusions are unclear. While the relationships of Daanosaurus remain obscure, none of the available evidence indicates a close relationship to brachiosaurids. Several characteristics (e.g., opisthocoelous posterior dorsal vertebrae; a tab-like interruption of the prezygodiapophyseal lamina in middle-posterior cervical vertebrae) suggest that Daanosaurus may be a mamenchisaurid (AJ Moore, 2015, personal observation; Mannion et al., 2013; Moore et al., 2020), although macronarian affinities have also been proposed (D'Emic, 2012). An isolated tooth from the Early Cretaceous (Barremian-Aptian) Jinju Formation of South Korea was cited as the first evidence for Asian brachiosaurids on the basis of a chisel-like wear facet on its lingual surface (Lim, Martin & Baek, 2001). Subsequent consideration of the specimen by Barrett et al. (2002) disputed the presence of this form of wear facet and rejected its referral to Brachiosauridae, but concurred that the element likely belongs to an early-branching titanosauriform.
Other isolated sauropod teeth from the Berriasian-Hauterivian (Barrett et al., 2002) and the Barremian (Saegusa & Tomida, 2011) of Japan exhibit a mosaic of features that has been considered potentially consistent with, but not diagnostic for, brachiosaurid affinities, although it should be noted that neither these teeth, nor the isolated tooth from the Jinju Formation, have been described as exhibiting axial twisting, the only unambiguous synapomorphy of brachiosaurid dentition. Thus, all previous fossil evidence has fallen short of demonstrating the presence of brachiosaurids in Asia. As we elaborate in the following section, we consider YJDM 00008 to provide the most compelling evidence to date of an Asian brachiosaurid, while acknowledging that the fragmentary nature of the specimen requires that this hypothesis be treated cautiously, pending future discoveries in the Longjing Formation.

Phylogenetic affinities of the Yanji maxilla

In the discussion that follows, we focus on the results of the maxilla-only phylogenetic analyses. Although the results of the maxilla-only and maxilla+dentary analyses are not radically different, we nonetheless favor the former over the latter, for two reasons. First, because the maxilla and dentary were retrieved post-exhumation from heaps of excavated sediment that also included other vertebrate specimens (see above), any evidence for association between these two elements has been lost. The most conservative approach, therefore, is to treat them separately. Second, although inclusion of the dentary alongside the maxilla in a single OTU allows two additional characters to be scored, the scores for these characters are somewhat speculative, as discussed above. Given this uncertainty, scoring these characters is useful primarily to the extent that it incorporates the maximum possible character conflict that may exist for the OTU, and thus more stringently tests the hypothesis that the Yanji cranial material belongs to a brachiosaurid, a hypothesis that otherwise rests on a single character state (see below). Under EIW parsimony, our preferred mode of phylogenetic inference, the Yanji cranial material is found to be a brachiosaurid with or without inclusion of the dentary, indicating that the potential character conflict introduced by the dentary does not overwhelm the support for brachiosaurid affinities that is afforded by the maxilla. We thus focus on the results of our maxilla-only analyses, while acknowledging the inherent limitations of analyzing fragmentary fossils. Near the end of this section, we discuss additional caveats that attend the interpretation of YJDM 00008 as a brachiosaurid. Both the EW and EIW parsimony analyses agree that the Yanji maxilla belongs to a neosauropod. This identification is supported by the presence in YJDM 00008 of parallel-sided dentition (character 108), a feature that is resolved as a synapomorphy of Neosauropoda (EW) or Neosauropoda + (Camarasaurus + Lourinhasaurus) (EIW). The EW parsimony analysis provides equivocal support for the Yanji taxon as a brachiosaurid, a non-diplodocimorph diplodocoid, or a euhelopodid (Fig. 8). Character support for the latter two positions is limited to a single, homoplastically distributed feature: possession of a laterally visible subnarial foramen (character 75).
A laterally visible subnarial foramen reflects the absence of a markedly depressed narial fossa and is plesiomorphic for Eusauropoda, present in Shunosaurus and secondarily reacquired in Euhelopus, lithostrotians other than Malawisaurus (= Nemegtosaurus, Rapetosaurus, and Tapuiasaurus), and either Diplodocimorpha or Diplodocoidea (depending on whether character optimization is assumed to occur under delayed or accelerated transformation, respectively). While the lateral exposure of the subnarial foramen suggests possible diplodocoid affinities for YJDM 00008, numerous features, mostly of the dentition, exclude the specimen from Diplodocimorpha. These include a relatively smooth dentigerous portion of the lateral surface of the maxilla (character 288; this region is marked by deep, dorsoventrally elongate vascular grooves in diplodocimorphs and Nemegtosaurus); a Slenderness Index of <4.0 (character 11); D-shaped mid-crown cross-sections (character 109; these are cylindrical in diplodocimorphs and Titanosauria); tooth crowns with concave lingual surfaces (character 110; these are convex in diplodocimorphs, Titanosauria, Abydosaurus, and Phuwiangosaurus); an apicobasally oriented lingual ridge (character 111; this is only very weakly developed in YJDM 00008 and is absent in Jobaria, diplodocimorphs, some brachiosaurids, and most somphospondylans); and fewer than three replacement teeth per alveolus (character 453). The absence of cranial material known for Amphicoelias or either species of Haplocanthosaurus allows the Yanji maxilla to be recovered in all possible positions available to a non-diplodocimorph diplodocoid (Fig. 8). Such a hypothesis for the Yanji maxilla would extend the temporal range of non-diplodocimorph diplodocoids by approximately 45 million years, and would indicate that a heretofore unsampled lineage of diplodocoids survived into the middle Cretaceous. Until recently, evidence for Asian diplodocoids was scant and controversial (Upchurch & Mannion, 2009; Whitlock, D'Emic & Wilson, 2011; Xu et al., 2018). The discovery of the early Middle Jurassic dicraeosaurid Lingwulong from China, the first definitive Asian diplodocoid and the oldest known neosauropod, indicates that diplodocoids dispersed into or originated from East Asia while Pangaea was a contiguous landmass, and may presage future discoveries of the group in Asia. Nevertheless, the lack of more compelling diplodocoid/diplodocimorph synapomorphies in the maxilla and dentition of YJDM 00008, the extreme temporal and phylogenetic remove between YJDM 00008 and Lingwulong, and the paucity of convincing evidence for diplodocoids in the Early Cretaceous of Asia make referral of YJDM 00008 to Diplodocoidea unlikely. A hypothesis of euhelopodid affinities for the Yanji maxilla is more consistent with the known spatiotemporal ranges of neosauropod dinosaurs. Whereas no undisputed diplodocoids are presently known from the Early Cretaceous of Asia (Upchurch & Mannion, 2009; Whitlock, D'Emic & Wilson, 2011; Xu et al., 2018), numerous non-titanosaurian somphospondylan taxa have been recovered from this interval, with members of the Euhelopodidae, an East Asian radiation of somphospondylans, being particularly well represented (D'Emic, 2012; Mannion et al., 2013, 2019a).
Like the hypothesis of diplodocoid kinship, however, support for a position at the base of Euhelopodidae relies solely on the presence of a laterally visible subnarial foramen, a homoplastically distributed feature that is thus far known only for the eponymous Euhelopus among euhelopodids. Recent comparative anatomical and phylogenetic work has called into question the macronarian affinities of Euhelopus (Moore et al., 2020), suggesting that phylogenetic results relying solely on features shared with that taxon should perhaps be treated cautiously. A consideration of the evolutionary scenarios implied by competing topological positions of YJDM 00008 leads us to favor brachiosaurid affinities for the specimen. The EIW parsimony analysis and a subset of the MPTs from the EW analysis indicate that the Yanji taxon is a well-nested brachiosaurid. Support for brachiosaurid affinities for YJDM 00008 rests on a single feature, the presence of axially twisted maxillary teeth (character 114; Figs. 6-7), which, under EW parsimony analysis, provides no more or less support for brachiosaurid affinities than a laterally visible subnarial foramen does for diplodocoid and euhelopodid kinship. Unlike a laterally visible subnarial foramen, however, twisted maxillary dentition is a characteristic that otherwise lacks homoplasy within Eusauropoda, and has been universally recovered as an unambiguous synapomorphy (sensu Tschopp, Mateus & Benson, 2015) of Brachiosauridae or a slightly less inclusive clade by previous authors (e.g., D'Emic, 2012; Mannion et al., 2013; Mannion, Allain & Moine, 2017; D'Emic, Foreman & Jud, 2016; Carballido et al., 2020). The high consistency of this character (CI = 1 in all previous analyses) accounts for why the EIW parsimony analysis favors only brachiosaurid affinities for YJDM 00008: parsimony under EIW weights characters in proportion to the homoplasy they incur on the trees being compared, and thus treats brachiosaurid kinship for YJDM 00008 as more parsimonious than either diplodocoid or euhelopodid affinities, because such a relationship avoids homoplasy in a character that is otherwise perfectly hierarchical (i.e., twisted maxillary dentition) at the expense of adding a step to an unavoidably homoplasious character (i.e., the laterally visible subnarial foramen). We agree with the epistemological arguments in favor of such trade-offs (Goloboff, 1993), and in light of recent simulations showing that EIW outperforms EW parsimony (Goloboff, Torres & Arias, 2017), we prefer the former over the latter as a mode of phylogenetic inference. In the absence of compelling character conflict with other brachiosaurids, or of evidence for a wider distribution of strongly (30-45°) twisted dentition outside of Brachiosauridae, we thus consider the available data to be most consistent with the hypothesis that YJDM 00008 is a brachiosaurid, diagnosed by a laterally visible subnarial foramen. The nested position of YJDM 00008 among Cedarosaurus, Venenosaurus, Soriatitan, and Abydosaurus is supported by the absence of denticles in the dentition (character 113; observable only in the latter two taxa and YJDM 00008). Most eusauropods later-branching than Jobaria lack denticles. However, marginal enamel tuberosities were reacquired in brachiosaurids, where they are present in a grade that includes Europasaurus, Vouivria, Brachiosaurus, and Giraffatitan, and were secondarily lost in the subclade to which YJDM 00008 belongs.
It should be noted, however, that at least some brachiosaurids, as well as some other sauropod taxa, appear to exhibit an uneven distribution of denticles between the upper and lower jaws. Replacement teeth preserved in the maxilla of Brachiosaurus lack denticles, whereas at least some of those in the dentary bear denticles on their mesial edge (D'Emic & Carrano, 2020), a pattern that also characterizes Bellusaurus (Moore et al., 2018) and Abrosaurus (Ouyang, 1989). Preservation of the visible replacement teeth in the Yanji dentary (YJDM 00006) is insufficient to determine whether denticles are present. Thus, it remains possible that the Yanji sauropod(s) bore denticles on the dentary teeth, though such a finding would not perturb support for brachiosaurid affinities. The close relationship between YJDM 00008 and several late-branching brachiosaurids may also find support from the very weak development of an apicobasally oriented lingual ridge (character 111) in the teeth of YJDM 00008. This ridge is plesiomorphic for eusauropods (Barrett et al., 2002; Mannion et al., 2013) and is present in brachiosaurids such as Vouivria (Mannion, Allain & Moine, 2017) and Giraffatitan (Janensch, 1935-36), but is absent in Jobaria, Diplodocoidea/Diplodocimorpha, most somphospondylans, and the brachiosaurid subclade that includes Abydosaurus and Soriatitan. While the presence of a lingual ridge in YJDM 00008 excludes it in all MPTs from the Abydosaurus + Soriatitan clade, its subtle development in the specimen is potentially consistent with the progressive evolutionary loss of the lingual ridge in a subset of brachiosaurids. Our interpretation of YJDM 00008 as a brachiosaurid is tempered by two important caveats. First, while current evidence indicates that axially twisted maxillary dentition is an unambiguous synapomorphy of a subclade of Brachiosauridae, very little is known about maxillary evolution in non-titanosaurian somphospondylans (and next to nothing, if Euhelopus lies outside of Macronaria; Moore et al., 2020). This knowledge gap leaves open the possibility that strongly twisted maxillary teeth in fact characterize a more inclusive grade of titanosauriforms or macronarians than the presently available fossil evidence would suggest. Slight axial twisting has been noted for the maxillary teeth of Europasaurus (Marpmann et al., 2015), a taxon whose brachiosaurid and titanosauriform kinship remains a topic of controversy (Mannion, Allain & Moine, 2017; Carballido et al., 2020), as well as for a handful of non-brachiosaurid titanosauriforms, including isolated teeth of Astrophocaudia (D'Emic, 2013) and distal maxillary teeth of Tapuiasaurus (Wilson et al., 2016). Considered together, these observations suggest that twisted dentition may be more broadly distributed within Macronaria than is presently appreciated, and underscore that additional material from early-branching somphospondylans is needed in order to robustly test whether marked axial twisting (~30-45°) of the maxillary dentition indeed constitutes an unambiguous brachiosaurid synapomorphy. Second, we have yet to identify any other clear evidence for brachiosaurids in the Longshan fauna, although it should be noted that our initial observations on the other sauropod fossils from the Longshan beds of the Longjing Formation are still very preliminary (being based on only a subset of the total material that has been excavated) and have not been incorporated into a phylogenetic analysis.
Morphological details of these sauropod fossils are instead more consistent with a euhelopodid or early-branching titanosaurian identity, as indicated by such features as subcylindrical tooth crowns (character 109; a titanosaurian synapomorphy, also present in diplodocimorphs), bifurcated postaxial cervical and anterior dorsal neural spines (character 132; widely distributed in non-titanosauriform eusauropods, and present in most euhelopodids, early-branching titanosaurians, and Opisthocoelicaudia), and a scapula lacking a subtriangular process at both the posteroventral corner of the acromion and the anteroventral edge of the scapular blade (characters 215 and 216; both processes are absent in Jiangshanosaurus and Huabeisaurus, among other eusauropods). Although the phylogenetic affinities of the other sauropod material from the Longshan beds do not bear directly on the identity of YJDM 00008, the possibility that the latter belongs to a brachiosaurid may become unlikely if all other material from the Longshan beds is eventually shown to belong to a single somphospondylan taxon. Ultimately, additional evidence for or against the presence of a brachiosaurid in the Longshan fauna, and other details on the taxonomic diversity of this assemblage, await further study of excavated specimens and future excavation in the Longjing Formation.

Paleobiogeographic implications of Asian brachiosaurids

Assuming brachiosaurid affinities for YJDM 00008, at least two scenarios can be posited to explain the occurrence of a middle Cretaceous Asian brachiosaurid. The first proposal interprets the presence of a brachiosaurid in the Longjing Formation as resulting from dispersal of a lineage of brachiosaurids into East Asia at some point in the Early Cretaceous (or possibly the Late Jurassic). The results of our maxilla-only phylogenetic analysis are most consistent with a close relationship between YJDM 00008 and North American brachiosaurids, and hence with a North American origin for the lineage that gave rise to YJDM 00008. As discussed above, however, the character data supporting this inference are very limited, and the relationships of YJDM 00008 among brachiosaurids (or perhaps neosauropods more broadly) are likely to change with future discoveries. Here, we briefly consider alternative dispersal routes available to either North American or European ancestors of YJDM 00008; consideration of the latter possibility is warranted based on the presence of the Spanish brachiosaurid Soriatitan in the polytomy to which YJDM 00008 belongs, as well as other evidence for apparent interchange between the sauropod faunas of Europe and Asia in the Early Cretaceous (see below). Current information is consistent with either North America or Europe as a potential source of Asian emigrants in the Early Cretaceous (Poropat et al., 2016; Xu et al., 2018; and references therein). Considerable biogeographic and phylogenetic evidence indicates a close relationship between Asian and North American faunas in the middle Cretaceous (e.g., Russell, 1993; Cifelli et al., 1997; Chinnery-Allgeier & Kirkland, 2010; D'Emic, Wilson & Thompson, 2010; Zanno & Makovicky, 2011; Farke et al., 2014; Brikiatis, 2016; Dunhill et al., 2016; Poropat et al., 2016; Ding et al., 2020).
Trans-European dispersal cannot be ruled out as an explanation for faunal similarities between Asia and North America (e.g., Chinnery-Allgeier & Kirkland, 2010; Brikiatis, 2016; Ding et al., 2020); indeed, recent quantitative analyses of dinosaurian biogeography have emphasized Europe as a likely gateway between Asia, North America, and other landmasses in the Early Cretaceous (Dunhill et al., 2016; Ding et al., 2020), although Zanno & Makovicky (2011) argued that trans-European dispersal between Asia and North America at this time would have been complicated by the periodic development of various geographic barriers. An alternative hypothesis entails the emplacement of a Bering land bridge between Asia and North America for at least part of the Albian (Russell, 1993; Cifelli et al., 1997; Zanno & Makovicky, 2011; Poropat et al., 2016). A direct Beringian connection has been invoked to explain apparent late Early Cretaceous dispersal events for tyrannosauroids (e.g., Zanno & Makovicky, 2011), therizinosaurians (e.g., Zanno, 2010), and neoceratopsians (e.g., Farke et al., 2014), among other vertebrate groups (but see Brikiatis, 2016 for an alternative view). Uncertainty about the timing and duration of a late Early Cretaceous Bering land bridge and about the importance of Europe as an intermediate between North America and Asia notwithstanding (Brikiatis, 2016), the balance of evidence suggests that a Beringian connection existed within a timeframe that could explain the arrival of brachiosaurids in East Asia from North America by the Albian/Cenomanian boundary. A European origin for Asian brachiosaurids is also possible, and receives support from biogeographic and paleogeographic studies. Taxonomic surveys and empirical paleobiogeographic analyses indicate substantial faunal exchange between Europe and Asia in the Early Cretaceous (e.g., Russell, 1993; Upchurch, Hunn & Norman, 2002; Chinnery-Allgeier & Kirkland, 2010; Dunhill et al., 2016; Ding et al., 2020). Periodic establishment of a Russian Basin/Turgai marine barrier would have impeded terrestrial dispersal between Europe and Central Asia in the late Berriasian-early Hauterivian and the early Albian, but connections between these landmasses are otherwise thought to have existed for much of the Early Cretaceous (Poropat et al., 2016 and references therein), providing potential routes for an ancestral population of European brachiosaurids to disperse into East Asia. This scenario is consistent with other fossil evidence that indicates commingling of Asian and European sauropod faunas in the Early Cretaceous. Isolated teeth from the Barremian of Spain bearing a distolingual boss, a feature that is otherwise known only in some East Asian sauropods, including the Berriasian-Hauterivian Euhelopus (Wiman, 1929; Wilson, 2002; Barrett & Wang, 2007; Suteethorn et al., 2013; Moore et al., 2020), would seem to suggest that a subclade of euhelopodids spread across both Asia and Europe in the Early Cretaceous (Canudo et al., 2002). Recently, the discovery of an isolated anterior caudal vertebra of a rebbachisaurid in the Turonian Bissekty Formation of Uzbekistan, as well as possible rebbachisaurid teeth from the same formation, has been interpreted as evidence for dispersal of European rebbachisaurids into Central Asia sometime between the Barremian and the Turonian (Averianov & Sues, 2021).
It should be noted, however, that the morphological basis for identifying the Bissekty Formation anterior caudal vertebra as a rebbachisaurid has been critically challenged by a reappraisal of the specimen by Lerzo, Carballido & Gallina (2021), who rejected rebbachisaurid affinities and provided evidence in support of a titanosaurian identity, a hypothesis also previously favored by Sues et al. (2015) and Averianov & Sues (2017). Regardless of the affinities of the Bissekty Formation specimen, the presence of a brachiosaurid in the Longjing Formation can be explained by the existence of plausible dispersal routes connecting East Asia to both Europe and North America during much of the Early Cretaceous. The second biogeographic scenario suggests that brachiosaurids and other major neosauropod lineages were widely distributed across Pangaea, including East Asia, before the separation of Laurasia from Gondwana in the latter half of the Middle Jurassic and the isolation of East Asia from the rest of Laurasia from the Callovian-Tithonian (Poropat et al., 2016; Xu et al., 2018; and references therein). In this scenario, the occurrence of YJDM 00008 in the middle Cretaceous of northeast China reflects the persistence of brachiosaurids in Asia from the Middle Jurassic through the Early Cretaceous. The heretofore unrecognized presence of brachiosaurids in the region during this time would thus reflect biased sampling of the fossil record. Such a scenario seems unlikely, given that substantial prospecting in Middle-Late Jurassic and Early Cretaceous (particularly Barremian-Albian) strata of China has yielded a rich sauropod record (118 collections containing sauropod specimens, according to the Fossilworks Database, April 15, 2021) that, to date, appears to be wholly devoid of brachiosaurids. Nevertheless, the possibility that sampling biases have obscured the presence of an early-arriving lineage of Asian brachiosaurids should not be dismissed out of hand. Indeed, pervasive sampling artifacts may be necessary to explain the apparent absence of undisputed neosauropods from the well-sampled, sauropod-rich Middle-Late Jurassic horizons of the Junggar and Sichuan basins, given the recent discovery of the dicraeosaurid Lingwulong in older strata of north central China. Possible explanations for the scarcity of neosauropods (including brachiosaurids) in the Middle-Late Jurassic and of brachiosaurids in the Early Cretaceous of Asia include a low abundance or diversity of these groups in their ecosystems, and a failure to sample the preferred habitats in which these groups were more abundant (Whitlock, 2011b; Xu et al., 2018). These explanations have also been proposed to account for the relatively low occurrence of brachiosaurids in dinosaur-bearing localities of the Morrison Formation (D'Emic & Carrano, 2020). Thus, irrespective of the series of events that might have brought a lineage of brachiosaurids to Asia, their extreme rarity in currently sampled Early Cretaceous dinosaur-bearing horizons may reflect the concerted effects of an overall low abundance and poor sampling of preferred habitats.

CONCLUSIONS

The recent discovery of a fossil-rich horizon near the base of the Albian-Cenomanian Longjing Formation has yielded numerous dinosaurian and other terrestrial vertebrate specimens, including an isolated maxilla of a neosauropod. Although fragmentary, this specimen preserves a striking morphology, axially twisted dentition, that is otherwise present only in brachiosaurids.
Referral of YJDM 00008 to Brachiosauridae receives support from phylogenetic analysis under both equal and implied weights parsimony, providing the most convincing evidence to date that brachiosaurids dispersed into Asia at some point in their evolutionary history. Consideration of a possibly associated partial dentary (YJDM 00006) from the same site does not affect this conclusion. Several paleobiogeographic scenarios could account for the occurrence of a middle Cretaceous Asian brachiosaurid, including dispersal from either North America or Europe during the Early Cretaceous. These hypotheses can be tested by continued study of excavated specimens from the Longshan locality and by future excavation in the Longjing Formation.

Field Study Permissions

The following information was supplied relating to field study approvals (i.e., approving body and any reference numbers): Permission for the field sites was granted by the Yanji Paleontological Research Centre (project name: Yanji Dinosaur Fossils Excavation Research Project Cooperation Agreement).

Data Availability

The following information was supplied regarding data availability: The raw CT scan data are available at MorphoSource: DOI 10.17602/M2/M361358.

Supplemental Information

Supplemental information for this article can be found online at http://dx.doi.org/10.7717/peerj.11957#supplemental-information.
Distributed Node Scheduling with Adjustable Weight Factor for Ad-hoc Networks

In this paper, a novel distributed scheduling scheme for an ad-hoc network is proposed. Specifically, the throughput and the delay of packets with different importance are flexibly adjusted by quantifying the importance as weight factors. In this scheme, each node is equipped with two queues, one for packets with high importance and the other for packets with low importance. The proposed scheduling scheme consists of two procedures: intra-node slot reallocation and inter-node reallocation. In the intra-node slot reallocation, self-fairness is adopted as a key metric, which is a composite of the quantified weight factors and traffic loads. This intra-node slot reallocation improves the throughput and the delay performance. Subsequently, through an inter-node reallocation algorithm adopted from LocalVoting (slot exchange among queues having the same importance), the fairness of traffic with the same importance is enhanced. Thorough simulations were conducted under various traffic load and weight factor settings. The simulation results show that the proposed algorithm can adjust packet delivery performance according to a predefined weight factor. Moreover, compared with conventional algorithms, the proposed algorithm achieves better performance in throughput and delay. The low average delay attained alongside high throughput confirms the excellent performance of the proposed algorithm.

In addition, for environmental monitoring, it is necessary to send emergency disaster information, such as an earthquake alert, to a destination node with very high priority [10]. The nodes of an ad-hoc network consume a lot of energy in sensing data and processing high-priority packets. However, in many situations, it is difficult to replace or recharge the batteries of the nodes. Accordingly, it is important to increase energy efficiency and to enhance the overall network lifetime through clustering, transmission power control, and efficient exchange of network information [11-16]. Fairness and load balancing among nodes also have a great influence on battery lifetime and on the connectivity of the entire network. Low fairness among nodes due to inefficient resource allocation causes increased packet collisions and packet retransmissions at some nodes, and these detrimental effects reduce battery lifetime. Meanwhile, other nodes may be allocated an unnecessarily large amount of resources, resulting in severe inefficiency for the entire network. Hence, resource allocation for an ad-hoc network is a very important and challenging issue. Fairness measurements can be categorized into qualitative and quantitative methods, depending on whether fairness can be quantified. Qualitative methods cannot quantify fairness as an actual value, but they can judge whether a resource allocation algorithm achieves a fair allocation. Maximum-minimum fairness [17,18] and proportional fairness [19] are qualitative methods. Maximum-minimum fairness aims to achieve a max-min state, in which the resources allocated to a node can no longer be increased without reducing the resources allocated to neighboring nodes. Proportional fair scheduling maximizes the log utility of the whole network by preferentially scheduling the nodes with the highest ratios of currently achievable rate to long-term throughput. Measuring the fairness of an entire network is also an important issue.
Jain's fairness index [20] is a quantitative fairness measurement method; however, it cannot measure the fairness of nodes to which a weight factor is assigned. In this paper, a distributed scheduling algorithm that takes both weight factors and traffic load into account is proposed. In the proposed algorithm, self-fairness [21] is adopted for resource reallocation. An increase in self-fairness means that resources are allocated to nodes fairly, in proportion to the weight of each node. Therefore, even in distributed scheduling that supports packets with different importance, if the slot allocation of each node is adjusted in the direction of increasing self-fairness, the overall performance of the network can be significantly increased. Moreover, the proposed algorithm adjusts throughput and delay based on the assigned weight factor, rather than on an absolute distinction between high-priority packets and low-priority packets. The contribution of this work is summarized as follows:

• A novel distributed scheduling scheme for an ad-hoc network is proposed, where both the load balancing among neighboring nodes and the preferential processing of high-importance packets are considered.
• An intra-node slot reallocation algorithm is proposed. Each node is equipped with multiple queues, and this algorithm rearranges the slot allocation between the queues inside a node. Moreover, this algorithm enables a flexible adjustment of throughput and delay, reflecting the assigned weight factors.
• Self-fairness for packets with unequal importance is introduced. This metric incorporates both the weight factor and the traffic load. The metric plays an important role in achieving fairness among packets with the same weight factor and in supporting service differentiation among packets with different weight factors. It is validated that the proposed scheduling scheme substantially increases the performance of the network.
• It is confirmed that the proposed node scheduling outperforms the absolute priority-based scheduling scheme in terms of delay and throughput. This result is supported by thorough simulation studies accommodating various operation scenarios.

The remainder of this paper is organized as follows: Section 2 describes the various distributed resource allocation medium access control (MAC) protocols proposed in the literature. Section 3 describes the proposed algorithm. In Section 4, the performance of the proposed algorithm is analyzed based on an extensive simulation study, and, finally, Section 5 presents some observational conclusions.

Related Works

In [22], the authors proposed the distributed randomized (DRAND) time division multiple access (TDMA) scheduling algorithm, which is a distributed version of the randomized (RAND) time slot scheduling algorithm [23]. DRAND operates in a round-by-round manner and does not require time synchronization at round boundaries, resulting in reduced energy consumption. In this scheme, there are four states for each node: IDLE, REQUEST, GRANT, and RELEASE. Each node is assigned a slot that does not cause a collision within its 2-hop neighborhood by sending state messages to the neighboring nodes. The basic idea of deterministic distributed TDMA (DD-TDMA) [24] is that each node collects information from its neighboring nodes to determine slot allocations. DD-TDMA is superior to DRAND in terms of running time and message complexity.
This feature increases energy efficiency, because DD-TDMA does not need to wait for a GRANT message, which is transmitted as a response to a REQUEST message and contains a slot allocation permission for unused slots. However, DRAND and DD-TDMA do not consider load balancing and fairness among the nodes. Algorithms for allocating resources based on the states of networks and nodes were proposed in [25-28]. In [25], a load balancing algorithm for TDMA-based node scheduling was proposed. This scheme makes the traffic load semi-equal and improves fairness in terms of delay. In adaptive topology and load-aware scheduling (ATLAS) [26], nodes determine the amount of resources to be allocated through the resource allocation (REACT) algorithm, in which each node auctions and bids on time slots. Each node acts as both an auctioneer and a bidder at the same time. During each auction, an auctioneer updates an offer (its maximum available capacity) and a bidder updates a claim (the capacity to bid in an auction). Through this procedure, resources are allocated to the nodes in a maximum-minimum manner [17]. In [27], an algorithm consisting of two sub-algorithms was proposed. The first is the fair flow vector scheduling algorithm (FFVSA), which aims to improve fairness and optimize slot allocation by considering the active flow requirements of a network. FFVSA uses a greedy collision vector method that has lower complexity than a genetic algorithm. The second is the load balanced fair flow vector scheduling algorithm (LB-FFVSA), which increases the fairness of the amount of allocated resources among nodes. In [28], the fairness among nodes was improved in terms of energy consumption through an upgraded version of DRAND. The Energy-Topology (E-T) factor was adopted as a criterion for allocating time slots, and the E-T-DRAND algorithm was proposed for requesting time slots. Instead of the randomized approach of DRAND, the E-T-DRAND algorithm gives high priority to nodes with high energy consumption and low residual energy due to a large number of neighboring nodes. E-T-DRAND balances the energy consumption among nodes and enhances scheduling efficiency. In the load balancing scheme of [25], each node determines the number of slots to be reallocated using the number of packets accumulated in the queues of its 1-hop neighboring nodes and the number of slots allocated to these nodes. The slot reallocation procedure must check whether a slot is shared by nodes within 2-hop distance. As a result, the load between nodes becomes semi-equal, and the nodal delay is reduced. In [29-33], scheduling schemes considering priority were proposed. In [29], for the purpose of reducing the delay of emergency data, the energy and load balanced priority queue algorithm (ELBPQA) was proposed. In this scheme, four different priority levels are defined according to the position of a node in the network. In [30], the highest priority is given to real-time traffic, and the other priority levels are given to non-real-time traffic. In order to reduce the end-to-end delay, the packets with the highest priority are processed in a preemptive manner. In [31], the priority- and activity-based QoS MAC (PAQMAC) was proposed. In this scheme, the active time of traffic is dynamically allocated according to priority. Specifically, by adopting a distributed channel access scheme, packets with high priority have reduced back-off and wait times.
In [32], the I-MAC protocol, which combines carrier sense multiple access (CSMA) and TDMA schemes, was proposed to increase the slot allocation of nodes with high priority. I-MAC consists of a set-up phase and a transmission phase. The set-up phase consists of neighbor discovery, TDMA time-slot allocation using a distributed neighborhood information-based (DNIB) algorithm, local framing for the reuse of time slots, and global synchronization for transmission. Nodes with high priority reduce their back-off time to increase their chances of winning slot allocations, and nodes with the same priority compete for slot allocation. This scheme reduces the energy consumption of nodes with high priority. In [33], a QoS-aware media access control (Q-MAC) protocol composed of both intra-node and inter-node scheduling was proposed. Intra-node scheduling determines the priority of packets arriving at the queues of a node. Priority is determined according to the importance of a packet and the number of hops to its destination node. Q-MAC consists of five queues, where a queue called the instant queue transmits packets as soon as they arrive. The remaining queues transmit packets following the maximum-minimum fairness principle. Inter-node scheduling is a scheme for data transmission among nodes sharing the same channel. A power conservation MACAW (PC-MACAW) protocol, based on the multiple access with collision avoidance protocol for wireless LANs (MACAW), is applied to schedule data transmission. Q-MAC guarantees QoS through dynamic priority assignment; however, latency can be increased due to its heavy computational complexity [34]. A comparative analysis of the protocols mentioned in this section is summarized in Table 1, where the protocols are broadly classified according to whether they support prioritization. In the load-balancing classification, "High" denotes explicit load balancing based on a max-min fairness criterion; "Medium" denotes indirect load balancing through the adjustment of idle and access times; and "Low" denotes cases in which the load-balancing method and its effects are not clearly addressed. In the weight factor classification, "No" denotes strict priority without quantitative values, whereas PAQMAC and Q-MAC assign quantitative weight values to packets. One of the representative fairness measurement methods is Jain's fairness index, which takes a value in (0, 1]; the closer it is to 1, the fairer the allocation [20]. Jain's fairness index can measure the fairness of an entire system in a relatively simple way, but it cannot measure the fairness of nodes to which a weight factor is assigned. In [21], the authors proposed a quantitative fairness measurement method applicable to scheduling algorithms with unequal weight factors.

Proposed Node Scheduling with Weight Factor

Instead of conventional absolute priority-based scheduling, an adjustable and flexible scheduling scheme is proposed. This scheme reallocates slots by taking into account the weights assigned to the queues of the nodes. Specifically, an intra-node scheduling procedure, which reallocates slots between the queues for high- and low-importance packets, is introduced. It is then followed by an inter-node scheduling procedure, adopted from [25], which reallocates slots among neighboring nodes to increase the fairness measured in terms of traffic load.
The proposed algorithm consists of three steps: (1) free time slot allocation, which is the process of allocating the initialized slots (unallocated empty slots) to packets; (2) the intra-node slot reallocation algorithm, which exchanges slots between the queues of a node with different importance values using self-fairness; and (3) the inter-node slot reallocation among 1-hop neighbors using a load balancing algorithm (slot exchange between queues with the same importance). The procedure of this algorithm is depicted in Figure 1, which illustrates, for each node, the internal queues for high- and low-importance packets and the intra-node and inter-node reallocation of their slots.

All the nodes have two types of queues for storing packets of different importance. Q_H and Q_L are the queues for high- and low-importance packets, respectively, and Q_A, A ∈ {H, L}, represents Q_H or Q_L according to the indicator A. In the following, A is used as an indicator representing importance. The number of slots required to transmit all the packets at Q_A of node i at frame time t is represented by q_t^(A,i), and the number of slots assigned to Q_A of node i at frame time t for packet transmission is represented by p_t^(A,i). Assuming that the packet and the slot sizes are the same, the inverse load of Q_A is expressed as X_t^(A,i) = p_t^(A,i) / q_t^(A,i). Free time slot allocation requires REQUEST and RELEASE message exchanges, as in DRAND. The number of packets to be transmitted by node i is q_t^(H,i) + q_t^(L,i), and node i can be allocated slots that are not reserved by the nodes within 2-hop distance. Note that the nodes within 2-hop distance cannot reuse a time slot, in order to avoid packet collisions, and this reuse can be prevented by slot reallocation between 1-hop nodes. Node i therefore allocates up to q_t^(H,i) + q_t^(L,i) of the free time slots. In the intra-node slot reallocation, a self-fairness index is used to reallocate slots between Q_H and Q_L of each node. Self-fairness is a measure of how fairly an amount of "resources" is assigned to a particular node, considering the weight assigned to that node. In this measurement, the resource can be bandwidth, time slots, etc. The proposed algorithm uses the inverse load X_t^(A,i) as the resource for the self-fairness measurement. Self-fairness applies to the two different queues of each node; hence, each node has two self-fairness values, one for each of its two queues (Q_H and Q_L).
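Since the free-slot allocation and the per-queue quantities above drive everything that follows, a brief illustration may help. The Python sketch below maintains the (q, p, r) state of each node and grants free slots under the 2-hop exclusion rule. It is a minimal sketch under the reconstructions above (in particular, the inverse load X = p/q); the data layout and function names are hypothetical, and the REQUEST/RELEASE message exchange of DRAND is abstracted away.

```python
# Per-node state: backlog q, allocated slots p, and weight r for each
# importance class A in {'H', 'L'} (hypothetical layout, not the authors' code).
def make_node(q_h, q_l, r_h=7.0, r_l=3.0):
    return {'q': {'H': q_h, 'L': q_l},
            'p': {'H': 0, 'L': 0},
            'r': {'H': r_h, 'L': r_l}}

def inverse_load(node, a):
    # X = p / q, assuming equal packet and slot sizes (as reconstructed above);
    # an empty queue is treated as fully served.
    return node['p'][a] / node['q'][a] if node['q'][a] > 0 else float('inf')

def free_slot_allocation(i, nodes, schedule, two_hop):
    """Grant node i up to q_H + q_L slots that are unreserved within 2 hops."""
    demand = nodes[i]['q']['H'] + nodes[i]['q']['L']
    reserved = set().union(schedule[i], *(schedule[j] for j in two_hop[i]))
    slot = 0
    while demand > 0:
        if slot not in reserved:
            schedule[i].add(slot)
            # Fill Q_H's demand first, purely for illustration; the intra-node
            # reallocation described next reshapes this split anyway.
            a = 'H' if nodes[i]['p']['H'] < nodes[i]['q']['H'] else 'L'
            nodes[i]['p'][a] += 1
            demand -= 1
        slot += 1

nodes = {1: make_node(q_h=2, q_l=3), 2: make_node(q_h=1, q_l=1)}
schedule = {1: set(), 2: {0, 1}}
free_slot_allocation(1, nodes, schedule, two_hop={1: {2}})
print(sorted(schedule[1]), nodes[1]['p'])  # [2, 3, 4, 5, 6] {'H': 2, 'L': 3}
```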
The self-fairness value for Q_A of node i is denoted by F_t^{(A,i)} and defined as presented in Equations (1)-(3) [21]:

F_t^{(A,i)} = X̃_t^{(A,i)} / φ^{(A,i)}, (1)

X̃_t^{(A,i)} = X_t^{(A,i)} / Σ_{j ∈ N_i ∪ {i}} (X_t^{(H,j)} + X_t^{(L,j)}), (2)

φ^{(A,i)} = r^{(A,i)} / r_Tot, with r_Tot = Σ_{j ∈ N_i ∪ {i}} (r^{(H,j)} + r^{(L,j)}), (3)

where X̃_t^{(A,i)} is the ratio of the resources allocated to Q_A at node i to the sum of the resources allocated to Q_H and Q_L at node i and its 1-hop neighboring nodes, N_i is the set of 1-hop neighbors of node i, r^{(A,i)} is the weight assigned to Q_A of node i, and r_Tot is the corresponding sum of the weights. When the weight is high, more slots are allocated to increase the inverse load, resulting in a fairer resource allocation. By setting r^{(H,i)} > r^{(L,i)}, more important packets are allocated more slots than less important packets. Accordingly, F_t^{(A,i)} is a quantitative value for Q_A of node i, indicating whether the load of Q_A is high or low considering the assigned weight; it is therefore used as an index to compare the fairness of slot allocation under unequal weight factors. When F_t^{(A,i)} = 1, the allocation is in the fairest state. When the number of allocated slots is small compared to the assigned weight factor, F_t^{(A,i)} > 1; in this case, it is necessary to gain more slots from the other queue. In the opposite case, if too many slots are allocated, F_t^{(A,i)} < 1, and Q_A must release its own slots. When a slot is gained, p_t^{(A,i)} increases by one, and the expected self-fairness values F̂_t^{(H,i)} and F̂_t^{(L,i)} are calculated assuming that the slot has been reallocated; here it is assumed that Q_H gains a slot from Q_L. At every frame, slots are reallocated until self-fairness can no longer be improved. Note that a fairness index of 1 is the fairest state. Consequently, the Euclidean distance D_t^i between the fairest state (1, 1) and the (F_t^{(H,i)}, F_t^{(L,i)}) combination is introduced as a metric representing the target fairness, as presented in Equation (4):

D_t^i = √( (1 − F_t^{(H,i)})² + (1 − F_t^{(L,i)})² ). (4)

The expected Euclidean distance D̂_t^i is computed in the same way from the expected fairness values (F̂_t^{(H,i)}, F̂_t^{(L,i)}), and a tentative slot exchange is kept only if it reduces the distance, i.e., if D̂_t^i < D_t^i.

After the intra-node slot reallocation algorithm, the inter-node slot reallocation [25] follows. At this stage, the slot exchange no longer considers the weights of Q_H and Q_L, because these exchanges take place among queues with the same importance. Node i's Q_A computes u_t^{(A,i)}, the number of slots to gain from or release to its 1-hop neighboring nodes, as presented in Equation (5) of [25]; u_t^{(A,i)} is chosen so that the inverse loads of queues with the same importance are equalized among node i and its 1-hop neighboring nodes. These processes are performed for all nodes in a node-by-node manner, and the same intra-node and inter-node slot reallocations are repeated in the next frame.
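The per-frame procedure can be summarized in the following Python sketch. It is a simplified reading of the paper's description, not the authors' implementation: the self-fairness computation follows the verbal definition of Equations (1)-(3), the `nodes` dictionary and `neighbors` mapping are assumed data structures supplied by the caller, and the inter-node rule is only an assumption standing in for Equation (5) of [25].

```python
import math

def self_fairness(node, nodes, neighbors, A):
    """F_t^(A,i): normalized inverse load over normalized weight (Eqs. (1)-(3))."""
    hood = [node] + neighbors[node]
    # Demand of zero is treated as one slot to avoid division by zero.
    inv_load = lambda n, a: nodes[n]["p"][a] / max(nodes[n]["q"][a], 1)
    x_total = sum(inv_load(n, a) for n in hood for a in ("H", "L"))
    r_total = sum(nodes[n]["r"][a] for n in hood for a in ("H", "L"))
    x_norm = inv_load(node, A) / max(x_total, 1e-9)
    r_norm = nodes[node]["r"][A] / max(r_total, 1e-9)
    return x_norm / max(r_norm, 1e-9)

def distance_to_fairest(node, nodes, neighbors):
    """Euclidean distance from (F_H, F_L) to the fairest point (1, 1) (Eq. (4))."""
    fh = self_fairness(node, nodes, neighbors, "H")
    fl = self_fairness(node, nodes, neighbors, "L")
    return math.hypot(1 - fh, 1 - fl)

def intra_node_reallocation(node, nodes, neighbors):
    """Move slots between Q_H and Q_L while a move shrinks the distance."""
    improved = True
    while improved:
        improved = False
        base = distance_to_fairest(node, nodes, neighbors)
        for src, dst in (("L", "H"), ("H", "L")):
            if nodes[node]["p"][src] == 0:
                continue
            nodes[node]["p"][src] -= 1    # tentative exchange of one slot
            nodes[node]["p"][dst] += 1
            if distance_to_fairest(node, nodes, neighbors) < base:
                improved = True
                break                     # keep the exchange, re-evaluate
            nodes[node]["p"][src] += 1    # otherwise undo the exchange
            nodes[node]["p"][dst] -= 1

def inter_node_target(node, nodes, neighbors, A):
    """Assumed load-balancing rule standing in for Eq. (5) of [25]:
    move toward the neighborhood's demand-proportional share of slots."""
    hood = [node] + neighbors[node]
    p_sum = sum(nodes[n]["p"][A] for n in hood)
    q_sum = sum(nodes[n]["q"][A] for n in hood)
    share = p_sum * nodes[node]["q"][A] / max(q_sum, 1)
    return share - nodes[node]["p"][A]    # u > 0: gain slots; u < 0: release
```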
Performance Evaluation

A network simulator [35] implemented in Java was used for the performance analysis of the proposed algorithm. No isolated nodes are assumed; i.e., every node has at least one 1-hop neighbor. Accordingly, in establishing a connection, any two nodes can be connected with each other through multi-hop links. Connections are established using arbitrarily chosen source-destination pairs, and high- and low-importance connections generate high- and low-importance packets, respectively. In the following, high- and low-importance packets are denoted by Pkt_H and Pkt_L, respectively. For the performance analysis, the throughput, delay, and fairness are measured while varying the connection creation ratio (between Pkt_H and Pkt_L) and the weight factor setting. The proposed algorithm is then compared with the absolute priority-based algorithm, in which Pkt_H preempts time slots when free time slots are allocated. Note that the absolute priority algorithm adopts only the inter-node slot reallocation algorithm, not the intra-node one.

The generation ratios of high- and low-importance connections are denoted by α and 1 − α ∈ [0, 1], respectively. The weight factor setting of Q_A is denoted by r_A. Assuming that Q_H and Q_L of all nodes have the same weight settings r_H and r_L, respectively, the node index i can be dropped from the weight factors. The weight factors are set as r_H, r_L ∈ [0, 10] with r_H + r_L = 10. The performance of the proposed scheme was measured in two scenarios; Table 2 lists the parameter settings for each scenario. In the first scenario, a fixed number of connections is created at the starting epoch of the simulation, the packets of the connections are generated at fixed time intervals, and the number of packets generated per connection is the same. In the second scenario, connections are created based on a Poisson process and, unlike in the first scenario, the number of packets generated per connection follows a Poisson distribution. The arrival rate λ determines the connection creation interval. The duration of each connection follows an exponential distribution with parameter µ, which determines the number of packets generated in each connection. The packets are generated at a fixed interval, as in the first scenario. Each connection is closed once all of its packets arrive at the destination node. Because connections are generated continually in the second scenario, the simulation duration is specified at the beginning of the simulation. For both scenarios, the final measurement is the average over 1000 independent simulations; a sketch of the two traffic models is given below.

In the first scenario, the performance of the proposed algorithm was analyzed with an increasing total number of connections and various settings of the weight factor and α. The total number of created connections is the sum of the high- and low-importance connections. Throughput, packet delivery ratio, 1-hop delay, and fairness are measured and compared with those of absolute priority-based scheduling. Throughput refers to the number of packets arriving at destination nodes during the simulation. However, in the first scenario, because the number of generated connections is fixed at the beginning of the simulation, the throughput measured once all packets have arrived would simply be the product of N_c (the number of connections) and N_p (the number of generated packets per connection). Therefore, throughput is measured not at the end of the simulation but at a predefined time T, which is large enough for packet transmission in the network to reach a steady state. The packet delivery ratio is the proportion of received packets to sent packets. The 1-hop delay is measured as the average difference between the time when a packet is dequeued and the time when it was enqueued. The results of the absolute priority-based algorithm are marked as Preempt.Pkt_H and Preempt.Pkt_L.
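The two simulation scenarios can be summarized by the following Python sketch of the traffic generators. It reflects one reading of the description above under stated assumptions: connection arrivals in the second scenario form a Poisson process of rate λ, the packet count per connection is derived from an Exp(µ)-distributed duration, and all numeric arguments are placeholders rather than the settings of Table 2.

```python
import random

def scenario_one(n_connections, packets_per_connection, alpha, interval):
    """Fixed connection set created at t = 0; packets at fixed intervals."""
    connections = []
    for _ in range(n_connections):
        importance = "H" if random.random() < alpha else "L"
        arrivals = [k * interval for k in range(packets_per_connection)]
        connections.append({"importance": importance, "arrivals": arrivals})
    return connections

def scenario_two(sim_duration, lam, mu, alpha, interval):
    """Connections arrive as a Poisson process of rate lam; the number of
    packets per connection is set by an Exp(mu)-distributed duration."""
    connections, t = [], 0.0
    while True:
        t += random.expovariate(lam)       # next connection creation time
        if t > sim_duration:
            break
        duration = random.expovariate(mu)  # connection lifetime
        n_packets = max(1, int(duration / interval))
        importance = "H" if random.random() < alpha else "L"
        arrivals = [t + k * interval for k in range(n_packets)]
        connections.append({"importance": importance, "arrivals": arrivals})
    return connections
```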
Figures 2-6 show the results of the first scenario. Figure 2 depicts the throughputs with an increasing total number of connections, various weight factors, and α = 0.3. When the number of connections is small, most packets are delivered to the destination nodes before the predefined time T because the network is not heavily loaded. For this reason, in Figure 2a,b, when the number of connections is 50, the throughput of Pkt_H is lower than that of Pkt_L simply because fewer Pkt_H are generated. In most cases, as the number of connections increases, the throughput of Pkt_H becomes higher than that of Pkt_L. However, in Figure 2b, when the weight factors are r_H = 7 and r_L = 3, the throughput of Pkt_L remains higher than that of Pkt_H even as the number of connections increases. Note that the proposed algorithm considers not only the weight factors but also the traffic load; hence, even when r_L < r_H, the throughput of Pkt_L can be higher than that of Pkt_H over the entire range of N_c. The service differentiation between Pkt_H and Pkt_L is shown more clearly in Figure 2c,d: over the whole range of the number of connections, the packet delivery ratio of Pkt_H is higher than that of Pkt_L. Specifically, Figure 2b with r_H = 7, r_L = 3 can be compared with Figure 2d for the same weights: Figure 2b shows that the throughput of Pkt_L is higher than that of Pkt_H, whereas Figure 2d shows that the packet delivery ratio of Pkt_H is still twice as high as that of Pkt_L. This result clearly shows that the proposed scheme preferentially processes packets according to the weight factors. When the absolute priority-based algorithm is applied, as the number of Pkt_H to be transmitted grows with the number of connections, the opportunity for Pkt_L slot allocation decreases, resulting in a further decrease in the throughput of Pkt_L.

In Figure 3, throughputs are measured when r_H·α = r_L·(1 − α) is satisfied, under an increasing number of connections. Figure 3 shows the characteristics of the proposed algorithm when both the weight factor and the traffic load are considered: when r_H·α = r_L·(1 − α) holds, the throughputs of Pkt_H and Pkt_L have similar values and converge to a single value. As shown in Figures 2 and 3, the sums of the throughputs of Pkt_H and Pkt_L are similar whenever N_c is the same, even though α and the weight factors differ. This is because, even when the numbers of slots allocated to Pkt_H and Pkt_L are changed by α and the weight factors during the reallocation process, the total number of allocated slots in the network does not change.
Therefore, there is a tradeoff between the throughputs of Pkt_H and Pkt_L depending on the weight factors. From Figures 2 and 3, it is confirmed that an appropriate weight factor setting is necessary to adjust the throughputs of Pkt_H and Pkt_L for the various network situations arising from different α.

Figure 4 shows the 1-hop delay for various weight factors and α with an increasing total number of connections. As in Figures 2 and 3, when the number of connections is small, all the generated packets can be delivered to the destination nodes, resulting in nearly no delay difference between Pkt_H and Pkt_L. However, as the number of connections increases, the delays of both Pkt_H and Pkt_L grow, and the delay difference between them becomes conspicuous. Compared to the absolute priority-based algorithm, the delay gap between Pkt_H and Pkt_L under the proposed algorithm is relatively small. In the case of r_H = 7 and r_L = 3 shown in Figure 4a, when N_c is 500, the delay of Pkt_L is twice that of Pkt_H, whereas the delay of Preempt.Pkt_L is more than six times that of Preempt.Pkt_H. The delay of Pkt_H increases compared to Preempt.Pkt_H, but the delay of Pkt_L decreases far more relative to Preempt.Pkt_L. In particular, when r_H = 9, r_L = 1, and N_c = 500 in Figure 4b, the delay of Pkt_H increases by approximately 500 time slots compared to Preempt.Pkt_H, but the delay of Pkt_L decreases by approximately 3000 time slots compared to Preempt.Pkt_L, which is a noticeable improvement. The average of the summed delays of Pkt_H and Pkt_L is reduced by 20% compared to that of Preempt.Pkt_H and Preempt.Pkt_L. This means that, compared to the absolute priority-based algorithm, the proposed algorithm achieves higher performance. Moreover, the proposed algorithm can achieve the same delay performance as Preempt.Pkt_H by throttling Pkt_L, i.e., with r_H = 10 and r_L = 0. When α = 0.5, the number of Pkt_H to be transmitted increases, and the delay of Pkt_H at the same N_c increases compared to the case of α = 0.3. Over the whole range of N_c, the delay of Pkt_H in Figure 4b is higher than that in Figure 4a. In addition, the delay of Pkt_H with r_H = 7 in Figure 4a is similar to that with r_H = 9 in Figure 4b.
In Figures 2 and 4, the higher r_H is, the better the throughput and delay performance of Pkt_H. The decrease in r_L caused by an increased r_H leads to worse throughput and delay for Pkt_L. The larger the difference between r_H and r_L, the larger the performance gap between Pkt_H and Pkt_L in both throughput and delay. This confirms that Pkt_H and Pkt_L are flexibly adjusted by the weight factor values in various network situations.

In Figure 5, the proposed scheduling scheme is compared with DRAND, LocalVoting, and Q-MAC. Q-MAC was developed for CSMA/CA, where packets with a high weight value have a relatively high probability of accessing the channel. For comparison, Q-MAC was modified to be applicable to TDMA: the slots of Q-MAC are initialized according to the weight values, and the inter-node reallocation of LocalVoting follows. As shown in Figure 5a, the delay of Pkt_H is better than under both DRAND and LocalVoting, and slightly worse than Q-MAC with Pkt_H. Even Pkt_L performs better than DRAND and only slightly worse than LocalVoting. Specifically, the delay of DRAND is twice that of Pkt_L and four times that of Pkt_H. LocalVoting performs better than DRAND through its neighbor-aware load balancing; however, the proposed scheme with Pkt_H still outperforms LocalVoting, with a delay 1.8 times lower. In Figure 5b, the average delay of the proposed scheme shows the best performance, while Q-MAC and LocalVoting perform similarly to each other. In Figure 5c, the throughput of the proposed scheme with Pkt_H is lower than that of Q-MAC with Pkt_H; however, the throughput of the proposed scheme with Pkt_L is higher than that of Q-MAC with Pkt_L. Note that the throughput of LocalVoting in Figure 5c is the sum over its Pkt_H and Pkt_L. In Figure 5d, the proposed scheme achieves the highest throughput. Figure 5b,d thus confirm the excellent slot-allocation performance of the proposed scheme, which achieves the highest throughput and the lowest delay.
Figure 6 compares Jain's fairness [20] of Pkt_H and Pkt_L with and without the proposed algorithm. In this figure, Jain's fairness index computed over Γ^{(A,i)} shows how fairly resources are allocated among the queues of the same importance. Γ^{(A,i)} is the ratio of the accumulated number of packets transmitted from a queue to the number of packets accumulated in that queue up to time T, which can be expressed as Equation (6):

Γ^{(A,i)} = Σ_{t ≤ T} p_t^{(A,i)} / Σ_{t ≤ T} q_t^{(A,i)}. (6)

As in the throughput measurement, all packet delivery is completed by the end of the simulation; accordingly, Jain's fairness is calculated at time T. In this analysis, α = 0.3 and r_H = 7, r_L = 3 are considered. When the number of connections is small, the fairness index is high regardless of whether the proposed algorithm is adopted, because Γ^{(A,i)} of most nodes is close to 1. For the absolute priority-based algorithm, as the number of connections increases, only a few nodes are allocated slots for Preempt.Pkt_L. Because most nodes cannot transmit Preempt.Pkt_L, the fairness of Preempt.Pkt_L is very low. In contrast, when the intra-node slot reallocation of the proposed algorithm is adopted, time slots proportional to r_L are allocated to Q_L, which increases the fairness index. As a result, the fairness performance of Pkt_L increases significantly relative to that of Pkt_H when the intra-node slot exchange algorithm is applied.
Figure 7 shows the delay and throughput performance in the second scenario with an increasing Poisson arrival rate λ. In Figure 7a,b, because α = 0.5 is applied, the numbers of Pkt_H and Pkt_L are similar. Although the connection creation interval and the number of packets generated per connection vary, Figure 7 shows performance similar to that of the first scenario. The larger the difference between r_H and r_L, the greater the performance gap between Pkt_H and Pkt_L. For instance, in Figure 7a, when the arrival rate is 0.01 time-units⁻¹ and the weight factors are r_H = 7 and r_L = 3, the Pkt_L delay is approximately 1.5 times the Pkt_H delay; when the weight factors are r_H = 9 and r_L = 1, the Pkt_L delay is over twice the Pkt_H delay. When the arrival rate is low, the connection creation interval is long, and the number of connections created during the entire simulation is small.
As shown in Figure 7a,b, when the arrival rates are as low as 0.001 and 0.002 time-units⁻¹, there is only a slight difference in delay and throughput between Pkt_H and Pkt_L regardless of the weight factor setting. Figure 7c shows the throughput when the number of Pkt_L is larger than that of Pkt_H, obtained by setting α = 0.3. The result of Figure 7c is very similar to that of Figure 2a for N_c between 100 and 500. In particular, if r_H·α = r_L·(1 − α) is satisfied by setting r_H = 7 and r_L = 3, the throughputs of Pkt_H and Pkt_L converge to a constant value. Note, however, that α is set to 0.3; i.e., 70% of the generated packets are Pkt_L and the remaining 30% are Pkt_H. Even in this asymmetric packet-generation scenario, Pkt_H achieves a higher throughput than Pkt_L, which clearly shows that service differentiation between Pkt_H and Pkt_L is attained.

Conclusions

In this paper, a novel distributed node scheduling algorithm for ad-hoc networks was proposed. This scheme flexibly adjusts time slot allocations according to the weight factor and the traffic load. Thorough simulation studies under various environments validated the performance differentiation that reflects the weight factor setting. It was confirmed that, as the weight of the high-importance packets increases, their delay decreases while their throughput increases. Because the proposed algorithm considers both the weight factors and the traffic loads, the throughput and delay for the same weight factors can be adjusted separately according to connection creation ratios with different importance. Through comparison with other distributed node scheduling algorithms, the advantages of the proposed algorithm were validated: it supports load balancing with neighboring nodes and preferential processing of important data. In addition, compared to the conventional absolute priority-based algorithm, the proposed algorithm improves throughput, delay, and fairness for low-importance packets. Moreover, the performance comparison with the other scheduling schemes confirms the excellent performance of the proposed scheme, which achieves the highest throughput and the lowest delay. These results verify that both service differentiation and performance improvement can be achieved through an appropriate weight factor setting.
Estimation of Ask and Bid Prices for Geometric Asian Options

Abstract: Traditional derivative pricing theories usually focus on the risk-neutral price or the equilibrium price. However, in highly competitive financial markets, we observe two prices, called the bid and ask prices, so the unique risk-neutral price fails to hold. In this paper, within the framework of conic finance, we provide a useful approach to evaluating the ask and bid prices of geometric Asian options and obtain explicit formulas for them. Finally, numerical examples show that the higher the market liquidity parameter γ, the wider the spread and hence the lower the liquidity.

Introduction

Asian options give the holder a payoff that depends on the average price of the underlying over some prescribed period. This averaging of the underlying brings two significant advantages: it reduces the risk of manipulating the underlying asset, and it costs less than standard American and European options (see Wilmott [1], chap. 25). Asian options are actively traded in both exchanges and over-the-counter markets, and within the Black-Scholes framework the study of exotic options has attracted the attention of many scholars. Although there are many methods for computing option prices, such as the PDE method, the martingale method, Monte Carlo simulation, and binomial trees, a fully satisfactory method has not been found yet; the most important reason is that the real market involves many sources of uncertainty. In traditional financial mathematics, the foundations of option pricing theory are built on the paradigm of frictionless and competitive markets. However, in the real market, risk elimination is typically unattainable. Furthermore, we observe two prices: one for buying from the market, called the ask price, and another for selling to the market, called the bid price. Hence, in the real market, we can no longer rely on a unique risk-neutral price (the law of one price, or equilibrium price).

There is a diversity of theoretical approaches to estimating ask and bid prices. Barles and Soner [2], Cvitanić and Karatzas [3], Constantinides [4], Lo et al. [5], and Jouini and Kallal [6] studied spreads arising from the transaction costs of trading in liquid markets. Easley and O'Hara [7] and Han and Shino [8] study price formation in securities markets. Copeland and Galai [9] discuss the effects of information on the bid-ask spread. Glosten and Milgrom [10] focus on the effects of heterogeneously informed traders on market makers. In [11][12][13][14], researchers examined the inventory costs and order processing of liquidity providers. In [15][16][17][18], statistical studies are used to model the bid-ask spread. However, these models are not effective enough to explain the magnitude of the spreads observed in the markets. A new theory was built up by Cherny and Madan [19,20], referred to as the conic finance theory. In the conic finance framework, the market acts as a passive counterparty to all transactions, buying at the ask price and selling at the bid price. The spread between the bid and ask prices is a measure of illiquidity.
Although there are a number of studies based on conic finance theory, they focus on credit risk [21,22], portfolio design [23,24], and the hedging of financial and insurance risks [25,26]. To the best of our knowledge, there is no study on the valuation of ask and bid prices for geometric Asian options. In this paper, within the framework of conic finance, we derive explicit formulas for the ask and bid prices of geometric Asian options.

The paper is organized as follows. In Section 2 we introduce the risk-neutral price of the geometric Asian option. Within the framework of conic finance, Section 3 is devoted to estimating the bid-ask prices of geometric Asian options, for which we obtain explicit formulas. In Section 4, we present numerical results for the bid-ask prices. Finally, we finish the paper with concluding remarks in the last section.

Geometric Asian Option under the Law of One Price

In this section, we start with a brief description of the geometric Asian option model presented in [27]. On a probability space (Ω, F, P), Kemna and Vorst [27] set up the pricing model in which the underlying asset follows the geometric Brownian motion

dS_t = r S_t dt + σ S_t dW_t, (1)

where r is the risk-free rate and σ is the volatility; these parameters are assumed to be constant. Let T be the maturity date and [t_0, T] the final time interval over which the average value of the stock is calculated. Let V(S, t) be the option price with underlying price S, maturity T, and strike price K. The price of the geometric Asian call option under the risk-neutral measure P at time t may then be represented as

V(S, t) = e^{−r(T−t)} E^P[(G_T − K)^+ | F_t], (2)

where G_T is the geometric average of the underlying asset prices up to the maturity T. Kemna and Vorst [27] introduce the process

G_t = exp( (1/(t − t_0)) ∫_{t_0}^{t} ln S_u du ) (3)

to represent the geometric average of the underlying asset until time t. In the discrete case, Equation (3) can be written as

G_t = ( Π_{i=1}^{N} S_{t_i} )^{1/N}. (4)

In both the continuous case (3) and the discrete case (4), the variable G_T is log-normally distributed, so its expectation and variance may be calculated explicitly. For the continuous case (3), taking t_0 = 0, the log-normal distribution is

ln G_T ~ N(μ_G, σ_G²), with μ_G = ln S_0 + (r − σ²/2) T/2 and σ_G² = σ² T/3. (5)

The price of the geometric Asian call option at time 0 is then given as

V = e^{−rT} ( e^{μ_G + σ_G²/2} Φ(d_1) − K Φ(d_2) ), (6)

where

d_2 = (μ_G − ln K)/σ_G, d_1 = d_2 + σ_G, (7)

and Φ denotes the standard normal cumulative distribution function.
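As a numerical companion to formula (6), the following Python sketch evaluates the closed-form price of a geometric Asian call with averaging over [0, T]. It is an illustration of the formulas above, not code from the paper; the parameter values mirror the setting later used in the numerical examples (r = 8%, σ = 20%, T = 3 months) but are otherwise placeholders.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def geometric_asian_call(S0, K, r, sigma, T):
    """Closed-form risk-neutral price of a geometric Asian call, Eq. (6)."""
    mu_g = log(S0) + 0.5 * (r - 0.5 * sigma**2) * T   # mean of ln G_T
    sig_g = sigma * sqrt(T / 3.0)                     # std dev of ln G_T
    d2 = (mu_g - log(K)) / sig_g
    d1 = d2 + sig_g
    return exp(-r * T) * (exp(mu_g + 0.5 * sig_g**2) * norm.cdf(d1)
                          - K * norm.cdf(d2))

print(geometric_asian_call(S0=100.0, K=100.0, r=0.08, sigma=0.20, T=0.25))
```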
Estimation of the Bid-Ask Price Formulas

In this section, within the framework of conic finance, we derive the explicit formulas for the bid-ask prices of geometric Asian options. We first present a brief description of conic finance theory and then present our main result in the next subsection.

3.1. Conic Finance Theory. Conic finance is a recent quantitative finance theory, which originates from the work of Cherny and Madan [20] and Madan and Cherny [19]. The key to the foundations of conic finance is the underlying concept of acceptable risks in the economy. The market is modeled as a counterparty accepting any nonnegative stochastic cash flow that has an acceptability level γ. The theory assumes that the price depends on the direction of trade, so that there are two prices: one for buying from the market, called the ask price a(X), and one for selling to the market, called the bid price b(X). The difference between the two prices gives rise to the bid-ask spread observed in financial markets.

Let L^∞ := L^∞(Ω, F, P) be the space of all essentially bounded random variables. Madan and Cherny [19] derive these bid and ask prices from the theory of acceptability indices (see [20]), which are functions α : L^∞ → [0, ∞]. In particular, they call a net cash flow, or trade, X ∈ L^∞ acceptable at an acceptability level γ if and only if α(X) ≥ γ. Suppose that the market maker sells a cash flow X, for which, driven by competition, he charges a minimal price a(X). Nevertheless, the remaining cash flow a(X) − X ought to be acceptable at level γ; this price is therefore the ask price of X, and the minimal such price is given by

a(X) = sup_{Q ∈ D_γ} E^Q[X], (8)

where the family of sets of probability measures (D_γ)_{γ≥0} consists of measures equivalent to the initial probability measure P. When the market maker buys X for a price b(X), it is X − b(X) that must be acceptable at level γ, and the maximal such price is

b(X) = inf_{Q ∈ D_γ} E^Q[X]. (9)

As proposed by Madan and Cherny [20], a parametric family of distortion functions can be used to formulate an operational index of acceptability. The index α(X) is characterized as

α(X) = sup{ γ ≥ 0 : X is acceptable at level γ }, (10)

or, in terms of the distortion expectation,

α(X) = sup{ γ ≥ 0 : ∫_{−∞}^{∞} x dΨ^γ(F_X(x)) ≥ 0 }, (11)

where X is a stochastic variable, F_X(x) is the distribution function of X, and (Ψ^γ)_{γ≥0} is a pointwise increasing family of concave distortion functions.

Because the distortion function plays a crucial part in obtaining explicit bid and ask prices, Cherny and Madan [20] list a series of potential distortion functions. The following definitions recall some particular distortion functions that have been used extensively in the literature.

Definition 1 (distortion function). A function Ψ : [0, 1] → [0, 1] that is nondecreasing and satisfies Ψ(0) = 0 and Ψ(1) = 1 is called a distortion function.

Definition 2 (Wang transform [28,29]). Let Φ denote the standard normal cumulative distribution function and let γ be a nonnegative constant. Then the distortion function

Ψ^γ(u) = Φ( Φ^{−1}(u) + γ ) (12)

is called the Wang transform.

Definition 3 (the Maxminvar distortion function [20]). The concave distortion function is given by

Ψ^γ(u) = ( 1 − (1 − u)^{1+γ} )^{1/(1+γ)}. (13)

Definition 4 (the Minmaxvar distortion function [20]). The concave distortion function is given by

Ψ^γ(u) = 1 − ( 1 − u^{1/(1+γ)} )^{1+γ}. (14)

From a family of concave distortion functions (Ψ^γ)_{γ≥0} and the properties of the distortion expectation (11), Cherny and Madan [19] obtain the following formulas for the bid-ask prices. The minimal acceptable selling price leads to the ask price,

a_γ(X) = −∫_{−∞}^{∞} x dΨ^γ(F_{−X}(x)), (16)

and, analogously, the maximal acceptable buying price leads to the bid price,

b_γ(X) = ∫_{−∞}^{∞} x dΨ^γ(F_X(x)); (17)

combining the two, we obtain

a_γ(X) = −b_γ(−X). (18)

Under a nonadditive probability, using the Choquet expectation introduced by Choquet in [30], the bid-ask prices (16)-(18) may also be presented as in the following definition.

Definition 5 (single-period bid-ask prices [19]). Let (Ψ^γ)_{γ≥0} be a pointwise increasing family of concave distortion functions and let γ be the market liquidity level. Then the bid price of a cash flow X ∈ L^∞ with maturity T is given by

b_γ(X) = e^{−rT} ∫_{−∞}^{∞} x dΨ^γ(F_X(x)), (19)

and its ask price is

a_γ(X) = −e^{−rT} ∫_{−∞}^{∞} x dΨ^γ(F_{−X}(x)). (20)

In particular, for γ = 0, the bid and ask prices are equivalent and reduce to the regular undistorted price under the risk-neutral probability measure. In addition, we again have the relation a_γ(X) = −b_γ(−X) of (18).
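Definition 5 can be evaluated numerically for any simulated payoff. The sketch below uses the standard rewriting of the Choquet integral for a nonnegative cash flow X, bid = e^{−rT} ∫_0^∞ (1 − Ψ^γ(F_X(x))) dx and ask = e^{−rT} ∫_0^∞ Ψ^γ(1 − F_X(x)) dx, together with the Wang transform of Definition 2. The grid size, sample size, and clipping bounds are arbitrary implementation choices, not the paper's procedure.

```python
import numpy as np
from scipy.stats import norm

def wang(u, gamma):
    """Wang-transform distortion: Psi_gamma(u) = Phi(Phi^{-1}(u) + gamma)."""
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    return norm.cdf(norm.ppf(u) + gamma)

def _trap(f, x):
    """Trapezoidal rule, written out for portability across NumPy versions."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def bid_ask(payoffs, gamma, r, T, grid=4000):
    """Numerical Choquet bid/ask prices of a nonnegative payoff sample."""
    sample = np.sort(np.asarray(payoffs, dtype=float))
    x = np.linspace(0.0, sample[-1] * 1.01 + 1e-9, grid)
    F = np.searchsorted(sample, x, side="right") / sample.size  # empirical CDF
    disc = np.exp(-r * T)
    bid = disc * _trap(1.0 - wang(F, gamma), x)
    ask = disc * _trap(wang(1.0 - F, gamma), x)   # uses a(X) = -b(-X)
    return bid, ask

# Placeholder check against the setting of Section 4 (r = 8%, sigma = 20%, T = 3 months):
rng = np.random.default_rng(0)
mu_g = np.log(100.0) + 0.5 * (0.08 - 0.5 * 0.20**2) * 0.25
sig_g = 0.20 * np.sqrt(0.25 / 3.0)
g = np.exp(rng.normal(mu_g, sig_g, size=200_000))   # simulated geometric averages
print(bid_ask(np.maximum(g - 100.0, 0.0), gamma=0.5, r=0.08, T=0.25))
```

For γ = 0 the two returned values coincide with the risk-neutral price, and they spread apart monotonically as γ grows, which is the behavior reported in the numerical examples.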
3.2. Bid-Ask Formulas for Geometric Asian Options. In this subsection, we give our main result. To evaluate the explicit formulas for the bid-ask prices of geometric Asian options, we use the distortion function based on the Wang transform from Definition 2. Furthermore, by using the Choquet expectation in Definition 5, we derive explicit bid-ask price formulas for the geometric Asian call and put options. The following theorem shows our main results.

Theorem 6. Assume that the distortion function Ψ^γ is the Wang transform and that, conditional on F_t, ln G_T is normally distributed with mean μ and standard deviation σ_G. Then the bid-ask prices of the geometric Asian call option at time t are given by

b_γ^c = e^{−r(T−t)} ( e^{μ − γσ_G + σ_G²/2} Φ(d_1^b) − K Φ(d_2^b) ),
a_γ^c = e^{−r(T−t)} ( e^{μ + γσ_G + σ_G²/2} Φ(d_1^a) − K Φ(d_2^a) ),

and the bid-ask prices of the geometric Asian put option at time t are given by

b_γ^p = e^{−r(T−t)} ( K Φ(−d_2^a) − e^{μ + γσ_G + σ_G²/2} Φ(−d_1^a) ),
a_γ^p = e^{−r(T−t)} ( K Φ(−d_2^b) − e^{μ − γσ_G + σ_G²/2} Φ(−d_1^b) ),

where

d_2^b = (μ − γσ_G − ln K)/σ_G, d_1^b = d_2^b + σ_G, d_2^a = (μ + γσ_G − ln K)/σ_G, d_1^a = d_2^a + σ_G.

In particular, for γ = 0, the bid-ask prices are equivalent, and the bid-ask prices of the geometric Asian call option reduce to formula (6).

Proof. Let the payoff of the geometric Asian call option be X = (G_T − K)^+, with G_T the geometric average of the underlying asset price up to the maturity T. The continuous case of G_T is defined by (3), and (5) shows that the random variable G_T has a lognormal distribution; this means that

F_{G_T}(x) = Φ( (ln x − μ)/σ_G ), (27)

where Φ is the standard normal cumulative distribution function. Now, by using the Choquet expectation in Definition 5, we can derive the bid price of the geometric Asian call option:

b_γ^c = e^{−r(T−t)} ∫_0^∞ ( 1 − Ψ^γ(F_X(x)) ) dx = e^{−r(T−t)} ( I_1 − K I_2 ). (28)

If we apply the Wang transform (12) to the distribution function F_{G_T}, we get the representation

Ψ^γ( F_{G_T}(x) ) = Φ( (ln x − (μ − γσ_G))/σ_G ), (29)

which is again a lognormal distribution function, now with location parameter μ − γσ_G. And, if Y ∼ Lognormal(m, s²) with cumulative distribution function F(y), then we can obtain

∫_K^∞ y dF(y) = e^{m + s²/2} Φ( (m + s² − ln K)/s ). (30)

By using (29) and (30), we can calculate the integral I_1 in (28). It is shown that

I_1 = e^{μ − γσ_G + σ_G²/2} Φ(d_1^b), (31)

and the second integral I_2 in (28) can be calculated as

I_2 = Φ(d_2^b). (32)

Substituting (31) and (32) into (28) and multiplying by the discount factor e^{−r(T−t)}, we get the following expression for the bid price:

b_γ^c = e^{−r(T−t)} ( e^{μ − γσ_G + σ_G²/2} Φ(d_1^b) − K Φ(d_2^b) ). (33)

Now, by using the Choquet expectation in Definition 5, we can derive the ask price of the geometric Asian call option:

a_γ^c = e^{−r(T−t)} ∫_0^∞ Ψ^γ( 1 − F_X(x) ) dx = e^{−r(T−t)} ( J_1 − K J_2 ). (34)

Similarly to the way in which we obtained the bid price, from (27), (30), and the Wang transform (12), we get the first integral in (34) as

J_1 = e^{μ + γσ_G + σ_G²/2} Φ(d_1^a), (35)

and the second integral can be calculated as

J_2 = Φ(d_2^a). (36)

By combining J_1 and J_2 and applying the continuous discount factor e^{−r(T−t)}, we have the ask price:

a_γ^c = e^{−r(T−t)} ( e^{μ + γσ_G + σ_G²/2} Φ(d_1^a) − K Φ(d_2^a) ). (37)

In addition, we derive the bid-ask prices of the geometric Asian put option. Let X = (K − G_T)^+; by utilizing the Choquet expectation in Definition 5 and the transformations (29) and (30), the bid price of the geometric Asian put option is

b_γ^p = e^{−r(T−t)} ∫_0^K ( 1 − Ψ^γ(F_X(x)) ) dx, (38)

where

F_X(x) = 1 − F_{G_T}(K − x) for 0 ≤ x < K. (39)

Substituting the results of the two integrals in (38) and multiplying by the discount factor e^{−r(T−t)}, we get the bid price of the put option as

b_γ^p = e^{−r(T−t)} ( K Φ(−d_2^a) − e^{μ + γσ_G + σ_G²/2} Φ(−d_1^a) ). (40)

Finally, by the same method that we used in the calculation of the ask price of the geometric Asian call option, we can get the formula of the ask price of the geometric Asian put option:

a_γ^p = e^{−r(T−t)} ( K Φ(−d_2^b) − e^{μ − γσ_G + σ_G²/2} Φ(−d_1^b) ). (41)

This completes the proof of Theorem 6.

Numerical Examples

In this section, we present numerical results for the geometric Asian option pricing model proposed in this paper. Assume a risk-free interest rate of 8% per annum, a stock price volatility of 20%, and a geometric Asian option with 3 months to expiry (i.e., T = 3/12). We show the bid-ask prices of the geometric Asian option for different values of the market liquidity parameter γ; they are displayed in Table 1 and Figure 1. Table 1 provides the numerical results for the bid-ask prices of geometric Asian put and call options. For γ = 0, the ask and bid prices are equivalent and reduce to the analytic expression (6) presented by Kemna and Vorst [27]. Figure 1 plots the bid-ask spread of the geometric Asian put and call options at different values of the static market liquidity parameter γ. The spread between the bid and ask prices is a measure of illiquidity. The nonnegative parameter γ gives an indication of the market's liquidity: the higher the γ, the wider the spread and hence the lower the liquidity.

Conclusion

In this paper, within the framework of conic finance, we proposed a useful approach to evaluating the ask and bid prices of geometric Asian options and obtained explicit formulas for them. Finally, using the explicit formulas for geometric Asian options, we examined the impact of the static market liquidity parameter γ on the bid-ask prices.
A metabolomic study of Gomphrena agrestis in Brazilian Cerrado suggests drought-adaptive strategies on metabolism

Abstract: Drought is the main factor that limits the distribution and productivity of plant species. In the Brazilian Cerrado, the vegetation is adapted to a seasonal climate with long- and short-term periods of drought. To analyze the metabolic strategies under such conditions, a metabolomic approach was used to characterize Gomphrena agrestis Mart. (Amaranthaceae), a native species that grows under natural conditions in a rock-field area. Root and leaf material from native specimens was sampled across different seasons of the year and analyzed by LC-MS and GC-MS for multiple chemical constituents. The datasets derived from the different measurements were combined and evaluated using multivariate analysis. Principal component analysis was used to obtain an overview of the samples and identify outliers. Later, the data were analyzed with orthogonal projection to latent structures discriminant analysis to obtain valid models that could explain the metabolite variations in the different seasons. Two hundred and eighty metabolites were annotated, generating a unique database with which to characterize the metabolic strategies used to cope with the effects of drought. The accumulation of fructans in the thickened roots is consistent with the storage of carbon during the rainy season to support the energy demand during a long period of drought. The accumulation of abscisic acid, sugars and sugar alcohols, phenolics, and pigments in the leaves suggests physiological adaptations. To cope with long-term drought, the data suggest that tissue water status and the storage of reserves are important to support plant survival and regrowth. During short-term drought, however, osmoregulation and oxidative protection seem to be essential, probably to support the maintenance of active photosynthesis.

Results

Water status of the soil and plants. The rainfall and soil moisture measurements for the sampling period are summarized in Table 1. From September to November (2013) and February to July (2014), great variations in rainfall were observed in the region, with the highest index in April (75 mm of rain). In February, even in the middle of the rainy period, a reduction in rainfall was detected (13 mm). Historical data for the region show lower precipitation rates in February (INMET, 2017), leading to a short dry period during the rainy season (this period is popularly called "Veranico"). In July, which is part of the dry season, no precipitation was recorded. Soil moisture was positively correlated (p < 0.05) with the rainfall recorded for the period and ranged from 0.4 to 17.3% (Fig. 1). Interestingly, during the rainy season (from November to April), a decrease in the percentage of soil moisture in February matches the Veranico period. In the months comprising the dry periods, soil moisture was 0.4% in early September 2013 and 1.05% in July 2014.

Phenology. The predominant phenological stages annotated for the species are compiled and represented in Fig. 2. At the end of the dry period, in early September, the plants were in a dormant state, showing senescent floral branches and older leaves. In the DR period in October, the plants entered the sprouting state, with the emission of new branches and leaves. In the Veranico period in February, inflorescences with reddish floral parts were observed. In the rainy period in April, the fruiting phase was characterized by the presence of inflorescences with senescent and paleaceous floral parts.
The fruiting phase lasted until May, in the RD period, and the senescence of the floral branches was remarkable. In July, at the beginning of the dry period, the plants entered dormancy. In the dormant state, the younger leaves on the branches of the plants were still green.

Abscisic acid content. In Fig. 3, the relative water content (RWC), abscisic acid (ABA) content, and Sm are summarized as averages of the samples of D, the dry period; DR, the transition from dry to rainy period; V, the Veranico period; R, the rainy period; and RD, the transition from rainy to dry period. No correlation was observed between Sm and the leaf and root RWC values of the plants. Nevertheless, in the Veranico period, the RWC of the leaves was slightly lower when there was a decrease in Sm (Fig. 3A). The abscisic acid (ABA) content in the roots and leaves was measured because of its important role in plant responses to drought and other abiotic stresses (Fig. 3B). The general pattern of ABA accumulation was similar in the leaves and roots; however, the ABA content was significantly higher in the leaves (Tukey's test, p < 0.05) in the V period than in the DR and R periods.

Metabolic profiling. To obtain information on the metabolic phenotypes of G. agrestis grown under natural conditions in a rock-field area of the Cerrado in the dry and rainy seasons, we performed a multi-metabolomic analysis of the leaves and roots. The datasets derived from the different measurements were combined and evaluated using multivariate analysis. Principal component analysis (PCA) was used to obtain an overview of the samples and identify outliers (data not shown). Later, the data were analyzed with orthogonal projection to latent structures discriminant analysis (OPLS-DA) to obtain valid models that could explain the metabolite variations in the different seasons. From the generated loading plots, the metabolites were filtered by variable importance in the projection (VIP; cutoff ≥ 1) and listed.
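The multivariate workflow just described can be approximated with open-source tools, as in the Python sketch below. Several assumptions are made explicit here: the study used OPLS-DA (commonly run in dedicated chemometrics software), whereas scikit-learn provides plain PLS, so PLS-DA is used as a stand-in; the VIP formula is the standard one; and the data matrix X and class labels y are random placeholders rather than the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Standard VIP formula for a fitted PLS model."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = X.shape[1]
    s = np.diag(t.T @ t @ q.T @ q)            # y-variance explained per component
    wnorm = w / np.linalg.norm(w, axis=0)     # column-normalized weights
    return np.sqrt(p * (wnorm**2 @ s) / s.sum())

X = np.random.rand(36, 280)                   # placeholder: samples x metabolites
y = np.repeat([0, 1], 18)                     # placeholder: two seasonal classes

scores = PCA(n_components=2).fit_transform(X)       # overview / outlier check
pls = PLSRegression(n_components=2).fit(X, y)       # PLS-DA stand-in for OPLS-DA
selected = np.where(vip_scores(pls, X) >= 1.0)[0]   # VIP cutoff >= 1
print(len(selected), "metabolites retained")
```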
On the basis of the selected VIP metabolites, OPLS-DA score plots of the leaves and roots are shown in Fig. 4. To validate the strategy, several pair-wise OPLS-DA models were calculated between the groups using the selected metabolites (Supplementary Table 2). Because all comparisons resulted in valid models, the distinct metabolites were clustered, and the heatmaps (Fig. 5A,B) show a global metabolite profile of the leaves and roots of G. agrestis in the different seasons. The metabolites derived from the untargeted approach with GC-MS and LC-MS analyses were systematically annotated (Fig. 5A,B; as described in the "Material and methods"). The combination of targeted and untargeted approaches allowed significant coverage of all major metabolite classes that could characterize the plant's adaptation to the environmental changes, resulting in the annotation of 280 metabolites (Supplementary Tables 3, 4, 5, 6, 7 and 8); 215 were detected in the leaves and 195 in the roots.

Figure legend (sampling design): Design of sampling and sample grouping for the metabolomic analysis. The groups are as follows: D, dry period (12 samples, red color); DR, transition from dry to rainy period (18 samples, blue); V, veranico, a short dry period during the rainy period (6 samples, yellow); R, rainy period (12 samples, green); and RD, transition from rainy to dry period (18 samples, brown). Below, the months of the year and the number of samples in the corresponding month are given. Soil moisture was used to select the samples for metabolomics.

For better interpretation, the annotated metabolites were clustered using a non-hierarchical analysis according to their metabolite class (Fig. 6). As expected, different metabolite patterns were observed in the leaves and roots of G. agrestis. The analysis of the leaves showed a unique pattern for the Veranico samples, in which accumulation of organic acids, sugars, galactolipids, phenolics, and chlorophyll degradation products was observed. Interestingly, contrasting accumulation of some sugars, phosphocholine (PC), triacylglycerolipids (TAG), and galactolipids (MGDG and DGDG) was verified between Veranico and RD. Accumulation of amino acids and pigments (chlorophylls and ketocarotenoids) was observed in the leaves during the transition from the drought to the rainy season. In the roots, a distinct fructan pattern was observed: plants grown during the Veranico and rainy periods accumulated fructans containing fewer than 10 units of fructose, and plants grown during the dry and DR periods accumulated more complex fructans. In general, higher levels of amino acids were observed in the DR and R seasons.

Drought-discriminating metabolites. The impact of water availability in the soil on the leaves and roots of G. agrestis grown in the rainy, dry, and Veranico seasons is shown in Figs. 7 and 8 and Supplementary Tables 9 and 10. In general, changes in the metabolism of sugars (especially fructans), lipids, amino acids, and phenolics were observed. In the roots, the accumulation of complex fructans and of TG (53:3) was pronounced during the dry season, in contrast to the reduced levels of fructans containing small numbers of fructose units and of the amino acids GABA, phenylalanine, tyrosine, and valine during the rainy (R) season. During Veranico, however, accumulation of the sugars lactose, sorbitol, and mannitol, the fructans 2_DP and 3_DP, the lipid TG (55:4), and the phenolic 3,4-dihydroxybenzoic acid was observed, together with a decrease in the amino acids valine, phenylalanine, tryptophan, and glycine betaine. The metabolism of sugars, lipids, phenolics, and pigments was affected in the leaves in the dry, Veranico, and rainy seasons (Fig. 8), with pronounced accumulation of galactolipids, xanthophylls, chlorophyll intermediates, and several classes of phenolics, such as phenylpropanoids, benzoic acids, and flavonoids, in Veranico. Although the dry and Veranico seasons were both characterized by low water availability in the soil, the metabolism of G. agrestis responded differently in each.
agrestis allowed us to obtain a unique metabolite fingerprint of plants growing under natural conditions in Cerrado's dry (D: July and September), dry-rainy (DR: end of September and October), Veranico (V: February), rainy (R: March and April), and rainy-dry (RD: May and June) seasons. The results provide several hints about how this endemic specie can tolerate such drastic changes in the soil water availability throughout the year. The drought stress on the plants occur when either the water supply to their roots becomes limited or when the transpiration rate becomes too intense and causes a system imbalance 1 . In both situations, the stress starts to affect the water functions, culminating in reduced growth. Here, the leaves and roots of G. agrestis showed reduced RWC during Veranico, resulting in an increase in ABA content (Fig. 3B). The results suggest that G. agrestis controls the stomatal movement as a strategy to keep the tissues hydrated or maintain the CO 2 supply for photosynthesis, as observed in many other species [32][33][34][35][36] . The maintenance of open stomata can lead to a greater loss of water and consequently dehydration. However, if the soil is sufficiently moist or if the moisture is recovered soon, it is easy for the plant to replenish water and maintain both photosynthesis www.nature.com/scientificreports/ and the growth rate. Because of the physical characteristics of the rock-field soil (shallow and sandy) and lower precipitation rates during the Veranico period, the soil moisture decreases quickly and probably limits the water availability for the plants (Fig. 3C). ABA has an important role under stress conditions, especially during drought 6,37,38 . In the present study, we found a pronounced increase in ABA levels in the leaves during Veranico. Generally, the root system is affected to the greatest extent when there is water scarcity or the availability of water is inconsistent 39 . The fact that the ABA levels were not significant in the roots suggest that Gomphrena plants have other alternatives to compensate for the drought. Fructans in the tissues may act as osmotic solutes to maintain the water status of tissues 28 . Our results support their suggestion because we found increased levels of fructans containing up to 8 units of fructose during the Veranico season (Fig. 7). Similar results were obtained by 40 , who reported no changes in the RWC in the roots of Gomphrena marginata (also growing in a rock field) during the dry period, suggesting that accumulation of fructans could result in osmoregulation. Similar results were found in Vernonia herbacea, another local species present in the rock fields of Cerrado 29,41 . The dry season (during which the soil moisture levels were lower; Fig. 3C) was characterized by the accumulation of more complex fructans (Fig. 7). In general, complex fructans (containing up to 22 fructose units) were higher during the dry and DR periods. Fructans with lower DP were predominant during the Veranico and rainy periods (Figs. 6B and 7). These contrasting patterns are consistent with the involvement of fructans in drought strategies: complex fructans represent a carbon source that supports initial growth or regrowth during the beginning of the rainy season 26,40 . Simple fructans in the rainy and Veranico periods can be explained by the water-favorable condition for synthesis and turnover of the energy metabolism in the rainy season or a strategy to support osmoregulation during the Veranico. 
The accumulation of sugar alcohols such as arabitol and ribitol in the leaves during the dry season (Fig. 8) suggests their involvement in the non-photochemical quenching during drought 42 . However, other simple sugars (e.g., glucose, fructose) may also play a role in drought tolerance in plants by reducing the effects of osmotic stress, maintaining turgor, stabilizing cell membranes, and protecting plants from damage 43 . The level of xylose was also high in the dry season, and it is a component of cell wall metabolism and suggested to be involved in drought stress through cell wall modification 35 . www.nature.com/scientificreports/ Accumulation of phenolic metabolites during the dry and Veranico seasons was observed. The phenylpropanoid pathway was more pronounced during Veranico, resulting in the accumulation of sinapinic acid, sinapyl alcohol, 1-O-(4-coumaroyl)-B-d-glucoside, feruloyl-glucoside, and caffeoylshikimate as well as different flavonoids, which is in contrast to the accumulation of benzoic acid derivatives like vanillic acid and 1-galloyl b-d-glucose during the dry season (Fig. 8). Such accumulation of phenolic metabolites in both seasons might be related to plant protection against oxidative damage that occurs during these two periods of low water availability 21,45 . The production of reactive oxygen species may be the most important secondary effect of drought and can result in chloroplast membrane damage. We observed the accumulation of several galactolipids during Veranico. These lipids are major components of the photosynthetic apparatus 46 . Pigments like carotenes can act as antioxidants and energy quenchers 47 . Carotenes and xanthophylls, as well as chlorophyll metabolites like chlorophyllide, pheophorbide and pheophytin, were accumulated in the plants during Veranico (Fig. 8). Pheophytin is involved in the process of electron transfer in PS II, working as a bridge of electrons between the chlorophyll P680 and plastoquinone 48,49 . Previous studies have investigated how this mechanism works and the function of pheophytin 50,51 . In this study, the increased levels of pheophytin may be due to induced chlorophyll degradation 52 , the stress, or a response that benefits the plant. However, no chlorophyll changes were observed in the plants collected during Veranico. Therefore, pheophytin accumulation may be a mechanism that helps in photosynthesis efficiency by either acting in the flux of electrons or protecting the system from damage 50,53 . The role of pheophytin in plants is not well understood and needs to be studied under stress conditions. It is important for a plant to adapt to yearly (long-term) and short-term changes in drought and other environmental factor to coordinate growth and stress-related responses. Stomatal closure acts by reducing the loss of water and maintaining the hydration state in the tissues, but it also limits CO 2 influx for photosynthesis. G. agrestis is a C4 plant 54 and therefore exhibits efficient photosynthetic metabolism under drought conditions. Probably the plant showed high photosynthesis rates in the Veranico period, even with stomatal control. This strategy is important under mild or short-term water stress conditions because the plant can sustain growth, which is primarily affected. The adaptions of metabolism, as observed in the present study, suggests strategies to maintain photosynthesis during the Veranico period. This is of great importance to G. agrestis because this period coincides with the flowering time (Fig. 
The minimum hydration necessary for survival, cell enlargement, or maintenance of metabolic activity is provided by regulated stomatal control and other associated strategies, such as osmoregulation.

Conclusions

In this study, we used a metabolomic approach to understand and describe the metabolic adaptation of a native species to seasonal changes in drought. We showed that fructans accumulate in the thickened roots, suggesting a metabolic pattern consistent with the storage of carbon during the water-favorable season to support the energy demand during the long period of drought and regrowth, as well as metabolic adjustments for osmoregulation. In the leaves, ABA, simple sugars, sugar alcohols, phenolics, and pigment metabolism indicate the importance of metabolic responses that likely act together to modulate general physiological adaptations such as stomatal control, photosynthesis, and protection from oxidative stress. The metabolic pattern in the Veranico period suggests that during short-term drought the maintenance of active photosynthesis seems to be more important, and that stomatal control, osmoregulation, and protection from oxidative damage may be the strategies used by the species.

Methods

Geographical location. The study was conducted in the Environmental Preservation Area "Serra do Resplandecente Encantado", a public area in the municipality of Itacambira, north of Minas Gerais State (16° 59′ 47″ S, 43° 20′ 01″ W), Brazil. In this area, which is part of the Espinhaço mountain range, rock-field formations are predominant.

Plant material and sampling. The study was conducted in accordance with relevant guidelines and Brazilian legislation 55.

Relative water content. To characterize the water status of the plants, 10 leaves and root fragments were collected from each plant and weighed to determine the fresh weight (FW). They were then immersed in distilled water for 6 h to determine the turgid weight (TW), followed by drying at 70 °C to determine the dry weight (DW). The relative water content (RWC) was estimated using the standard gravimetric equation: RWC (%) = ((FW − DW)/(TW − DW)) × 100.

Soil moisture. Soil moisture (Sm, %) was measured using the gravimetric method 57. In each field expedition, six soil samples were collected between 0 and 20 cm, which corresponds to the effective root depth. The soil samples were weighed to determine FW and then dried at 70 °C to measure DW. Sm (%) was determined using the following formula: Sm (%) = ((FW − DW)/DW) × 100.

Untargeted metabolite analysis with LC-QTOF MS. The flow-rate was 0.5 mL/min. The compounds were eluted with a linear gradient consisting of 0.1-10% B over 2 min; B was then increased to 99% over 5 min and held at 99% for 2 min; B was decreased to 0.1% over 0.3 min and the flow-rate was increased to 0.8 mL/min for 0.5 min; these conditions were held for 0.9 min, after which the flow-rate was reduced to 0.5 mL/min for 0.1 min before the next injection. The compounds were detected with an Agilent 6550 Q-TOF mass spectrometer equipped with a jet stream electrospray ion source operating in positive or negative ion mode. The settings were kept identical between the modes, with the exception of the capillary voltage. A reference interface was connected for accurate mass measurements; the reference ions purine (4 µM) and HP-0921 (hexakis(1H,1H,3H-tetrafluoropropoxy)phosphazine) (1 µM) were infused directly into the MS at a flow rate of 0.05 mL/min for internal calibration. The nebulizer pressure was 35 psig.
The sheath gas temperature was set to 350 °C and the sheath gas flow to 11 L/min. The capillary voltage was set to 4000 V in both positive and negative ion mode. The nozzle voltage was 300 V. The fragmentor voltage was 380 V, the skimmer 45 V, and the OCT 1 RF Vpp 750 V. The collision energy was set to 0 V. The m/z range was 70-1700, and data were collected in centroid mode with an acquisition rate of 4 scans/s (1977 transients/spectrum). For metabolite annotation, auto-MS/MS was performed on pooled QC samples at 3 different collision energies (10, 20, and 40 eV). The fructan compounds were detected with an Agilent 6550 Q-TOF mass spectrometer equipped with a Jet Stream electrospray ion source operating in positive and negative ion mode 60. The MS/MS spectra were obtained under the same conditions, with collision energies from 10 to 40 V.

Lipidomic analysis with LC-QTOF MS. The lipid analysis was performed in the positive ion mode 61. In brief, lipids were extracted with chloroform/methanol, and chromatographic separation was performed on an Acquity UPLC CSH C18 column (2.1 × 50 mm, 1.7 µm) in combination with a VanGuard precolumn (2.1 mm × 5 mm, 1.7 µm; Waters Corporation, Milford, MA, USA) held at 60 °C. The gradient elution buffers were A (60:40 acetonitrile:water, 10 mM ammonium formate, 0.1% formic acid) and B (89.1:10.5:0.4 2-propanol:acetonitrile:water, 10 mM ammonium formate, 0.1% formic acid), and the flow-rate was 0.5 mL/min. The compounds were detected with an Agilent 6550 Q-TOF mass spectrometer equipped with a jet stream electrospray ion source operating in positive ion mode. All mass spectrometer settings were as described for the untargeted LC-MS analysis. All generated files were processed using Profinder B.08.00 (Agilent Technologies).

Metabolite annotation. The metabolites were annotated by manual interpretation of the high-mass-accuracy fragments produced in the MS/MS experiments and/or by comparison with public (KEGG and PlantCyc) and in-house databases. Additional MS/MS networking (Global Natural Products Social Molecular Networking 62) was performed as a quality control to detect adduct masses that were not excluded during data processing. For the annotation of fructans, the degree of polymerization, i.e., the number of fructose units in the molecular structure, was used. Glycerolipid annotation was performed by comparison with in-house lipid spectral databases. The lipid classes were differentiated by the presence of the diagnostic fragments m/z 184.0733 (PC) and m/z 243.0945 (MGDG-Na+), or the neutral losses of 162.0528 (DGDG), 161.0450 (PI), and 141.0191 (PE). Spectral information for the phenolics and lipids is presented in Supplementary Tables 5 and 7.

Amino acid analysis with LC-QqQ MS. The extracts were derivatized with the Waters AccQ•Tag method, in accordance with the manufacturer's protocol. The analysis was performed using a 1290 Infinity UHPLC system from Agilent Technologies (Waldbronn, Germany) with a G4220A binary pump, a G1316C thermostated column compartment, and a G4226A autosampler with a G1330B autosampler thermostat, coupled to an Agilent 6490 triple quadrupole mass spectrometer equipped with a jet stream electrospray source operating in the positive ion mode 63.
The amino acid multiple-reaction-monitoring (MRM) transitions were optimized using MassHunter MS Optimizer software (Agilent Technologies Inc., Santa Clara, CA, USA), and the data were quantified using MassHunter Quantitation software B07.01 (Agilent Technologies); the amount of each amino acid was calculated on the basis of calibration curves.

ABA analysis with LC-QqQ MS. For the ABA analysis 64, the analytes were separated using a 1290 UHPLC system from Agilent Technologies (Waldbronn, Germany) with a G4220A binary pump, a G1316C thermostated column compartment, and a G4226A autosampler with thermostat. A 2 µL aliquot of the sample was injected onto a Waters column (HSS T3, C18; 2.1 × 50 mm, 1.7 µm) held at 40 °C in a column oven. The analysis was performed in multiple-reaction-monitoring (MRM) mode, and the fragmentation conditions were optimized using MassHunter MS Optimizer software (Agilent Technologies Inc., Santa Clara, CA, USA). The MRM scan monitored m/z 263 → 153 for ABA and m/z 269 → 159 for d6-ABA as quantifier transitions; the transitions m/z 263 → 219 for ABA and m/z 269 → 225 for d6-ABA were used as qualifiers. The data were quantified using MassHunter Quantitation software B07.01 (Agilent Technologies); the amount of ABA was calculated on the basis of a calibration curve constructed with d6-ABA (1 pg/µL) and ABA standards (from 0 to 10 pg).

Statistical analysis. The datasets generated from the different analyses were examined by multivariate statistical analysis in SIMCA-P 13 software (Umetrics AB, Umeå, Sweden). The samples were compared using PCA and OPLS-DA. Before the analysis, missing values were set to the mean value of each variable, and the data were mean-centered and scaled to unit variance. The samples were grouped according to the environmental characterization into five groups: 1, dry (D: 12 samples); 2, transition from dry to rainy (DR: 18 samples); 3, "Veranico", a short dry period during the rainy season (V: 6 samples); 4, rainy (R: 12 samples); and 5, transition from rainy to dry (RD: 18 samples). To identify the most important metabolites in the OPLS-DA models, the variable importance in projection (VIP) was used, and variables with VIP values greater than 1 were considered highly important 65. The OPLS-DA models were validated using the goodness-of-fit (R2) and predictive ability (Q2) parameters. Further statistical analysis and visualization (ANOVA, Tukey's test, t-test, Benjamini-Hochberg correction, and heatmaps) were performed using R software version 3.4.1 66.
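The study ran this workflow in SIMCA-P and R. Purely as an illustration of the preprocessing and screening steps it describes (mean imputation, unit-variance scaling, PCA, and BH-corrected univariate tests), here is a minimal Python sketch; the data matrix and all values in it are hypothetical stand-ins for the LC-MS feature table, not the study's data.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from statsmodels.stats.multitest import multipletests

# Hypothetical intensity matrix: 66 samples x 200 metabolites, 5% missing.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(66, 200))
X[rng.random(X.shape) < 0.05] = np.nan
groups = np.repeat(["D", "DR", "V", "R", "RD"], [12, 18, 6, 12, 18])

# 1) Replace missing values by the mean of each variable.
X = SimpleImputer(strategy="mean").fit_transform(X)

# 2) Mean-center and scale to unit variance (UV scaling).
X = StandardScaler().fit_transform(X)

# 3) Unsupervised overview with PCA (two components for a scores plot).
scores = PCA(n_components=2).fit_transform(X)

# 4) Univariate screening: one-way ANOVA per metabolite across the five
#    seasons, followed by Benjamini-Hochberg correction of the p-values.
pvals = np.array([
    stats.f_oneway(*(X[groups == g, j] for g in ["D", "DR", "V", "R", "RD"])).pvalue
    for j in range(X.shape[1])
])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites significant after BH correction")
```

The supervised OPLS-DA/VIP step is specific to SIMCA and is not reproduced here; the sketch only mirrors the model-independent preprocessing and the post-hoc testing named in the text.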
Evaluation of Collagen Alterations in Early Precursor Lesions of High Grade Serous Ovarian Cancer by Second Harmonic Generation Microscopy and Mass Spectrometry

Simple Summary

The collagen architecture in the extracellular matrix (ECM) is highly remodeled in high grade serous ovarian cancer (HGSOC). Many of these tumors begin in the fallopian tubes (FT) before metastasizing to the ovaries, and it is therefore important to study ECM alterations during carcinogenesis. Here, we used Second Harmonic Generation (SHG) microscopy to classify changes in the collagen fiber morphology in normal FT and in pure precursor p53 signatures and serous tubal intraepithelial carcinomas (STICs) in tissues with no HGSOC. Using a machine learning approach based on image features, we were able to discriminate the tissue groups with good classification accuracy. We additionally performed mass spectrometry analysis of normal and HGSOC tissues to associate the differential expression of collagen isoforms with fiber morphology alterations. This work provides new insights into ECM remodeling in early-stage HGSOC and suggests the combined use of SHG microscopy and mass spectrometry as a new diagnostic/prognostic approach.

Abstract

Background: The collagen architecture in high grade serous ovarian cancer (HGSOC) is highly remodeled compared to the normal ovary and the fallopian tubes (FT). We previously used Second Harmonic Generation (SHG) microscopy and machine learning to classify the changes in collagen fiber morphology occurring in serous tubal intraepithelial carcinoma (STIC) lesions that are concurrent with HGSOC. We now extend these studies to examine collagen remodeling in pure p53 signatures, STICs, and normal regions in tissues that have no concurrent HGSOC. This is an important distinction, as high-grade disease can result in distant collagen changes through a field effect mechanism. Methods: We trained a linear discriminant model based on SHG texture and image features as a classifier to discriminate the tissue groups. We additionally performed mass spectrometry analysis of normal and HGSOC tissues to associate the differential expression of collagen isoforms with collagen fiber morphology alterations. Results: We quantified the differences in the collagen architecture between normal tissue and the precursors with good classification accuracy. Through proteomic analysis, we identified the downregulation of single α-chains, including those for Col I and III, where these results are consistent with our previous SHG-based supramolecular analyses. Conclusion: This work provides new insights into ECM remodeling in early ovarian cancer and suggests the combined use of SHG microscopy and mass spectrometry as a new diagnostic/prognostic approach.

Introduction

High Grade Serous Ovarian Cancer (HGSOC) is a highly metastatic disease, defined genetically by mutations in the tumor suppressor genes TP53, BRCA1, and BRCA2, and by DNA copy number alterations [1]. While these mutations are well-documented, the associated effects in the tumor microenvironment (TME), especially in terms of remodeling of the extracellular matrix (ECM), have not been well-studied. Such modifications occur in essentially all epithelial cancers, and are important in HGSOC because this disease can metastasize while the lesions are smaller than the resolution of clinical imaging modalities (e.g., ultrasound, CT, MRI, and PET) [2-4]. Serum tests (CA125 and HE4) do not have sufficient specificity and sensitivity for reliable early diagnosis [5-7].
As a result of these factors, in more than 70% of patients, HGSOC is detected at an advanced stage when the treatment options are limited. We postulate that the development of efficacious imaging/screening modalities requires a more thorough understanding of the HGSOC microenvironment. For this purpose, we utilized the high-resolution, collagen-specific optical modality of Second Harmonic Generation (SHG) microscopy to probe all levels of collagen structure (molecular through fiber). Importantly, we previously developed machine learning algorithms to differentiate between normal and high-risk ovarian stromal tissues, as well as cancer sub-types, based on the 3D collagen fiber morphology patterns [8]. We were able to discriminate HGSOC and normal tissues with excellent classification accuracy (~95%) [9] and other sub-types with good accuracy (~85%). We have also documented sub-resolution macro/supramolecular changes (protein helix attributes) and fibril organization (size and packing) in the aberrant tissue classes [10,11]. Collectively, these studies showed that the collagen fibers are more aligned in HGSOC than in the corresponding normal tissues or other ovarian cancer sub-types, and that the underlying supramolecular and fibril structures are more disordered. These results are consistent with improperly synthesized new collagen and/or faster turnover of normal collagen I (Col I). As the majority of HGSOC cases originate through precursors in the fallopian tube (FT) secretory epithelium [12-15], it is also important to investigate the corresponding collagen alterations, as these can serve as biomarkers for early diagnosis of the disease. Both p53 signatures and serous tubal intraepithelial carcinomas (STICs) have been identified as early precursors of HGSOC in the FT [16]. Aside from a loss of cilia and outgrowth of secretory cells, p53 signatures are defined by their aberrant and intense p53 staining [15]. In contrast, STICs are associated with high p53 intensity and the acquisition of cellular changes, where the morphology becomes more disorganized. Malignant disease in the FT (and primary ovary) that develops from the STICs has additional morphological alterations along with a high proliferative index [1]. Similar to the ovarian TME itself, the corresponding changes in the FT microenvironment, especially in terms of the collagen architecture, are not well-known beyond standard hematoxylin and eosin (H&E) histology, which is not sensitive to detailed fibrillar features. We recently used SHG microscopy to investigate the collagen fiber structure in concurrent STIC and HGSOC fallopian tube tissues [17]. Interestingly, the collagen morphology in HGSOC resembled that occurring in the ovary itself, and using a multivariate analysis, excellent classification accuracy (~95%) was obtained relative to STICs and normal regions in the same tissue. However, a more modest accuracy (~75%) was obtained between normal and STIC regions, suggesting that more detailed analyses are required to define these collagen structural changes. Here, we extended these studies to pure p53 and STIC precursors to determine if early changes in the collagen fiber morphology are detectable along with the p53 molecular changes. It is also important to complete these studies in the absence of concurrent HGSOC, as these lesions can result in the transformation of distant collagen through a field effect mechanism [18-21] and obscure the morphologic changes associated only with the precursor states.
We also used mass spectrometry to examine the molecular changes that underlie remodeling, e.g., the up- or down-regulation of collagen isoforms. The correlation of differential isoform expression with changes in the collagen fiber morphology has not been previously investigated, and we suggest that this analysis can provide useful insights into remodeling in early disease progression.

Archived Human Tissues

In this retrospective study, archived fallopian tube and ovarian tissues from the University of Wisconsin Carbone Cancer Center Tissue Bank and the University of Wisconsin Department of Surgical Pathology were analyzed under an IRB-approved protocol (protocol #2019-0211). Flash-frozen normal (N = 3) and tumor fallopian tube tissues (N = 3) were analyzed via mass spectrometry, while archived fallopian tube tissues (N = 12) were analyzed via SHG microscopy. Table S1 provides additional information on the tissues evaluated.

Sample Processing, Histology, and Mapping of Precursor Lesions

The SEE-FIM protocol [12] was executed to identify fallopian tube samples with HGSOC and HGSOC precursors. Paraffin blocks of the cases with confirmed normal, p53 signatures, STICs, and HGSOC were serially sectioned to obtain 5-10 µm thick sections. The sections were stained with H&E to monitor the morphology and to confirm the presence of HGSOC and its precursor lesions. The slides were also immunostained for p53 using the DO-1 hybridoma (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Adjacent sections were retained as unstained slides for SHG imaging. The stained slides were examined by a trained pathologist (P.W.) to confirm the diagnosis. This pathological review was used to map the normal tissues, precursor lesions, and HGSOC in the unstained slides.

Sample Processing for Mass Spectrometry-Based Proteomic Analysis

Protein extraction and digestion. Each sample was dissolved in 1 mL of extraction buffer (4% sodium dodecyl sulfate, 50 mM Tris-HCl, pH 8) and sonicated using a probe sonicator (Thermo Fisher Scientific, San Jose, CA, USA). Protein extracts were reduced with 10 mM dithiothreitol (DTT) for 30 min at room temperature and alkylated with 50 mM iodoacetamide for another 30 min in the dark before quenching with DTT. The proteins were then precipitated with 80% (v/v) cold acetone (−20 °C) overnight. The samples were centrifuged at 14,000× g for 15 min, after which the supernatant was discarded. The pellets were rinsed with cold acetone and air-dried at room temperature. Urea (8 M) was added to dissolve the pellets, and 50 mM Tris buffer was used to dilute the samples to a urea concentration <1 M. On-pellet digestion was performed with LysC/trypsin (Promega, Madison, WI, USA) at a 50:1 ratio (protein:enzyme, w/w) at 37 °C overnight. The digestion was quenched with 1% trifluoroacetic acid, and the samples were desalted with Sep-Pak C18 cartridges (Waters, Milford, MA, USA). The peptide concentrations were measured by a peptide assay (Thermo Fisher Scientific, San Jose, CA, USA). Ten micrograms of peptide were aliquoted for each sample and dried in vacuo.

Liquid chromatography (LC)-tandem mass spectrometry analysis. The samples were analyzed on a Q-Exactive quadrupole Orbitrap mass spectrometer (Thermo Fisher Scientific, San Jose, CA, USA) coupled to a Waters nanoAcquity Ultra Performance LC.
Each sample was dissolved in 15 µL of 4% acetonitrile (ACN) and 0.1% formic acid (FA) in water before loading onto a 75 µm inner diameter home-made microcapillary column, which was packed with 15 cm of Bridged Ethylene Hybrid C18 particles (1.7 µm, 130 Å, Waters, Milford, MA, USA) and fabricated with an integrated emitter tip. Mobile phase A was composed of water with 0.1% FA, while mobile phase B was composed of ACN with 0.1% FA. LC separation was achieved across a 120-min gradient elution of 4% to 30% mobile phase B at a flow rate of 300 nL/min. Survey scans of peptide precursors from 300 to 1500 m/z were performed at a resolving power of 70,000 with an automatic gain control (AGC) target of 1 × 10 6 and a maximum injection time of 250 ms. The top 15 precursors were then selected for higher-energy collisional dissociation (HCD) fragmentation with a normalized collision energy of 30, an isolation width of 2.0 Da, a resolving power of 17,500, an AGC target of 1 × 10 5, a maximum injection time of 150 ms, and a lower mass limit of 120 m/z. The precursors were subject to dynamic exclusion for 45 s with a 10 ppm tolerance. Each sample was acquired in technical triplicates.

Data analysis. The raw files were searched against the UniProt Homo sapiens reviewed database (February 2020) using MaxQuant (version 1.5.2.8) [22], with trypsin/P selected as the enzyme and two missed cleavages allowed. Carbamidomethylation of cysteine residues (+57.02146 Da) was chosen as a fixed modification, and variable modifications included oxidation of methionine residues (+15.99492 Da), acetylation at the protein N-terminus (+42.01056 Da), and hydroxylation of proline residues (+15.99492 Da). The "LFQ quantification" and "Match between runs" features were enabled in MaxQuant. Search results were filtered to a 1% false discovery rate (FDR) at both the peptide and protein levels. Peptides that were found as reverse or potential contaminant hits were filtered out, and all other parameters were set to their defaults. ECM proteins were identified and classified by matching the results to the Human Matrisome Dataset [23]. Proteins were considered identifiable when detected in at least one sample and quantifiable when detected in at least two samples in each group. Missing intensities were replaced using the "replace missing values from normal distribution" feature in Perseus [24] (version 1.6.0.7) prior to further processing. Two-sample Student's t-tests with a two-tailed distribution for binary comparisons and hierarchical clustering analysis were conducted using Perseus. The volcano plot was generated using R packages. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository [25] with the dataset identifier PXD025864.

Second Harmonic Generation (SHG) Microscopy

The SHG laser scanning microscope used here has been described in detail previously, and only the salient features are given here [26]. The excitation source was a mode-locked titanium:sapphire laser (Mira; Coherent, Santa Clara, CA, USA) providing 890 nm excitation and coupled to an upright microscope (BX61; Olympus, Tokyo, Japan). Laser scanning and data acquisition were achieved through home-written LabVIEW code and a National Instruments FPGA (National Instruments, Austin, TX, USA). The SHG excitation used a 40× 0.8 NA water immersion lens (LUMPlanFL/IF; Olympus, Tokyo, Japan) and a 40× 0.9 NA condenser for collection of the forward propagating signal.
The lateral and axial resolutions of the system were approximately 0.7 and 2.5 µm, respectively, which is sufficient for resolving collagen fibers. The SHG emission has an associated directionality resulting from the sub-resolution fibril structure [27], and we acquired both the forward- and backward-propagating signals [28]. These respective components were collected using identical photon-counting detectors (7421 GaAsP; Hamamatsu, Hamamatsu City, Japan), with the backward detector in a non-descanned geometry. For each channel, the SHG wavelength (445 nm) was isolated with a dichroic mirror and a 10 nm FWHM bandpass filter (Semrock, Rochester, NY, USA). Circular polarization was used for imaging, as this state excites all fiber orientations equally. The polarization of the excitation laser was determined at the focus by imaging dye-labelled vesicles [26]. The collected images were 512 × 512 pixels with a field of view of 180 × 180 µm. The image acquisition time was 3 s per frame with three-frame Kalman averaging. A total of 81 image stacks were analyzed and utilized for classification across three tissue groups: distal normal (N = 37), p53-only signature (N = 7), and STIC lesion (N = 37). We also included a comparison with HGSOC tissues (N = 33) from confirmed cancer patients in some of the analyses.

Image Analysis

An initial set of features to be used for linear discriminant analysis (LDA) was generated from the outputs of Gray Level Co-occurrence Matrix (GLCM) analysis, two-dimensional fast Fourier transform (2D-FFT) methods, and the curvelet transform combined with the FIRE extraction algorithm (CT-FIRE) [29]. FIJI with the Texture Analyzer plugin was used to calculate five GLCM texture features associated with the similarities and differences between adjacent pixels: Angular Second Moment (ASM), Entropy, Inverse Difference Moment (IDM), Contrast, and Correlation. The 2D-FFT analysis was performed in FIJI using the Radial Profile Extended and Oval Profile Plot plugins to characterize the alignment and radial exponential decay of the image power spectrum. We previously described the radial- and azimuthal-averaging procedures used in this approach [17]. All curve fitting of these data was done in Origin 2018 (OriginLab, Northampton, MA, USA). CT-FIRE was utilized to perform the curvelet transform and fiber extraction to characterize individual fiber morphology features (fiber length, width, and straightness). As there was insufficient signal from the weaker backward channel for the CT-FIRE analysis, only the data readouts from the forward channel were analyzed. Lastly, the image coverage (packing coefficient) was quantified by creating a binary mask above a dynamic lower threshold and calculating the fraction of the resulting non-vanishing pixels. The methods and features used are summarized in Table 1. The SAS software (SAS Institute Inc., Cary, NC, USA) was used to reduce the full feature set to the most significant metrics via forward selection at a significance level of α = 0.35 (STEPDISC procedure). After the features were selected, the truncated dataset was input to a linear discriminant analysis (LDA) performed with singular value decomposition and N-weighted priors. For this portion, 37 crops were classed as normal, 6 as p53 signatures, and 37 as STIC.
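For readers who want to reproduce texture readouts of this kind outside FIJI, the following is a minimal Python sketch of the GLCM features and the packing coefficient using scikit-image; it is not the pipeline used in the study, and the test image, threshold, and GLCM offsets are hypothetical. IDM is taken here as GLCM homogeneity, and entropy is computed directly from the normalized co-occurrence matrix, since graycoprops does not expose it in older scikit-image versions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img8, distances=(1,), angles=(0,)):
    """GLCM texture features analogous to the FIJI Texture Analyzer outputs."""
    glcm = graycomatrix(img8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {name: float(graycoprops(glcm, name).mean())
             for name in ("ASM", "contrast", "correlation", "homogeneity")}
    # IDM is commonly equated with GLCM homogeneity.
    feats["IDM"] = feats.pop("homogeneity")
    # Entropy from the normalized co-occurrence matrix (first offset/angle).
    p = glcm[:, :, 0, 0]
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats

def packing_coefficient(img, threshold):
    """Fraction of pixels above an intensity threshold (collagen coverage)."""
    return float(np.mean(img > threshold))

# Hypothetical 8-bit SHG crop standing in for a 45 x 45 um ROI.
rng = np.random.default_rng(1)
crop = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(glcm_features(crop))
print(packing_coefficient(crop, threshold=30))
```

In practice the threshold for the packing coefficient would be set dynamically per image, as the text describes, rather than fixed as in this sketch.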
The accuracies and F1 scores of the trained model were calculated for each class; the latter quantity is the harmonic mean of precision (TP/(TP + FP)) and recall (TP/(TP + FN)), where TP, FP, and FN are true positives, false positives, and false negatives, respectively. The F1 score is an additional metric of classifier accuracy that accounts for class imbalance. Receiver Operating Characteristic (ROC) curves were generated under five-fold cross-validation to assess the binary classifier performance and to quantify the trade-off between the true positive rate (TPR) and false positive rate (FPR).

Mass Spectrometry Analysis

We explored the changes in ECM and ECM-related proteins, with a focus on collagen isoform expression, in human normal ovarian and tumor tissues via a mass spectrometry-based proteomics approach. In total, we identified 233 ECM proteins, of which 25 were single α-chains comprising several collagen isoforms. We observed numerous proteins that were found only in either tumor or normal samples (Figure S1a,b). For instance, COL8A1, COL8A2, and COL21A1 were detected only in normal tissues, whereas COL7A1 and MUC1 were identified exclusively in tumor samples (Table S2). Importantly, even with a low number of replicates, we were able to identify and quantify many differentially expressed (especially down-regulated) ECM proteins (Figure S1) in the tumor group. As shown in the cluster map in Figure 1, 15 single α-chains from different collagen isoforms were present in all samples, and statistical proteomic landscape differences were found between the two groups. Interestingly, we observed decreased expression levels of many of these chains in the tumor samples (e.g., those for COL1, COL3, COL5, COL6, COL12, and COL14). Moreover, differences in multiple single chains of the same collagen isoform were detected (e.g., COL1A1 and COL1A2), improving the confidence of our observations. These findings support the existence of unique matrisome features in each group, with larger intergroup differences than intragroup variations. This was also borne out by analysis of the principal components (not shown). A full list of identified and quantified proteins can be found in the supplemental spreadsheet.

SHG Imaging and Analysis

Locating and mapping pure p53 signatures and STIC lesions. To obtain tissues with p53 signatures and STIC lesions with no concurrent tumors, we focused on FT tissues obtained from gynecologic surgeries not related to HGSOC. An archival text-based survey of patients with STIC over a 5-year period (2013-2018) revealed 12 patient cases that met our criteria. Only 2 of the 12 had concurrent p53 signature lesions along with the STIC. None of these cases had any pathological characteristics of HGSOC. The low number of cases with pure p53 signature and STIC lesions is consistent with their reported low incidence in these cohorts. To confirm that these were pure precursors without the presence of cancer, routine histological stains (H&E and p53) were completed. Figure 2 provides an example of one such precursor along with an example of HGSOC. In addition to the normal H&E distribution, the weak p53 reactivity is consistent with the absence of cancer.
The fallopian tube tissue slices for SHG imaging were unstained and, therefore, posed a challenge to accurately identify the normal regions, p53 signatures, and STICs. This issue was addressed by H&E and p53 staining of an adjacent section of the same tissue; these slides were used as a template to manually map and score the normal areas and precursors on the slides used for SHG imaging. This workflow was completed for all FT samples and is outlined in Figure 3.

A trained gynecologic pathologist (P.W.) first identified the specific areas of normal tissue (green rectangle), p53 signature (red rectangle), and STIC (blue rectangle) (Figure 3a). For orientation purposes, bright field images at 4×, 20×, and 40× were also taken at each spot (not shown). The correlated unstained tissue (Figure 3b) was mapped according to the designated areas and imaged. The SHG images of collagen in areas corresponding to normal tissue, the p53 signature, and STIC are shown in Figure 3c, followed by the corresponding CT-FIRE and 2D-FFT analysis, which is quantified in Figure 4.

Our previous study indicated that a collagen coverage of less than 70% significantly altered the accuracy of the image analysis techniques, and many of these regions had sparser coverage. As a solution, representative image stacks for each channel were duplicated and cropped to a 45 × 45 micron field of view (FOV), and we selected regions of interest (ROIs) with sufficient coverage. The p53 signatures and STIC lesions were small in spatial extent, localized in their respective tissues, and yielded z-stacks comprising 10 or fewer optical sections. Cropped images of each group (normal, N = 37; p53 signature, N = 6; STIC, N = 37) were analyzed using the image analysis protocols from our previous study [17].

Texture and other image features.
For our analysis, we included features from the GLCM, 2D-FFT methods, and CT-FIRE, where these techniques were applied to both the forward and backward channels. Unlike fluorescence, where the emission is isotropic, SHG has an emission directionality that is related to the underlying structure [27]. Specifically, smaller and more disorganized features can appear in the backward channel, where these are often obscured in the forward-collected signal. We have shown through the structural similarity index that these images are sufficiently different to justify the inclusion of both signal pathways as independent features [17]. However, the backward signal is intrinsically weaker, and insufficient signal was present for use in the CT-FIRE analysis.

The data from all these analyses are summarized in Figure 4. Although we are specifically focused on discrimination between the distal normal and precursor (p53 and STIC) regions, we included HGSOC as a point of comparison. In good agreement with a previous study [17], we found that images from HGSOC were associated with higher entropy and correlation, as well as lower contrast, with respect to the other groups (Figure 4a). In this context, lower contrast refers to the similarity of pairwise pixels rather than low signal, and is also consistent with high correlation. However, we did not find any significant differences in the GLCM metrics between either of the precursors and the distal normal regions.
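As a rough illustration of the 2D-FFT readout discussed next (the study used the FIJI Radial Profile Extended and Oval Profile Plot plugins), the sketch below computes a radially averaged power spectrum with NumPy; fitting an exponential to its decay yields a "time constant" style metric for feature size. The input image here is a hypothetical placeholder, not study data.

```python
import numpy as np

def fft_radial_profile(img):
    """Radially averaged 2D power spectrum of an image.

    A slowly decaying tail indicates a greater relative occurrence of
    high-frequency (smaller) features; azimuthal profiles of the same
    spectrum can be used to quantify fiber alignment.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Mean power within integer-radius annuli.
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(3)
img = rng.normal(size=(128, 128))          # hypothetical SHG crop
profile = fft_radial_profile(img)
print(profile[:10])
```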
2D-FFT methods were able to distinguish HGSOC as having higher alignment in the forward and backward channels, where both readouts of this metric should trend in the same direction. There was also a greater relative occurrence of high-frequency (smaller) features in HGSOC, which is indicated by the higher time constant in its radial power spectrum (Figure 4b). Conversely, the STIC and p53 signature groups were characterized by lower alignment in the forward channel, although this was not significantly different. Lastly, while HGSOC was associated with straighter fibers, no differences were found between the two precursors and normal tissue (Figure 4c).

Linear Discriminant Analysis (LDA). While none of the individual metrics from Figure 4 showed differences between the two precursors, we can attempt to obtain discrimination through the development of a linear discriminant (LD) model. This process can provide improved classification even if the individual components are not themselves statistically different, and we have used this process previously [17,30]. Since the collagen morphology of HGSOC in the FT is markedly distinct and already characterized [8,17], we limited our discrimination analysis to the two precursors and the distal normal regions. In order to better differentiate between the precursor groups (and to prevent overfitting in the trained LD model), the feature space generated by the GLCM, 2D-FFT methods, CT-FIRE, and the packing coefficient was reduced via forward selection up to a significance level of α = 0.35 (SAS/STEPDISC procedure). The most significant variables for discriminating between these groups included the packing coefficient.

These forward-selected metrics were then used to train an LD model capable of distinguishing between the three tissue groups by a set of binary classifiers (One-vs-Rest, or OvR). Through this analysis, we were able to achieve accuracies and F1 scores between ~65 and 91% and 9.1-66.0, respectively (Table 2). In particular, we achieved good discrimination for the distal normal and STIC lesion groups, despite the low sample sizes (N = 37 each). The corresponding AUROCs for these classifiers were somewhat low (0.71 and 0.62 for the distal normal and STIC lesion groups, respectively; see Figure 6), but this may be improved upon by increasing the size of the training set.
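The study trained and validated its discriminant model in SAS. As a hedged illustration of the OvR evaluation described above (per-class accuracy, F1 as the harmonic mean of precision and recall, and AUROC under five-fold cross-validation), here is a scikit-learn sketch; the feature matrix and class proportions are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Hypothetical feature matrix: rows are image crops, columns stand in for
# the forward-selected SHG metrics (GLCM, 2D-FFT, CT-FIRE, packing coeff.).
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 6))
y = np.array(["normal"] * 37 + ["p53"] * 6 + ["STIC"] * 37)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
lda = LinearDiscriminantAnalysis()

# Cross-validated class probabilities; columns follow sorted class labels.
classes = np.unique(y)
proba = cross_val_predict(lda, X, y, cv=cv, method="predict_proba")
pred = classes[np.argmax(proba, axis=1)]

# One-vs-rest evaluation per class.
for i, c in enumerate(classes):
    y_bin = (y == c).astype(int)
    p_bin = (pred == c).astype(int)
    acc = float(np.mean(y_bin == p_bin))
    f1 = f1_score(y_bin, p_bin)                 # harmonic mean of P and R
    auroc = roc_auc_score(y_bin, proba[:, i])   # ranking quality vs. rest
    print(f"{c}: accuracy={acc:.2f}, F1={f1:.2f}, AUROC={auroc:.2f}")
```

With random features this sketch will hover near chance; its point is only to show how a high OvR accuracy can coexist with a very low F1 score when one class (here, p53 with N = 6) is severely under-represented, which is exactly the behavior reported in Table 2.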
Despite the high overall accuracy (~91%) for p53 signature classification, the corresponding F1 score of 9.1 indicates a low number of true positives. To overcome the limitation of low N for the p53 signature group, we trained a classifier to distinguish between the distal normal and precursor regions, where the p53 signature and STIC were aggregated into a more general precursor group. Through a similar model using slightly different metrics, we achieved a high accuracy and F1 score (74.7 and 77.8, respectively), as well as an AUROC of 0.68. The scatter plot and ROC curve for this model are included in the Supporting Information (see Figures S2 and S3).

Discussion

Using SHG microscopy, we have previously shown that there are significant changes in the collagen fibrillar morphology in HGSOC in the ovary itself, as well as in the fallopian tubes [8,17]. More subtle differences were observed in STIC regions that were co-existent in tissues with HGSOC. It is important to examine the collagen morphology in pure precursor tissues (p53 and STIC) to determine when characteristic collagen fiber alterations can be detected by SHG, as these could be used as unique diagnostic biomarkers of early-stage disease. It is further important to perform these investigations in tissues without HGSOC, as these lesions can induce collagen remodeling in distant regions through a field effect mechanism [19]. However, the acquisition of these pure precursors is clinically rare, and it took extensive time and effort to identify even the relatively low number of suitable banked tissues used here. Specifically, an experienced gyn/onc pathologist (P.W.) scanned patient cases from over a 5-year period for this study.

We also sought to determine whether there were differences in the collagen expression patterns between normal ovarian tissues and HGSOC, and further, whether these were related to the collagen morphology changes visualized by SHG microscopy. This is important because, while previous studies have suggested the up-regulation of several collagen isoforms in HGSOC (e.g., Col III and VI) [31,32], these studies used immunostaining and the results were not verified by quantitative molecular techniques. This is potentially problematic, as most available antibodies lack a high level of specificity for different isoforms (specifically Col I vs. Col III) [33] because the same epitope at the end of the helix is often tagged. As a consequence, the in vivo isoform composition in HGSOC is not yet definitively known, and this represents a large gap in knowledge.
We note that we have used self-assembled in vitro models of known composition [33,34] to show how the incorporation of Col III and Col V affects the fibrillar structure, but it has not yet been possible to create a direct link between collagen proteomic changes and fiber alterations in tumors. This is because SHG is not sensitive to non-collagen components, and more generally applicable techniques, such as mass spectrometry (MS), are required for this purpose. To begin making this connection, we utilized MS and found decreased expression of numerous single α-chains from several different collagen isoforms in HGSOC. Of particular note, the expression of the Col I and Col III chains was downregulated, where the latter is not consistent with previous immunostaining data [31]. This is likely due to the qualitative nature of immunostaining and the lack of specificity of the available Col III antibodies.

Interestingly, we previously used detailed SHG analysis to show that there were large structural changes in the α-chains; however, these were not consistent with an increase in Col III expression [11]. Moreover, the triple helical structure was found to be more disorganized in HGSOC. This disorganization was also validated by our wavelength-dependent optical scattering measurements, which probed size scales over the range of ~50 nm-1 µm [30]. Collectively, the MS data and our previous macro/supramolecular SHG analyses suggest there may be transcriptomic and/or post-translational modifications in HGSOC coinciding with characteristic, pronounced changes in the collagen fiber morphology.

We did not expand the proteomic analysis to the p53 and STIC precursors, as the tissue volumes were insufficient. However, given the current paradigm in HGSOC that the ovary is the metastatic site from the FT [16,35], the genetic modifications giving rise to different isoform expression are expected to be similar at both sites. Indeed, we showed that the collagen fiber morphology in HGSOC was highly similar in the FT and ovary [17], suggesting that similar proteomic modifications occur in both tissues. Given the congruence of the SHG and proteomic data suggesting differential collagen isoform expression in normal and HGSOC tissues, these observations further support the validity of performing the MS analysis on more readily available ovarian stromal tissues. We further suggest that the SHG analysis of the collagen in precursors is a true reflection of the biochemical alterations occurring during carcinogenesis and is an important surrogate measurement that allows for non-destructive analysis, preserving the rare precursors for IHC and transcriptomic analysis.

Based on our prior work [17], we expected that the collagen organization differences between the two precursors (p53 only and STICs) would be less pronounced relative to those of high-grade disease. However, our linear discriminant model still achieved good accuracies, F1 scores, and ROC curves, which were further improved by grouping the p53 signatures and STIC lesions together. Importantly, even with the small sample size, our analysis supports our hypothesis that collagen alterations in the FT occur prior to frank HGSOC. We note that, since the collagen alterations of early-stage disease are subtle, the classifier performance will be sensitive to under-sampling. We suggest that with a much larger specimen number, the performance could be improved to a standard that is suitable for future clinical applications.
The sample thickness and coverage, together with the limited collagen density leading to relatively weak SHG signal intensities, were major limitations of this study. We suggest that higher accuracies should be achievable on thicker sections (~50-100 µm), which are readily imaged by SHG. Another current limitation is that there is no clear definition of the biochemical factors that drive the change in the collagen structure in HGSOC or in precursor lesions. The differential expression of collagen isoforms, combined with specific post-translational processing, may contribute to the change in collagen structure. The successful implementation of mass spectrometry and transcriptomic analysis will be necessary to obtain a biochemical understanding of the changes occurring in the collagen and the surrounding ECM. For example, other markers, such as fibronectin, laminin, and secreted MMPs, have been suggested to have altered regulation in HGSOC and could be added to the analysis.

We foresee both long- and short-term applications based on our observations and methodology. For the former, it may be possible to construct an SHG laser-scanning micro-endoscope to be used in conjunction with laparoscopy or hysteroscopy [36,37]. Analogous fiber-optic-based scanning approaches are currently under development for other pathologies [38,39], and these should be adaptable for FT imaging. The scheme may be feasible for the in vivo detection of precursor lesions, especially since the majority of the significant variables for class discrimination came from the SHG backward channel, which would be the usable direction in an endoscopic configuration. In the shorter term, the ex vivo analysis of resected fallopian tube tissue from risk-reduction surgery can be used either as a pre-screen or to complement the histology, and also to identify MS-based proteomic correlations.

Conclusions

The collagen fiber morphology is highly remodeled in HGSOC in the ovary and the fallopian tubes, where the fibers become more aligned relative to normal tissues and other tumor sub-types. However, for clinical applications, it is important to investigate these alterations in the precursors (p53 and STICs) due to the early metastasis of HGSOC. Unfortunately, there is limited availability of pure precursor lesions without the presence of malignant tissue. Still, due to the high specificity and sensitivity to collagen morphology afforded by SHG microscopy, sufficient discrimination between distal normal regions and p53 and STIC precursors was attained in this limited study to demonstrate proof of concept. Moreover, mass spectrometry analyses showed concurrent proteomic changes in normal and HGSOC tissues, where many collagen single α-chains were downregulated. These results suggest that the combined use of MS proteomics and SHG microscopy analyses forms a basis for further in vivo and ex vivo explorations of HGSOC and its precursor lesions.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers13112794/s1, Figure S1: ECM alterations in ovarian tumor versus benign tissue revealed by mass spectrometry, Figure S2: Scatter plot of the distal normal and precursor groups with 95% confidence ellipses depicted, Figure S3: ROC curve and AUROC for distal normal versus precursor classification, Table S1: Summary information for all STIC tissues, Table S2: Differential expression of ECM proteins in normal ovary and HGSOC tumor samples, Supplemental Spreadsheet: All identified and quantified ECM proteins in normal and tumor tissues.

Author Contributions: K.L.G. performed data acquisition, completed image analysis and initial statistical analyses, and drafted the paper and figures; A.N.J. completed additional image and statistical analyses, drafted the paper, and provided statistical figures and feedback on data conclusions; Z.L. completed the mass spectrometry analysis of normal and tumor tissues, provided the transcriptomic heat maps and Table S2, and assisted in drafting the paper; E.C.R. developed the initial image and statistical analysis protocols; P.W. provided insight on fallopian tube biology and ovarian cancer pathology, accessed all samples for precursors, established mapping protocols, and provided pictures of histology for all cases; L.L. provided insight on mass spectrometry analyses and advises Z.L.; M.S.P. provided insight into ovarian cancer biology, drafted the manuscript, and advises K.L.G.; and P.J.C. was the project director, refined the manuscript, formerly advised E.C.R., currently co-advises K.L.G., and advises A.N.J. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement: Informed consent was waived, as the study presented no more than minimal risk and did not involve recruiting, consenting, or interacting with human subjects. All analyzed samples were previously collected and are prepared pathology slides. The use of these data does not affect the patients from whom they are derived.

Data Availability Statement: The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD025864.
Synthesis of bismuth oxyhalide (BiOBrzI(1−z)) solid solutions for photodegradation of methylene blue dye.

Background: The removal of textile wastes is a priority due to their mutagenic and carcinogenic properties. In this study, bismuth oxyhalide was used in the removal of methylene blue (MB), a common textile waste. The main objective of this study was to develop bismuth oxyhalide (BiOBrzI(1−z)) solid solutions and investigate their applicability in the photodegradation of MB under solar and ultraviolet (UV) light irradiation.

Methods: Bismuth oxyhalide (BiOBrzI(1−z)) (0 ≤ z ≤ 1) materials were successfully prepared through the hydrothermal method. Brunauer-Emmett-Teller (BET) analysis, transmission electron microscopy (TEM), X-ray diffraction (XRD), and scanning electron microscopy (SEM) were used to determine the surface area, microstructure, crystal structure, and morphology of the resultant products. The photocatalytic performance of the BiOBrzI(1−z) materials was examined through methylene blue (MB) degradation under UV light and solar irradiation.

Results: XRD showed that the BiOBrzI(1−z) materials crystallized into a tetragonal crystal structure, with the (102) peak shifting slightly to a lower diffraction angle with an increase in the amount of iodide (I−). The BiOBr0.6I0.4 material showed a point of zero charge of 5.29 and presented the highest photocatalytic activity in the removal of MB, with 99% and 88% efficiency under solar and UV irradiation, respectively. The kinetics studies of MB removal by the BiOBrzI(1−z) materials showed that the degradation process followed a nonlinear pseudo-first-order model, indicating that the removal of MB depends on the population of the adsorption sites. Trapping experiments confirmed that photogenerated holes (h+) and superoxide radicals (•O2−) are the key species responsible for the degradation of MB.

Conclusions: This study shows that bismuth oxyhalide materials are very active in the degradation of methylene blue dye using sunlight, and thus they have great potential for safeguarding public health and the environment from the standpoint of dye degradation. Moreover, the experimental results agree with the nonlinear fitting.

Introduction

The availability of clean water is key for human and environmental health. The rise in demand for dyed commodities such as plastics and textiles has led to an increase in the discharge of organic dyes into the environment and to the deterioration of water quality. Approximately 17-20% 1,2 of water pollution comes from the dyeing and textile industries. About 7 × 10⁵ tons 3-5 of organic dyes are used to manufacture 3 × 10⁷ tons of textiles annually 3. Textile dyeing also consumes large volumes of water. Moreover, approximately 30% (2 × 10⁵ tons) 4 of dyes are lost as waste during the manufacturing process, thereby finding their way into the environment and water bodies. Therefore, much attention has been directed in the recent past towards developing efficient treatment techniques for these organic dyes in wastewater. These techniques include conventional ones such as biological, physical, and chemical treatment 5. However, several limitations associated with these methods have been reported. First, the treated effluent often does not meet the set standards for parameters such as color and chemical oxygen demand (COD) due to the limited effectiveness of these methods in breaking down the dyes. Second, the wastewater often contains both inorganic and organic compounds, making it hard to treat by conventional methods 3,6-8.
Therefore, there is a need for a method that is effective, less costly, and environmentally friendly. Oxidative processes such as photocatalysis, the Fenton method, photolysis, sonolysis, sonocatalysis, sono-Fenton, photo-Fenton, and ozonolysis 6,9-17 have recently been explored. Among these, photocatalysis, which depends on in-situ photogenerated positively charged holes (h+), hydroxyl radicals (•OH), negatively charged electrons (e−), and superoxide radicals (•O2−), has been demonstrated to be promising in terms of cost, toxicity, recyclability, mild reaction conditions, ease of operation, efficiency, and high degradation ability 18-23. Photocatalysis is also a better option because it can potentially mineralize bio-recalcitrant compounds to carbon dioxide, water, and other inorganic substances, so that no waste is left for secondary disposal 8. Various semiconductor photocatalysts, such as metal oxides and sulfides, have been widely probed and applied in environmental remediation. Traditionally, TiO2 has been known to be effective in the treatment of wastewaters. However, titanium is costly and not readily available. Furthermore, TiO2 only absorbs in the ultraviolet (UV) region, which is about 3-5% of the solar spectrum, due to its wide bandgap of 3.2 eV 17,19,22. Instability and photo-corrosion are further drawbacks associated with metal-oxide semiconductors. Much of the inexpensive and abundant visible radiation energy is not harnessed in wastewater treatment with these types of metal oxide photocatalysts, because they are only active within the UV region, which constitutes a very small fraction of the solar insolation 18,24-27. Metal sulfides like CoS2, In2S3, CdS, and Sb2S3 have been studied and found to have suitably located conduction and valence bands and high sensitivity to visible light; however, they are costly, prone to photo-corrosion, and the heavy metals involved are toxic 2,19. To overcome these drawbacks, it is imperative to develop alternative photocatalytic materials that are more stable but less costly. Bismuth oxyhalides (BiOZ (Z = Cl, Br, I)) are a novel class of photocatalyst owing to their environmental friendliness, outstanding photocatalytic performance under both solar and UV irradiation, unique optical and electrical properties, unique layered crystal structure, high oxidation capacity, good chemical stability, and internal electric field effect (IEE) 19,23,28-30. BiOZ compounds adopt the tetragonal matlockite structure (space group P4/nmm), which consists of (Bi2O2)2+ layers interleaved with double slabs of halogen ions (Z = Cl, Br, I), crystallizing into a layered structure of stacked [Z-Bi-O-O-Bi-Z] slices. These slices are held together along the c-axis by weak van der Waals forces. Owing to this layered structure, bismuth oxyhalide compounds display unique optical, mechanical, and electrical qualities and have found application in fields including photocatalysis, organic synthesis, nitrogen fixation, and solar-driven H2 generation. Furthermore, the electric field that forms between the (Bi2O2)2+ and 2Z− layers enhances the separation of photoexcited e− and h+, thus improving the photocatalytic activity 30-32. Bismuth oxyhalides have been studied as single-phase photocatalysts, in composites with other photocatalytically active materials, and as quaternary alloys 12.
The yellow colour of BiOBr and the coral red colour of BiOI indicate that they strongly absorb light within the visible range; however, their performance is still low 8. Improving the photocatalytic abilities of BiOZ compounds is therefore necessary for practical applications. Several approaches have been shown to enhance the activity of these BiOZ photocatalysts in dye degradation, for instance doping with metals and/or non-metals, compositing BiOZ with other materials (for example TiO2, Ag, AgCl, AgBr, WO3, and AgPO4), noble metal deposition, and the synthesis of different heterojunctions 20-22. Zhang et al. synthesized nitrogen-doped graphene quantum dots/BiOZ (Z = Br, Cl) for the removal of rhodamine B (RhB) dye 14. Lee et al. prepared AgX (X = Cl, Br, I)/BiOZ for the removal of methyl orange, RhB, and MB 15. Qu et al. reported the treatment of methyl orange with TiO2/CQDs/BiOZ (Z = Cl, Br, I) heterostructures. It has been shown that the preparation of solid solutions generally enhances the activity of a catalyst. The development of a solid solution is acknowledged to change the bandgap, crystal structure, and electronic structure of a photocatalyst, which can impact its photocatalytic properties 16. Solid solutions such as BiOIzBr1-z, BiOClzI1-z, zBiOBr-(1-z)BiOI, BiOBrzI1-z, and BiO(ClBr)(1-z)/2Iz have been synthesized and have shown improved photocatalytic activity compared with the corresponding single components 19,23,29,31. These materials have been utilized in the remediation of wastewater containing different organic dyes. Gnayem et al. 20 prepared hierarchical nanostructured 3D flowerlike BiOClzBr1−z for RhB degradation under visible-light irradiation. Zhang et al. 16 fabricated BiOBrzI1−z nanoplates for the removal of RhB. Zhang et al. 21 synthesized BiOClzBr1-z nanoplate solid solutions for the removal of RhB under visible-light irradiation. Zhang et al. 22 synthesized BiOBrzI1-z solid solutions for the removal of various dyes from textile wastes. However, to the best of the authors' knowledge, the application of BiOBrzI(1-z) solid solutions in the treatment of methylene blue (MB) dye, and comparative studies of the photocatalytic performance of these materials under natural sunlight and UV light, have not been reported. Additionally, there is no report comparing the experimental kinetic data to both linear and nonlinear fitting using the Langmuir-Hinshelwood (L-H) kinetic model. Methylene blue dye is extensively used in the printing and textile dyeing industries owing to its intense blue colour, and it is thus a common pollutant in industrial wastewaters. The purpose of the present study is to enhance the absorption of visible light by BiOZ. Thus, for improved visible-light uptake, BiOBrzI(1-z) solid solutions with a suitable bandgap were prepared by varying the bromide and iodide contents from 0 to 1 in the compound. The photocatalytic activity of the BiOBrzI(1-z) solid solutions was evaluated by determining their efficiency in removing MB under solar and UV light exposure. Furthermore, the efficiency of the most active materials was optimized by careful adjustment of catalyst dosage, temperature, and pH, while the degradation pathway was examined by use of different scavengers.

Duration of the methodology
The preparation of the photocatalysts was done between Oct and Nov 2018. After the preparation, the degradation efficacy of the prepared samples was tested before characterization.
This was carried out in Jan 2019. The XRD, SEM, TEM, BET, and Raman characterization was carried out between Feb and Dec 2019. The photodegradation studies were carried out throughout 2019 and between Jan and May 2020. The XRD and Raman analyses were carried out at the Botswana International University of Science and Technology (BIUST). The SEM characterization was done at the Botswana Institute for Technology Research and Innovation (BITRI), while the TEM and BET analyses were done at the Global Change Institute, University of the Witwatersrand, South Africa. No adjustment was made to the scanning electron microscope micrographs. All the degradation experiments were carried out at BIUST. OriginPro 8.5.0 SR1 b161 software was used for data analysis and graphing; alternatively, R and Excel can be applied for the analysis.

Reagents
All reagents, including methylene blue (MB, Cat No. MB057), were acquired from Sigma-Aldrich. The dye working solutions used in this investigation were made by adding 10 mg of methylene blue to 1000 mL of distilled water (DI) to make the desired concentrations.

Synthesis of the BiOBrzI(1-z) solid solutions
Bismuth oxyhalide (BiOBrzI(1-z)) solid solutions were synthesized via the hydrothermal method at 160 ℃. Typically, a 0.4123 mM Bi(NO3)3·5H2O solution was prepared in CH3COOH. A solution mixture containing stoichiometric amounts of KI and NaBr was then added dropwise into the above Bi-based solution under continuous stirring. The resultant mixture was then put into a 23 mL Teflon-lined stainless steel autoclave (4749 PTFE A280AC Teflon, Parr Instrument Company) and heated at 160 ℃ in an oven for 24 h. When the reaction was complete, the sample was naturally cooled to room temperature, centrifuged (Heraeus Megafuge 40 centrifuge) six times at 6000 rpm to effectively separate the product, and washed six times with deionized water. The product obtained was then dried at 70 ℃ for 24 h and used in the photodegradation reactions. The molar ratios of Br− and I− were varied between 0 and 1, with z = 0.0, 0.2, 0.4, 0.6, 0.8, and 1.0 for the BiOBrzI(1-z) materials; the samples prepared are hereafter labelled BiOI, BiOBr0.2I0.8, BiOBr0.4I0.6, BiOBr0.6I0.4, BiOBr0.8I0.2, and BiOBr, respectively.

Point of zero charge (pHPZC) determination
The procedure used in this work was adopted from Tahira et al. 23. The pHPZC values of the BiOBrzI(1-z) samples were determined in 0.1 M NaNO3 solution at 298 K. Typically, 0.1 g of each sample was dispersed in NaNO3 solution (0.1 M, 30 mL) in separate reaction flasks. The initial pH of the mixtures was adjusted to 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11 using 0.1 M HNO3 and NaOH. Each reaction vessel was then agitated at 130 rpm in a shaker (Stuart orbital shaker SSL1) for 24 h. The final pH of the mixtures was determined with a Basic20 pH meter (Crison). A graph of the difference between the final and initial pH values (ΔpH) was then plotted against the initial pH values. The pHPZC was taken to be the initial pH at which ΔpH is 0.
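This read-out amounts to locating the zero crossing of the ΔpH-versus-initial-pH curve. A minimal Python sketch of the interpolation, assuming hypothetical drift readings (not the study's data, which are in its Extended data):

```python
import numpy as np

# Hypothetical (pH_initial, pH_final) pairs from a 24 h drift experiment.
ph_initial = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
ph_final   = np.array([2.6, 3.8, 4.9, 5.2, 5.4, 5.5, 6.9, 8.3, 9.6, 10.7])

delta_ph = ph_final - ph_initial  # ΔpH, plotted against the initial pH

# pH_PZC is the initial pH at which ΔpH crosses zero: find the first
# sign change and interpolate linearly between the bracketing points.
i = np.where(np.diff(np.sign(delta_ph)) != 0)[0][0]
x0, x1 = ph_initial[i], ph_initial[i + 1]
y0, y1 = delta_ph[i], delta_ph[i + 1]
ph_pzc = x0 - y0 * (x1 - x0) / (y1 - y0)
print(f"pH_PZC ≈ {ph_pzc:.2f}")  # ≈ 5.25 for these illustrative readings
```

With readings shaped like these, the crossing falls near pH 5.3, the same region as the values reported below for the BiOBrzI(1-z) samples.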
Characterization
The crystal structure of the synthesized materials was obtained by X-ray diffraction (XRD) at room temperature on a Bruker D8 Advance powder diffractometer with a Cu tube X-ray source and a LynxEye XE-T energy-dispersive strip detector. Cu-Kα radiation (λ = 1.54056 Å) was used, with the tube operated at 40 kV and 40 mA. The patterns were acquired from 5 to 90° 2θ with a step size of 0.02° and a counting time of 0.500 s per step. Raman spectra were acquired on a LabRAM HR800 Raman spectrometer, exciting the samples with a 532 nm laser. An FEI Tecnai G2 Spirit transmission electron microscope at an acceleration voltage of 120 kV and a scanning electron microscope (SEM, Gemini SEM 500, Carl Zeiss, 5.00 kX magnification) fitted with an energy-dispersive X-ray (EDX) detector were used to obtain the morphology and elemental composition of the BiOBrzI(1-z) solid solutions. Porosity and surface area investigations were performed by the Brunauer-Emmett-Teller technique (Micromeritics TriStar 3000 porosity analyzer). The samples were degassed at 150 ℃ for 5 h before analysis at liquid-nitrogen temperature (77.350 K). An ultraviolet-visible (UV-Vis) spectrophotometer (UV201, Shimadzu) was used to follow the photodegradation of MB.

Photodegradation studies
To test the photodegradation ability of the BiOBrzI(1-z) solid solutions, 50 mg of material was added to 50 mL of 10 mg L⁻¹ dye solution. Adsorption-desorption equilibrium between the photocatalyst and the dye solution was attained by stirring the mixture in the dark for 30 min; thereafter the photodegradation was carried out under solar irradiation and under a UV lamp operating at 0.16 A with a UV output at 254 nm (UVP UVG-54). The set-up for the photodegradation under the UV lamp and under solar irradiation is shown in Figure S1. During the reaction, 2 mL aliquots were drawn from the mixture at 30-minute intervals to monitor the degradation of MB. Separation of the photocatalyst from the dye solution was achieved by centrifuging the aliquot for 7 min at 13000 rpm. The concentration was monitored using the UV-Vis spectrophotometer within the 200-800 nm range; the absorption of MB was examined at its maximum wavelength of ~661 nm, according to the Beer-Lambert law. An experiment without a sample (blank) was also carried out under UV and solar irradiation as a control. The same procedure was followed for the two conditions (solar and UV light). The active species responsible for the photodegradation process were investigated by conducting scavenging experiments: tert-butanol (TBA), EDTA, silver nitrate (AgNO3), and p-benzoquinone (BQ) were applied to quench •OH, h+, e−, and •O2−, respectively.

Results and discussion
X-ray diffraction
Figure 1 displays the XRD patterns of BiOI, BiOBr0.2I0.8, BiOBr0.4I0.6, BiOBr0.6I0.4, BiOBr0.8I0.2, and BiOBr. All the diffraction peaks are sharp, showing efficacious crystallization of the BiOBrzI(1-z) materials synthesized via the hydrothermal method. The pure BiOBrzI(1-z) materials can be indexed to tetragonal BiOBr (PDF Card No. 04-002-3609) and BiOI (PDF Card No. 00-10-0445). Even though the compositions of the materials differ, they possess a similar tetragonal phase. The BiOBr peaks at 2θ = 11.0, 31.8, and 32.3° are assigned to the (001), (102), and (110) planes, respectively, while the BiOI peaks at 2θ = 29.6 and 31.7° are assigned to the (102) and (110) planes, respectively. There is an observable shift of the diffraction peaks to lower angles with an increase in I content in the BiOBrzI(1−z) compound. This observation was also reported by Xu et al. 19, who suggested that the shift results from the substitution of bromide ions (ionic radius 0.196 nm) by the larger iodide ions (ionic radius 0.216 nm), which leads to an expansion of the interlayer spacing.
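The link between a lower 2θ and a larger interlayer spacing follows directly from Bragg's law, nλ = 2d·sinθ. A short sketch converting the (102) positions quoted above into d-spacings (λ = 1.54056 Å for Cu-Kα; a first-order illustration, not part of the original analysis):

```python
import math

WAVELENGTH = 1.54056  # Å, Cu-Kα

def d_spacing(two_theta_deg: float) -> float:
    """Interplanar spacing from Bragg's law, n*λ = 2*d*sin(θ), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

# (102) reflections quoted in the text: BiOBr at 32.3°, BiOI at 29.6°.
for label, two_theta in [("BiOBr (102)", 32.3), ("BiOI (102)", 29.6)]:
    print(f"{label}: 2θ = {two_theta}°, d ≈ {d_spacing(two_theta):.3f} Å")
# BiOBr: d ≈ 2.77 Å; BiOI: d ≈ 3.02 Å. The lower BiOI angle gives the
# larger spacing, consistent with the larger I− ion expanding the lattice.
```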
The percentage difference between the ionic radii of the bromide and iodide ions is 9%, less than the maximum of 15% required for substitution to occur. Additionally, the unlimited formation of solid solutions between BiOI and BiOBr is further shown by the shift of the diffraction peaks to lower angles as the amount of iodine increases 25. The cell parameters also increased gradually with increasing I content (Extended data, Table S1 33) 18. Figure 1 shows a smooth transition from BiOBr to BiOI in the BiOBrzI(1-z) patterns, as observed before by Lei et al. 26. There are no peaks attributable to impurities, indicating that the samples consisted purely of the desired phases.

Raman spectroscopy
To further understand the structure of the as-prepared materials, Raman spectroscopy was used. Figure S2 (Extended data 33) presents the Raman spectra of the BiOBrzI(1-z) solid solutions prepared with varying amounts of Br− and I− ions. To show the variation in the structural features of the BiOBrzI(1-z) solid solutions, the Raman spectra were recorded from 25 to 500 cm⁻¹. Bismuth oxyhalide belongs to the tetragonal PbFCl-type structure with space group P4/nmm; hence the active Raman modes exhibited are A1g, Eg, and B1g. The BiOBr samples presented two major Raman peaks at 45 and 106 cm⁻¹, assigned to the first-order vibration of Bi-O and the A1g internal Bi-Br stretching mode, respectively 27. The stronger bands of BiOI at 78 and 145 cm⁻¹ are assigned to the A1g stretching mode of internal Bi-I bonds. It can also be observed that the peak at 45 cm⁻¹ diminishes while the peak at 106 cm⁻¹ shifts to a lower value of 78 cm⁻¹ as the amount of I− increases. The shifting of the peaks to lower Raman values agrees with the XRD results, which likewise indicate that as the amount of I− increases the (102) plane shifts to lower angles.

Morphological analysis by SEM and TEM
The surface morphologies of the BiOBrzI(1-z) solid solutions were examined using SEM, as shown in Figure 2(a-f). The samples display a plate-like morphology with interleaved nanoplates of varying sizes. The formation of the plate-like morphology can be attributed to the internal structure of bismuth oxyhalide (BiOZ, Z = Cl, Br, I), in which (Bi2O2)2+ layers are sandwiched between two slabs of halogen atoms, resulting in anisotropic growth of BiOZ along the c-axis and the formation of 2D structures. As shown in Figure 2a, pure BiOBr formed relatively large plates which, upon iodine doping, reduced in size and self-assembled into 3D flowers (Figure 2b,c). As the amount of bromide decreased and that of iodine increased towards pure BiOI, the size of the flakes increased, forming relatively large 2D structures akin to those of pure BiOBr. TEM images were obtained to further examine the morphology of BiOBr, BiOBr0.6I0.4, and BiOI observed by SEM. The TEM micrographs (Figure 2g,h,i) confirm the plate-like morphology observed by SEM for all the BiOBrzI(1-z) materials. The presence of Br, Bi, O, and I was confirmed by EDX examination, and the amounts of the elements detected agree with the BiOBrzI(1-z) formula (Extended data, Figure S3 33).

Surface area analysis
The textural properties of the BiOBr, BiOBr0.6I0.4, and BiOI materials were obtained using N2 adsorption-desorption analysis. The isotherms displayed in Figure S4a (Extended data 33) can be assigned to type III in the IUPAC classification.
In this type of isotherm, no recognizable monolayer is formed, owing to the relatively weak interaction between the adsorbent and the adsorbate, which leads to clustering of adsorbed molecules around the most favourable sites on the surface of a macroporous or non-porous solid. Compared with type II isotherms, the quantity of molecules adsorbed remains finite at the saturation pressure, p/p0 = 1 28. The BET surface areas of the as-prepared materials were 0.517, 3.249, and 1.890 m²/g for BiOBr, BiOBr0.6I0.4, and BiOI, respectively. The BiOBr sample consists of larger and thicker microplates, consistent with its much lower surface area. Figure S4b (Extended data 33) shows the Barrett-Joyner-Halenda (BJH) plots of the BiOBr, BiOBr0.6I0.4, and BiOI materials obtained from the desorption isotherms. The BJH plots confirmed that the corresponding macropore peaks of the pore-diameter distributions of BiOBr, BiOBr0.6I0.4, and BiOI extend up to 159, 170, and 153 nm, respectively. The BJH results show that the pore-diameter distributions of the materials are wide, which can be ascribed to the inter-crossing of the spaces between the microplate structures 29.

pHPZC and optical properties
The results are displayed in Figure S5 of the Extended data 33. The pHPZC (the pH at which the surface charge of a material is 0) of the BiOBr, BiOBr0.8I0.2, BiOBr0.6I0.4, BiOBr0.4I0.6, BiOBr0.2I0.8, and BiOI materials was found to be 5.27, 5.27, 5.29, 5.31, 5.38, and 5.39, respectively. At a pH lower than the pHPZC the material is positively charged, whereas at a pH higher than the pHPZC the material acquires a negative charge 30.

The photocatalytic performance of a semiconductor depends on its band structure. Figure 4 shows the UV-Vis spectra of the as-synthesized BiOBrzI(1-z) materials. When the iodine composition is increased from 0 to 1, there is a significant redshift of the absorption edge, from 478 nm for pure BiOBr to 632 nm for pure BiOI; the absorption edges of the BiOBrzI(1-z) materials of intermediate composition were found to be 546, 577, 591, and 611 nm for z = 0.8, 0.6, 0.4, and 0.2, respectively. The change is consistent with the colour of the materials, from white to coral red (Figure 3b). The spectra reveal that the absorption of BiOBr extends only slightly into the visible region, whereas the absorption edge of BiOI extends well into the visible region, indicating that it is highly responsive to visible light. The bandgap energy (Eg) values of the obtained BiOBrzI(1-z) materials were determined using Equation 1 below:

αhν = A(hν − Eg)^(n/2)    (1)

where α is the absorption coefficient, h is Planck's constant, ν is the frequency of light, A is a constant, Eg is the bandgap energy 8, and n depends on the transition characteristics of the semiconductor; for BiOZ, n is 4 owing to its indirect transition. The corresponding bandgap energies of the BiOBrzI(1-z) materials were calculated and the results are shown in Table 1. The bandgap energy of the materials could be tailored from 2.59 to 1.96 eV by reducing the value of z from 1 to 0, as shown in Table 1, demonstrating that doping I atoms into the BiOBr crystal reduces the bandgap and widens the absorption range of BiOBr. This improves the photocatalytic performance of BiOZ. The Mulliken electronegativity theory was used to calculate the conduction band (CB) and valence band (VB) edge potentials using Equations 2 and 3:

E_CB = χ − E_c − 0.5·Eg    (2)
E_VB = E_CB + Eg    (3)

where E_CB and E_VB are the CB and VB edge potentials, respectively.
E_c is the energy of free electrons on the hydrogen scale (4.5 eV) and Eg is the semiconductor's bandgap energy, while χ is the electronegativity of the semiconductor, equivalent to the geometric mean of the electronegativities of the constituent atoms. From the calculations based on the UV-Vis results, the CB and VB edges of the BiOBrzI(1-z) materials were approximated and the results are presented in Table 1 (Table 1: absorption thresholds, electronegativities, bandgap energies, and conduction band (CB) and valence band (VB) edges of the as-prepared BiOBrzI(1-z) materials). The gradual decrease of the VB edge potential from 2.98 to 2.42 eV with increasing iodine composition indicates a weaker oxidation ability but a stronger light-absorption capability 31. The generation of •O2− radicals depends on the CB edge potential: the more positive the CB, the more difficult it is to generate •O2− radicals. Moreover, increased iodide concentration lowers the VB level, which in turn reduces the bandgap energy and thereby facilitates photosensitization of the catalyst. Therefore, the highest photocatalytic performance of the BiOBr0.6I0.4 material, the most active material, is ascribed to a suitable band structure that attains an equilibrium between light-absorption capacity and redox power. Finally, the optimal composition of BiOBr0.6I0.4 promotes the photoactivity by minimizing the recombination of photoexcited electron-hole pairs.
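A sketch applying Equations 2 and 3 to the two end-members; the geometric-mean electronegativities χ(BiOBr) ≈ 6.17 eV and χ(BiOI) ≈ 5.94 eV are commonly used literature values assumed here, since the text does not state them explicitly:

```python
E_C = 4.5  # free-electron energy on the hydrogen scale, eV

def band_edges(chi: float, e_g: float) -> tuple[float, float]:
    """Mulliken estimate of the CB and VB edge potentials (eV vs NHE)."""
    e_cb = chi - E_C - 0.5 * e_g   # Equation 2
    e_vb = e_cb + e_g              # Equation 3
    return e_cb, e_vb

# Assumed electronegativities paired with the Table 1 bandgaps (eV).
for name, chi, e_g in [("BiOBr", 6.17, 2.59), ("BiOI", 5.94, 1.96)]:
    e_cb, e_vb = band_edges(chi, e_g)
    print(f"{name}: E_CB ≈ {e_cb:+.2f} eV, E_VB ≈ {e_vb:+.2f} eV")
# BiOBr: E_VB ≈ +2.97 eV and BiOI: E_VB ≈ +2.42 eV, matching the
# 2.98 to 2.42 eV trend reported in Table 1 within rounding.
```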
Adsorption of MB by BiOBrzI(1-z)
The adsorption ability of a photocatalyst is well known to play an important role in the photodegradation process. Figure S6 in the Extended data 33 displays the MB adsorption uptakes by the BiOBrzI(1-z) materials at constant concentration and catalyst dosage. The adsorption uptake values for the MB dye by BiOBr, BiOBr0.8I0.2, BiOBr0.6I0.4, BiOBr0.4I0.6, BiOBr0.2I0.8, and BiOI were 12, 17, 19, 16, 15, and 14%, respectively; the adsorption efficiency was thus in the order BiOBr0.6I0.4 > BiOBr0.8I0.2 > BiOBr0.4I0.6 > BiOBr0.2I0.8 > BiOI > BiOBr. Fitting of the adsorption data to adsorption isotherms using nonlinear model-fitting procedures (Extended data, Figure S7 33) indicated that the adsorption process followed the Langmuir model. The Langmuir constant and the maximum adsorption capacity of BiOBr0.6I0.4 were found to be 6.10×10⁻² L/mg and 31.5 mg/g, respectively, implying that at the highest concentration used in this study the adsorption sites were not saturated with the dye.

Kinetics, scavenging experiments, and recyclability
To broadly assess the photocatalytic ability of the as-prepared materials, the removal of MB was performed under solar and UV irradiation in the presence and absence of the photocatalyst materials. The photocatalyst-dye mixture was stirred in the dark for half an hour before illumination to attain adsorption-desorption equilibrium (Extended data, Figure S6 33). Figure 4a,b shows the removal of MB under solar and UV irradiation, respectively. Degradation in the absence of the photocatalyst (photolysis) under both solar and UV light showed a negligible change, indicating that MB is stable under both light sources; photolysis can therefore be neglected, and the removal of MB can be attributed to the action of solar and UV light in the presence of the photocatalysts. The photodegradation efficiency was calculated using Equation 4 below:

η (%) = (C0 − Ct)/C0 × 100    (4)

where C0 is the concentration of the MB dye at equilibrium and Ct is the concentration at time interval t.

The experimental data obtained under UV-light irradiation were fitted to the pseudo-first-order model, ln(C0/Ct) = κt, which was used to determine the pseudo-first-order rate constant; here κ and t denote the rate constant (min⁻¹) and the irradiation time, respectively, C0 is the initial concentration of the MB dye, and Ct is the concentration at time interval t. The resulting rate constants are presented in Table S3 of the Extended data 33. The pseudo-first-order kinetic model, derived from the Langmuir-Hinshelwood (L-H) kinetic model under conditions of low dye concentration, has been used extensively to evaluate the reaction kinetics of photodegradation processes 34,35. Whereas the linear equation has been used widely (almost exclusively) in kinetic studies of photodegradation, it has been noted that nonlinear model fitting gives more accurate results. This is highlighted in Figure 5, which compares nonlinear and linear fitting for BiOBr and BiOBr0.6I0.4: the nonlinear model fitting has lower sum-of-squared-errors (SSE) values than the linear fitting. As the deviation from the L-H kinetic model increases, the differences between the parameters obtained from nonlinear and linear model fitting become significant, as shown in Table 2, in which the differences in the obtained rate constants are 1.8% for BiOBr and 10% for BiOBr0.6I0.4. Results of the nonlinear fitting for the BiOBrzI(1-z) materials are shown in Figure S7 of the Extended data 33 and summarised in Table 2.

The main active species involved in photocatalysis are known to be •O2−, e−, h+, and •OH. The main active species during the degradation of MB were established through quenching experiments over BiOBr0.6I0.4 under UV irradiation: p-benzoquinone (BQ), silver nitrate (AgNO3), EDTA-2Na, and tert-butanol (TBA) were added to the reaction vessels to quench •O2−, e−, h+, and •OH, respectively. When TBA and AgNO3 were added, the change in removal efficiency was insignificant compared with the quencher-free run, as presented in Figure 6, indicating that •OH and e− were not the main active species in the photodegradation system. However, when BQ and EDTA-2Na were added, there was an obvious reduction in degradation efficiency as a result of the suppression of •O2− and h+, which were therefore the major species in the reaction system. Rashid et al. 36 reached similar conclusions for the photocatalytic mineralization of aqueous ciprofloxacin.

The recyclability/reusability of a material is key for its practical application. Therefore, the stability of the prepared materials was assessed through the recyclability of BiOBr0.6I0.4 in the removal of MB; the results are displayed in Figure 7. In this study, BiOBr0.6I0.4 was reused five times; after every cycle, the catalyst was centrifuged, cleaned, and dried for recovery. The degradation efficiency dropped from 88% to 82%. This only slight drop in efficiency of BiOBr0.6I0.4, even after five cycles, signifies that the BiOBrzI(1-z) materials possess remarkably good stability.
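The linear-versus-nonlinear comparison made above can be reproduced in a few lines: the linear route regresses ln(C0/Ct) on t, while the nonlinear route fits Ct = C0·exp(−κt) directly in concentration space. A minimal sketch with synthetic data (the rate constant and noise level are illustrative, not the study's values):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic decay: C0 = 10 mg/L, true κ = 0.02 min⁻¹, multiplicative noise.
t = np.arange(0, 181, 30, dtype=float)  # sampling every 30 min
c0, k_true = 10.0, 0.02
c_t = c0 * np.exp(-k_true * t) * (1 + 0.03 * rng.standard_normal(t.size))

# Linear fit: slope of ln(C0/Ct) versus t estimates κ.
k_lin = np.polyfit(t, np.log(c0 / c_t), 1)[0]

# Nonlinear fit: Ct = C0·exp(−κ·t), fitted directly to the concentrations.
model = lambda t, k: c0 * np.exp(-k * t)
(k_nl,), _ = curve_fit(model, t, c_t, p0=[0.01])

sse = lambda k: float(np.sum((c_t - model(t, k)) ** 2))
print(f"linear κ = {k_lin:.4f} min⁻¹, SSE = {sse(k_lin):.5f}")
print(f"nonlinear κ = {k_nl:.4f} min⁻¹, SSE = {sse(k_nl):.5f}")
# The nonlinear fit minimises SSE in concentration space by construction,
# so its SSE cannot exceed the linear estimate's: this is the paper's point.
```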
Thermal contribution to the MB photodegradation under solar irradiation
It was observed that the photocatalytic efficiency was higher under solar irradiation than under UV light. Unlike the controlled conditions under which the removal of MB under UV irradiation was carried out, the removal under solar irradiation was performed in the open air, which may have introduced other factors that enhanced the efficiency. Therefore, an investigation, particularly of the thermal contribution, was carried out, with other factors such as catalyst loading and initial dye concentration kept constant. BiOBr0.6I0.4, being the best photocatalyst, was chosen for the study. An experiment was performed with a set-up of four different reaction vessels: two aluminium-foil-covered vessels, one with catalyst and one without, and two uncovered vessels with the same catalyst conditions (with/without). The temperature of all four samples was 27 ℃ before exposure to sunlight. Table S3 in the Extended data 33 shows the changes in temperature with irradiation time. For the covered vessels, the temperature increased steadily up to the 150th minute, whereas for the uncovered vessels the temperature increased up to the 120th minute, with no noticeable change thereafter. The covered vessels recorded the highest temperatures owing to the build-up of heat that could not escape. Further, MB degradation still occurred in the covered vessels, despite the absence of light, which may be attributed to this build-up of heat energy. Figure 8 illustrates the photocatalytic change of MB in both the covered and uncovered flasks.

Effect of temperature
Temperature is a relevant parameter, since the rate of reaction is temperature dependent. The influence of temperature on the degradation of MB over BiOBr0.6I0.4 in the range 30-60 ℃ is shown in Figure S8a of the Extended data 33; other parameters were kept constant. It was observed that a rise in temperature caused a gradual increase in degradation efficiency: an increase in temperature enhances photooxidation as a result of the increased frequency of molecular collisions. Furthermore, when the temperature increases, the competition between reactions increases, thereby restraining electron-hole recombination 37-39. Figure S8b displays the Arrhenius plot of k versus T⁻¹, from which an activation energy (Ea) of 9.1×10⁻¹ J mol⁻¹ was obtained for the removal of MB. This low activation energy implies that the removal of MB over BiOBr0.6I0.4 requires little energy, making the process economical.

Effect of the amount of catalyst on MB removal
To determine the influence of the amount of catalyst on MB removal, the dosage of BiOBr0.6I0.4 was increased from 0.01 to 0.07 g under otherwise optimal conditions (10 mg/L MB concentration, pH 7.0, and 150 min reaction time). The result is shown in the Extended data, Figure S9 33. The removal of MB increased from 30% to 99% as the dosage increased from 0.01 g to 0.05 g; removal was maximal at 0.05 g, the optimal catalyst loading. At the optimal dosage there is maximum availability of catalyst surface area and active sites for the production of active radicals under UV-light irradiation. However, catalyst dosages above 0.06 g result in a decrease in degradation, since the screening effect and light scattering become more pronounced 40-42. Additionally, the illumination of the key primary oxidants in photocatalysis is hindered when the catalyst dosage exceeds the optimum amount, so the degradation efficiency is reduced 43,44. Economically, therefore, the optimum catalyst dosage is vital, as less energy is required for regeneration.
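Returning to the temperature study above: the activation energy follows from a linear fit of ln k against 1/T, since ln k = ln A − Ea/(R·T). A minimal sketch with hypothetical rate constants over the 30-60 ℃ range (the study's individual k-T pairs are not tabulated in the text):

```python
import numpy as np

R = 8.314  # gas constant, J mol⁻¹ K⁻¹

# Hypothetical rate constants over the 30-60 °C range; illustrative only.
T = np.array([303.0, 313.0, 323.0, 333.0])   # K
k = np.array([0.018, 0.021, 0.024, 0.027])   # min⁻¹

# Arrhenius: ln k = ln A − Ea/(R·T), so the slope of ln k vs 1/T is −Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
ea = -slope * R
print(f"Ea ≈ {ea:.0f} J/mol, A ≈ {np.exp(intercept):.3g} min⁻¹")
```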
Effect of pH on photocatalysis
The initial pH of a wastewater solution can significantly influence the treatment process, since the extent of MB removal is affected by the charge on the photocatalyst surface; the catalyst surface charge in turn depends on the concentration of OH− and H+ in the aqueous solution 41.

Conclusion
A series of BiOBrzI(1-z) solid solutions was successfully prepared via the hydrothermal technique. The crystal structure, morphology, pore size, and surface area were obtained by XRD, SEM, TEM, and BET, respectively. The photocatalytic ability of the prepared BiOBrzI(1-z) materials was investigated through the removal of MB, and the operational parameters were also optimized. The XRD revealed that the (102) peak shifted to lower angles with increasing iodide content, and the trapping experiments showed that •O2− and h+ were the key active species responsible for the removal of MB. Therefore, the use of solid solutions promises to be a better route for environmental remediation.
v3-fos-license
2016-05-12T22:15:10.714Z
2008-11-12T00:00:00.000
1354802
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://bmcclinpathol.biomedcentral.com/track/pdf/10.1186/1472-6890-8-11", "pdf_hash": "89ae879399e3477c247e75a55376de9e1a72755b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41569", "s2fieldsofstudy": [ "Medicine" ], "sha1": "89ae879399e3477c247e75a55376de9e1a72755b", "year": 2008 }
pes2o/s2orc
External quality assurance as a revalidation method for pathologists in pediatric histopathology: Comparison of four international programs

Aim: External quality assurance (EQA) is an extremely valuable resource for clinical pathologists to maintain high standards, improve diagnostic skills, and possibly revalidate their medical licenses. The aim of this study was to participate in and compare four international slide survey programs (UK, IAP-Germany, USA-Canada, Australasia) in pediatric histopathology for clinical pathologists, with the aim of using them as a revalidation method.
Methods: The following parameters were evaluated: number of circulations per year, number of slides, membership requirement, proof of significant pediatric pathology work, openness to overseas participants, laboratory accreditation, issue of continuing professional development certificates and credits, slide discussion meetings, use of digital images, substandard performance letters, and anonymity of responses.
Results: The UK scheme, which has a sampling procedure over several time frames (2 circulations/year, 30 slides), partial confidentiality, and multiple sources of data and assessors, can be used as a model for revalidation. The US-Canadian and Australasian schemes only partially fulfill the revalidation requirements. The IAP scheme appears to be essentially an educational program and may be unsuitable for revalidation.
Conclusion: The purposes and programs of EQA schemes vary worldwide. If EQA is to be used for revalidation, it is advisable that EQA schemes be unified without delay.

Background
Clinical Governance is a term, originally used in the National Health Service (NHS) of the United Kingdom (UK), to describe a systematic approach to maintaining and improving the quality of patient care. It constitutes an official framework through which the NHS is accountable for the ongoing improvement of service quality, safeguarding high standards of care and creating an environment focused on clinical excellence. While communication failure is the most likely cause of medical errors, declining professional skills may also contribute to fatalities. Indeed, medical errors are not usually the result of the failure of particular providers, but are often systems-related and not attributable to individual negligence or misconduct. Pediatric Pathology embraces two areas of pathology: 1. specialist organ and system pathology, notably the study of surgical and oncological diseases, but also including the causes, nature, and processes of disease and other forms of illness or abnormal conditions of the fetus, newborn, infant, and child; and 2. post mortem examination of the fetus, newborn, infant, and child [1]. Pediatric clinical services rely on safe pediatric pathology services, and several initiatives can be employed to keep the standards of care high. In 1999, Quality System Essentials were introduced to laboratory practice by the National Committee for Clinical Laboratory Standards (now the Clinical Laboratory Standards Institute [CLSI]), identifying 10 or more major laboratory activities that are important components of a laboratory quality program [2,3]. The essentials include equipment, process improvement, safety, and personnel development, among others. All of these essentials were developed to ensure that data reported from the laboratory are as accurate as possible and serve the needs of patients and clinicians.
An important component of the control of any laboratory procedure is participation in an external quality assurance (EQA) or proficiency testing program, to demonstrate that the method will give the correct result with an unknown random specimen. However, quality assurance in a clinical laboratory also depends on 'safe' pathologists. Thus, EQA may be a fundamental part of the continuing professional development (CPD) of health care professionals [4]. This aspect is recognized internationally, and there is a move from continuing medical education (or clinical update) to continuing professional development, including medical, managerial, social, and personal skills. New compulsory policies for revalidation of doctors in specific areas are currently topical worldwide [5]. If appraisal is a formative and developmental process intended to identify development needs, and not performance management, then revalidation is undeniably an episodic process for demonstrating capability to practise to the professional regulator (e.g. the General Medical Council in the UK). Thus, revalidation is an assessment that requires a summative judgment (pass or fail). Postgraduate assessment initially validates doctors for specialist practice by allowing them to be entered onto the specialist register, whilst revalidation is the affirmation of continuing fitness to practise and therefore must relate to compliance with defined competencies. The usual academic criteria for such an assessment process include clear-cut standards, the possible involvement of the public in judging assessments, the use of multiple sources of data and assessors, and sampling procedures carried out over several time frames rather than at a single point [6]. The aim of EQA in pathology is to maintain well-running standard operating procedures and to improve the performance of all sub-specialties, in order to ensure that patients have access to a high-quality service wherever they live. In some countries, laboratories must provide documentation of success in EQA to maintain accreditation or licensure, and a number of programs have been developed with assistance from professional bodies together with significant input from accreditation agencies and institutions for improving medical activities [7,8]. In this study, we compare four slide survey programs from four geographical regions (UK, Germany, USA-Canada, and Australasia) with regard to EQA in pathology for pediatric pathologists in the setting of professional development. We discuss the possibility of using these surveys as a method for medical licensure revalidation.

Methods
Four international tissue slide survey programs were used in this study: that of the British Paediatric Pathology Association (BRIPPA) in the UK (http://www.cardiff.ac.uk/medic/aboutus/departments/pathology/index.html); the quality assurance programs of the Royal College of Pathologists of Australasia, covering Australia and South-East Asia (RCPA AP QAP; http://www.rcpaqap.com.au/); that of the North-American Society of Pediatric Pathology (SPP), including US and Canadian subgroups (http://www.spponline.org/); and that of the International Academy of Pathology (IAP), German Division, covering the German-speaking countries (Germany, Switzerland, Austria, Liechtenstein, Luxembourg, and the South Tyrol region of northern Italy; http://www.iap-bonn.de/).
To take into account the different objectives and context of each scheme, we evaluated the EQA programs using the following parameters: circulations per year; number of slides per year; requirement of membership of the regulatory body organizing the EQA; proof of significant pediatric pathology work; openness to overseas participants; accreditation by Clinical Pathology Accreditation Ltd. (CPA) in the UK or similar accreditation bodies in other countries; issue of CME certificates and CPD or continuing medical education (CME) credits; forthcoming meetings for slide discussion; a fixed organizer; issue of an annual report; meetings to discuss improvement changes; a digital images option; substandard performance letters; minimum levels of participation; and coding preserving the anonymity of participants. In addition, we evaluated whether participants had the option to flag cases as inappropriate for EQA purposes, as requiring expert opinion, or as having insufficient information or poor-quality staining of the glass slides. To assess an EQA program as a revalidation method, we studied these surveys from the viewpoint of clear-cut standards, the possible involvement of the public in judging assessments (confidentiality), the use of multiple sources of data and assessors (personal participation), and the type of sampling procedure (number of circulations, number of slides per year).

Results
The EQA scheme of BRIPPA has full CPA accreditation, although re-accreditation according to new standards is due [9]. The last two circulations, i.e. circulation W and circulation X, registered 51 and 52 participants, respectively; the response rates were 68.2% for circulation W and 78.8% for circulation X. The Pediatric Histopathology EQA Scheme aims to provide a platform suitable for both pediatric/perinatal pathologists and general pathologists with responsibilities for pediatric pathology. The UK scheme covers the full range of pediatric surgical material, autopsy material, and placental material. There are two circulations per year, in March and October. It is regularly emphasized that the cases are intended to reflect routine practice as far as is possible within the constraints of an EQA scheme. The UK scheme is run by the BRIPPA Committee, which supports the organizer, and is subject to scrutiny by the Royal College of Pathologists EQA Steering Committee and by the National Quality Assurance Advisory Panel (NQAAP) on Histopathology. Participation is voluntary and the scheme is open to both UK and overseas pathologists, provided they work or have worked, at least occasionally, in the UK. Each circulation comprises 15 cases, selected by the organizer on the basis of the clinical history alone. It is advised that any relevant clinical information existing when the original report was dispatched is provided, and that this material is made available to all participants. Each participant is given a numeric code, which is entered on the return forms to permit a personal statistical analysis of results. Reports giving the range and popularity of the diagnoses are sent to all participants, along with the comments and the original diagnoses. Histograms or other statistical figures are used to show the distribution of the accumulated scores for each participant, allowing each participant to see confidentially how his or her performance compares with that of his or her peers. The computer program which runs this system has been previously described [10].
The program allows calculation of the degree of correlation between the personal diagnosis and that formulated by the consensus of the group. The numeric code also ensures anonymity and confidentiality and is known only to a part-time secretary employed by the scheme. There is an annual meeting at which cases are presented and discussed, and personal scores are calculated after the suitability of each case for EQA purposes has been discussed at such a meeting. 'Persistent sub-standard performance' is defined as any participant whose score remains in the bottom 2.5% for two circulations. Anonymity is broken if the chairman identifies that standards of care are jeopardized. In the event of a pathologist making diagnoses which are markedly at variance with the consensus, the feedback system should make that pathologist aware of the position. A bottom line of 2.5% of participants has been proposed [11], together with procedures whereby the Chairman of the Histopathology Advisory Panel will be informed if a participant again scores in the bottom 2.5% of the ranked order in two out of the next three circulations [12]. It is advisable to start an investigation that seeks explanations rather than taking punitive action, without breaking confidentiality. Such procedures have been approved by the Council of the RCPath and are described in more detail on the College website [13]. The organization of the scheme requires the maintenance of a computer, postage, packing, and photocopying costs, and a part-time secretary. Thus, a number of positive aspects are present in this scheme compared with the other three (Table 1). Concerning the evaluation of the BRIPPA EQA as a revalidation method, it should be noted that the scheme has partial confidentiality, multiple sources of data and assessors, and a sampling procedure over several time frames, thereby fulfilling almost all criteria necessary for a revalidation method.
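The 'persistent sub-standard performance' trigger described above is a mechanical rule over ranked circulation scores. A minimal sketch of the flagging logic, with the data, helper names, and codes entirely hypothetical:

```python
def bottom_2_5_percent(scores: dict[str, float]) -> set[str]:
    """Participant codes ranked in the bottom 2.5% of one circulation."""
    ranked = sorted(scores, key=scores.get)         # lowest score first
    n_flagged = max(1, round(0.025 * len(scores)))  # at least one participant
    return set(ranked[:n_flagged])

def persistent_substandard(circulations: list[dict[str, float]]) -> set[str]:
    """Codes falling in the bottom 2.5% of two consecutive circulations."""
    flagged = set()
    for prev, curr in zip(circulations, circulations[1:]):
        flagged |= bottom_2_5_percent(prev) & bottom_2_5_percent(curr)
    return flagged

# Usage: anonymised numeric codes map to accumulated scores per circulation.
circ_w = {"017": 12.0, "023": 25.5, "031": 27.0, "042": 26.5}
circ_x = {"017": 13.5, "023": 26.0, "031": 24.5, "042": 27.5}
print(persistent_substandard([circ_w, circ_x]))  # {'017'} would trigger review
```

In practice the confidential review would then seek explanations from the flagged participant before any punitive step, as the scheme's procedures require.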
The EQA scheme of the Society of Pediatric Pathology runs acceptably. In particular, the slide survey program of the Society of Pediatric Pathology (USA) contains an important educational component, using three single-choice questions ('best of four') in addition to the field 'Your diagnosis'. In this sense, the credits gained through the survey program are corroborated by the participant's active research in the medical literature, and the educational value of the survey is perceptibly increased. Although the use of multiple-choice questions is a valuable and probably stimulating aspect of continuing medical education, it remains of educational value only, because it is so far from routine practice as to be irrelevant to EQA. A digital slide survey program was started in late 2007, but a full evaluation is lacking. The slide box of the German Division of the International Academy of Pathology allows pathologists working in a German-speaking country to familiarize themselves with pediatric pathology entities. A feedback form is included, but it is not compulsory. A standard course (about once a year) is offered, explaining the diagnoses and differential diagnoses as well as additional comments and molecular pathology notes. Digitalization of the data is in progress. At present, four modules are offered for pediatric pathologists and general pathologists with a significant workload in pediatric pathology. No substandard performance letter is issued.

The specialist module 'Pediatric Pathology' of the Royal College of Pathologists of Australasia, Section of Anatomical Pathology, Quality Assurance Programme (RCPA AP QAP) is accredited by the National Association of Testing Authorities, Australia, and complies with the requirements of ILAC G13 [14]. The 2007 survey was distributed to 75 participants in a range of Australian and international laboratories: Australia 63%, New Zealand 12%, Malaysia 12%, other (Austria, Fiji, New Caledonia, Saudi Arabia, Singapore, South Africa, Sweden, United Arab Emirates) 13%. A major change for 2006 was the increase in cases for each specialist module, which now includes a total of ten slides per year. Slide sets and additional digital files for virtual microscopy are provided at the outset. In 2005, the Royal College of Pathologists of Australasia acquired the Aperio ScanScope® system [15]. This system, which is in addition to existing virtual microscopy equipment, was funded under a Commonwealth government contract and is fully operational at the RCPA QAP office. The virtual microscopy equipment is able to scan diagnostic slides and create digital images that can be magnified in a manner similar to light microscopy. The equipment contains special features, including a particular system of file compression that allows the highest resolution of histological glass slides, and the files are accessible through a freeware viewing program for personal computers. In response to a questionnaire sent to RCPA AP QAP participants, it was found that about two thirds of participants were able to download the software and open the digital files, although only one out of two respondents had personal access to a personal computer meeting the minimum requirements for viewing the scanned slides. About one third of the participants upgraded a personal computer during the following 12 months to establish all of the minimum requirements, and a further one third was able to upgrade to all of the ideal requirements for viewing the images. Three quarters of the respondents commented on the use of virtual microscope images for EQA purposes. Of these comments, one third were positive, while the negative responses were mostly associated with IT difficulties found in the workplace, such as firewalls or the lack of DVD-ROM drives. In the RCPA AP QAP module, preferred diagnoses can be submitted electronically (website) or by fax or post. One of the most important achievements is the acceptance of website submission, and interim results are available on the RCPA AP QAP website immediately after the closing date.

Notes to Table 1: (1) membership of the regulatory body (BRIPPA, RCPA, SPP, IAP) organizing the EQA is a requirement for participation in the histopathology survey; (2) the parameter 'anonymous responses' indicates that a coding is used to preserve the anonymity of the participants (it also means, indirectly, that explicit permission from the participant is required before data may be shared with local management, regional QA officers, accrediting bodies, and suppliers of equipment and reagents); (3) with limitations (membership for pathologists is usually restricted to those with a regular commitment to work, usually locum work, in the United Kingdom); (4) with online service; (5) permanent set of stained sections (probably unsuitable for revalidation); (6) the comprehensive EQA is considered the major acceptance and author recommendation among the different EQAs.
Assessment criteria include five categories into which the preferred diagnosis is classified against the target diagnosis. The diagnosis is classified as concordant if the preferred diagnosis is essentially or substantially identical to the target diagnosis; minor discordant if the preferred diagnosis has one or more minor differences from the target diagnosis; discordant if the preferred diagnosis is substantially different from the target diagnosis; differential diagnoses only if only a number of differential diagnoses are reported; and unable to be assessed if the submission was late, illegible, or unable to be interpreted, including submissions in the form of a clinical report, a fax instead of a web submission, or those with no text in the 'preferred diagnosis' field. Participation as an individual in any of the diagnostic modules is recognized as a CPD activity by the Board of Education of the Royal College of Pathologists of Australasia. Considering all four programs together, we recommend that an ideal quality assurance scheme should have, on average, two circulations per year with a total of at least 30 slides per year; be addressed to pathologists with a substantial pediatric pathology workload; be accredited; provide CPD credits and certificates; hold regular slide discussion meetings as well as scheme improvement meetings; and issue an annual report (Table 1).

Discussion
Clinical governance is one of the most frequently encountered terms in health care management, in both national health services and private healthcare systems. It describes a systematic approach to maintaining and regularly improving the quality of patient care while safeguarding high standards of care, through the creation of an environment in which excellence in clinical care will flourish. The elements of clinical governance include health care professional education, clinical audit, clinical effectiveness, risk management, research and development grounded in good professional practice, and confidential procedures based on honesty. In this study, we delineate the characteristics of an EQA scheme that should be taken into consideration when establishing a national or local EQA scheme. We consider that if a quality assurance scheme is to be suitable as a revalidation method, it should have two circulations per year and require membership or a substantial pediatric pathology workload. It should be open to overseas participants, have CPA accreditation, provide CPD credits and certificates, hold regular slide discussion and scheme improvement meetings, and issue an annual report. Internal audits and a 'black box' of tissue slides are important, but external proficiency testing is advocated as an independent means of continuing professional development and as support for laboratory accreditation. Indeed, laboratory services are a central core of the diagnostic procedures of a general hospital [16]. The 'Pediatric Pathology' module of slide survey programs is targeted at pediatric pathologists, general pathologists who have pediatric cases in their diagnostic routine, pediatric pathology fellows, and pathology residents. The concept of EQA programs relies on the fact that participants should view cases without consulting colleagues before submission to the program office. Uniformity of standards in setting up a quality assurance program at the national level is an important task. The use of physician specialization to assess the quality of care provided by individual physicians represents a structural approach to measuring quality.
In the last 10 years there has been an enormous change in revalidation in the United States. This is because physician specialization, as represented by board certification, may be an unreliable measure of the quality of a physician's performance over time unless his or her knowledge and skills in a specialty area are periodically updated or assessed. Without a revalidation process, there is no guarantee that physicians have maintained the same level of skills and knowledge they demonstrated for their initial certification. Thus, the search for a practicable revalidation method is very valuable. According to our data and considerations, the UK scheme has an educational component largely as a by-product of participation, but it can easily be used as a form of medical revalidation. This is because it possesses partial confidentiality, contains multiple sources of data and assessors, and has a sampling procedure over several time frames rather than at a single point. The German scheme would appear to be essentially an educational program, and possibly the North American scheme too. The Australian scheme seems to have both educational and revalidation components, but the number of slides is low and there is only one circulation per year, which we consider insufficient for a revalidation method. In assessing EQA schemes, one aspect that has to be emphasized is the 'consensus answer'. The EQA utilizes the consensus answer as the correct answer, which may not necessarily be the same as an 'expert' answer. Interobserver reliability (consistency) may be reassuring, but there is a risk that a group response can become very unpredictable [17]. The participants may all agree, yet the correct or best answer may not be reached, because all of the participants may be wrong. Similarly, an expert opinion is not necessarily correct. It is well recognized in pathology that there are instances when pathologists cannot agree on the correct answer, even for cases that are not unusual or rarities. Disagreement between experts is to be expected and should not necessarily give rise to concern over their competence [18]. Consequently, the EQA, rather than measuring expertise, measures a minimum common level of attainment (i.e. conformity to consensus). How, then, should the consensus be assessed? There are several forms of consensus in use today. The promise of achieving a consensus creates the impression that a group may be able to operate without conflict. The consensus philosophy fosters an atmosphere that encourages the expression of different opinions and even conflict, and then provides a process that may lead to a resolution in a creative and very supportive environment. It recognizes that everyone has a contribution to make and that all views are encouraged, but it may take several meetings to work towards a decision, and at the end of the process it is not uncommon for some people to continue to disagree with the final decision. The Delphi Technique and the Nominal Group Technique are two well-recognized consensus-formation methodologies specifically designed to combine judgments from a group of experts [19,20]. The Delphi Technique utilizes a series of well-defined questionnaire-based surveys, whereas the Nominal Group Technique is a structured face-to-face meeting designed to facilitate consensus. Consensus-formation techniques require that each step builds on the results of the previous steps.
In the majority of EQA survey programs for pathologists, the correct diagnosis is not determined by 'expert judgment'; instead, a 'democratic' consensus is sought through the judgment of the majority of participants and discussion in the case of conflicting results. In some situations, a consensus cannot be reached and the controversial tissue slide is excluded from the final EQA performance. The rejection of cases not reaching 80% agreement between participants can be criticized as artificial, but it is a compromise [21]. As a consequence of rejecting such cases, the distribution of the score profile becomes skewed. Another aspect underlying some controversial issues in EQA in pathology may be the lack of uniform guidelines that are valid worldwide. Our study therefore compares four international survey programs, emphasizing the need for harmonization. However, the lack of harmonization also stems from differing interpretations of some pathologies. An example is the diagnosis of chronic intestinal pseudo-obstruction in children. The diagnosis of intestinal neuronal dysplasia as an isolated entity is indeed very controversial. Intestinal neuronal dysplasia is characterized by hyperplasia of the submucosal and myenteric plexuses; the isolated form has been described in the distal colon and rectum, and its clinical presentation, with constipation and intestinal obstruction, can mimic that of Hirschsprung disease or aganglionosis [22,23]. In the years since the entity was first described, the criteria have also been modified. However, in many countries, mostly of Anglo-Saxon origin, the current view is that isolated intestinal neuronal dysplasia is seen in a variety of clinical settings and is more a descriptive entity than a specific disease requiring surgical intervention [24]. It is difficult to identify the minimal number of slides that needs to be assessed for consideration as a revalidation method. We initially considered the number of cases of pediatric and placental pathology in a tertiary academic center, which was found to be approximately 1500 in many institutions over recent years (personal communication). Subsequently, we considered the percentage of 'permitted wrong diagnoses' in the diagnostic routine identified in the English-language literature. The results of studies concerned with error rates in histopathology vary widely (no serious errors [25]; 0.26% [26]; up to 1.2% of histopathological reports [27]), but these studies were performed in academic or teaching institutions. In reality, the percentage is quite variable when considering subspecialty, inter-individual, and intra-individual variability studies [28]. It has also been suggested in the USA that false negative rates of 5-10% may be an admirable goal in cytopathology, and that rates below 15-20% are a possible standard [29]. We allowed 2% in consideration of typographical errors, and hypothesized that a suitable number of EQA diagnoses could be 30 (2% of 1500 diagnoses), keeping in mind the artificial assumption that the highest number of wrong diagnoses can be set equal to the number of histological glass slides circulated in an annual EQA program. Thus, we propose that 30 should be the minimal number of slides assessed per year in an EQA program by a specialist pediatric pathologist. In this sense, the UK scheme, with 30 slides per year, might be considered a standard and may serve better as a revalidation method.
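Both numerical rules in this paragraph, the 80% agreement threshold for retaining a circulated case and the 2%-of-1500 derivation of the 30-slide minimum, are simple enough to state directly. The following is a minimal sketch in Python restating the arithmetic given above (the diagnosis labels and function names are illustrative only):

```python
def consensus_reached(votes, threshold=0.80):
    """A circulated case is retained only if the most common diagnosis
    reaches the agreement threshold among participants [21]."""
    top = max(votes.count(v) for v in set(votes))
    return top / len(votes) >= threshold

# Hypothetical votes from five participants on one slide
print(consensus_reached(["NB", "NB", "NB", "NB", "GNB"]))  # 4/5 = 0.80 -> True

# Derivation of the proposed 30-slide annual minimum
annual_caseload = 1500       # approximate pediatric/placental cases in a tertiary center
permitted_error_rate = 0.02  # 2% allowance, covering typographical errors
print(annual_caseload * permitted_error_rate)  # 30.0 slides per year
```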
Health services are awash with data, but safety is an area that requires continuous improvement. IT resources appear to be the greatest barrier to obtaining access to virtual images. This can be overcome if hospitals or governments upgrade workstations to have compatible IT systems and allow access for quality assurance purposes. Smoothly running quality assurance programs can improve this relationship and strengthen the link with clinicians. Modernizing the pathology laboratory begins with virtual microscopy, and such services will be a remarkable asset for child health in the third millennium. Image digitalization is a new tool for biopsy specimens that cannot be cut into the 40-60 slides needed for all participants. Indubitably, digital imaging with virtual microscopy will be more closely linked to practice in the future. To date, glass slides best mimic routine practice worldwide. Thus, digital images for EQA are a compromise offering the possibility to extend the range of cases to include small specimens, such as endoscopic and fine-needle biopsies. Another possibility is to manage the use of glass slides from small biopsies through a postal system, circulating slides sequentially between participants. At the recent UK NEQAS Meeting in Glasgow it was stated that the EQA remains an extremely valuable resource for clinical pathologists and needs a well-organized, rapid, and manageable system to run efficiently [31]. It seems evident that EQA drives standardization, and there are many examples supporting this fact, including those where peer-driven changes are influenced by EQA findings [31]. One aspect that was not considered in this study was the difficulty of the histological glass slides used in the circulations. However, in our opinion, there was no significant disparity in the difficulty of diagnosis, because certain recommendations have been proposed and followed in all four EQA schemes examined. In particular, the diagnosis should be made using the hematoxylin and eosin stained slide, and no immunohistochemistry should be needed to arrive at the diagnosis. Lymph node cases usually represent a frequent source of difficulty, and specific details have to be provided when submitting such cases. Thus, the method for selecting cases has a crucial impact on whether the cases are more difficult (and hence more educational) or more representative of the routine workload (and therefore more relevant to performance surveillance). Cases should be contributed by all participants in rotation, following agreed guidelines. Extremely 'simple' cases, as determined at meetings of the participants, should be avoided, and bizarre cases and case-report material remain inappropriate. The number of cases circulated must be sufficient to give reasonable confidence that seriously sub-standard performance will be promptly identified. Inappropriate tissue slides, a limited number of circulations, lack of secretarial support, and a low number of participants may jeopardize the EQA as a revalidation method. The important point is to establish a quality management system, allocate protected time for it, and ensure a reasonable cost burden for healthcare organizations. It is important to specify well-defined implications for schemes and participants. Further, we believe that a periodically reviewed and updated quality policies manual, in addition to continuing audit of performance, should be standard in every histopathology department.
Association with a CPA-accredited laboratory (contractual agreement) should be considered. If an EQA scheme is not run in a department, it is recommended that the department not be given accreditation. Business and healthcare organizations regularly use the process of benchmarking to learn how others address policy issues and solve problems. The improvement of diagnostic skills in pathology is of paramount importance, and interest in programs that provide external proficiency testing, quality assessment, and appropriate education programs to public and private laboratories of pathology and laboratory medicine is growing rapidly. There are associations and companies supplying both quality assurance programs and supporting services for the benefit of pathology laboratories and personnel working within the pathology environment. The substantial advantage of internal audit and slide surveys is intrinsically present in these programs, which are intended to continually improve pathology services for the well-being of communities. There is no direct evidence to support the validity of histopathology EQA, but there are various strands of indirect evidence that can be drawn together to underpin validity. Indirect evidence supporting validity includes the response process used in the EQA, which reflects the actual working practice undertaken by pathologists. EQA originally started as a 'hobby' for pathologist participants, but it has now acquired, or is acquiring, a central role as part of continuing professional development. It may be used as a revalidation method because it represents clear evidence of medical qualification, its results may be made available to the public, it may draw on multiple sources of data and assessors, and it samples performance over several time frames rather than at a single point. However, the purposes and programs of EQA schemes can differ worldwide, and in consideration of its possible use as a revalidation method, it is strongly advisable that all EQA programs be unified as soon as possible.
Emotional responses to challenges to emerging teacher identities in teacher education: student teachers' perspectives on suitability

ABSTRACT The following paper aims to investigate how student teachers relate to the suitability of their student teacher peers after experiencing challenges to their emerging teacher identity, resulting in emotional responses. A constructivist grounded theory study was conducted in which 18 student teachers participated. Data from 14 individual interviews and one focus group were analysed. Findings revealed that encounters between student teachers sometimes resulted in emotional responses. When the student teachers were emotionally challenged by their peers, their emerging teacher identity was challenged. In addition, the student teachers compared themselves with those peers whom they judged as unsuitable and constructed a self-image of being suitable. This comparative process was connected to three suitability norms: (1) being perceived as having the right values, (2) being perceived as having social skills, and (3) being perceived as committed to in-depth learning as a teacher.

Introduction

There is a common deficit model of thought about student teachers in Sweden. For example, there has been recurrent negative debate and criticism in Swedish media in relation to teacher education and student teachers regarding low standards, deficiencies, and unsuitability (Edling and Liljestrand 2020). Kelchtermans (2019) discusses a dominant discourse characterised by deficit thinking on how newly qualified teachers lack specific competencies and therefore need individual help to compensate for their individual shortcomings, positioning newly qualified teachers as 'formally qualified but not yet fully capable' (Kelchtermans 2019, 86). Swedish teacher education was previously more competitive and selective, but during the last decades the teaching profession has declined in attractiveness and status, resulting in fewer applicants (OECD 2015). The OECD concluded that it would be better if Swedish teacher education were to return to a more selective admission process to increase the status of the teaching profession. Here, several important issues of suitability interconnect. There is a shortage of teachers, which means that there is a need for more people to be trained as teachers. Although a large number of people need to complete teacher education in Sweden, many of the applicants who start teacher education never graduate (Sveriges Radio 2019). Moreover, the admission of applicants to teacher education who subsequently fail to complete the programme triggers discussion among student teachers of what makes a person suited for a career in teaching. There is a tension between getting enough teachers and retaining them while also improving quality in schools. This study stems from a larger project investigating emotionally challenging episodes during teacher education (Lindqvist 2019; Lindqvist et al. 2017, 2020).
One of the recurrent patterns we found in focus groups and interviews in the project was that student teachers were emotionally challenged by peers they deemed unfit for the teaching profession. Therefore, this recurrent pattern has been further explored and analysed in this study. This study aims to investigate which characteristics are considered by student teachers to be necessary for a person to be suitable for a future teaching career, and how deviation from this norm challenges their emerging teacher identity. This is investigated using the following research questions: (1) What do student teachers perceive as being emotionally challenging in their contact with fellow student teachers? (2) How do student teachers interpret their peers as triggering challenges to their emerging teacher identity? (3) How do student teachers interpret themselves in terms of their suitability for a career in teaching?

Challenges to emerging teacher identities

Student teachers have reported encountering challenges during teacher education (Deng et al. 2018; Hong 2010; Kokkinos, Stavropoulos, and Davazoglou 2016) that also influence their emerging teacher identities (Yuan and Lee 2015). Yuan and Lee (2015) showed that student teachers' identities were formed through contacts during field work, with mentoring teachers and student teacher peers. When an emerging teacher identity is forming during teacher education, the experience can include emotional ups and downs (Timoštšuk and Ugaste 2010; Yuan and Lee 2016), and student teachers report experiences of emotional flux (Teng 2017). Teacher identity has previously been depicted as fluid, dynamic, and multi-faceted (Beijaard, Meijer, and Verloop 2004). Chen, Sun, and Jia (2022) describe teacher identity and emotions as reciprocal and dependent on how student teachers appraise events and situations in relation to their goals, which determines the intensity of their emotional response. In addition, Nichols et al. (2016, 2) state that 'teachers' emerging identities not only influence their actions and emotions, but their actions and emotions influence their identity formation'. A crucial challenge for student teachers is the dissonance between ideals and reality, and the emotional responses this can produce (Sumsion 1998). Student teachers have beliefs about the role they will play in students' lives. When their beliefs are challenged or compromised, this affects the development of their teacher identity. How student teachers cope with challenging situations and the influence of negative emotions on identity is therefore crucial (Yuan and Lee 2016). In addition, there are unspoken rules in teaching about what emotions are suitable (i.e., encompassing rational, calm, and balanced features). For example, anger is usually not considered favourable in teaching, whereas caring is (Isenbarger and Zembylas 2006). Student teachers encounter these unspoken rules during their teaching practice. Emotional challenges are understood as relationships, situations, and interactions that student teachers perceive as distressing or unpleasant, and therefore necessary to cope with, and that also influence student teachers' emerging identity. Hadar et al. (2020) describe student teachers' need to be prepared for a volatile, uncertain, complex, and ambiguous world. In their study, student teachers struggle with the uncertain conditions and do not seem to receive enough preparation in social-emotional competencies.
Lassila et al. (2017) found occasions of laughter, silence, or humour in interactions within peer groups as a response to emotionally loaded stories from peers in teacher education. In these cases, student teachers seemed to avoid deep emotional reflection altogether (Lassila et al. 2017). In addition, Väisänen et al. (2017) reported that student teachers rarely offer support to other student teachers. As far as we know, there is no other research that examines how student teachers reflect on and judge their peers' suitability when discussing emotionally challenging episodes. Given the rationale that teaching is not for everyone (Sirotnik 1990), student teachers encounter suitability norms triggered by emotional challenges related to their peers. In teacher education, student teacher peer groups have been discussed as important for coping with challenging emotions and for facilitating self-regulation and social support (Karlsson 2013; Lindqvist 2019). Yuan and Lee (2015) show how student teachers draw on peer support to overcome emotional challenges arising from teaching practice. In contrast, there is also research showing that peer-group learning might itself trigger emotionally challenging episodes (Väisänen et al. 2017), further consolidate suitability norms, and influence student teachers' emerging teacher identity.

Suitability norms in teacher education

Suitability refers to ideas about having the 'right' values, attitudes, personality, and skills needed to fit into the teaching profession (cf. Kelchtermans 2019). It is connected to emotional challenges (Holappa et al. 2021) and therefore also influences student teachers' emerging teacher identity. Issues of suitability are relevant for teacher education, since the social comparative and contrasting process among student teachers might help create norms that add to a deficit model of student teachers and of future newly qualified teachers. In teacher education programmes, norms are created and social comparative processes are at work that challenge the emerging teacher identity, resulting in emotional responses. For example, how student teachers express vulnerability might be inhibited by norms regarding suitable teacher characteristics (Holappa et al. 2021). A Finnish study on student teachers' perspectives on their teacher education found that student teachers created their own definition of suitable teacher characteristics, since there was a lack of criteria as to what constituted a suitable or 'good' teacher. The characteristics that student teachers tended to use to define the suitable, 'good' teacher included empathy, reflectivity, sociability, self-confidence, a predominantly positive mood, determination, and being active (Lanas and Kelchtermans 2015). The concept of suitability for a career in teaching influences how student teachers think about themselves and others: 'what "kind of person" one is recognised as "being", at a given time and place, can change from moment to moment in the interaction, can change from context to context, and, of course, can be ambiguous or unstable' (Gee 2000, 99). It would therefore appear to be important to explore how student teachers interpret and value their peers and themselves in emotionally challenging episodes (cf. Holappa et al. 2021), and what they consider to be suitable conduct for future teachers while in teacher education.
Holappa et al. (2021) conclude that dealing with emotions around unsuitability needs to be a part of teacher education. Even so, how suitability norms are constructed and result in challenges to emerging teacher identities that evoke emotional responses among student teachers has not previously been described in the literature. We add the perspectives of student teachers who experience challenges to the prevailing, majority suitability norms as part of their emerging teacher identity.

Method

We adopted a constructivist grounded theory approach, since it is designed to openly and systematically explore people's perspectives and voices in how they interpret and understand social processes, shared meaning, and interaction (Charmaz 2014). This approach is suitable because of our interest in how student teachers' socially constructed emerging teacher identities are challenged. We navigated close to the data and sought to stay true to the participants' perspectives and their empirical world (Charmaz 2014). The constructivist version of grounded theory views actions as interpretations that form the social world of which people are a part. In line with a constructivist version of grounded theory, we view the researchers as co-constructors in the process of gathering and analysing the data (Charmaz 2014).

Participants and data collection

This study stems from a larger research project, in which a total of 67 student teachers from six Swedish universities participated. Our previous analyses of the qualitative data (based on individual interviews and focus groups) focused on emotional challenges and coping strategies among student teachers. However, during those analyses we found that in some of the interviews the participants recurrently and spontaneously discussed their encounters with peers as emotionally challenging, resulting in discussions of their peers' (un)suitability for a career in teaching. Therefore, guided by theoretical sampling (Glaser 1978), for the current study we selected 15 of the interviews, including a focus group interview, for further analysis. We started the analyses by going through the entire dataset to find the interviews in which student teachers discussed and judged the suitability of their peers and presented this as emotionally challenging. Grounded theory methods were used to analyse the data (Charmaz 2014). The included interviews and focus group data were gathered from a total of 18 participants. See Table 1 for participant demographics. Teacher education lasts from three and a half to five years, depending on the age group that will be taught. Teacher education includes periods of teaching practice at a school, amounting to up to 20 weeks spread throughout the programme. The student teachers usually change schools for teaching practice and thus meet different supervisors, who also assess and evaluate their performance. In Sweden, teachers and other school staff are not only expected to but must embrace, express, and stand for democratic values and human rights, according to the national curriculum and school policy documents (Skolverket 2018).
The student teachers in the current study were in their last year of teacher education and were scheduled to start working as teachers within a year. In total, 14 individual interviews (of which nine were conducted face-to-face and six using a video conferencing tool) and one focus group interview were analysed. The focus group interview was conducted face-to-face and included four participants. The focus group interview lasted 79 minutes, and the individual interviews ranged from 31 to 96 minutes. All interviews were recorded and transcribed. The names used in the findings are pseudonyms. The interview questions focused on emotionally challenging situations in teacher education to which the student teachers had been exposed. When the student teachers described their peers as the source of emotional challenges, follow-up questions were used to elicit more information about their perspectives on peers as creating emotional challenges. All participants in the study stated that their peers created emotional challenges to their emerging teacher identity, triggering the evaluation of suitability. The participants evaluated the other student teachers based on course work, but also on time spent together in other situations, such as time between classes and in group work.

Data analysis

In line with the constructivist grounded theory approach, initial, focused, and theoretical coding were used. Initial coding was carried out word by word, sentence by sentence, and segment by segment (Charmaz 2014). During focused coding, the most significant and common initial codes were used to further guide the analysis. The codes generated in this phase were more selective and conceptual than the initial codes. Examples of these codes are shown in Table 2. Almost simultaneously with focused coding, theoretical coding was performed. In theoretical coding, we explored and analysed the relationships between our empirical codes using theoretical codes of dimensions (Glaser 1978). Constant comparisons within and between data and codes were made in each coding phase. Memos were written, compared, sorted, and further elaborated to try out theoretical ideas, understandings, and models (Charmaz 2014). We used the principles of theoretical agnosticism and pluralism as guiding principles in the analysis, in line with informed grounded theory (Thornberg 2012).

Ethical considerations

The study was granted ethical approval by the Regional Ethical Board in Stockholm. The student teachers participating in the interviews were informed as to how the data would be used within the project. In addition, they were informed about their rights as participants before data collection, that their participation was voluntary, and that their confidentiality would be secured by the use of assumed names.
Findings

The study's point of departure was that the student teachers recurrently reported being emotionally challenged by student teacher peers. Their experiences of peers whom they perceived as unsuitable evoked emotional distress by challenging their emerging professional identity. From their point of view, these peers were going to devalue the whole teaching profession and give it a bad reputation. During teacher training, the student teachers encountered unwritten rules of the teaching profession (i.e., suitability norms) that further complicated their understanding of the social world of teaching. They described emotionally challenging episodes in relation to being forced to interact and work with peers they perceived as unsuitable, as well as in relation to time spent together in other situations, such as between classes and in group work. In addition, the student teachers compared themselves with those peers whom they judged as unsuitable and, by contrast, constructed their own emerging teacher identity as suitable. This involved portraying the breach of the norm as experienced by the student teachers; it does not represent an objective account of how their peers actually were. Rather, constructing an emerging teacher identity as suitable involved an interpretative and comparative process connected to three suitability norms: (1) being perceived as having the right values, (2) having social skills, and (3) being committed to in-depth learning as a teacher (see Figure 1).

Suitability norm 1: being perceived as having the right values

The student teachers in the current study stated that student teachers need 'the right values' to be suitable for teaching, and that encountering peers who did not fit this suitability norm was experienced as challenging their emerging teacher identity. This included wishing not to engage with peers they found unsuitable, an idea that could also be interpreted as paradoxical to the inclusive ideals portrayed as necessary. The perception of suitable values was based on an interpretation process in which personal convictions, values, and experiences, as well as perceived expectations from teacher education, seemed to play an important role. Lena described her interactions with other student teachers, who expressed opinions that challenged and transgressed democratic values and human rights, as challenging her emerging teacher identity.

When your future colleague expresses things that you consider to be racist, or homophobic, that is very emotionally challenging. There are no measures, no admission test to show that you have democratic values when entering teacher education. But that's something I find extremely important and am passionate about. Even if you tell them they are wrong and they don't change, they will still become teachers. (Lena, grades 7-9)

In addition, Lena had a hard time accepting that student teachers she assessed as racists would be allowed to teach children. In this interpretation process, the student teachers judged their peers as suitable or not based on whether they expressed values that were considered right. Through this social comparative and contrasting process, student teachers could then construct and judge themselves and their emerging teacher identity as suitable in terms of having the right values to become a teacher.
Suitability norm 2: being perceived as having social skills

Student teachers considered social skills to be a requirement for a teacher. It was considered indispensable to be able to talk in front of a group and to establish relations with pupils. Peers whom student teachers perceived to lack social skills during teacher education were judged as unsuitable and created tensions in the notion of a preferable teacher identity.

They don't have the social ability at all, and I think that's a bit tragic - to have come this far in teacher education and they will never get a job but have high student loans. I don't really worry about them, but more, like, imagine the pupils they will meet, how will that turn out? (Ida, grades 7-9)

According to the student teachers, social skills could to a certain extent make up for poor subject knowledge; however, having subject knowledge did not make up for poor social skills. To be professional, teachers were considered to need communication skills and to respect all pupils, even those who challenge them. Jörgen argued that a professional teacher must know how to interact and communicate with the age group of pupils they teach.

Jörgen: Well, partly, but it's mostly social, I have to say.

Jörgen: A lot of times when supervisors have two from the same class and then you see how the children look at them like question marks. They don't know how to act towards the children, or they can't relate to their level and talk to them at their level. Instead, it's a grown-up talking to a child, and the language is totally different, and the children don't understand and the [student] teacher doesn't understand what the children mean, and it's really weird just from that. When you can't communicate and can't create relationships. (Jörgen, grades F-3)

In the interpretation process of judging peers as suitable or not, student teachers could compare themselves with those they perceived as poorly socially skilled. Through this comparison, they judged themselves as highly suitable for a career in teaching by referring to their own perceived good social skills, as part of their emerging teacher identity and the unwritten rules of teaching.

Suitability norm 3: being perceived as committed to in-depth learning as a teacher

Student teachers argued that they and their peers needed to be engaged in teacher education to acquire essential knowledge and fulfil the requirements. They experienced frustration and challenges to their identity formation as a result of interacting with student teachers who acted differently. The degree to which peers displayed a commitment to in-depth learning was essential.

Celia: In course evaluations, someone could claim not to have understood the difference between, well during the Reformation, Catholicism, and Protestantism. And that feels like, if you didn't know that beforehand, and didn't know the terms, and if you don't know now after, how did you pass the course and what will you teach children in your class? Sometimes you wish for higher demands in the education system.
Celia doubted certain peers whom she perceived to be poorly engaged in their academic studies. She blamed the peers for not studying enough, but also teacher education for having low demands and requirements. In addition, Beth described experiences of poor group discussions and group work in her class. Mia and Kajsa discussed low academic engagement among their peers, including free riders and those who were unprepared for their campus-based lessons and seminars. They discussed this as creating a negative peer effect that challenged their idea of how a future teacher should act.

Mia: But of course, there's a reason that you were supposed to read what you're supposed to read. You might not have read in depth, but you . . .

Kajsa: Skim at least.

Mia: There is frustration when you get there and feel, 'No I'm not getting anything back. I might as well have stayed home and done something else'. Then enormous frustration emerges. And that, I feel, I have felt a lot in teacher education, that a lot of people have got away with not doing much for three years. (Focus group, grades 7-9)

Thus, the student teachers perceived that the overall quality of their teacher education was lowered by academically unmotivated and disengaged peers. Tom described experiencing challenges to his emerging teacher identity when he thought about how low-achieving peers would work as teachers in the future.

Tom: It's a bit frightening that they will work as teachers later because it's an occupation where you have to be well-read about everything and to perform all the time. I think one should know that when applying for teacher education. Because this feels as if it's the wrong place to be lazy.

Interviewer: Frightening, how do you mean?

Tom: No, well, if I had a child and got some of my peers as their teacher, I would change school or class immediately. (Tom, grades 4-6)

Tom took it further by expressing that lazy student teachers were not suitable to teach his future child, insinuating that some student teachers were unfit to teach at all. This exemplifies the ideal teacher as committed to life-long learning, always engaged in the potentially exhausting enterprise of knowing everything for their pupils.
Conclusions

A shared experience among the student teachers in the current study was that they were forced to interact in course work with student teachers they perceived as unsuitable and could not entirely ignore that this challenged their identity development. The emotional response was also influenced by their appraisal of suitable actions related to ideal teacher identities. The main concern of the student teachers was making sense of the identity challenges they experienced when meeting student teachers whom they perceived to be unsuitable. These experiences also played a crucial role in their interpretation process for establishing professional ideals and their own teacher identity, their conception of suitability for a career in teaching, and their evaluation of their own suitability for a career in teaching. Thus, student teachers were engaged in an interpretative and social comparative process of judging the suitability of both their peers and themselves for teaching because of the emotional challenge this involved. This also exemplifies the two key identity development processes present in the data: (1) the reciprocal relationship between emotions and identity development, and (2) the unwritten rules of teaching as portrayed through suitability norms. In the interpretation process of judging peers' suitability for teaching, student teachers were engaged in developing their own teacher identity, including self-evaluation of their own suitability through social comparisons.

Discussion

The current qualitative findings illustrate a sample of student teachers' emotional responses to challenges to the emerging teacher identity (cf. Chen, Sun, and Jia 2022), including how they reflected upon and judged their peers and themselves. The study contributes to the international literature on student teachers' suitability, student teachers' interactions, emotions, and experiences during their teacher education (e.g., Holappa et al. 2021; Lassila et al. 2017), and their perspectives on teacher ideals and professional competence (e.g., Lanas and Kelchtermans 2015). For example, our findings add insight into emotional aspects of teacher education, because challenges regarding identity positions were created in meeting peers who were deemed unfit to teach. There are several aspects of personal convictions and necessary personal traits, seen as individual deficiencies to overcome, that connect to having teaching as a calling. In a study by Bennett et al. (2013), experienced teachers reported having a spiritual and personal calling to teach that enabled their longevity in teaching; as such, having a calling is portrayed as a necessary personal motivation for teachers. This relates to being suitable: being perceived as having the right values, having acceptable social skills, and being committed to in-depth learning.
The negative debate and criticism in Swedish media concerning teacher education and student teachers in terms of low standards, deficiencies, and unsuitability (Edling and Liljestrand 2020) can be considered a threat to student teachers' identity. This also illustrates the unstable nature of identity (Gee 2000). The deficit model of thinking concludes that newly qualified teachers need individual help to compensate for individual shortcomings (Kelchtermans 2019) and illustrates the need to consider suitability norms from an international perspective. Suitability norms are relevant because teacher education establishes them according to its social context, which in turn influences the unstable nature of identity (Gee 2000). This study corroborates the potential risks of peer learning discussed by Väisänen et al. (2017), even though we also acknowledge that peer learning is essential in all education. However, our study points to the risks of the deficit model of thinking (cf. Kelchtermans 2019) and how it is reproduced among student teachers. There are groups of student teachers who are supportive of each other, but the support is based on how members are valued by the group. When student teachers judge how their peers perform in teacher education, they also evaluate their peers' commitment: how much they participate in activities, prepare, and contribute. Suitability norms such as empathy, reflectiveness, sociability, self-confidence, determination, and activity were also defined by student teachers in the study by Lanas and Kelchtermans (2015) and are partly expressed by the student teachers in this study. We agree with Holappa et al. (2021), who state that dealing with emotions around unsuitability needs to be a part of teacher education. We suggest that teacher education offer student teachers opportunities to share and discuss their beliefs about suitability. A practical application of the study would be to use its findings to discuss envisioned practice among student teachers, given the complex settings, including the different teachers, pupils, and caregivers with whom they will interact as future teachers. Farrell (2022) shows how journal writing and joint reflection can help articulate experienced negative emotions. Another implication of the study could be to focus on the discussions in teacher education courses, finding a disposition that could foster and nurture communication. One suggestion might be to use a set disposition to enable student teachers to evaluate and alter their teaching methods, focusing on student-active forms of education. For example, simulations with avatars could be used to develop communication with pupils and social skills. Samuelsson, Samuelsson, and Thorsten (2022) showed that student teachers described enhanced efficacy beliefs related to teaching after practising in a simulation with avatars, significantly higher than student teachers practising with peers in role play. This could further engage the discussions about what skills and attributes could be developed through training and foster engagement in in-depth learning. In addition, practising role play in teacher teams has the potential to enhance student teachers' ability to reduce colleagues' misconduct (Shapira-Lishchinsky 2013). The student teachers establish these necessary suitability norms as something inherent in teachers, something one either has or does not have. In teacher education, becoming a teacher is central, as the educational programme focuses on changes in the emerging teacher identity (Flores and Day 2006).
Here, using these suitability norms and emotional experiences as a starting point could help to expand notions of teacher identity (as also exemplified by Golombek and Doran 2014). Therefore, there appear to be ways to practise, and it is thus possible to deepen the ways in which teacher education provides nurture and practice that go beyond personal convictions and existing personal traits, fostering development among student teachers. Consequently, this might lessen experiences of working with peers as emotionally challenging to emerging teacher identities.

Limitations

Some limitations should be considered and weighed when reading the findings of the study. Firstly, the analysis is based on interview data and contains no performative data. The participants may have portrayed an idealised narrative of themselves. Even if this is true, the participants also discussed their weaknesses, worries, self-doubts, and their future in teaching. Our contribution should be considered an interpretative portrayal and does not claim to present a universal truth or a complete picture of the phenomenon (Charmaz 2014). Secondly, suitability was spontaneously discussed in 15 interviews; it was not discussed in the other interviews that made up the entire dataset of the research project. In future research, suitability norms could be further investigated qualitatively with another sample of participants. This would increase our insight into the emotional challenges to student teachers' emerging teacher identity.

Table 2. Examples of codes. Data excerpt: 'Even though we are there on the same terms and everything, you wonder if all people in teacher education really are suitable to be teachers in the future. You judge your friends, actually. When people express homophobic, transphobic values, for example, you might think that this person should not be working at a school.' Focused code: Establishing suitability norms among student teachers when emotionally challenged by peers.
The novel sterilization device: the prototype testing

Currently, there are numerous methods that can be used to neutralize pathogens (i.e., on devices, tools, or protective clothing), but the sterilizing agent must be selected so that it does not damage or change the properties of the material to which it is applied. Dry sterilization with hydrogen peroxide gas (VHP) in combination with UV-C radiation is a well-described and effective method of sterilization. This paper presents the design, construction, and analysis of a novel model of sterilization device. Verification of the sterilization process was performed, using classical microbiological methods and flow cytometry, on samples containing Geobacillus stearothermophilus spores, Bacillus subtilis spores, Escherichia coli, and Candida albicans. Flow cytometry results were in line with the standardized microbiological tests and confirmed the effectiveness of the sterilization process. It was also determined that mobile sterilization stations represent a valuable solution when dedicated to public institutions and businesses in the tourism sector, the sports & fitness industry, or other types of services, e.g., cosmetic services. A key feature of this solution is the ability to adapt the device, within specific constraints, to the user's needs. Such a station can simultaneously, rapidly, and precisely sterilize numerous surfaces without the need for specialized personnel, and without incurring the high costs associated with purchasing special devices and neutralizing the waste generated. Vaporized hydrogen peroxide combined with UV-C can be used to sterilize reusable metal and nonmetal devices, although it has limitations, mainly related to its lower penetration capability. This technique is compatible with a wide range of materials in common use (e.g., polypropylene, brass, polyethylene) but is not intended to process liquids, linens, powders, or any cellulose or nylon materials. The system can sterilize instruments with diffusion-restricted spaces (e.g., scissors) and a single stainless-steel lumen, subject to lumen internal diameter and length (e.g., an inside diameter of 1 mm or larger and a length of 125 mm or shorter) 13,14. This work describes the design, construction, and evaluation of a novel mobile sterilization device. The research included the development and preparation of a working prototype of a mobile sterilization station to assess its competitiveness in the field of clothing and personal protective material sterilization. Our proprietary solution involves the use of commonly available materials, which improves access to the sterilization equipment for non-specialized personnel. Furthermore, it is based on simple and user-safe sterilization factors that do not generate environmentally hazardous waste. The fundamental research, as well as the research development activities (completed at technology readiness level 8; TRL 8), included (1) planning and constructing a prototype that generates dry H2O2 vapors for sterilization, and (2) microbiological verification and optimization of the sterilization process using the prototype (optimization of sterilization parameters in terms of temperature, humidity, and treatment time), focusing on real-life conditions.
Microbiological verification and optimization of the sterilization process using the prototype

The first tests focused on optimizing the duration and temperature of the sterilization process. During these tests, various concentrations of H2O2 and a range of temperatures were applied during the sterilization process (Table 1). On the basis of those preliminary results, 150 ppm of dry H2O2 vapors, a temperature of 35-40 °C, and a treatment duration of 20 min were selected for further tests. In the first step of the decontamination process, 5 min of UV-C irradiation was applied. UV-C irradiation was also applied at the end of the sterilization process to promote the chemical decomposition of the dry H2O2 vapors. The total UV-C radiation dose was therefore equal to 2.76 J/cm². The microbiological quality control involved a commercial standardized biological test with spores of the indicator bacterial strain Geobacillus stearothermophilus, compliant with the ISO 11138-1 standard 15 (Fig. 1a), as well as microbial strains that commonly occur in the human environment and represent gram-positive bacteria, gram-negative bacteria, and yeasts: B. subtilis, E. coli, and C. albicans, respectively. Simultaneously, the effectiveness of the H2O2 vapors for sterilization was monitored using two types of commercially available chemical tests: Chemdye CD40 type-4 (Fig. 1b) and the VH2O2 Process Indicator type-1, compliant with the ISO 11140-1 standard 16 (Fig. 1c). Tests of microorganisms exposed to the various conditions in this study confirmed that the mobile sterilization station effectively destroyed bacterial and yeast cells on the materials used (Table 1). The percentage of dead versus live cells was also analyzed by flow cytometry to determine the best parameters for the sterilization process, which led to over 99.9% of microbial cells being inactivated (Table 2).

Table 1. The effect of hydrogen peroxide vapors on B. subtilis spores and E. coli cells placed on cellulose. OD600, the optical density of a microbial culture at a wavelength of 600 nm, correlated with the number of microbial cells; mean OD600 values were determined after the scheduled incubation time of the test strips in TSB liquid medium; (K−), negative control (test strip not exposed to the sterilizing agent).

The second set of biological performance tests evaluated the mobile sterilization station's ability to sterilize the surfaces of materials and devices placed in the working chamber, as well as the air in the working chamber (using a MicroBio MB1 microbiological air sampler), during full sterilization cycles. The results showed that after 20 min of sterilization (150 ppm of dry H2O2 vapors at 35-40 °C) no microbial growth was observed on the culture media (Table 3, Fig. 2).
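As a consistency check on the figures above: assuming the stated 2.76 J/cm² dose is accumulated only during the two 5-min UV-C phases (an assumption; the lamp irradiance is not stated explicitly), the implied average irradiance at the treated surface follows directly. A minimal sketch in Python:

```python
total_dose = 2.76            # J/cm^2, total UV-C dose reported for a full cycle
exposure_time = 2 * 5 * 60   # s: 5 min at the start plus 5 min at the end

irradiance = total_dose / exposure_time    # W/cm^2, since 1 W = 1 J/s
print(f"{irradiance * 1000:.2f} mW/cm^2")  # -> 4.60 mW/cm^2 average at the target
```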
The sterilization of cellulose-cotton materials in the mobile sterilization station was verified by testing discs (diameter = 1 cm, thickness = 2-4 mm) soaked with the selected microorganisms. The discs were placed in the device and sterilized using dry H2O2 vapor at a concentration of 150 ± 5 ppm, under a total UV-C irradiation dose of 2.76 J/cm², at a temperature of 38 °C. After completing a sterilization cycle, the microorganisms were rinsed from the discs with phosphate-buffered saline (PBS) and plated on non-selective Luria Bertani agar (LA) or Sabouraud agar media on Petri dishes. After incubation, no microbial growth was observed on the plates with sterilized samples, in contrast to the control plates with samples not subjected to the sterilization process, where lawn growth of microorganisms was observed (Fig. 3).

Focused research

An important element of prototyping involved testing the developed sterilization station in real-life conditions (with volunteer participants). For this purpose, the operation of the mobile sterilization station was demonstrated for the volunteer participants who constituted the focus group. The study was conducted using a proprietary questionnaire to assess the device according to user evaluations and to confirm the effectiveness of the microbiological sterilization of everyday items. Office items (school rulers, set squares, pens, pencils, keys) and personal protective equipment (masks, helmets) were tested in this research. Participants from the focus group who performed at least several cycles of the sterilization process (i.e., operated the sterilization station) were asked to fill in the questionnaire. The user survey was completed by 46 respondents, 22% of whom were women; 26 participants had completed some degree of higher education, two had completed primary education, and the remainder had a secondary education level; the median age of participants was 35 years (ranging from 22 to 57 years). Seventy percent of respondents agreed that there is a need to use mobile sterilization stations, and over 75% believed that this solution would increase work safety. The overall assessment of the sterilizing unit was good (about 62%). The quality of workmanship, aesthetics of the device, ease of use, intuitiveness, and readability were also rated very highly. The worst-rated parameters were ease of spare-part replacement and ease of transport. According to the respondents, the distinguishing features of the mobile sterilization station were "ease of use", "quality of sterilization, sterilization time, and sterilization efficiency", and "the possibility of sterilizing teaching materials".

Table 3. Number of colonies on LA and Sabouraud media (after incubation at 37 °C for 48 h and exposure to a stream of 10 L of air from the working chamber following 5-20 min of sterilization).
Testing under real conditions

The mobile sterilization station was tested under real conditions by members of the state fire brigade, who used the station for three months while noting any comments and observations. After the testing period, they provided feedback indicating that this equipment was very useful from their perspective. The three months of testing confirmed that the construction (the materials used to build the machines and the interface used to operate them) was adequate to satisfy the users' needs. It also indicated that maintenance of the station is easy and not time-consuming, and there was no damage or failure that reduced the usability of the device. Moreover, the sterilization process is relatively short, so the amount of time lost is minimal. The microbiological controls (standardized biological quality controls containing G. stearothermophilus) verified the effectiveness of the sterilization process at the end of the test. It is worth noting that the mobile sterilization stations were transported between different users. Importantly, transport did not affect the device's sterilization efficiency, and the preparation for transport and device restart were easy for the users. Following our experiments, all prototype models of the mobile sterilization station were donated to the fire brigade station in Western Pomerania (Poland) and remained in use during the pandemic. At the time of writing, there have been no reports of damage or device failure.

Discussion

Hydrogen peroxide is commonly used as an antimicrobial agent; it decomposes rapidly to water and oxygen, which are nontoxic, and it is therefore safe and effective for biologic deactivation purposes 11. Vaporized hydrogen peroxide (VHP) is a form of hydrogen peroxide that exhibits effective lethality against a wide range of microorganisms while remaining nontoxic to human health 17. VHP seems to be a more effective disinfectant than 0.5% sodium hypochlorite solution at eradicating Clostridium difficile spores and is recommended as a novel alternative for disinfecting patients' rooms 18,19. It also helps to eliminate hospital infections caused by methicillin-resistant Staphylococcus aureus (MRSA) 20. Verification of the usefulness of VHP at low concentration in our novel sterilization station was performed using microbiological tests on standardized biological quality control samples containing Geobacillus stearothermophilus, Bacillus subtilis spores, Escherichia coli, and Candida albicans. We confirmed the effectiveness of the sterilization process against selected microbial pathogens at a VHP concentration of 150 ppm. These microbiological tests were performed using both classical microbiological methods and flow cytometry, with the results of the flow cytometry analyses in line with the standardized microbiological tests. The VHP system is a relatively quick and user-friendly technology, and for this reason it was used in the prototype mobile sterilization station. However, it was evident that the VHP residue left in the sterilization chamber at the end of the process constitutes chemical waste that is a danger to the user. Applying UV-C irradiation is an effective method for converting H2O2 into nontoxic derivatives. In fact, UV-C irradiation might also act as the main sterilization agent against Bacillus and fungal spores sprayed onto substrates, as described by Halfmann et al. 10,21.
It is worth noting that VHP disinfection leads to complete inactivation of some viruses, including poliovirus, rotavirus, adenovirus, murine norovirus 22, and SARS-CoV-2 23, although an extended cycle time was required relative to the Indiana serotype (VSV) 7. Moreover, a pioneering study conducted by Criscuolo et al. 12 demonstrated the deactivation of SARS-CoV-2 on different materials under UV-C irradiation and ozone exposure. However, the use of our prototype for the inactivation of other viruses requires future evaluation. Our prototype was effective against the bacteria analyzed and against C. albicans, and it is easy to construct and can be used by non-specially trained users. This prototype is a solution to the increasing global need for sterilization-dedicated equipment in non-medical units. The developed sterilization station combines multiple decontamination and sterilization factors, namely UV-C irradiation and VHP. Microbiological tests conducted under laboratory and real-life conditions confirmed that this device is effective against microbes that are commonly used to validate sterilization techniques. Overall, this study demonstrated that the mobile sterilization station can be used to sterilize small everyday items, as well as homemade masks. It must be pointed out that the device was designed mainly for items that do not necessarily have to be sterile but need to be decontaminated after contact with a potential infectious agent that the staff may have encountered earlier. Therefore, recommendations and directions for quality control during use (commercially available chemical tests are sufficient) are included in the user manual provided with the device. The present report further indicates that the developed mobile sterilization station is not only an effective sterilization tool, but can also fill the market demand for new, adaptable technological solutions.

Conclusions

Advancements in the sterilization of clothing and the implementation of these solutions (device prototypes) will enable the wide application of novel techniques in the field for various industries (e.g., tourism, sport and fitness, beauty and wellness) and special services (e.g., fire brigade, army, police). The system developed herein can be adapted to the needs of the user (e.g., by streamlining the process, simplifying its operation, or developing different versions of the device depending on the needs of the end user), and it could even be developed into an intelligent device operated by remote control. The presented mobile sterilization station has received a hygienic certificate and is ready to be Conformité Européenne (CE) certified.

Prototype construction

The device was made of 0.2-mm-thick stainless steel, which provided the structure with high resistance to corrosion and chemical agents (steel type: 1.4301/1.4404). The body of the device was mounted on a steel frame. The prototype was equipped with elements to facilitate transport, i.e., a wheeled chassis with four lockable swivel wheels. The prototype was built by tungsten inert gas (TIG) welding. The welds and elements of the prototype body were ground and polished. The sterilization chamber (working chamber) contained two UV-C lamps with a total power of 36 W.
Additional movable elements (insertable and removable shelves, sterilization baskets, drainers, etc.) enabled sterilization of the entire arrangement. The device integrated proprietary solutions in the form of a cover for the UV-C radiators, a container for the hydrogen peroxide solution, an ultrasonic vapor generator, and frames for mounting catalytic filters (utility model pending, application No. W.130269 [WIPO ST 10/C PL130269U]). The control module was placed in the front portion of the device and was equipped with a touch screen. The application controlling the device works in two modes: (1) user mode, which displays the START and STOP buttons as well as a sterilization-time progress bar, and (2) service mode, which allows the working times of the UV-C radiators and the H2O2 vapor generator to be modified. In addition, the application reads a closed-door sensor, which prevents the device from initiating operation when the door is open (Fig. 4); a minimal sketch of this interlock logic is given at the end of this section.

Optimization of sterilization conditions
H2O2 concentration and temperature measurements
The H2O2 concentration and temperature were monitored inside the working chamber of the mobile sterilization station using PEROXCAP HPP272 transmitters, which measure hydrogen peroxide vapor, humidity, and temperature (Vaisala Insight software, version 1.0.2.76, Vaisala Oyj, Vantaa, Finland). With the designed system, it was possible to control the sterilization process in terms of the concentration of the sterilizing agent (H2O2 vapor) and the temperature.

Geobacillus stearothermophilus (ATCC 7953) is the indicator spore-forming strain of a safe microorganism used in commercially available standardized tests, suitable for laboratory purposes as well as for non-specialized users. The Bacillus subtilis strain (ATCC 6051) represents gram-positive, spore-forming bacteria; Escherichia coli (ATCC 13706) represents gram-negative bacteria; and the Candida albicans strain (ATCC 3147) represents yeasts. All the microorganisms used are common in the environment and may occur, among other places, on human skin.

Growth media used in tests
Growth of microorganisms was determined using culture media such as Tryptone Soy Broth (TSB; 1.5% casein peptone, 0.5% sodium chloride, 0.5% soy peptone; pH 7.3; Merck, Darmstadt, Germany) for cultivation of the tested bacterial strains in liquid culture.
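To make the interlock behaviour of the control module concrete, the following minimal Python sketch mimics the two operating modes and the closed-door check described above. All class and method names, the default working times, and the sensor interface are hypothetical illustrations, not the device's actual proprietary software.

```python
import time

class SterilizationController:
    """Toy model of the touch-screen application: user mode exposes
    START/STOP and a progress bar; service mode edits working times."""

    def __init__(self, uvc_seconds=1800, vhp_seconds=900):
        # Default working times are placeholders, not values from the paper.
        self.uvc_seconds = uvc_seconds
        self.vhp_seconds = vhp_seconds

    def door_closed(self) -> bool:
        # Stand-in for the closed-door sensor read-out.
        return True

    def set_working_times(self, uvc_seconds, vhp_seconds):
        # Service mode: adjust UV-C radiator and H2O2 vapor generator times.
        self.uvc_seconds, self.vhp_seconds = uvc_seconds, vhp_seconds

    def start(self):
        # Interlock: refuse to start the cycle while the door is open.
        if not self.door_closed():
            raise RuntimeError("Door open - cycle start blocked")
        total = self.uvc_seconds + self.vhp_seconds
        for elapsed in range(0, total + 1, max(total // 10, 1)):
            print(f"Progress: {100 * elapsed // total}%")  # progress bar stand-in
            time.sleep(0)  # real firmware would wait here

SterilizationController().start()
```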
Commercial sterilization quality controls
Standardized biological quality control samples, compliant with the ISO 11138-1 standard 15, containing Geobacillus stearothermophilus spores (ATCC 7953; CFU = 1.3 × 10⁵; BIONOVA, Terragene, Santa Fe, Argentina) and a liquid culture medium with a color indicator were used. Briefly, three pairs of vials were placed on each of two levels (upper and lower) created using mesh shelves. After the sterilization process, the internal capsule filled with indicator-supplemented culture medium was squeezed to release the liquid into the vial containing the microbial spores. The vials were incubated for 24 h at 60 °C. A purple color in the medium after the incubation period indicated that the microorganisms had been killed, whereas a yellow medium indicated that they had survived and undergone cell division, i.e., that the sterilization was not successful. A vial processed as described above but without sterilization was used as a positive control. Moreover, the control vials carried a chemical label with an indicator that changes color when exposed to H2O2 vapors; a violet color indicated that the vial had not been exposed to H2O2, and a green color indicated exposure to H2O2 vapors. This test helped to assess the minimum time and H2O2 vapor concentration needed for a successful sterilization process.

Additionally, the effectiveness of the H2O2 vaporization technique was monitored using two types of commercial chemical tests: Chemdye CD40 type-4 strips (ISO 11140-1 16; Terragene) and Excelsior Hydrogen Peroxide (VH2O2) Process Indicator Strips type-1 (ISO 11140-1 16; Excelsior Scientific, Cambridgeshire, UK). In both cases, the test field changing to a green color confirmed proper H2O2 vaporization. About 20 test strips were placed in different locations of the chamber to evenly cover most of the chamber space.

Microbiological tests
The sterilization process was evaluated in various systems, i.e., by exposing Bacillus subtilis spores and Escherichia coli cells to the sterilizing agent for various exposure times (up to 60 min), H2O2 vapor concentrations (50-150 ppm), and temperatures (20-40 °C). Tests were prepared on Whatman paper discs (5 mm diameter) containing 2 × 10⁶ CFUs of B. subtilis spores or 2 × 10⁷ CFUs of E. coli cells. Dry discs were used in the sterilization processes; after completion of the cycle, the discs were flooded with PBS buffer and vortexed to suspend the bacterial cells in the buffer. 100 µL of PBS with the suspended microorganisms was transferred into TSB medium and incubated for 72 h at 37 °C, during which the growth of microorganisms in the medium was monitored. As a negative control, a disc not subjected to a sterilization cycle was used. Moreover, to assess the sterilization capacity of the device, cellulose-cotton discs (diameter = 1 cm, thickness = 2-4 mm) were soaked with microbial cultures of E. coli, B. subtilis, and C. albicans and subjected to the sterilization process in the working chamber. About 20-30 discs were placed in different locations of the chamber to evenly cover most of the chamber space. After completing the sterilization process, the microorganisms were rinsed from the discs with PBS (1 mL) and seeded on Petri dishes with LA or Sabouraud medium. All tests were performed in triplicate. Additionally, flow cytometric analyses of the microorganisms' viability were performed (for details, see "Flow cytometry analysis" section).
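Where plate counts from treated and control discs are available, killing efficiency is conventionally summarized as a log10 reduction. The short sketch below illustrates the arithmetic only; the counts shown are hypothetical, not data from this study.

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Log10 reduction between an unexposed control and a sterilized sample.
    A fully negative plate is usually reported as '>= log10(control)'."""
    if cfu_treated == 0:
        return float("inf")  # no survivors detected; reduction exceeds the assay limit
    return math.log10(cfu_control / cfu_treated)

# Hypothetical counts for illustration only (not measured values from the study):
print(log_reduction(2e6, 0))    # inf -> report as > 6.3 log10
print(log_reduction(2e6, 150))  # ~4.1 log10 reduction
```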
Flow cytometry analysis
In addition to the classic microbiological tests, flow cytometry was used. PBS with suspended microorganisms extracted from the Whatman paper discs coated with the tested bacterial strains (as described above) was used in the flow cytometry assay. As a control, discs without sterilization exposure were used. The samples and controls prepared in this way were used in flow cytometry analysis to assess microbial viability, as well as the proportion of live to dead cells, using BD Cell Viability Kits (BD Biosciences, San Jose, CA, USA). Briefly, 5 µL of dye solution containing thiazole orange (TO) and propidium iodide (PI) was added to 500 µL of cell solution. The samples were vortexed and incubated for 5 min in the dark at room temperature. The data were acquired on a BD Accuri™ C6 flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA) for 30 s at the fast flow rate (66 μL/min) with an SSC-H threshold of 10,000 to exclude debris. To validate the test, viable and artificially killed microorganisms were analyzed. The results were calculated using the BD Accuri C6 (version 1.0.264.21) and FCS Express (version 4.07.0020 RUO Edition; De Novo Software, Los Angeles, CA, USA) software programs. The gating strategy is presented in Fig. 5.

The air in the prototype was monitored using a MicroBio MB1 Bioaerosol Sampler (Cantium Scientific Ltd., Dartford, UK) placed in the center of the working chamber of the mobile sterilization station before and after the sterilization process (0, 5, 10, 15, or 20 min). Tests were performed as follows: 10 dm³ of air from the mobile sterilization station was directed onto Petri dishes with LA and Sabouraud agar media (Merck) to determine, respectively, the total bacteria (on LA agar) and the fungi and molds (on Sabouraud agar) present in the air. After exposure, both media were incubated for 48 h at 37 °C, followed by counting of the colony-forming units (CFUs), which were converted to CFUs per 1 m³ of air according to Eq. (1), where L1 and L2 are the numbers of total bacterial colonies grown on LA agar medium and S1 and S2 are the numbers of fungal and mold colonies grown on the Sabouraud medium; a worked conversion sketch is given after this section. Microbiological testing was completed by placing Petri dishes with LA or Sabouraud agar medium containing freshly spread cultures of the bacterium B. subtilis (10⁹ CFU/mL) and the yeast Candida albicans (10⁷ CFU/mL), respectively, in the sterilization station's working chamber. After the sterilization process, the plates were closed immediately and incubated for 72 h at 37 °C, at which time point the growth of the seeded microorganisms was evaluated.

Focused research and testing under real conditions
Focused research and testing under real conditions were conducted with the participation of volunteers (a focus group and members of the state fire brigade, respectively) who operated the sterilization station and then answered a questionnaire. All testing procedures, including those requiring the participation of volunteers approved by the Local Ethics Committee at the Regional Medical Chamber in Szczecin (approval no. 05/KB/VII/2019), were approved by the "Socially responsible Proto_lab" project board under the West Pomeranian voivodeship and carried out in accordance with the relevant guidelines and regulations. Participants were fully informed of the experimental procedures before giving their consent to participate.
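Since Eq. (1) itself is not reproduced in the extracted text, the sketch below shows the standard bioaerosol conversion consistent with the description above: duplicate plate counts are averaged and scaled from the 10 dm³ of air sampled to 1 m³. Treat it as an assumed reconstruction, and the counts as hypothetical.

```python
def cfu_per_m3(colony_counts, sampled_volume_dm3=10):
    """Average duplicate plate counts and scale to 1 m3 (= 1000 dm3) of air.

    A standard bioaerosol conversion; the paper's Eq. (1) is not reproduced
    in the text, so this is an assumed reconstruction."""
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * 1000 / sampled_volume_dm3

# Hypothetical duplicate counts L1, L2 (bacteria on LA) and S1, S2 (fungi/molds):
print(cfu_per_m3([12, 16]))  # 1400.0 CFU/m3
print(cfu_per_m3([3, 5]))    # 400.0 CFU/m3
```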
Figure 1. Quality controls used in the study. (a) Standardized biological quality controls containing G. stearothermophilus (ATCC 7953) spores (CFU = 1.3 × 10⁵) and a liquid culture medium with a color indicator (additionally equipped with chemical strips indicating exposure to H2O2 vapors). Vials that were not sterilized are labelled "K" (yellow medium indicates bacterial growth), and vials after completion of the sterilization process show a violet color (indicating a lack of bacterial growth). (b) Chemdye CD40 type-4 commercial chemical test strips confirming the effectiveness of the H2O2 vaporization (test field colored green) in the tested sterilization station. (c) VH2O2 Process Indicator strips type-1 confirming the effectiveness of the H2O2 vaporization (test field colored green) in the tested sterilization station.

Figure 2. Representative photographs of microbial cultures on solid medium after the sterilization process. Red dots represent colony-forming units (pseudo-colored).

Figure 3. Representative results of cultures on a solid medium; microorganisms were washed off the surface of the discs before and after the sterilization process. After completing a sterilization cycle, the microorganisms were rinsed from the discs with PBS and plated on LA or Sabouraud Petri dishes.

Table 2. Percentage of dead B. subtilis and E. coli cells after exposure to hydrogen peroxide at 35-40 °C, analyzed by flow cytometry. (K−): negative control (test strip not exposed to the sterilizing agent).
Structure-Guided Engineering of Molinate Hydrolase for the Degradation of Thiocarbamate Pesticides

Molinate is a recalcitrant thiocarbamate used to control grass weeds in rice fields. The recently described molinate hydrolase, from Gulosibacter molinativorax ON4T, plays a key role in the only known molinate degradation pathway ending in the formation of innocuous compounds. Here we report the crystal structure of recombinant molinate hydrolase at 2.27 Å. The structure reveals a homotetramer with a single mononuclear metal-dependent active site per monomer. The active-site architecture shows similarities with other amidohydrolases and enables us to propose a general acid-base catalytic mechanism for molinate hydrolysis. Molinate hydrolase is unable to degrade bulkier thiocarbamate pesticides such as thiobencarb, which is used mostly in rice crops. Using a structure-based approach, we were able to generate a mutant (Arg187Ala) that efficiently degrades thiobencarb. The engineered enzyme is suitable for the development of a broader thiocarbamate bioremediation system.

Introduction
Molinate (S-ethyl azepane-1-carbothioate) is one of the most intractable thiocarbamates [1] and is extensively used worldwide to control grass weeds in rice crops. Most of the known pathways for molinate degradation lead to the formation of partially oxidized metabolites, which are more toxic and persistent than the parent compound [2]. So far, only one microbial system able to degrade molinate to innocuous compounds has been described: a five-member bacterial consortium in which Gulosibacter molinativorax ON4T is responsible for the initial breakdown of the herbicide, cleaving its thioester bond and releasing ethanethiol and azepane-1-carboxylate (ACA) [3,4]. This reaction is catalyzed by molinate hydrolase (MolA) [5]. At molinate concentrations over 2 mM and in the absence of the other members of the bacterial consortium, growth of G. molinativorax ON4T is hindered by the accumulation of sulphur compounds, namely ethanethiol [3,4]. By contrast, in the presence of other consortium members able to degrade the sulphur compounds, mineralization of the herbicide occurs even when the initial molinate concentration is close to its solubility limit (4 mM). Owing to its role in the aforementioned molinate degradation pathway, a bioremediation tool based on molinate hydrolase presents great value for environmental decontamination.

Molinate hydrolase is a recently characterized metal-dependent enzyme [5]. Following the successful identification and cloning of its encoding gene, recombinant molinate hydrolase was found to be cobalt-dependent, with zinc and manganese also able to confer activity on the enzyme [5]. Biochemical studies revealed that recombinant molinate hydrolase presents maximum activity at pH 7.5 and 30 °C, showing kinetic properties very similar to those of the native enzyme [5].

One of the most critical aspects of an enzyme is its substrate specificity. Alongside molinate, other thiocarbamates are also used as pesticides; similarly to molinate, thiobencarb is applied to rice crops (Fig 1) [2]. Molinate hydrolase proved unable to degrade other thiocarbamates such as thiobencarb, regardless of protein and/or substrate concentration [5], probably due to a steric effect. Therefore, the determination of the enzyme's three-dimensional structure is of key importance for an eventual modulation of the enzyme's specificity by site-directed mutagenesis. Herein we present the crystal structure of molinate hydrolase.
This allowed us to theorize a mechanism for molinate binding and subsequent hydrolysis at the thioester bond, which constitutes a rare catalytic mechanism. We also demonstrate that molinate hydrolase is inhibited by ethanethiol, explaining the inhibition of Gulosibacter molinativorax ON4T growth when this molinate hydrolysis product accumulates. The disclosure of the crystal structure of the enzyme was crucial for the generation of single mutants that catalyse the degradation of other thiocarbamate pesticides, namely thiobencarb.

Results and Discussion
The overall crystal structure of molinate hydrolase
As molinate hydrolase shares similarity with other metal-dependent hydrolases, the metal dependence of this enzyme was shown in a previous report [5], in which a metal ion/monomer ratio of 1.1 ± 0.2 was determined by both ICP-AES and ICP-MS. Incubation with chelating agents rendered an inactive form, which could be reactivated through the addition of divalent metal ions [5]. However, despite several attempts, crystallization trials yielded crystals of the metal-depleted amidohydrolase. These results suggest that molinate hydrolase loses the catalytic ion during crystallization trials, an event that has been reported before for other amidohydrolases [6]. Thus, we produced the SeMet mutant and determined the three-dimensional structure of the enzyme by the single-wavelength anomalous diffraction (SAD) technique, using the anomalous signal from the selenium atoms. Later on, we solved a crystal structure partially loaded with one of the potential catalytic metal ions, Zn2+, by co-crystallizing the recombinant enzyme with ZnCl2. The position of the metal ions was determined from a single-wavelength anomalous diffraction data collection at the Zn absorption K-edge. The anomalous signal was modest, but the top positive peaks in the anomalous difference Fourier map were located in equivalent positions in each monomer and were consistent with metal coordination geometry and with the position of the metal ions in structurally related amidohydrolases. Nevertheless, we performed site-directed mutagenesis of the metal-coordinating residues. Inactive mutants were obtained, confirming the structure of the active site. The Zn ion occupancy was set to 0.4, and the overall structure was similar to that of the metal-free enzyme (the root-mean-square deviation, r.m.s.d., of the superimposed structures is 0.22 Å). The active-site structure was also not perturbed by the depletion of the catalytic metal ion.

Well-diffracting crystals of recombinant molinate hydrolase were obtained in two distinct crystal systems, orthorhombic and monoclinic, with 4 and 8 monomers in the asymmetric unit, respectively. No significant differences were observed between the molinate hydrolase three-dimensional structures in the two crystal systems (r.m.s.d. of the superimposed structures is 0.17 Å). The structure of molinate hydrolase consists of a homotetramer (Fig 2) composed of two symmetry-related dimers with a 67° tilt angle between them. The two dimers are assembled into the active enzyme by interactions involving the C-termini and α-helices. Within the dimer, the monomers are tightly bound, mostly through extensive interactions involving the residues in the loops connecting the α-helices. Each monomer binds a single divalent metal ion and is formed mostly by α-helices (Fig 3).
Each monomer also contains a β-sandwich domain characterized by opposing four-stranded and three-stranded antiparallel β-sheets, and two short three-stranded parallel β-sheets. The most notable feature of the structure of molinate hydrolase is the absence of the eight-stranded β-barrel characteristic of the amidohydrolase superfamily. Despite the positional similarity of the active site when compared with other amidohydrolases, as will be detailed later, only six β-strands are conserved, forming two β-sheets. Those β-sheets are arranged to resemble an incomplete β-barrel, as can be observed after superposition with other amidohydrolases (Fig 4), such as N-acetylglucosamine-6-phosphate deacetylase from Thermotoga maritima (PDB entry: 1o12; [6]). Both enzymes share high similarities in active-site geometry, as will be discussed below.

The structure of the active site
The catalytic metal is coordinated by His282 and His302 (residues are numbered according to the recombinant protein sequence, including a 32-amino-acid purification tag), which are hydrogen-bonded to the main-chain carbonyl groups of Asp245 and Thr283, respectively, as well as by three ordered water molecules. The water molecules are in turn hydrogen-bonded to Lys240, His246, and Asp373 (Fig 5). His282 and His302 are located in the loops after strands β5 and β6, respectively, while Lys240 is in strand β4, and His246 and Asp373 belong to loops that follow the β-sheet assembly. Mutagenesis confirmed that each of the metal-coordinating residues, His282 and His302, as well as Lys240, which is hydrogen-bonded to a coordinating water molecule, plays a role in the enzyme's activity. Single point mutations of the three residues to alanine yielded inactive forms of the enzyme, supporting a catalytic role for the zinc ions found in the crystal structure. The circular dichroism spectra of the mutants were indistinguishable from the spectrum of the recombinant enzyme, confirming that the mutants were properly folded (Fig 6).

Molinate hydrolase belongs to the amidohydrolase superfamily [5] and displays >40% amino acid sequence homology with the herbicide-degrading phenylurea hydrolases PhuA and PhuB [7]. His(a)-X-His(b)-X(83-140)-Lys-X(25-70)-His(c)-X(18-25)-His(d)-X(52-143)-Asp is a common metal-binding motif of amidohydrolases [6,8,9]. It comprises two metal-binding sites: the so-called Mα site, coordinated by the two imidazole side chains of His(a)-X-His(b), and the Mβ site, ligated by the histidines of His(c)-X(18-25)-His(d); additionally, there is a highly conserved aspartate that either coordinates, or is hydrogen-bonded to, a hydrolytic water ligated to the Mβ metal. Within this enzyme superfamily there are, however, examples of both mononuclear and binuclear metal centres [6]. The binuclear metal centres are characterized by the simultaneous interaction of a carboxylated lysine or a glutamate with both metal ions, which are usually 3.6 Å apart. The molinate hydrolase structure revealed that the aforementioned lysine residue is definitely not carboxylated and that one of the histidines of the Mα site is substituted by an asparagine, as previously reported [5]. These observations, together with the lack of positive electron density at the Mα site in the anomalous difference map calculated from diffraction data measured at the zinc absorption K-edge, led us to conclude that molinate hydrolase has one catalytic metal ion per monomer, located at the Mβ site. As seen in Fig 7A, the divalent metal is pentacoordinated by His282, His302, and three water molecules.
The water molecules are stabilized by Lys240, His246, and Asp373, respectively. This active-site geometry is highly similar to that in N-acetylglucosamine-6-phosphate deacetylase (AGD) from T. maritima (PDB entry: 1o12; [6]), which has a single iron ion coordinated by two histidines (His176 and His197), a glutamate residue (Glu115), and two water molecules (Fig 7B). Glu115 appears to be positioned as a potential bridging residue between two catalytic metal ions and, in fact, in the AGD homologue from Bacillus subtilis (PDB entry: 1un7 [10]) there is a second metal ion at the Mα site.

Fig 4 (caption). Overlay of the β-strands around the catalytic metal of molinate hydrolase (pink) and N-acetylglucosamine-6-phosphate deacetylase (green; PDB entry: 1o12 [6]), an enzyme that shows the classic β-barrel core of the amidohydrolase superfamily; the usual β-strand numbering of the β-barrel is presented. The Cα atoms of the metal-coordinating residues were used for the structure superposition. Catalytic metals are shown as spheres (zinc orange; iron yellow).

Figure caption. The active site of recombinant molinate hydrolase (detailed view; background in cartoon representation; highlighted residues in stick representation, with carbon backbone in cyan, oxygen red, and nitrogen dark blue; the Zn cofactor is shown as an orange sphere and water molecules as red spheres). The anomalous difference map computed from the diffraction data measured at the zinc absorption K-edge and contoured at 4σ is shown in orange mesh, and the 2Fo-Fc electron density map at 2σ is drawn as a blue mesh around the side chains of the active-site residues.

The catalytic mechanism
Molinate hydrolase shares strong similarities with the phenylurea hydrolases regarding active-site geometry and the pH dependence of kinetic parameters, i.e., maximum enzyme activity between pH 6.5 and 8.5 [5,7]. Thus, it is rather plausible that this enzyme follows the consensual catalytic mechanism characterized by a nucleophilic hydroxide, a metal-bound water molecule, and an attack on the carbonyl group while a catalytic acid residue protonates the amide nitrogen (in our rare case, the thiol sulphur atom) [8,9,11] (Fig 8). The protonated intermediate is then decomposed by cleavage of the C-N (in our case, C-S) bond.

Product inhibition
The effect of the reaction products (azepane-1-carboxylate and ethanethiol) on molinate hydrolase activity was followed by an HPLC assay performed in the presence of different concentrations of these products. Ethanethiol was found to inhibit the enzyme, and this inhibitory effect is likely to occur due to the interaction of the thiol group with the metal ion of the active site. This hypothesis was tested and confirmed by the observation that cysteine also inhibits the enzyme activity, while serine and methionine do not. In addition, other thiolated compounds, such as β-mercaptoethanol, also inhibited molinate hydrolase, suggesting that coordination of the thiol group to the catalytic metal renders the enzyme inactive. Thus, assuming a mechanism of competitive enzymatic inhibition, the inhibition constant, Ki, was determined to be 0.30 mM (GraphPad Prism version 6.00).
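Given the competitive-inhibition model used to extract Ki, the effect of ethanethiol on the apparent Km can be illustrated with a short sketch. The Km (0.275 mM) and Ki (0.30 mM) values are taken from the text; the inhibitor concentrations used below are arbitrary illustrations.

```python
def apparent_km(km_mM, inhibitor_mM, ki_mM):
    """Apparent Km under classical competitive inhibition:
    Km,app = Km * (1 + [I] / Ki)."""
    return km_mM * (1 + inhibitor_mM / ki_mM)

# Values taken from the text: Km = 0.275 mM (recombinant enzyme, molinate),
# Ki = 0.30 mM (ethanethiol). Inhibitor concentrations are illustrative.
for i_mM in (0.0, 0.3, 3.0, 6.0):
    print(f"[ethanethiol] = {i_mM} mM -> Km,app = {apparent_km(0.275, i_mM, 0.30):.3f} mM")
```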
The decrease in enzyme activity when this product accumulates is in agreement with the behaviour of G. molinativorax ON4T: previous studies demonstrated that this organism is not able to grow on molinate in axenic culture at molinate concentrations over 2 mM, due to the toxic effect of ethanethiol [3,4]. At higher molinate concentrations, G. molinativorax ON4T grows only in the presence of other consortium members, which are able to degrade ethanethiol and are thus responsible for the removal of this sulphur compound [3,4].

Structure-guided engineering of molinate hydrolase
In molinate hydrolase, as in many other amidohydrolases, access to the active site is restricted by the many loops that link the β-strands and α-helices surrounding the catalytic site. Some of these loops contain bulky residues that may act as gatekeepers. We selected Arg187, Phe253, and Phe346 as the most promising residues to play this role (Fig 10). Single point mutations of the three residues to alanine were produced. The activity of the aforementioned single mutants towards the degradation of molinate and thiobencarb was assessed at 25 °C, pH 7.4 (Fig 11). Mutant Phe346Ala was unable to degrade molinate (data not shown). The other two mutants catalyzed the reaction, although with distinct kinetics: Arg187Ala was significantly more efficient than Phe253Ala. When compared with the recombinant enzyme activity [5] and analyzed by the Michaelis-Menten model, Arg187Ala shows lower affinity towards molinate (Km = 2.37 mM versus Km = 0.275 mM for the recombinant enzyme), while the turnover number remains very similar (Arg187Ala: kcat = 38.1 min⁻¹; recombinant: kcat = 39.7 min⁻¹).

Thiobencarb is not processed by molinate hydrolase (Fig 11), as we have reported earlier [5]. By contrast, both mutants Arg187Ala and Phe253Ala degraded this thiocarbamate. Phe253Ala was consistently less efficient than Arg187Ala, which may be due to a depletion of the catalytic metal associated with this specific mutation. Mutant Arg187Ala showed a kcat/Km ratio of 44.2 ± 6.5 mM⁻¹ min⁻¹, which is of the same order of magnitude as the corresponding value obtained with molinate, an indication that Arg187Ala catalyses the reaction of both substrates (molinate and thiobencarb) with similar efficiency. Due to the low solubility of thiobencarb, it was not possible to increase the substrate concentration enough to enable an accurate estimation of kcat. We focused our study on thiobencarb because it is also applied in rice fields and is structurally distinct from molinate. Moreover, thiobencarb is a chlorinated thiocarbamate, a particular class that attracts high environmental concern. Herein, we confirmed that the Arg187Ala mutant is able to degrade both thiocarbamates. Hence, both the recombinant and Arg187Ala mutant enzymes are excellent prospective candidates for application in the bioremediation of rice field runoff water.

Conclusions
In summary, we determined the crystal structure of molinate hydrolase, which allowed us to propose a catalytic mechanism for the reaction of molinate in light of the general structural and catalytic knowledge accumulated within the amidohydrolase superfamily. Furthermore, we disclosed why excess molinate in axenic G. molinativorax ON4T culture inhibits its growth: molinate hydrolase is inhibited by ethanethiol, one of the products of molinate hydrolysis. Free thiol groups show a propensity to occupy the free coordination sites of the catalytic metal. Knowledge of the enzyme's three-dimensional structure allowed us to design mutants able to degrade other thiocarbamates. One mutant, Arg187Ala, was identified that is able to catalyze the reaction with at least molinate and thiobencarb. The engineered specificity modulation seems to be accompanied by a decline in affinity towards the original substrate, molinate. Nevertheless, mutant Arg187Ala can efficiently degrade structurally different thiocarbamates, thus paving the way for the development of a multi-herbicide decontamination system.
Production and purification of recombinant and mutant molinate hydrolase
To express the recombinant SeMet variant, the biomass from a freshly streaked plate of a fresh transformation of E. coli B834 (DE3) with the expression vector pASKmolA (obtained as previously described [5]) was resuspended and grown for 2 h at 37 °C and 130 rpm in 5 mL of LB medium containing 100 μg ml⁻¹ ampicillin; appropriate amounts of the starter culture were used to inoculate the main culture (minimal medium) to an initial OD600nm of 0.06. For the expression of recombinant molinate hydrolase and the mutant forms, E. coli JM109 or B834 (DE3) was transformed with the expression vector pASKmolA or pASKmolAmt240/pASKmolAmt282/pASKmolAmt302/pASKmolAmt187/pASKmolAmt253/pASKmolAmt346 (named according to the residue mutated to alanine). The biomass from a freshly streaked plate was resuspended in 5 ml LB medium supplemented with 100 μg ml⁻¹ ampicillin. Each resuspended plate was used to inoculate 1 L of main culture (LB with 100 μg ml⁻¹ ampicillin). The main culture, identical for all expressed forms of the protein, was grown at 30 °C and 130 rpm until an OD600nm of 0.7; at that point, the cells were subjected to a cold shock and the temperature was lowered to 15 °C. Induction was performed at an OD600nm of 1.0 with 0.2 μg ml⁻¹ anhydrotetracycline. Cells were harvested (3985 × g, 20 minutes, 4 °C; Avanti J26XPI, Beckman Coulter, rotor JLA 8.1000) after 18-20 h of incubation at 15 °C with 130 rpm orbital shaking.

Crystallization conditions
After purification, protein samples were concentrated to 5-10 mg ml⁻¹ (quantification by absorbance at 280 nm) and the initial buffer (100 mM Tris-HCl pH 8.0, 150 mM NaCl) was exchanged for 20 mM Tris-HCl pH 8.0. The protein aliquots were centrifuged (13,000 rpm, 30 min at 4 °C) prior to the crystallization trials, which were carried out by both the sitting-drop and the hanging-drop vapour diffusion techniques at 20 °C in 24-well plates, mixing 1.5 μl of the protein solution with 1.5 μl of the reservoir solution and equilibrating against a 500 μL reservoir. Several commercial crystallization screening kits were tested. Crystals were obtained within one week using two different conditions: (1) 0.095 M sodium citrate tribasic dihydrate, pH 5.6-6.4, 19% v/v 2-propanol, 19% polyethylene glycol 4000, 5% v/v glycerol; and (2) 0.1 M BIS-TRIS, pH 6.5, 45% polypropylene glycol P400. The crystals could be directly flash-frozen for data collection, and the better-diffracting crystals were those grown using condition (1). The initial X-ray diffraction studies with the SeMet derivative of recombinant molinate hydrolase allowed the structure determination of the metal-depleted form of the enzyme. Additional crystallization trials of recombinant molinate hydrolase were performed with crystallization condition (1) supplemented with either 4 mM or 8 mM zinc chloride.

Crystal structure determination of the SeMet derivative of molinate hydrolase
Data collection and processing: A single-wavelength anomalous dispersion (SAD) data set to 2.85 Å resolution was measured at the selenium absorption K-edge from a flash-cooled crystal on ESRF beamline ID14-4. The peak energy was selected from a fluorescence scan using CHOOCH [12].
Diffraction images were processed with the XDS Program Package [13] and the diffraction intensities converted to structure factors in the CCP4 format [14]. A random 5% sample of the reflection data was flagged for R-free calculations [15] during model building and refinement. A summary of the data collection statistics is presented in Table 1. The crystal belonged to one of the orthorhombic space groups I222 or I2₁2₁2₁, with unit cell dimensions a = 128.44, b = 230.18, c = 264.83 Å. Matthews coefficient calculations [16] suggested the presence of 8 molecules in the asymmetric unit, with Vm = 2.24 ų Da⁻¹ and a predicted solvent content of 45%.

Structure solution and crystallographic model building: The Se heavy-atom substructure was determined with Shake-and-Bake [17] as implemented in the BnP interface [18], using the SAD data to 3.5 Å resolution. Based on the most likely asymmetric unit contents, 168 Se sites were expected, and the calculations were performed in both possible space groups. A few solutions (10 in 1000 trials) were obtained only in I2₁2₁2₁, as indicated by a bimodal histogram of the minimal function (Rmin) for 1000 trial substructures. Since the refined heavy-atom occupancies did not show a sharp drop near the expected number of sites, the top 100 sites were input to a maximum-likelihood heavy-atom parameter refinement using autoSHARP [19]. The autoSHARP calculations eliminated some sites and located others, for a final total of 81. The centroid SHARP phases were further improved by an optimizing density modification procedure using SOLOMON [20], which suggested a solvent content of 63.8%. These results prompted a reinterpretation of the asymmetric unit contents, and it was concluded that there were only 4 monomers instead of the 8 initially expected, corresponding to Vm = 4.48 ų Da⁻¹ and a predicted solvent content of 72.5%. Phasing and phase refinement statistics are listed in Table 1. The electron density map obtained was of excellent quality, and about 1824 of the expected 1984 residues could be built and sequenced automatically with Buccaneer/REFMAC [21,22]. The autobuilding procedure gave R and R-free values of 0.252 and 0.275, and the model was completed with Coot [23].
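The reinterpretation of the asymmetric unit contents above follows directly from the Matthews coefficient arithmetic, which the following sketch reproduces. The unit cell and space group are taken from the text; the ~54.6 kDa monomer mass is an assumption chosen so that four monomers reproduce the reported Vm of 4.48 ų Da⁻¹.

```python
def matthews(cell_volume_A3, n_sym_ops, n_monomers, monomer_mass_Da):
    """Matthews coefficient Vm (A^3/Da) and estimated solvent fraction."""
    vm = cell_volume_A3 / (n_sym_ops * n_monomers * monomer_mass_Da)
    solvent = 1 - 1.23 / vm  # standard approximation (Matthews, 1968)
    return vm, solvent

# Unit cell from the text: a = 128.44, b = 230.18, c = 264.83 A; space group
# I2(1)2(1)2(1) has 8 symmetry operations. The ~54.6 kDa monomer mass is an
# assumption inferred so that 4 monomers reproduce the reported Vm = 4.48.
V = 128.44 * 230.18 * 264.83
for n in (8, 4):
    vm, solv = matthews(V, 8, n, 54_600)
    print(f"{n} monomers: Vm = {vm:.2f} A^3/Da, solvent ~ {solv:.1%}")
# -> 8 monomers: Vm = 2.24, solvent ~ 45%; 4 monomers: Vm = 4.48, solvent ~ 72.5%
```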
Data collection, processing, and refinement of the zinc-bound molinate hydrolase
Crystals of the zinc-bound molinate hydrolase were either isomorphous to the orthorhombic SeMet recombinant molinate hydrolase crystals or belonged to the monoclinic space group C2. The position of the bound zinc ions was determined from a single-wavelength anomalous diffraction data collection to 2.27 Å resolution at the Zn absorption edge, determined experimentally on ESRF beamline ID23-1 (λ = 1.265 Å; Grenoble, France). The diffraction images were processed in the same way as the SeMet derivative dataset. The crystal belonged to space group C2, with cell dimensions a = 367.48, b = 98.86, c = 131.20 Å and β = 109.59°. Initial molecular replacement phases were generated with PhaserMR [26], using one monomer of the SeMet derivative structure as the initial model; in total, 8 monomers were present in the asymmetric unit. The top 8 positive peaks in the F⁺−F⁻ anomalous difference Fourier map were assigned to the Zn²⁺ ions; they were located in equivalent positions in each monomer. Despite the modest peak heights, between 5 and 7σ, their positions were consistent with metal coordination geometry and with the position of the metal ions in structurally related amidohydrolases. Furthermore, site-directed mutagenesis of the metal-coordinating residues confirmed a decline in enzymatic activity. The occupancy of the metal ions was set to 0.4 during refinement to adjust their isotropic atomic displacement parameters to their local environment. Data collection and refinement statistics are shown in Table 2. The final model was obtained after further cycles of building and refinement, carried out with Coot [27] and REFMAC5 [28], respectively. Non-crystallographic symmetry was taken into account during refinement. Figures of protein structures were generated using PyMOL [29]. Atomic coordinates have been deposited in the Protein Data Bank with accession code 4UB9.

In vitro mutagenesis
To confirm the metal-dependent active site and for the structure-guided engineering of molinate hydrolase, site-directed mutagenesis was performed with Pfu DNA polymerase (Thermo Scientific) or PfuTurbo DNA polymerase (Agilent Technologies). Mutagenic primers (forward and reverse; sequences provided upon request) were designed, and the parental pASKmolA template was degraded by digestion with the DpnI restriction enzyme, followed by transformation into E. coli DH5α. Selected colonies were picked and grown overnight at 37 °C and 150 rpm in 10 ml LB supplemented with 100 μg ml⁻¹ ampicillin. Mutant plasmid DNA aliquots were prepared using the GRS Plasmid Purification Kit (Grisp Research Solutions), and the desired mutations were confirmed by sequencing (STABvida, Portugal). Correct folding of the mutant proteins was evaluated by circular dichroism (CD). CD spectra were recorded on a J-815 spectrometer (Jasco, Japan) equipped with a Peltier temperature control system set at 20 °C, using protein samples at 0.1 mg ml⁻¹ diluted in 10 mM Tris-HCl pH 8.0. Spectra were recorded in a 1 mm pathlength quartz cell (Hellma Analytics, Germany) from 260 to 190 nm, at 50 nm min⁻¹, with a D.I.T. of 2 s, a data pitch of 0.2 nm, and 16 accumulations per measurement. Spectra were smoothed using the Savitzky-Golay algorithm and corrected for the blank sample.

Enzyme activity assays and evaluation of inhibitors
The activity of recombinant and active-site-mutant molinate hydrolase was assayed by following substrate depletion using high-performance liquid chromatography (HPLC), as previously described [4]. Assays were carried out using 0.1 μM recombinant molinate hydrolase or active-site mutants and 0.5 mM molinate in 50 mM phosphate buffer pH 7.4. Molinate depletion was followed for up to 30 min at room temperature (~25 °C). The initial velocity of molinate degradation was calculated by linear regression of the substrate concentration versus time plot. The effect of ACA and ethanethiol on molinate hydrolase activity, at concentrations up to 10 and 6 mM, respectively, was studied by adding these reaction products to the activity assays. ACA was obtained as previously described [3]. The inhibition constant, Ki, was determined using the competitive enzyme inhibition algorithm embedded in GraphPad Prism version 6.00. The effect of the thiol group on molinate hydrolase activity was further confirmed by UV spectrophotometry. A control reaction was performed by incubating 1 μM recombinant molinate hydrolase with 0.5 mM molinate, and substrate depletion was followed at 220 nm (the wavelength of maximum molinate absorption).
Then, 2-mercaptoethanol (3 mM), L-cysteine (1.5 mM), L-serine (1.5 mM), and L-methionine (1.5 mM) were added individually to further confirm the effect of the thiol group on enzyme inhibition. Enzymatic kinetic assays for the mutants Arg187Ala, Phe253Ala, and Phe346Ala were performed by HPLC. The protein concentration was fixed at 0.1 μM, while the substrate concentrations were varied (molinate, 0.1 to 1 mM; thiobencarb, 0.02 to 0.08 mM). As before, the initial velocity of substrate degradation was calculated by linear regression of the substrate concentration versus time plot. Kinetic parameters were then calculated by non-linear regression of the initial velocity values over increasing substrate concentrations, with an Et value of 0.1 μM, using GraphPad Prism version 6.00.
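As a rough illustration of this non-linear regression step, the sketch below fits the Michaelis-Menten equation to initial-rate data with SciPy rather than GraphPad Prism. The rate values are hypothetical; only the enzyme concentration (0.1 μM) is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Hypothetical initial-rate data (substrate in mM, v in uM/min); the study
# fitted real HPLC-derived rates in GraphPad Prism instead.
s = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
v = np.array([1.1, 1.9, 2.9, 3.5, 3.9, 4.2])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(5.0, 0.5))
Et = 0.1  # enzyme concentration in uM, as in the assays
print(f"Vmax = {vmax:.2f} uM/min, Km = {km:.3f} mM, kcat = {vmax / Et:.1f} min^-1")
```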
Effectiveness of workers' general health examination in Korea by health examination period and compliance: retrospective cohort study using nationwide data

Background: Our study evaluated the effectiveness of the Workers' General Health Examination by health examination period and compliance.
Methods: A retrospective cohort of the health examination participants in 2006 (baseline year: N = 6,527,045) was used. We identified newly occurring cardio-cerebrovascular disease over 7 years (from 2007 to 2013). After stratification by age, sex, and national health insurance type, we identified the 7-year cumulative incidence of cardio-cerebrovascular disease by health examination compliance and estimated its relative risk by health examination period and compliance.
Results: The compliant group presented a lower cumulative incidence of cardio-cerebrovascular disease than the non-compliant group; this result was consistent across sex, working age (40s and 50s), and workplace policyholders. The relative risk of cardio-cerebrovascular disease by health examination period (1 and 2 years) showed statistically significant results for ischemic heart disease in male participants. Among men in their 40s, office workers (2-year period) presented a statistically higher relative risk of ischemic heart disease than non-office workers (1-year period: 1.03; 95% confidence interval, 1.02-1.03). However, there were no consistent results for ischemic cerebrovascular disease and hemorrhagic cerebrovascular disease in men or for cardio-cerebrovascular disease in women.
Conclusion: A 1-year period of Workers' General Health Examinations in non-office workers had a more significant preventive effect on ischemic heart disease than a 2-year period in office workers among working-age (40s-50s) men. It is, however, necessary to consider that the prevention of cardio-cerebrovascular disease may be partially explained by occupational characteristics rather than by health examination period.

Background
Periodic health examinations are systematic scheduled screenings performed for more than one risk factor or disease in more than one organ system [1]. The purpose of periodic health examinations in Korea is to detect target diseases at an early stage by screening the asymptomatic general population and ultimately to reduce the mortality rate of the target diseases [2]. The Workers' General Health Examination (WGHE), one of the periodic health examinations currently conducted in Korea, has a similar purpose: to improve labor productivity and protect workers' health by finding ordinary or occupational diseases at an early stage and offering participants appropriate follow-up action [3]. The health status of workers is a main factor affecting company productivity. Therefore, it is important to perform WGHEs effectively, not only for corporate profit and workers' health but also for national health [4,5].

The effectiveness of periodic health examinations has been estimated in several studies. In Japan, comprehensive periodic health examinations have been conducted for many years. Cardiovascular disease-specific mortality [6] was lower among participants than among non-participants in Japanese health check-ups; overall mortality [6-8] was also lower among the participating group. Prevention of cardiovascular disease is also the main target of WGHEs in Korea [2]. Conversely, in Korea, the study of periodic health examination effectiveness, especially of WGHEs, has been insufficient.
Although several studies have analyzed periodic health examination effectiveness in Korea, most of them targeted only the National General Health Examination (NGHE), one of the periodic health examinations now performed by the National Health Insurance System (NHIS). One simulation analysis by the NHIS of NGHE cost-effectiveness [9] showed a 0.76 increase in quality-adjusted life years per diabetes patient who participated in the NGHE. Other reports showed statistically lower mortality [10,11] and cardio-cerebrovascular disease risk [10] among NGHE participants than among non-participants. However, no study has considered the health examination period (1 vs. 2 years) as an independent variable in the analysis.

Distinguishing WGHEs from NGHEs was not meaningful in the previous studies, because WGHEs could be replaced by NGHE results after 1995 [12]. However, the WGHE retains its own characteristics under the labor act from the perspectives of workers' health [12] and occupational health surveillance [13]. Furthermore, WGHE periods are classified into 1-year (non-office workers) and 2-year (office workers) periods for workplace policyholders, and the influence of the period on WGHEs has not yet been studied. Therefore, an effectiveness analysis of WGHEs and a study of this unverified variable are necessary.

The present study evaluated WGHE effectiveness by health examination period and compliance. For this purpose, we created a retrospective cohort based on the health examination participants in 2006 and identified newly occurring cardio-cerebrovascular disease (CCVD) over 7 years (from 2007 to 2013).

Study population
The responsibility for workplace policyholders' health examinations was transferred from the Ministry of Labor to the Ministry of Welfare in 1995 by the Occupational Safety and Health Act [12,14]. Therefore, the targets of this study, WGHE examinees (examinations now performed under the Ministry of Labor), are the same as workplace policyholders undergoing NGHEs (now performed by the Ministry of Welfare). Thus, we regarded workplace policyholders' NGHE results (from 2002 to 2006) as WGHEs. Moreover, we were also provided regional policyholders' NGHE data for the same years and combined them in the analysis.

We designed a retrospective cohort study based on the 2006 NGHE results. From the total NGHE participants in 2006 (N = 15,053,761), 8,408,218 participants were identified (Fig. 1). We excluded participants with inadequate NGHE results or NHIS benefit claim records. Participants aged over 70 or under 20 were also excluded. Participants with past CCVD in the NHIS benefit claim record were excluded so that newly occurring CCVD could be identified. The final study population (N = 6,527,045) was confirmed. We then stratified this study population by age, sex, and national insurance type. In this procedure, we classified national insurance types into workplace and regional policyholders; workplace policyholders were further divided into office and non-office workers. Public officers and public educational personnel and staff were excluded from the analysis because of the small number of examinees.

Definition of health examination compliance
The study population in the present research was divided into two subgroups, compliant and non-compliant, by WGHE participation. The classification criterion was whether examinees participated in all health examination chances during the 5 years from 2002 to 2006. Examinees who participated in all health examination chances were classified into the compliant group.
Conversely, examinees who omitted at least one health examination chance were classified as non-compliant (Table 1). Because health examination periods differ by national health insurance type, the classification criterion for the compliant group varied by national health insurance type. Non-office workers were defined as compliant when they participated in all 5 health examination chances (from 2002 to 2006), because their health examination period was 1 year. Office workers and regional policyholders were defined as compliant when they participated in all 3 health examination chances, because their health examination period was 2 years.

Outcome ascertainment
WGHE effectiveness was assessed by the number of newly occurring CCVD cases (ischemic heart disease, ischemic cerebrovascular disease, and hemorrhagic cerebrovascular disease) after 2006. Each CCVD case was identified by matching the participants' datasets with the NHIS benefit claim records through temporary identification numbers that omit any personal information. To identify newly occurring CCVD, we excluded participants who had CCVD before 2006 and then identified newly occurring CCVD from 2007 to 2013. Diseases with causes other than vessel ischemia or hemorrhage (such as recurrent disease or disease due to trauma) were not considered in this analysis. The International Classification of Diseases, 10th revision (ICD-10) codes for CCVD are presented in Table 2.

Statistical analysis
This study was performed in the following order. As a first step, we divided the study population into two subgroups by health examination compliance (Table 3) and identified each group's 7-year cumulative CCVD incidence (Table 4). This revealed possible differences in the preventive effects of health examination between the compliant and non-compliant groups. We calculated the cumulative incidence, in percentages, as the number of newly developed CCVD cases divided by the number of WGHE participants in the baseline year 2006. In the second step, we identified the relative CCVD risk by health examination compliance (Tables 5 and 6). For this purpose, we targeted non-office workers, who have the most chances for health examinations (1-year period), and calculated the relative risk of the compliant group (reference: non-compliant group). The results were classified by sex and age (40s and 50s). Finally, we identified the relative CCVD risk by health examination period (1-year vs. 2-year; Tables 5 and 6). For this purpose, we calculated the relative risk of the 2-year group (regional policyholders and office workers) relative to the 1-year group (non-office workers). Participants in this analysis included only the compliant group. In this procedure, all relative risks were calculated after stratification of the study population by sex and age (40s and 50s). Relative risk (RR) and 95% confidence interval (CI) for CCVD were estimated with the SAS procedure PROC GENMOD, and SAS ver. 9.3 was used for all statistical analyses.
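To make the two analysis steps concrete, the sketch below classifies compliance by insurance type and computes an unadjusted relative risk with a Wald confidence interval. It is a simplification of the stratified PROC GENMOD models actually used, and the counts in the example are hypothetical.

```python
import math

def required_exams(insurance_type):
    """Exams needed over 2002-2006 to count as 'compliant': 5 for non-office
    workers (1-year period), 3 for office workers and regional policyholders
    (2-year period)."""
    return 5 if insurance_type == "non-office" else 3

def is_compliant(insurance_type, exams_attended):
    return exams_attended >= required_exams(insurance_type)

def relative_risk(cases_exposed, n_exposed, cases_ref, n_ref):
    """Unadjusted 2x2 relative risk with a 95% Wald CI on the log scale.
    A simplification of the stratified PROC GENMOD models in the study."""
    rr = (cases_exposed / n_exposed) / (cases_ref / n_ref)
    se = math.sqrt(1 / cases_exposed - 1 / n_exposed
                   + 1 / cases_ref - 1 / n_ref)
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts for illustration only (not the study's data):
print(is_compliant("non-office", 5))            # True
print(relative_risk(480, 50_000, 620, 60_000))  # RR with (lower, upper) CI
```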
Results
Baseline characteristics of the study population
A total of 6,527,045 participants were enrolled in the cohort study population from the original 2006 NGHE data (Table 3). The non-compliant group was larger than the compliant group. The proportion of men in the compliant group was larger than the proportion of women. The age group with the highest proportion in the compliant group was the 20s, followed in decreasing order by the 30s, 40s, and 50s; the proportion of participants in their 60s in the compliant group was very low. Among the three national insurance types, non-office workers made up the highest proportion of the compliant group, followed by office workers and regional policyholders. Among regional policyholders, participants in all health examination chances over the 5 years were relatively few (9.7%) compared with the other national insurance types.

Distribution of cardio-cerebrovascular disease by compliance
The distribution of cumulative CCVD incidence by health examination compliance is presented in Table 4. The cumulative incidence of ischemic heart disease (IHD) in the compliant group was lower than in the non-compliant group among male non-office workers in their 40s; the office worker results were similar. In addition, male non-office workers, who had annual (1-year) health examination chances, showed a smaller IHD incidence gap between the compliant and non-compliant groups (0.93%) than office workers with 2-year health examination chances (1.69%). Conversely, regional policyholders in the compliant group showed a higher IHD incidence than those in the non-compliant group. The above results were consistent for men and women in both the 40s and 50s age groups.

The ischemic cerebrovascular disease (ICVD) results were similar to those for IHD. The cumulative incidence in the compliant group was lower than in the non-compliant group among male non-office workers in their 40s; this result was consistent in office workers. In addition, the ICVD incidence gap between the compliant and non-compliant groups among male non-office workers (1-year period; 0.4%) was smaller than among office workers (2-year period; 0.55%). Conversely, regional policyholders' cumulative ICVD incidence in the compliant group (5.85%) was higher than in the non-compliant group (4.55%).

Hemorrhagic cerebrovascular disease (HCVD) incidence in the compliant group of office and non-office workers was low; the low HCVD incidence rate among the compliant group of workplace policyholders paralleled the IHD and ICVD incidence analyses. In regional policyholders, by contrast, HCVD incidence in the compliant group was lower than in the non-compliant group, a pattern different from that of IHD or ICVD incidence.

Relative risk of cardio-cerebrovascular disease by compliance
The relative CCVD risk by health examination compliance is presented in Tables 5 and 6. Male non-office workers in the compliant group showed a lower CCVD risk than those in the non-compliant group (Table 5). This result for male non-office workers was consistent for the other diseases, ICVD and HCVD, and across both age groups (40s and 50s). However, the relative CCVD risks for women in the compliant group did not show statistically significant results (Table 6).

Relative cardio-cerebrovascular disease risks between 1-year and 2-year groups
The relative risks of CCVD by health examination period (1-year and 2-year) are presented in Tables 5 and 6. In this analysis, only IHD in male participants showed statistically significant results, whereas women did not present consistent results. Male office workers in their 40s (2-year period) presented a statistically higher relative IHD risk than non-office workers (1-year period); the result was the same for men in their 50s. However, there were no consistent results for ICVD and HCVD in men or for CCVD in women. The relative risks of IHD, ICVD, and HCVD in regional policyholders (2-year period), by contrast, were statistically higher than in non-office workers (1-year period); these results were consistent in both sexes (Tables 5 and 6).
Discussion
"Health examination compliance" is a new variable reflecting the participation rate in periodic health examinations over several years. Using the consistency of annual participation as a health measure has not been attempted in past research; previous research on periodic health examinations in Korea usually analyzed single-year participation [9,10,15]. With health examination compliance as an analysis variable, it is possible to assess not only the single-year health effects of periodic health examinations but also their multi-year health effects. We identified that the compliant group had a lower cumulative CCVD incidence than the non-compliant group (Table 4), consistently in both sexes among workplace policyholders. Moreover, we identified that the relative CCVD risk in the compliant group was statistically lower than in the non-compliant group for male non-office workers (Table 5). Therefore, we suggest that health examination compliance positively affects CCVD prevention among workplace policyholders.

Further analysis compared the cumulative CCVD incidence between health examination periods (1-year vs. 2-year). We identified that the relative IHD risk of male office workers (2-year period) was statistically higher than that of male non-office workers (1-year period; Table 5). Although the analysis was limited in that the results were not statistically significant or consistent across both sexes and all diseases, participants who received 1-year health examinations showed better preventive effects against IHD than those with 2-year health examinations among working-age (40s-50s) men.

There are several studies of periodic health examination effectiveness; however, a consensus remains lacking. One meta-analysis using only randomized controlled trials published from 1963 to 1999 (14 trials) revealed that periodic health examination has no beneficial effect on total mortality (RR = 0.99, 95% CI: 0.95 to 1.03) or cardiovascular mortality (RR = 1.03, 95% CI: 0.91 to 1.17) [1]. Another systematic review, using 23 observational studies and 10 randomized controlled trials published from 1973 to 2004, also reported that periodic health examinations may be related to increased use of preventive medical services and reduced patient worry, but that additional research data are needed to estimate their long-term benefit [16]. Conversely, several investigations conducted in Japan showed that periodic health examinations had positive effects on total mortality (hazard ratio [HR] = 0.74, 95% CI: 0.62 to 0.88 [6]; HR = 0.70, 95% CI: 0.56 to 0.88 [7]; HR = 0.83, 95% CI: 0.69 to 0.99 [8]) and cardiovascular disease mortality (HR = 0.65, 95% CI: 0.44 to 0.95) [6] in men. The effectiveness of mass periodic health examinations is still controversial because of the difficulty of conducting large clinical trials of periodic health examination [6]. To our knowledge, despite these controversial results, there are no studies using health examination period as an independent variable.

The relative CCVD risks between office and non-office workers showed subtle differences, ranging from 0.93 to 1.03 (Tables 5 and 6). These results may be driven by the large study population (N = 6,527,045) rather than by the effect of health examinations. However, such a subtle difference might still represent a meaningful result from the perspective of public health and prevention. Moreover, male non-office workers' (1-year period) CCVD incidence gaps between the compliant and non-compliant groups were smaller than office workers' (2-year period; Table 4) for both IHD and ICVD. Thus, giving more participation chances for health examinations can narrow the health-effect gaps between subgroups classified by compliance.

Two perspectives are possible on why differences in the effectiveness of health examinations were identified for IHD but not for ICVD or HCVD. One possibility is that, unlike IHD, ICVD and HCVD genuinely do not differ in their effects by health examination period. Another possibility lies in disease characteristics such as the peak age and etiology of IHD and stroke (including both ICVD and HCVD). Although IHD and ICVD share the same cause (arteriosclerosis), the two diseases differ in peak age and incidence, as vessel ischemia affects different organs (heart and brain) [17]. The peak age for IHD is the 50s-60s, and 36% of IHD patients are under 45 years old [17,18]. Conversely, ICVD occurs at a relatively older age than IHD: ICVD is rare before age 40, and its prevalence doubles every 10 years after age 55, so the highest prevalence (about 27%) is found at over 80 years [19]. Therefore, a 7-year follow-up period might be insufficient to detect the effectiveness of health examinations for ICVD, because ICVD occurs at relatively older ages than IHD (50s-60s). Further, the pathophysiology of HCVD is fundamentally different from that of IHD: blood vessel rupture is the main cause of HCVD. In addition, the incidence of HCVD is 24.6 per 100,000 person-years, one-tenth of IHD's incidence (434 per 100,000 person-years) [17,19]. HCVD's relatively low incidence makes it difficult to draw statistically significant results, whereas the IHD incidence analysis presented significant results.

Statistical significance was not consistent for women's CCVD incidence by health examination period. Two perspectives are also possible for this result: one possibility is that health examination has no preventive effect in women; another is the difference in disease epidemiology between the sexes.
Thus, giving more chances to participate in health examinations can narrow the health-effect gaps between subgroups classified by compliance.

Two perspectives are possible on why differences in the effectiveness of health examinations were identified for IHD but not for ICVD or HCVD. One possibility is that, unlike IHD, ICVD and HCVD genuinely do not differ in effect by health examination period. The other lies in disease characteristics such as the peak age and etiology of IHD and stroke (including both ICVD and HCVD). Although IHD and ICVD share the same cause (arteriosclerosis), the two diseases differ in peak age and incidence, as vessel ischemia affects different organs (heart and brain) [17]. The peak age for IHD is the 50s-60s, and 36% of IHD patients are under 45 years old [17,18]. Conversely, ICVD occurs at a relatively older age than IHD: it is rare before age 40, and its prevalence doubles every 10 years after age 55, so the highest prevalence (about 27%) is observed over age 80 [19]. A 7-year follow-up period might therefore be insufficient to detect the effectiveness of health examinations for ICVD, because ICVD occurs at relatively older ages than IHD (50s-60s). Further, the pathophysiology of HCVD is fundamentally different from that of IHD: blood vessel rupture is the main cause of HCVD. In addition, the incidence of HCVD is 24.6 per 100,000 person-years, one-tenth of IHD's incidence (434 per 100,000 person-years) [17,19]. HCVD's relatively low incidence makes it difficult to draw statistically significant results, whereas the IHD analysis did reach significance.

Statistical significance was not consistent for women's CCVD incidence by health examination period. Again, two perspectives are possible: health examination may have no preventive effect in women, or disease epidemiology may differ between the sexes. IHD occurs 10 to 20 years later in women than in men, and IHD occurrence in women is rare before menopause [20]. Women's age at stroke occurrence is also later than men's, and the incidence rate is 33% lower [17,21]. The etiology of ischemic stroke also differs between the sexes: large-vessel atherosclerotic stroke and associated coronary and peripheral artery diseases are more common in men, while cardiac embolism-related stroke is more common in women [22]. A 7-year follow-up period might therefore be insufficient to detect IHD and stroke in women, because women's CCVD incidence is lower than men's and occurs later. Further research with long-term follow-up could determine differences in health examination effectiveness between the sexes.

Relative CCVD risks in regional policyholders with 2-year health examination periods were higher than in non-office workers with 1-year periods, and these results were statistically significant in both sexes. However, this analysis requires careful attention. Selection bias from the healthy worker effect (HWE) [23] is possible between non-office workers and regional policyholders: healthy workers have greater potential to start their careers in better companies and to keep working longer than unhealthy workers [24]. Unhealthy workplace policyholders are therefore likely to have retired from their workplaces and become regional policyholders.
As a result, health-status differences may have arisen between the two selected groups; workplace policyholders may be relatively healthier than regional policyholders. Health examination compliance may be confounded by HWE in the same manner: healthy workers are more likely than unhealthy workers to participate in health examinations consistently, owing to better working environments [24]. The compliance variable may therefore be confounded by healthy workers' stable participation in health examinations.

The 1-year health examination period had a greater preventive effect on ischemic heart disease than the 2-year period; the reasons for this difference in CCVD risk by period should be identified in further studies. Although several reports have described mechanisms for the effectiveness of periodic health examinations, most concern possible benefits. Participants' poor health habits (e.g., smoking, alcohol drinking, irregular meals, lack of regular exercise) might be changed through medical counseling during periodic health examinations and periodic notification of results [1]. Identifying abnormal results (e.g., high blood pressure, glucose, or cholesterol) in the early stages of disease may also lead to early intervention and health management [1]. It is also possible that WGHEs had a positive effect on medical accessibility by improving the delivery of medical intervention: the more health examination opportunities people have, the more chances for medical intervention they have [16].

This study has some specific limitations. The first is the HWE, discussed in detail above. The second is the inaccuracy of benefit claim records and health examination results in the NHIS. In this research, we used ICD-10 codes from NHIS benefit claim records rather than hospital medical records. Benefit claim records request treatment charges from the NHIS; over-rated diagnosis coding is possible to avoid cutbacks in benefit claims [10], and diagnoses can be inaccurate in some instances [10,25]. Third, the 7-year follow-up period was insufficient to fully evaluate WGHEs; as noted above, this may be one reason that statistically significant results were not consistently shown for ICVD, HCVD, and women. Finally, there are possible confounding factors between office and non-office workers. Although we stratified the two subgroups by sex, age, and national insurance type, several confounders in the evaluation of health examinations remain. Socioeconomic status (income, education, and residential district) and lifestyle factors (smoking, alcohol consumption, and exercise) are well-known determinants of health examination participation [26][27][28][29]. In addition, several reports show that white-collar workers have a greater tendency toward chronic diseases than blue-collar workers: the risks of chronic diseases such as dyslipidemia, hypertension, and metabolic syndrome were higher in white-collar than in blue-collar workers [30,31], even when they worked at the same workplace [30]. Several studies have analyzed why chronic disease risks differ by job style; the sedentary work of white-collar workers can negatively influence health status through physical inactivity [30,32].
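The tables note that relative risks and 95% confidence intervals for cardio-cerebrovascular disease were estimated with the SAS PROC GENMOD procedure. As a rough illustration of the same idea outside SAS, the sketch below, on simulated data with made-up risks, fits a modified Poisson regression (log link with robust standard errors), a standard way to obtain relative risks for binary outcomes; it is an analogy, not the study's actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated cohort: exposure = non-compliant (1) vs. compliant (0),
# with a modestly higher event risk assumed in the non-compliant group.
rng = np.random.default_rng(0)
df = pd.DataFrame({"noncompliant": rng.integers(0, 2, 5000)})
p = np.where(df["noncompliant"] == 1, 0.05, 0.04)
df["event"] = rng.binomial(1, p)

# Poisson GLM with log link and robust (sandwich) variance, comparable
# in spirit to a PROC GENMOD relative-risk analysis.
model = smf.glm("event ~ noncompliant", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(model.params["noncompliant"])
ci = np.exp(model.conf_int().loc["noncompliant"])
print(f"RR = {rr:.2f}, 95% CI: {ci[0]:.2f} to {ci[1]:.2f}")
```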
WGHE results include lifestyle questionnaire data (e.g., smoking, alcohol drinking, regular meals, regular exercise), but we did not analyze the questionnaire data in this study. Other significant variables, such as working hours and the level of sedentary work, were not available from the questionnaire data. Because of the above limitations, including possible confounding factors, our study is limited in comparing office and non-office workers. However, the definitions classifying office and non-office workers in our database differ somewhat from those of blue- and white-collar workers in the articles cited above. According to the Occupational Health and Safety Act, sedentary workers who work in the same territory as, or are exposed to occupational environments similar to, manual workers (e.g., sedentary workers whose offices adjoin their firm's factory) are classified as non-office workers, even though their jobs consist only of paperwork. As a result, our categories (office and non-office worker) do not map onto past job classifications (white and blue collar, or non-manual and manual worker). This is one possible reason our study lacked some of the confounding factors noted above between white- and blue-collar workers; these job classifications must be kept in mind when interpreting the present results. Despite these limitations, our research has several valuable findings. First, this study was performed with a real dataset provided by the NHIS, not with simulation techniques: more than 6 million participants were analyzed using nationwide health examination data. Early studies usually used simulation techniques, and research using real datasets was not attempted [9,15,33]. Since then, Yoon et al. [10] analyzed a real dataset and Jee et al. [11] applied a cohort study design to NHIS nationwide data. Second, by defining the new concept of "health examination compliance," we presented a 5-year continuous participation rate rather than only a single-year participation rate. Several reports have presented single-year participation rates [14]; ours is the first to present multiyear participation rates. In 2006, 77% of workplace policyholders participated in NGHEs [12], but the compliant group that participated in every health examination opportunity over 5 years (2002 to 2006) was only 24% of the total cohort (Table 3). Both single- and continuous-year participation rates should be included in further analyses of periodic health examinations. Most importantly, however, we evaluated WGHE effectiveness by health examination period for the first time. WGHE periods in Korea differ between office (2-year) and non-office (1-year) workers; some non-office workers participate in health examinations every year while office workers participate biennially, so their health effects may differ. Previous studies analyzed health examination effectiveness only by participation and did not consider the health examination period as a descriptive variable [9][10][11]. For the first time, we showed that the 1-year WGHE period for non-office workers had a more significant preventive effect on IHD than the 2-year period for office workers among working-age (40s-50s) men. However, the prevention of cardio-cerebrovascular disease may be partially explained by occupational characteristics rather than by the health examination period. Our results should inform national health policy and support the necessity of further research.
Additional studies that adjust for variables such as participants' lifestyle and socioeconomic status, with long-term follow-up, are needed on the basis of this study.

Conclusion

Our study showed that the 1-year period for Workers' General Health Examinations in non-office workers had a greater preventive effect on ischemic heart disease than the 2-year period in office workers among working-age (40s-50s) men. In addition, the compliant group showed a lower 7-year cumulative cardio-cerebrovascular disease incidence than the non-compliant group. However, the prevention of cardio-cerebrovascular disease may be partially explained by occupational characteristics rather than by the health examination period. Building on these results, more systematic evaluations of the effects of Workers' General Health Examinations should be conducted, adjusting for the various determinants of health examination participation and using long-term follow-up.
Some Considerations on the Issue of Economic and Social Sustainability

To be implemented and analyzed according to the good rules of our relationship with nature, sustainability must be equipped with a theoretical scheme capable of helping us understand the dynamics of this relationship, together with the opportunities it offers to improve the development of the economic system. Essentially, it is a matter of acknowledging that, just like physics, the economy is subject to certain general and abstract laws. This is the case of the core inflation value, defined by central banks as a value close to 2%. If the economy moves along the track indicated by this value, we have confirmation that growth is developing regularly. This core inflation value is implicitly defined, without a clear specification. We can therefore regard it as an ideal value, like the great universal constants, which reports on an economic system developing according to the rules of natural compatibility. From this point of view, core inflation close to 2% is essentially a utopia, because it can be achieved only if global economic growth moves in full accordance with the nature around us. It follows that even if we can verify a base value close to 2% in the field, we are not actually in the best conditions, especially if the global economy is suffering from deflation, as it is today. Deflation, the tendency of prices to fall, is part of the complex set of messages sent by nature and by economic systems to signal that the economy is not doing well and has become unstable. Both inflation and deflation are messages that never contribute to the course of economic development; they arise and evolve in parallel with the appearance of the economic cycle in daily activity. In summary, this is a mechanism whose responsibility is to reduce the instability of economic systems by imposing pauses on them, and so to facilitate the return to the natural condition of development. It is a correction system based on the economic conjuncture, obviously distinct from stability, in which the economy grows and develops along a linear and constant incline.

The Instability of the Economy and Its Messages to the Community

Sustainability in economics must be understood as the relationship between humanity, with its economic and social activities, and the nature around us. According to this parameter for evaluating economic and social relations, we should admit that the relationship with nature may sometimes not be optimal; indeed, it is in some ways undermined by the pressure exerted by human beings through the non-rational use of natural resources. If this relationship with nature is deviated, the economy has lost the path of compatibility with nature and is thrown into a sub-world of instability. We can certainly say that a large part of the global economy today is in this condition of economic instability. But with what consequences for everyday life? The first and decisive symptom is the presence, inside economic systems, of a conjunctural development that is not linear but sinusoidal. In other words, economic growth follows the ups and downs of the conjuncture cycle: a recovery phase is in any case followed by a recession, a reduction in gross product. Not only that: if the instability continues, the pace of the economy changes for the worse, in the sense that the recovery phase shrinks further as the equilibrium relationship with nature worsens.
In the meantime, the declining trend in Gross Domestic Product (GDP) is extended over time and the economy contracts, causing serious social and economic effects. Economic instability is therefore very costly, and to return to the phase of stability and compatibility it is often necessary to support the natural process of correction. It follows that the normal operations of fiscal and monetary policy to boost a weak economy can give appreciable results if they do not oppose head-on the natural work of correcting the unstable economy. Now, it is true that after the 2008 financial crisis the use of extraordinary amounts of deficit spending avoided the worst damage of an unchecked financial crisis. But this effect, certainly positive, was obtained because financial policy on that occasion played a role concomitant with the natural process of rebalancing economic systems. In particular, fiscal policy seems called upon to limit the consequences of a very unbalanced distribution of wealth, to the detriment of the less favored part of the community or even of those at poverty level. Fiscal policy thus seems called upon to return to the least favored and poorest part of the community at least a portion of the wealth that would otherwise flow, as usual, toward the wealthiest top of society. Obviously, the maneuver has hidden sides. The first, and directly observable, is the undue growth of public debt, which in turn means weakness for the most fragile systems, which are therefore more exposed to the winds of recession. Unfortunately, this is not the only dark side. The fact remains that fiscal policy is called upon to play an essential role that at least partially compensates for the serious imbalances caused by the concentration of wealth in a few hands and the parallel spread of poverty. Nevertheless, the abnormal process that drags wealth upwards and depresses the most disadvantaged population groups is itself another symptomatic effect of the imbalance of economic systems in their relationship with nature. Now, how can we say that an economic system is running in accordance with natural compatibility? How can we say that we are on a correct path with respect to the evolution of the planet that feeds us? The first element to observe is precisely the development trend: whether it grows correctly and constantly, or instead follows a sinusoid that changes sign and direction. If there is instability, we will have the rhythm of the conjuncture cycle as a companion, together with a series of events such as anomalous wealth distribution and damage to the most disadvantaged classes. Conversely, in the case of stability, understood as a long-lasting and compatible relationship with nature, the development of the economic system tends to rise slightly and steadily over time. In other words, if the development path is compatible, economic growth follows, without interruption, a path that is slightly upward and constant over time. The advantages offered by the natural path of development compatibility are, in short, those sought by every good government really interested in social welfare. The problem is that once the path of imbalance and incompatibility has been taken, the suggestion is always to resort to the usual monetary and financial policy maneuvers, which are in any case unable to bring the deviated system back onto the path of a correct relationship with nature.
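As a toy illustration of this contrast, the following sketch (all parameters arbitrary assumptions, not estimates from the paper) compares the steady, slightly rising path of a stable economy with an unstable path whose conjunctural swings widen as instability persists:

```python
import numpy as np

years = np.arange(0, 40)
trend = 0.02  # steady growth along the compatibility path (illustrative)

# Stable economy: constant, slightly rising path.
stable = 100 * (1 + trend) ** years

# Unstable economy: the same trend distorted by a conjuncture cycle
# whose ups and downs deepen as instability persists.
amplitude = 0.01 + 0.002 * years
cycle = amplitude * np.sin(2 * np.pi * years / 8)
unstable = 100 * np.cumprod(1 + trend + cycle)

for y in (10, 20, 30):
    print(y, round(stable[y], 1), round(unstable[y], 1))
```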
The real problem is that the issue of sustainability and of the relationship with nature is often ignored. That relationship rests on an axiom about the use of natural resources: the resources offered by nature should be used rather than exploited. The ultimate goal of human beings should be to preserve natural resources for future generations. This irrevocable rule finds its most direct expression in the stability of economic systems, systems that, under compatibility, can grow at a steady, slightly upward pace. In other words, orderly and constant development is possible if we preserve natural resources for future generations. Preserving natural resources without exploiting or wasting them is therefore a matter of respect for the rule that assigns future generations the task of following the path of knowledge of mother nature. In other words, humankind has always had the transcendent task of interpreting and understanding the universe around us: starting with the Egyptians, passing through the Maya who deified the sun, and finally arriving at a growing group of researchers seeking today the rules that allowed the construction of the universe and that govern the solar system. Moreover, the entire community stands to benefit from the repercussions that these efforts of study and research can bring in terms of social and economic prosperity.

The Instability Symptoms Globally Pervading the Economy

In case of instability, economic systems are subject to the conjuncture cycle, a corrective mechanism that introduces recession into the scenario. Recession is essentially a corrective mechanism which, when left to act, can reduce the anomaly and thus allow a gradual return to stability and compatibility with nature. The correction mechanism par excellence is therefore the recession, although in the world of instability other phenomena with different purposes also take shape, such as monetary anomalies. As I have explained in previous works (Cossiga, 2019), monetary anomalies such as inflation or deflation are not active parts of the economic mechanism; that is, they do not participate in economic development. They are instead messengers of an altered and unbalanced economic system, designed to signal to the community that the economy has left the path of balance with nature. From this point of view, they cannot be the object of reaction and opposition, because they are just 'ghosts' created by instability; what must be corrected, instead, are the reasons that led the economic system to diverge from the right path. On the other hand, the path indicated by the correct relationship with nature is certainly attractive: if it is followed without deviations, it allows us to count on constant, slightly rising economic growth. This is a picture we can define in terms of "tomorrow as today" expectations, precisely because in the world of stability the anomalies of the conjuncture cycle are excluded, and periodic, recurring recessions or financial crises are avoided. The rupture of the relationship with nature, on the other hand, brings the conjuncture, with its erratic rhythms, onto the scene. The symmetry of constant growth then dissolves, setting in motion a risky path of efficiency loss and reduced development capacity, which finally stalls or declines.
The intrinsic rule of the conjuncture cycle (Geithner, 2014) is precisely to curb the drive for constant development when the deviated economy risks compromising nature's self-correcting ability. All attempts to solve anomalies in the economy with the usual remedies of fiscal and monetary policy therefore remain opaque, if the goal is a return to the steady growth of stability: it remains an unlikely goal without correcting the underlying reasons for instability, that is, without correcting the status quo. On the other hand, monetary phenomena such as inflation or deflation, which do not participate in economic action, are far from irrelevant: they are real messages informing the community that things are going badly in the economy. These messages come from the market in explicit form, through price trends (inflation or deflation), or in implicit form, as widespread sensations about favorable or unfavorable economic conditions. They are directed at the community, to help it adapt its behavior to the changed conditions of the economic context, and above all to make evident a growing critical stance toward the leaderships governing the economy. We must remember that the community is the terminal point of the relationship between humanity and nature, in the sense that the community is the most sensitive to every change in economic conditions. How else to explain the peculiar ability of the common man, of our neighbor, to summarize the economic situation at a glance; to agree with the action carried out by those managing the economy, or to show a critical attitude; and to understand, in a simplified and synthetic way, what must be done to leave the wrong path and return to the balance of compatibility? In other words, it could be said that the community decides, following the explicit or implicit messages coming from the economic system, in what scenario the economy will develop in the future. This demonstrates that the relationship between humanity and nature is tightly linked and that everything happens through the shared will of the whole community. Following this pattern of reflection, we can try to interpret the long story of growing inflation at the global level from the 1970s onward, which was only apparently solved in the 1990s and then turned into deflation, likewise widespread globally. It seems evident that between the two realities, the world in inflation and the current one in deflation, there is not a great discontinuity but rather a sort of continuity: the world in inflation has somehow adapted into the world in deflation of the current scenario. But apart from the reasons for this transformation of monetary messages, it is worth noting that since the seventies the global economy has been running in conditions of imbalance and incompatibility. In other words, the monetary messages change to inform the community that things are going wrong in the economy, but the underlying problem, the general instability of economic systems, does not change. The long fight against inflation, previously held under cautious control, found its turning point in the initiative of Fed chairman Volcker and the US president at the end of the 1970s. The Fed decided to push interest rates up to 21.5%. Under this strong tightening of monetary policy, the North American economy fell into recession in 1980-81.
Inflation, which at the end of the seventies was running at 15%, was then tamed, contracting to 5% and finally declining to 2% for the whole of the nineties. [1] Inflation was tamed, and the entire global economy followed the US road to eradicating it. However, thrown out the door, it came back in the form of deflation during the 2000s. We could assume that when the world learned to eradicate inflation and to block the reappearance of the nominal price run, the logic and modality of the message to the community changed. Because we are in any case talking about a message: a message suggesting to the community that things in the economy are going badly. When deflation appeared, it soon spread globally, starting from the countries most fragile from a financial point of view.

[1] Inflation emerged as an economic and political challenge in the United States during the 1970s. The monetary policies of the Federal Reserve board, led by Volcker, were widely credited with curbing the rate of inflation and expectations that inflation would continue. US inflation, which peaked at 14.8 percent in March 1980, fell below 3 percent by 1983. The Federal Reserve board led by Volcker raised the federal funds rate, which had averaged 11.2% in 1979, to a peak of 20% in June 1981. The prime rate rose to 21.5% in 1981 as well, which helped lead to the 1980-1982 recession, in which the national unemployment rate rose above 10%. Volcker's Federal Reserve board elicited the strongest political attacks and most widespread protests in the history of the Federal Reserve (unlike any protests experienced since 1922), due to the effects of high interest rates on the construction, farming, and industrial sectors, culminating in indebted farmers driving their tractors onto C Street NW in Washington, D.C. and blockading the Eccles Building. US monetary policy eased in 1982, helping lead to a resumption of economic growth.

Inflation or Deflation Are Just Messages, Images and They Cannot Be Adjusted by Direct Actions

The fight against high inflation rates was waged by maneuvering interest rates through monetary policy: an indirect action that raises rates in sequence, following the movements of inflation. High interest rates discourage many economic activities by raising the cost of money, causing a cyclical reversal and the fall of the economic system into recession. The recession produces the somewhat unavoidable effect of contracting the run of nominal prices. If monetary tightening is maintained, the recession deepens, producing a parallel reduction in the inflation rate. In some way, therefore, the action of monetary policy moves in symmetry with the natural correction, which, through the conjuncture cycle, aims to correct the imbalance of the economy. Let us not forget that the conjuncture cycle introduces a periodic recession phase into the economic system's development path, aimed at reducing economic instability: it pauses the development mechanism to allow instability to subside. Now, it is certainly not easy to understand why the message of deflation came after inflation had been eradicated. We can imagine that the hard fight against the acceleration of nominal prices put the deflation mechanism in place as a replacement.
However, I would not insist on this aspect, because both deflation and inflation are messages, mere images of reality that do not participate in any way in the unfolding of economic events. Moreover, they are images of reality that grow all the stronger as the instability of economic systems spreads. Apart from hypotheses on the reason for the replacement of one message by the other, what really matters is not so much the type of message as what inflation or deflation indicate to the community: the persistent global instability of economic systems. From the point of view of the defenses that monetary policy can deploy to protect economic development, there is no lack of technical resources, though with quite different consequences in the case of inflation and of deflation. It has already been said that monetary policy uses monetary tightening to fight inflation and, therefore, to induce recession; in this case monetary policy moves in accordance with the trend of economic reversal in the event of instability. In the case of deflation, by contrast, monetary policy is not helped by the reduction of interest rates, even down to zero, given the simultaneous decline in nominal prices. In some ways, reducing rates to zero is a passive adaptation to the condition of falling prices: it cannot mount any countervailing action to push falling prices back up. However, left to act under the grip of falling prices, the economy can turn toward recession, in this case too following the market's tendency to rebalance, with a pause, the distorted development of the economic system. Monetary policy, in concert with fiscal policy, nevertheless aims to avoid the fall into recession, to contain the social and economic damage of rising unemployment and contracting income, especially for disadvantaged families at risk of poverty. Obviously, we can agree on the social objective. The question remains, however, of the effectiveness of joint interventions committed to keeping interest rates low over time and, at the same time, injecting liquidity into the market on an unprecedented scale. The declared goal of the central banks is to support, with new liquidity, the recovery of an otherwise declining economy, on the assumption that the resilience of the economic conjuncture is a valid support for the recovery of nominal prices as well. Basically, if we manage to give the economy some strength, prices will consequently escape the grip of deflation too. A hypothesis to be verified, of course, which leaves behind a potentially dangerous tail. In fact, supporting the economic conjuncture requires fiscal and monetary policies to collaborate, and adverse legacies are not lacking. Public deficit spending reached an unprecedented scale in most states to mitigate the blows of the serious financial crisis of 2008-2009. Again, under the adverse conditions created by the pandemic, East and West are once more in agreement on widening the deficit disbursements of public budgets. There is no doubt that economic policy intervention made it possible to avoid a serious deterioration of economies during the last decade, especially by alleviating the social damage of abnormally rising unemployment and poverty. But the negative legacy of increased public debt should not be forgotten, with implications that still need to be examined and evaluated.
In addition (Cardoso, 1992), the debt issue becomes a minefield for the concurrent action of monetary policy, which seems to play a role in supporting fiscal policy when its action should instead follow a long-term perspective. Central banks, unlike governments, are not subject to the evaluation of voters and the demands of social representation. Yet by grounding their action in the objective of defending employment [2], central banks end up moving in the wake of fiscal interventions, promoting credit and providing new liquidity on a large scale. We cannot ignore that, in the storm of a financial crisis, the promotional measures of central banks are essential to allow first an attenuation and then a recovery of the economic cycle. Even in this case, however, a hidden negative aspect is not lacking, in addition to the one already observed for the increase in public debt. The unlimited increase in liquidity and interest rates dropped to zero fuel the prospect of loans at zero or near-zero cost, which encourages the indebtedness of companies and households. In other words, the system takes on debt beyond any real probability of repaying the borrowed sums. Moreover, this activates a speculative credit that can push the capitalization of stock exchanges and assets far beyond correct market values [3] (Raines, 2008). There could therefore be unexpected implications of intervention policies, which nevertheless do not advise against their strategic use in the event of a serious crisis, or of a weak economic conjuncture. Some doubt is advisable, however, about the hypothesis that an upturn could be a remedy for deflation. There appears, in fact, to be no relationship between economic recovery and declining deflation, also because the trend toward deflation affecting global economies today means, in summary, that the trend toward global recession is spreading. The message sent through deflation basically means that the economy is far from the compatibility path, and that correction through recession is a necessary pause in order to regain stable growth. It would therefore make no sense to say that a rising economy disarms deflation. On a practical level, it is clear that forcing the economy while in deflationary conditions means going against the natural order, which proposes a pause to rebalance the economic system, with ambiguous results in any case. In the USA, President Trump attempted to force the weak cycle at the beginning of his term through tax legislation that reduced taxation for the three-year period 2019-2021, mainly benefiting the richest taxpayers. After a cyclical rebound during the first year of its implementation, the global economy then met the pandemic, and months of widespread lockdown generalized rising unemployment and recession everywhere. On the other hand, even the rising economy recorded over the two-year period 2018-2019 was an advance on the economic conjuncture rather than a lasting and effective recovery of the US economy.

Some Considerations on the Inflation in the Case of Economy Tending to Deflation

About a possible (in any case low) rise of inflation during a positive cycle, experience shows that a modest increase may occur during economic recovery.
In this case, we can see inflation approaching 2%, the base value indicated by central banks as the signal of a healthy and balanced economy. Now, it appears quite evident that in a world struggling with widespread deflation, and therefore with economies generally unstable and in search of compatibility, the position of inflation near the base value is only an algebraic fact, which in no way represents a symptom of a balanced economic state. Actually, we must remember that base inflation close to 2% is just an ideal value, which postulates an economic system that has long been balanced and develops in a way compatible with the nature embracing us all. Base inflation close to 2% can therefore be configured as an absolute value, like all the other great constants of physics and mathematics.

[2] The Federal Reserve's dual mandate: the monetary policy goals of the Federal Reserve are to foster economic conditions that achieve both stable prices and maximum sustainable employment.

[3] "We have also made important changes with regard to the price-stability side of our mandate. Our longer-run goal continues to be an inflation rate of 2 percent. Our statement emphasizes that our actions to achieve both sides of our dual mandate will be most effective if longer-term inflation expectations remain well anchored at 2 percent. However, if inflation runs below 2 percent following economic downturns but never moves above 2 percent even when the economy is strong, then, over time, inflation will average less than 2 percent. Households and businesses will come to expect this result, meaning that inflation expectations would tend to move below our inflation goal and pull realized inflation down. To prevent this outcome and the adverse dynamics that could ensue, our new statement indicates that we will seek to achieve inflation that averages 2 percent over time. Therefore, following periods when inflation has been running below 2 percent, appropriate monetary policy will likely aim to achieve inflation moderately above 2 percent for some time. In seeking to achieve inflation that averages 2 percent over time, we are not tying ourselves to a particular mathematical formula that defines the average. Thus, our approach could be viewed as a flexible form of average inflation targeting. Our decisions about appropriate monetary policy will continue to reflect a broad array of considerations and will not be dictated by any formula. Of course, if excessive inflationary pressures were to build or inflation expectations were to ratchet above levels consistent with our goal, we would not hesitate to act. The revisions to our statement add up to a robust updating of our monetary policy framework. To an extent, these revisions reflect the way we have been conducting policy in recent years. At the same time, however, there are some important new features. Overall, our new Statement on Longer-Run Goals and Monetary Policy Strategy conveys our continued strong commitment to achieving our goals, given the difficult challenges presented by the proximity of interest rates to the effective lower bound. In conducting monetary policy, we will remain highly focused on fostering as strong a labor market as possible for the benefit of all Americans. And we will steadfastly seek to achieve a 2 percent inflation rate over time."
Given this characteristic of an absolute constant, base inflation is close to 2% but does not approach zero in a stable economy, because prices are the messengers of the economy. The small difference of base inflation from zero is thus the symptom of a sort of "background noise" produced by economic activity when it moves in full balance with nature: background noise that is reflected in stable and balanced prices through a small alteration, which at the theoretical level has been set close to 2%. Granting that the base value of inflation cannot be zero, it is not equally clear and obvious how and why that base value should be set close to 2%; that is, why, having reached this state of stable and long-lasting prices, we could reasonably say that the economy has reached the level of balance and compatibility with the development of nature. The condition to be respected in order to affirm that we have actually reached the balance point of the economic system is linked not only to the actual achievement of the basic objective but also to the economy remaining firmly tied to that balance point. Having only briefly touched the base value of inflation close to 2% is no confirmation at all of the good administration of the economy; rather, it can be a symptom that the economy is unstable and therefore subject to deflation, which pushes values down. We should note that deflation is a continuous and constant downward correction of values: the constant tendency of inflation to increase, with an acceleration that depends on the degree of instability of the economic system, is continuously narrowed by the parallel action of deflation. In other words, the inflation of the economic system is somewhat cut and reduced by deflation, which pushes the price wave toward minimal or even negative levels according to the degree of instability. The base value close to 2% of the baseline inflation of a stable and compatible economy therefore depends only on the background noise of the economic system, which is assumed to be equal or close to 2%. The real inflation created by a stable and compatible economic system is therefore zero. It follows that baseline inflation close to 2% is an ideal value, in any case difficult to find in normal economic events, which cannot be defined by an algebraic criterion. We can consider it a sort of constant that ideally represents the point of maximum consistency between the human adventure in the economic field and the evolution of the planet and the nature embracing us: a universal and hypothetical value, without direct evidence in real life, to be related to the constants of physics. The economic mechanism transmits no impulse to prices, in the sense that compatible economic activity generates no signal for price formation. Experience seems to show that the inflationary motion generated by an unstable economic system is a continuous process with a background acceleration that depends on the unstable condition of the economy. This background acceleration is therefore altered in relation to the state of economic instability: we can see variations in the nominal run of rising or falling prices according to whether the degree of instability of the economic system worsens or tends to recover.
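The decomposition described here can be made explicit in a toy function. The sketch below is only a parameterization by assumption of this framework, with all coefficients invented for illustration: observed inflation equals the roughly 2% background noise, minus a deflationary subtraction that grows with instability, plus a cyclical term that can lift the reading back toward 2% without signalling stability:

```python
def observed_inflation(background_noise=0.02, instability=0.0, cycle=0.0):
    """Toy reading of the decomposition sketched in the text.
    All coefficients are assumptions, not estimates."""
    deflation_pull = 0.025 * instability  # subtraction grows with imbalance
    cyclical_lift = 0.01 * cycle          # an upswing raises the print
    return background_noise - deflation_pull + cyclical_lift

print(observed_inflation(instability=0.0))             # ~0.02: stable economy
print(observed_inflation(instability=0.6))             # ~0.005: deflation-prone
print(observed_inflation(instability=0.6, cycle=1.5))  # back near 0.02, yet unstable
```

The third call illustrates the point made above: an unstable, deflation-prone system can touch the 2% constant during an upswing without that reading being a symptom of regained compatibility.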
Moreover, the motion of the conjuncture cycle affects this mechanism, in the sense that the background noise generated by economic activity on prices can change with the direction of the cycle. Not surprisingly, the background noise can be variable: this happens in the case of an unstable economy and is one of the anomalies of the instability sub-world. Again not surprisingly, because a constant level of background noise is a specific prerogative of long-lasting economic compatibility and sits, as already said, at the level indicated by central banks, close to 2%. The phenomenon is especially relevant for an economy with a tendency to deflation. As already mentioned, deflation acts in subtraction from the underlying motion of current inflation; an economy suffering from a tendency to deflation therefore usually shows a rate close to 1%, which turns negative when the instability of the economic system grows. Where the conditions for a cyclical recovery exist, the economic conjuncture can alter the inflation rate, which then undergoes a modest increase under the pressure of new rising tensions in the economy. It is therefore a simple algebraic motion of prices, which may undergo a slight increase (e.g., up to 2-2.5%) that could merely express a variation in the background noise; with the cycle reversal, the inflation rate tends to fall back to its previous position close to zero. Central banks connect the success of economic support to a better price climate, which would thus tend to climb the slope of deflation in the presence of a robust economy. As we have said, solid economic growth has an impact on prices tending toward deflation, which therefore show a modest rise. But the evident factor acting on prices in this case is the background noise, which can increase in intensity until it touches or even exceeds the inflation base value close to 2%. In an unstable economy, such as one struggling with deflation, the background noise possibly overlapping the base inflation value close to 2% is nothing more than confirmation that the economic system remains in deflation. This excludes any hypothesis that touching the value of the price constant could be a positive signal on the way toward economic compatibility. It is obvious, then, that the invocation by many central banks of a potential (low) rise in inflation, under the impulse of a recovering economy, does not mean that a slightly rising price index could somehow prelude a passage into a world free from deflation. When this possible modest effect of the economic conjuncture on weak prices is reported (and it did occur), no interpretation of the phenomenon is given; it is presented simply as an objective fact, in the secret hope that its continuation could, over time, get us out of trouble. Unfortunately, this is not true, for the simple reason that the deflated system naturally tends to go into recession in order to overcome the consequences, for the economy, of the deviation from the compatibility path.

The Economic Cycle Is a Sign that the Economy Deviated From the Stability Path

We have already said that, in the case of instability, the background noise assumes a variable value depending on the economy's strength, increasing with recovery and decreasing with recession.
This anomaly clearly distinguishes, also in its consequences, the world of stability from the sub-world of instability. In the stable world, the background noise tends to a base value that is fixed and constant over time, close to 2%, without any possible alteration, providing the guarantee that the economy is moving within compatibility. This dystonia appearing in the unstable world is the symptom and the source of the more complex motion involving the economy: the conjuncture cycle. Having left the path of constant and balanced growth, defined by a potential for constant and moderate expansion, the instability sub-world develops according to a sinusoidal motion of ups and downs, recoveries and recessions. As already mentioned, this process opens the door to recession, which over time is somewhat able to contain imbalances and bring the altered system back to stability. It is the interruption of the constancy of the background noise that defines the new levels, rising or falling, assumed by economic conditions over time: a variable value that regulates, or gives the signal for, the formation of the base inflation which, being altered, characterizes systems according to their state of instability. We can thus argue that as the background noise of the economy increases, the economic system tends to accelerate the current motion of inflation: a process that expands with an acceleration independent of the conjuncture but correlated with the degree of instability. In the case of deflation, we must assume that the tension produced by the background noise of the economic system gradually decreases, imposing a downward path on nominal prices. The underlying motion of inflation therefore tends to contract and gradually pushes nominal price values toward a minimum. The work of this normal process of deceleration, as inflation progressively falls from its peak, can be considered somewhat slow. We may therefore observe an intermediate period between the inflation wave and deflationary stagnation during which inflation can drop to lows of 5% or even less. At this point, experience seems to confirm that inflation has been defeated and that we have returned to a price run under control. This more or less long period of relative stability, however, tends to end with a further decline of the base inflation rate below the base value close to 2%. As if to say that, even with a long or very long parenthesis, we have passed from the phase of rising inflation to the phase of deflation. Basically, prices passed from the inflation of the sixties and seventies to the stagnation of the nineties and then to the start of a pronounced trend toward deflation: this long adventure seems to describe quite well the course of prices during the last fifty years, at least in the industrialized West and in the USA. It could therefore be assumed that during this half century we have only been able to record a change in the presentation of the monetary anomaly, from inflation to deflation. As if to say that the long battle undertaken by monetary policy to correct the variability of nominal prices was, in short, unsuccessful. Or rather, that the long battle begun in the seventies by the global economy to bring the accelerated wave of nominal prices back to controlled values did finally manage to calm the impetus of rising prices, though with mixed results.
But this apparent success seems rather to be the cause that shifted the direction of the run of nominal prices from acceleration to deceleration. As already mentioned, deflation and inflation are messages that the altered system, off the compatibility path, sends to inform the community that things in the economy are going badly. Not only are they just messages but, as such, they do not participate in economic events. They cannot be attacked directly, because they are just ghosts, a sort of mirror in which the state of the economy is reflected, and their image does not participate in any way in the motion of the system. It could also be said that the transition period from the inflated to the deflated world, which lasted about twenty years, may have been a period of relative calm in the relationship with nature. This interpretation seems supported by the clear improvement of the global economic framework; by the step forward toward a global redistribution of resources between continents, with China and India in evidence; by the acceleration of development in the USA under the Reagan presidency and in the nineties under the Clinton presidency; and, last but not least, by the strong presence in international trade of Germany, together with China and Southeast Asia. Even on the basis of this interpretation of the final twenty years of the twentieth century, we can say that from the beginning of the twenty-first century a new and more pressing instability, at the origin of the worldwide real-estate speculative bubble, replaced the compatibility acquired in the previous period. This bubble burst in America at the end of 2007 and then showed its full potential the following year in Europe, Latin America, and part of Asia. Essentially, the formation of the speculative bubble and the subsequent financial crisis, no smaller than that of 1929 in its impact on economic motion, must be understood as the failed attempt to continue the happy season of the previous twenty years: a season that was instead over, and after which a period of pause was needed. Basically, when monetary and fiscal policies learned to contain the outbreak of inflation that had troubled the industrialized West for over twenty years, the monetary message changed its sign, from an accelerated motion of prices to a decelerated one. It is difficult, therefore, not to connect the two phenomena, acceleration turning into a deflationary price trend, even if the underlying reason why the conjuncture passed to slowly falling prices remains unclear. In any case, both monetary messages, inflation and deflation, are precursors and companions of the economic cycle. Both are precursors of a development cycle that is weakening and is thus introducing periodic recession onto the development path of the economic system. Instead of the constant, slightly rising growth that is the salient feature of an economy stable and compatible with nature, the basically rising development line begins to twist, losing its strength. The logic governing the change aims to introduce into the development path a pause, a recession, to limit the economy's strength: a pause that, on a logical and simply intuitive level, implies the opportunity to limit those forms of development that are partly in opposition to the compatibility path.
We can define the compatibility relationship between humanity and the nature around us as a model involving the preservation of nature for future generations, rather than the exploitation and destruction of natural resources. Given that natural resources are the source of life, the logic of preservation should be interpreted as the endless chain of life from generation to generation: not for its own sake, but as the intergenerational journey of humankind in its search for the secrets guarded by the nature around us. In some way, life is protected by the nature embracing us; in this way, we have the possibility to continue our research in the scientific and cultural fields. If the compatibility relationship is flawed, it does not mean that the economic system will automatically show the signs typical of the instability sub-world, because there is certainly a sort of elasticity in the balance of the man-nature relationship that allows the development and progress of science and culture to continue despite that flaw. It must be added that this continuity in the evolution of science and knowledge is indispensable, on the assumption that the degree of civilization also affects the relationship with nature: a greater degree of culture and civilization should mean that we are moving along a development line increasingly consistent with our basic obligation to respect nature. Thus, we must admit that the development line of the economy is marked, and also limited, by the obligations imposed by compatibility. But this limitation does not affect the push toward research and knowledge, which would not be stopped by troubles in the relationship between humanity and nature. As if to say that, once a certain stage of civilization has been reached, a constant acceleration of research can be observed over time, together with its effects on the economic world. The development of science and culture is the essential prodrome to gradually extending the paradigm of respect for nature to the whole of humankind. Now, since the scenario of the global economy shows some diversity in the degree of development and participation in scientific progress, it is clear that the natural tension moves toward a gradual rebalancing of the differences in the level of civilization between continents. This is the key to reading the great run that, since the last century, has marked the development of the Asian continent, in particular China and India: an unprecedented profile of development in economy, culture, and level of civilization that has no equal, and that finds its deep motivation in the natural tendency to rebalance the potentials of culture and scientific research between continents. It must be added that the rebalancing process between continents is far from complete. While it has reached unexpected heights in some Asian countries that are today protagonists of world development, it is still in progress in many African countries, in Latin America, and in the Near East. Since the progressive rebalancing of the development potentials of economy and culture is an irreversible natural process, it is easy to believe that it will continue according to times we cannot imagine; nevertheless, the process will gradually be implemented according to the times of nature, which do not correspond to our human times.
The Economic Cycle as a Rebalancing Asset, Rather Than a Negative

The relationship between humanity and nature is currently unbalanced. However, the life relationship seems to offer some flexibility that can partly mitigate the consequences of the cracked compatibility relationship. Experience shows that abrupt interruptions, or at least sharp slowdowns in development, can usher in a period of relative tranquillity and renewed economic growth. After the Second World War, the phase of reconstruction and recovery from the catastrophe was long and lasted until the 1960s. The same happened after the long season of the inflated world, which continued until the mid-1980s: from the relative disappearance of inflation and of accelerating nominal prices began a season of relative economic calm and sustained development that lasted until the beginning of the 2000s. These brief considerations invite us to regard recession as a natural tool for correcting instability and as a mechanism through which altered economic systems regain compatibility. In other words, an economic system in difficulty on the compatibility front seems to use the recessionary pause as a weapon to regain balance. On this view, an economy that has entered the instability sub-world immediately shows a degradation of its development potential and a tendency towards recession. Between periods of greater balance and periods of instability, the big difference therefore lies in the economy's differing ability to follow a growth path. This diversity is measured by the presence or absence of the economic cycle: the recurring cycle places recession on the economy's growth path, a periodic pause of variable frequency imposed in order to clear away the waste that is causing instability. The economic cycle is not characterized by a constant progression of recovery and reversal phases but by great variability in its behaviour, for the simple reason that the shape and acceleration of the cyclical phases are governed by the degree of instability of the economic system. It is, so to speak, a drug administered in doses that depend on how far the economy has deviated from the compatibility path. To expect a constant development of cycles is therefore an optical illusion, born perhaps of the experience of the initial phases of instability. Thus, in the 1960s the Western world emerged from the long season of sustained and constant growth that had extended from the end of the Second World War. That season was over, and inflation was beginning to appear in the world economy. A modest, controlled inflation was at the time the signal that the economy could no longer afford the previous growth rates: after the reconstruction phase, the world was re-entering the instability sub-world. In this initial phase inflation remained under control and the economic cycle seemed to follow a regular pattern, with rhythmic peaks and reversals; the losses in efficiency and growth rate were still contained. From the early seventies, however, inflation accelerated, a sign that the untreated instability of the system was increasing.
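The conceptual model sketched in this section, a linear growth trend on which a cyclical correction is superimposed, with the dose of the correction scaling with the deviation from compatibility, can be illustrated with a small simulation. The sketch below is my own illustration under stated assumptions (the 'instability' index, the amplitude and period rules, and all numbers are invented for the example), not the author's formalism.

```python
import numpy as np

def growth_path(years=40, trend=0.02, instability=0.0, seed=0):
    """Toy illustration: the cyclical correction superimposed on a linear
    growth trend deepens and quickens as a hypothetical 'instability'
    index rises. All parameter choices here are assumptions for the sketch.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(years, dtype=float)
    log_gdp = trend * t                          # constant, slightly rising growth
    amplitude = 0.5 * trend * instability        # the 'dose' scales with deviation
    period = max(12.0 - 4.0 * instability, 4.0)  # shorter cycles when unstable
    cycle = amplitude * np.sin(2.0 * np.pi * t / period)
    noise = 0.002 * instability * rng.standard_normal(years)
    return np.exp(log_gdp + cycle + noise)

stable = growth_path(instability=0.0)    # 'compatible' economy: no cycle appears
unstable = growth_path(instability=2.0)  # unstable economy: marked alternation
print(f"final-year output ratio (unstable/stable): {unstable[-1] / stable[-1]:.3f}")
```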
The acceleration of inflation triggered by the first oil crisis could have had only a temporary impact in an almost stable economic system. But in the context of the time, marked by widespread instability, the boost from the unexpected and massive increase in the oil price ignited high inflation, aggravated by attempts to contain the damage of the oil shock through public spending, with the result of accentuating the instability of economic systems. A temporary factor of rising prices due to the cost of oil thus became an engine of accelerating nominal inflation that was difficult to keep under control. The economic earthquake of the dramatic oil increase was therefore not the engine of inflation but a strengthening factor, owing to the attempt to hide the real effects of raw-material costs on the economy. Even an increase as large as the one behind the first oil crisis should simply have imposed a generalized price rise leading to a new, once again stable price structure. In an unstable reality, however, with inflation already accelerating because of the misguided attempt to keep a weakened economy growing fast, the oil shock was met with unsuccessful attempts to dampen its impact on the economy. In short, the cure was worse than the disease: the pressure on development and the attempts to mitigate the oil crisis widened the instability of economic systems. That was the real reason for the inflationary pressure on prices, which then persisted for twenty years at the global level. Throughout the inflation period, the fall in economic potential and the tendency towards recession were inevitable, with peaks at the outbreak of the oil crises in the mid-seventies and at the end of the decade. We must therefore admit that falls into recession are accompanied by unexpected or in any case changed conditions that modify the existing framework, becoming an opportunity for a corrective reorganization of the economy. It is difficult to escape the natural correction mechanism, which uses the economic cycle and recessions to contain the deviation from the compatibility path. As in the seventies, and more recently in the serious financial crisis of the first decade of the 2000s, conditions will be created, partly by instability itself, that make even a hard recession necessary for the correction. Precisely because of this need to clean up the altered system, recessions will vary in size and power, and will be more or less intense in eliminating the virtual excess of forced development. On this basis we can expect the exit from the pandemic, which now has a global dimension, to differ across economic systems: on this theoretical view, the financially weaker countries will be hit the harder, where 'weak finance' refers to countries showing a more marked tendency to deflation. In other words, the 'deflation thermometer' can reveal, in summary, the degree of instability and therefore the level of correction needed to restore compatibility.
The fact that the economic cycle is not constant and repetitive over time, but a sinusoid whose dimensions vary with and are controlled by the instability of systems, lends weight to the thesis that the conjuncture cycle is a cure: not a normal feature of economic life but a superstructure that comes into action only when there is a deviation from the compatibility path. The existence of a sub-world that the economy enters to be purified, as in quarantine, would mean that economic life, like matter and its forces, is subject to inflexible rules of natural order, rules that are the necessary prerequisite for human survival on this planet. Some natural rules and behaviours of economic and social life cannot, in other words, be explained by our deductive reasoning, precisely because there is no identifiable source to which we can refer as the engine of the economic mechanism. On the other hand, it is telling that these events, from the economic cycle to the messages sent to the community about the state of the economy (inflation and deflation), repeat over time with different intensity but according to a highly repetitive scheme and without any apparent direction. The repetition is never constant and undergoes variations over time, as we saw during the great inflation of the seventies. In those years the change in the inflation rate was simultaneous with the change in the economic cycle, which reached its maxima in coincidence with the two oil crises: a coincidence of the negative peaks of the cycle with the unexpected oil-price increases that was due not to the emphasis of nominal prices but to the greater economic instability, itself linked to the unexpected rise in the price of oil.

[Figure: elaboration on OECD data.]

Natural Stability and Economic Control Policies

Since the base price value is altered, inflation or deflation are the messages informing the community that the economy is off the right path. Economic systems therefore show a current inflation that depends on the level of economic instability. In the case of deflation, which concerns us closely here, this current inflation is low or extremely low and can even drop below zero under severe financial instability. In a phase of speculative excitement of the economic cycle, in a context of low inflation or a tendency to deflation, the underlying current inflation may receive a modest upward push: inflation of, say, 1% can rise to around 2%. We would then be close to a base inflation of about 2%, yet there is no way to read this as a compatible, balanced value: it is a mere algebraic result, unable to answer the search for balance and compatibility in the economy, nor can it be a relief for central banks struggling to find a stable, real escape from the grip of deflation (Shilling, 2001). Deflation cannot be attacked directly because it is only a message about a troubled economic system. The problem of deflation therefore remains unsolved as long as it is not understood that the negativity lies not in deflation itself but in the instability of systems.
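Schematically, the arithmetic of this passage can be written as follows (the decomposition into a base rate and a speculative impulse is my illustration, not the author's notation):

$$\pi_{\text{measured}} = \pi_{\text{base}} + \Delta\pi_{\text{speculative}} \approx 1\% + 1\% = 2\%,$$

a mere algebraic sum that says nothing about whether the resulting 2% is a compatible, balanced value.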
Moreover, we should understand that the common remedies based on fiscal and monetary policies may be unable not only to solve but even to mitigate the complicated problem of frozen prices. Following the classic route of monetary policy, cutting the cost of money to zero and making new liquidity available, can be a good aid in avoiding or containing the damage of an economic depression, but actions of this kind cannot solve the deflation problem. Deflation is a signal that the economy has become unstable. If we let the natural correction unfold, the pause or recession can, over time, mitigate the unstable state of economic systems. The unstable system in fact undergoes a deep modification: instead of the constant, slightly rising growth typical of natural development in balanced systems, it is subject to the conjuncture cycle, which introduces recession into the scenario as a natural mechanism for correcting instability. The deflationary trend can therefore be read as a signal that the economic system is tending towards recession, towards a pause in the growth process. Forcing economic systems to avoid recession may run against the natural trend and thus prolong the unbalanced state over time. Even on this admittedly drastic-sounding view, therapies based on fiscal-policy support for the economic situation should not be rejected out of hand: even at first analysis, an over-reaction in the opposite direction should be avoided. We are, after all, moving in a reality built on the belief that the economy grows according to an alternating motion. It therefore seems correct to try to stimulate development and "redress" the economic cycle so as to make it grow along a midline, avoiding as far as possible the negative wave of recession. On this approach, recession is not the price to pay for deviating from the stability path but an accident on the development path, almost a disease, to be treated by any means, and fiscal policy becomes the essential tool for keeping the uncertain trend of the economy under control. When the economic situation abruptly collapses under the unexpected blows of a financial crisis, as happened in 2008-2009, it becomes essential to confront the impending decline and its serious social effects, and the theoretical discussion of the reasons that blocked development is sidelined. It is precisely the communities' indestructible desire that the government of the economy guarantee a "tomorrow no different from today" that pushes towards any possible intervention to control the economy's fall into crisis (Krugman, 2008). This scheme of a controlled economy seems inspired by remarkable rationality, though we should consider that the economic waves, especially those deriving from speculative processes, are actually created by the intervention policies themselves. Interventions aimed at reducing the troubles of the economy in fact block the natural correction, imposing a scenario of cyclical alternations on a natural economic landscape that would otherwise follow a constant, linear development.
It would be precisely this deep and ancestral common wish for a smoothly evolving economy, for a "tomorrow no different from today", that reveals the real meaning of the aspiration. In all living beings there would thus be an innate notion of constant, linear economic growth, confirming that a compatible economy needs no director or economic guide but can follow a stable path over time, provided environmental compatibility is respected. The thesis that the economic conjuncture is a modality of the instability sub-world is also supported by a peculiar skill we can observe in our neighbour, the common man. Though not an active protagonist in economic matters, he shows a summary and concise, yet adequate, knowledge of the economic state of a country. He draws on the same information available to everyone through the prices of daily purchases and through implicit sensations that become more vivid as the economy turns towards instability. The primary information thus comes from the monetary messages (inflation or deflation) continuously sent by the system when the economy loses the stability path. Looking more deeply, we can observe that our neighbour, even if not involved in managing the economy, is able to formulate, in general and synthetic terms, a sort of programme for returning to stability and compatibility in economics. It is precisely these innate abilities to perceive the economic condition that justify the basic idea of participatory democracy, which entrusts the community with choosing the leadership charged with economic governance. It is through these community skills that the collective thinking expressed by a community is used to conduct economic surveys on production and consumption. All this seems to rest on a community's ability to sense the difference between the ideal condition of linear, constant development and the present state, with its conjunctural alternations; a present situation outlined also through messages that are not explicit but based on feelings generalized within the community. This singular ability essentially makes the community the terminal of the relationship between humanity and nature, a terminal that collects the signals of the economy's divergence from the natural state as a synthesis of the feelings of all its members. We must then admit that economic activity moves according to physical laws: rules that monitor the consistency of the economic course with the goal of preserving natural resources for the survival and subsistence of future generations, including the rules that prevent the degradation of the natural system by modifying the otherwise constant, linear path of development through the introduction of the conjuncture cycle (Cossiga, 2018). How else to explain the corrective behaviour of an economic system that, with the simplicity of a motion turning from uniform to alternating, inserts a periodic pause into the development mechanism to relieve the system of the waste produced by incompatibility? Not only that: the alternation of recoveries and recessions itself varies over time with the degree of economic instability.
The correction therefore relies on a sort of internal clock that modifies and possibly extends recession times according to the level of instability defined by the natural instrument. The corrective system is simple and has no other aids to perform its function: everything that moves around the mechanism turning the growth process from linear to alternating does no cooperative work. It is only a representation of the no-longer-linear motion of the economy, meant to provide a generally noticeable message about the anomalies of the process. These are only representations, mirrors in which the economic motion is reflected without any possible participation. The system thus elaborates a series of mirror functions responsible for providing synthetic information, even somewhat in advance, about the state of the economy, through the price engine, which measures, like a thermometer, the system's malaise. Certainly not for its own sake, but to inform the common man that the economy has derailed from the stability path. And again the purpose is well defined: to give all members of the community advance information on the quality of economic governance, so that they can, when necessary, protest and dissent from the policy implemented by the leadership. We thus have an intervention on two levels that do not overlap, because each is only the representation of the other. This second, collateral tool does not participate in the correction; its purpose is to induce a change in economic governance whenever the general rule set in the DNA of every living being, which postulates linear and constant economic growth, is damaged.

The Policies to Control the Conjuncture Cycle and the Concomitant Actions of Natural Correction

As we said, when the deflation message appears, the community has a first report that things in the economy are going badly and that instability must be treated. Without policy interventions in support of the conjuncture cycle, the economic system would slowly tend towards recession, a sort of purification pause allowing the recovery of compatibility and balance. The economic cycle is not a curse that takes away the opportunity for linear, constant development; it is a safeguard mechanism which, if left to act, can restore the linear growth path that has been lost. It is a bitter medicine, but it can become even worse if we act against the natural mechanism. It is in any case painful on the social level, because it increases poverty and above all hits work, thereby accentuating inequality of income and wealth within the community. For these reasons, intervention and support policies aim to mitigate the cycle, especially when it is violent and deep, attempting to skip a phase that we know to be curative and thus to start a mere imitation of the linear, steady growth of the balanced economy. It is evident, at the same time, that defence against a deep crisis becomes inevitable in order to avoid social and economic damage that would push the development motion backwards.
On the other hand, in the midst of a negative shock such as the 2008-2009 financial crisis, the defence mounted by fiscal policy, with an unparalleled expansion of deficit public spending, proved highly effective in fighting the collapse of the cycle. It also gave the economy some breathing space, allowing a recovery in activity, though at moderate levels, lower than in the previous decade. A question remains, however, about the possible dark side of the continuous and increasing use of economic support policies, because the support measures are fundamentally opposed to the natural corrective action. If we do not let the conjuncture take its course, the economic system is not cleansed of the waste of instability; sooner or later the correction mechanism will return, perhaps with greater strength, and in the meantime we will certainly see a general trend towards deflation, the system's way of warning the community that the emergency is not over and the burden of instability has not decreased. It is nonetheless evident that under the blows of a crisis the only thing to be done is to repair what can be repaired, even if the natural correction is thereby hindered. In the case of a sharp decline such as the financial crisis of the last decade, it may be assumed that the rapid fall is only partly due to the correction of instability: there is also the danger of a cascade of production failures, triggered in sequence by progressively drying markets and by cash problems (Roubini, 2011). For these reasons, we can believe that a policy supporting the cycle in a serious financial crisis plays the role of reducing the risk of a severe economic fall, while the natural corrective function is still partially left to do its work: the recession that occurs anyway affects the economic system despite the intervention of support policies. In other words, a fiscal policy supporting a falling cycle may play a double role. First, it can enable a controlled recession that cooperates, to some extent, in correcting the growth path. Second, it can mitigate the excess fall of the cycle through interventions aimed at saving companies and activities that are still efficient but on the edge of closure owing to the sudden lack of cash and the collapse of markets. The good resilience shown after the sharp fall may be partly due to the corrective work that the controlled recession performed in any case, while the declining efficiency of economic systems after the 2008-2009 crisis was probably caused by residual, unresolved instability. In the same way, the lower efficiency of post-crisis economic systems can be related to the varying degrees of residual instability within the various economic systems at the global level. As we said, in a deep involution of the economic system the support policy manages to limit the damage but cannot avoid a recession; a natural correction process is therefore initiated, albeit a partial one.
It follows that the attempts, often partially successful, to keep the system in balance by avoiding the natural involution of the cycle are meant merely to imitate the natural condition of compatibility, on the assumption that we can control economic growth so as to follow the natural path. By operating in this way, the unstable system is condemned to a virtual development, and this improper surplus will be subject to progressive elimination through more numerous and more intense cycles. Support policies can thus work in opposition to the natural cycle that corrects a system which has left the compatibility path. The unstable economic system periodically requires pauses to process the anomalies of incompatible development; the treatment progressively applied allows the unstable system to continue along the development path, although in attenuated form. The problems become more serious if cycle-control policies manage to convey the apparent feeling that the economic cycle can be bypassed, giving the economy an illusion of relative calm on its path. It is quite clear that there is no way to restart growth without cycles if the economy has long been unstable and the degree of instability is reflected in price deflation. The choice of continuous support for the economy, despite some appreciable results, therefore carries a negative legacy that sooner or later must be paid. It can be summarized as follows: refusing the recession is like refusing a medicine; without the remedy, the economic system remains unbalanced and tends to worsen. As already said, the cycle appears when the economy leaves the natural development path; the sequence of the economic cycle is then introduced, in particular the recession, which imposes a pause on the development mechanism. This pause is equivalent to a cure that, if allowed to act, can reduce instability and restore a constant, linear progression.

The Anomaly of the Current Scenario and the Role of Support Policy During the COVID-19 Spread

The global economy is now struggling with this problem, which is causing many troubles. The deflationary trend pushes growth down and cools the outlook; communities perceive the resulting malaise, and mistrust has grown towards governments that cannot find a way to resolve the economic instability. This feeling of malaise is somewhat mitigated by the prevailing concern about the pandemic now spreading worldwide. The pandemic, moreover, is imposing a mandatory pause on all economic systems: it seems able to play the pausing role usually delegated to the conjuncture cycle. It is a pause not only for the economy but also for governments, which find, in the search for recipes and requirements for healthy behaviour during the pandemic, a convenient way to hide the problems of a weakly performing economy. The changed economic outlook, with increased unemployment and economic malaise, also gives supporting economic policy a role that is somewhat original, though common in periods of serious crisis.
The recent lockdown and new fears for the near future have produced an unexpected increase in poverty through job losses and reduced activity. Economic policy has therefore been called to play a substitution role: the income cut off by the pandemic is replaced with an alternative form of social income. I consider this an essential function on the social level, though it increases public debt, committed as it is to maintaining social peace and containing a further serious rise in income and wealth inequality. It is an unexpected situation that benefits governments in office: their consensus had been declining because of low growth and the tendency to deflation, yet today they find increasing support for their actions against the pandemic and in favour of social peace. Social spending financed through a larger public deficit meets little criticism, because it addresses an essential need, providing an income to families made poor by the loss of work. Rising social spending also supports the rapidly declining economic situation, thus mitigating the very factors that had been eroding the government's consensus. It is a paradoxical, or nearly paradoxical, situation that changes the short-term political and social perspectives: the pandemic acts as a sort of plug on the political situation, obviously for as long as the danger to public health lasts. This political pause should not surprise: every criticism of excessive economic support policies has disappeared, and at the same time the serious decline of the economy is justified by the general framework, in no way attributable to the government in office. We are referring, of course, to cases in which the fight against the pandemic is carried out perhaps successfully, but at least with commitment. The government is thus essentially judged on the results achieved against the pandemic and no longer on the matters more properly belonging to government activity, namely the success or failure of its actions on economic development and employment. This asymmetry, too, is consistent with the natural process of requalification of the economic system, because the natural system for correcting instability essentially becomes the mechanism that takes over the management of the deviated economy. We should remember that the natural system lets the economic situation follow its fate in order to correct instability; the natural mechanism therefore, in the end, "agrees with" the lockdown and the reduction in activity produced by the pandemic. The contraction of activity produced by the pandemic is essentially forcing economic systems worldwide into an anomalous recession, unexpected and unpredictable, dependent on natural factors outside the correction mechanism. We are thus in the middle of a strange situation in which conditions are being created for a natural deployment of anomalous economic trends that are nonetheless in line with the corrective needs of unstable systems. This leads us to believe that after the pandemic there should be a generalized global decline in gross product, essentially in line with the natural process of correcting altered systems.
We should therefore believe that, after this unpredictable but natural economic decline, we can expect a global recovery in the efficiency of economic systems. In the coming years, once the pandemic is over, we could count on a qualitative improvement in the efficiency of economic systems, that is, on a period of sustained economic growth that might be compared, in duration and continuity, to the 1980s and 1990s. Sustained growth in the near future, then: but what is this hypothesis based on? The basic idea is that if we let the natural correction system work, the result should be a reduction in the imbalance of economic systems. An unwanted but obligatory correction is under way at the global level, and it strengthens the hope of recovering, in the future, the economic and social losses produced by the pandemic. The first symptom of economic improvement should appear already during 2021, through a price system relieved of the negative burden of deflation. Just as rising deflation warns that the economic system has become more fragile and tends towards recession, a mitigation of deflation should signal an improvement in the economic situation. The price decline of recent years, when deflation acquired a global dimension, is a clear signal of the weakness and fragility of an economic system; by the same token, where the degradation of development has been greater, as indicated by deeper deflation, we may equally believe that the expected reduction in deflation can be a good vehicle of revival.

Discussion

In the preceding pages we have reasoned in terms of a relationship between humanity and nature for the safeguarding of natural resources for future generations: economic development must respect and safeguard nature, since the natural evolution of the natural system perfectly complies with the preservation of the planet's resources. It follows that humanity, too, should possess the qualities necessary for this basic rule to be respected. There would then be a mutual interest between humankind, which wants to go on living in its future generations, and nature, which ensures that this wish is realized: a mutual interest, and thus a safeguarding power, working at both the human and the natural level. What emerges is a complex relationship with reciprocal messaging between nature and mankind, warning when the system has left its natural path and taken the unbalanced way; a reciprocal messaging in which prices are the means of warning about anomalies in the behaviour of the economic system. Human communities, for their part, are a sensitive terminal of the relationship with nature, so that individuals and groups are informed, both explicitly and implicitly, that the economy has derailed from the development path. The community is therefore invested with the responsibility of calling its leading groups to direct the economy in a way increasingly coordinated with the goal of natural compatibility. It remains singular, nevertheless, that the messages sent by nature to the community are of a monetary nature, even though these manifestations in no way participate in economic activity.
Both inflation and deflation are instruments of connection between humankind and nature: messages informing us about the state of their relationship. In summary, nature provides a series of messages on the state of the economy when the economic system has lost the correct path of compatibility. Some are direct messages, such as price trends, deflation, and inflation; others are implicit, a sort of sensation that everyone receives and processes to help understand whether the economy has entered the instability sub-world. These messages are perceived by the common man, the man next door, our neighbour, who, though far from economic matters, appears fully able to give a valid opinion on the conduct of government and the public management of the economy. The community, which continuously experiences the problems of the market, is therefore able as a whole to judge economic management and to grant or withhold its consent to the government in charge of the economy and the social sector. Nature thus offers a series of messages to the communities of each country, which in turn should use them to sketch the project most useful for bringing the economy back to its natural stability; a project obviously embryonic, which may or may not be in accordance with the electoral programmes of the coalitions running at the next electoral deadline. It is then the task of the coalitions competing for the renewal of institutional offices to give shape and content to their programmes so as to bring them as close as possible to the community's wishes. This strengthens the principle that asks rulers to look beyond the short term towards the community's welfare, without fragmenting their action into short-term interventions that are useless even for consolidating the consensus enjoyed by the current leadership. The complex of relations between mankind and nature, made of reciprocal exchanges, is a fiduciary trust given to the community as the ultimate holder of the relationship: essentially a linear mechanism that, paradoxically, seems rather complex in theory. On the practical level, however, the problem lies in the terminal, the community, which may not respond with sobriety and timeliness to the indications coming, explicitly and implicitly, from the economic system. The implementation of the mechanism can in fact require a long time, deriving from the synchrony to be established between the signals and sensations coming from nature and the way the community organism receives them. During this sometimes long period of synchronization, the stability of the system can deteriorate further. The emergence of new figures from the instability world can be related to this problematic balance in the relationship with nature, because the natural corrective system cannot stop and wait for the continuous recomposition of the balance between nature and humanity. Owing to extended instability, we may believe that alarming phenomena such as speculative events, common in the previous century and present again in the current one, are becoming an operational reality, unexpected and brutal.
The imbalance between the reception of messages and their implementation can compromise the ability of communities to express, at electoral consultations, candidates capable of fully understanding the natural message received through the community's generic and synthetic indications. When such difficulties prevent the community from expressing itself fully and in due time, they can become a factor producing unexpected new 'monsters', worrying though often little noticed. This is the case of the widening gap in the distribution of wealth and income among the population: an unfair and devious mechanism, felt only through the perception of continuously increasing poverty, a sad reality despite the growth, in some cases, of overall wealth and income. A general social malaise is thus growing which must somehow be corrected, but which is evidently the result of persistent instability within economic systems, arising when the relationship between humankind and nature is for some reason interrupted, withered, or diluted over time.
Dimensional Analysis and Optimization of IsoTruss Structures with Outer Longitudinal Members in Uniaxial Compression

Abstract This study analyzes the buckling behavior of 8-node IsoTruss® structures with outer longitudinal members. IsoTruss structures are light-weight composite lattice columns with diverse structural applications, including the potential to replace rebar cages in reinforced concrete. In the current work, finite element analyses are used to predict the critical buckling loads of structures with various dimensions. A dimensional analysis is performed by: deriving non-dimensional Π variables using Buckingham's Π Theorem; plotting the Π variables with respect to critical buckling loads to characterize trends between design parameters and buckling capacity; and evaluating the performance of the outer longitudinal configuration with respect to the traditional, internal longitudinal configuration possessing the same bay length, outer diameter, longitudinal radius, helical radius, and mass. The dimensional analysis demonstrates that the buckling capacity of the inner configuration exceeds that of the equivalent outer longitudinal structure for the dimensions fixed and tested herein. A gradient-based optimization analysis is performed to minimize the mass of both configurations subject to equivalent load criteria. The optimized outer configuration has about 10.5% less mass than the inner configuration, achieved by reducing the outer diameter whilst maintaining the same global moment of inertia.

Introduction

Composite lattice trusses are high-strength, lightweight structures being developed and implemented in disciplines including aerospace structures, automotive bodies, and civil infrastructure [1-3]. In addition to an excellent strength-to-weight ratio, these structures demonstrate substantial damping, stiffness, flexural capacity, and corrosion resistance [4]. Possessing adaptable geometries, they can be reconfigured to serve as beams, struts, columns, shells, and the cores of sandwich composites [5]. IsoTruss® structures are a distinct variation of open-lattice composite grid columns. The general structure is comprised of longitudinal and helical members aligned with anticipated load criteria to maximize strength-to-weight [6]. Longitudinal members are straight, continuous members that span the overall length, whereas helical members wind piece-wise linearly around the structure to form a continuous helical-like member. All members are made of fiber tows encased in resin and consolidated with external wrapping techniques such as braided sleeves, coiled sleeves, Kevlar-wrapped sleeves, or polyester shrink-tape sleeves [7]. Various fiber and resin constituents have been used, including graphite, fiberglass, and basalt tows with diverse epoxy resins [8]. Structural properties such as the number of nodes (i.e., the number of longitudinal members), the number of carbon tows in each member, and the materials are selected according to the distinct design criteria. The structural performance of composite grid columns, including IsoTruss structures, has been widely studied to identify and understand the governing failure modes. Loaded in axial compression, these columns generally fail by material failure, global buckling, local buckling, or strut crushing [9-11]. Buckling is a prevalent failure mode that has been studied using experimental, numerical, analytical, and optimization methods.
Finite element (FE) methods are a prevalent numerical approach, broadly used to assess and compare the structural proficiency of diverse configurations with various material properties [12-16]. Buckling models of composite structures have been developed within FE applications to capture both linear and nonlinear modes: linear eigenvalue buckling models are used to predict the critical buckling loads of global and localized buckling [17,18], while nonlinear models are enhancing the fidelity of buckling analyses, facilitating greater understanding of post-buckling capacities and the influence of shear deformations [19-22]. Analytical methods such as mathematical expressions are often used to verify experimental data and to validate results predicted by numerical models. While the fidelity of these expressions is limited by the corresponding assumptions, the expressions provide a baseline for characterizing interrelations between design parameters (e.g., material properties and structural geometry) and performance criteria (e.g., ultimate capacity or structural efficiency) [23,24]. Such expressions have been derived for composite structures using traditional mechanics principles, including strain-energy formulation and classical laminate theory, and are being augmented to account for transverse curvature and individual member strains [25-27]. Optimization methods are often used in the preliminary design phase of composite structures to maximize strength-to-weight and other desirable characteristics [28-30]. The optimization objectives and constraints are defined with various methods, including the use of analytical expressions that demonstrate sufficient fidelity [23,26,31]. Both gradient-free and gradient-based frameworks have been employed in preceding studies to maximize structural efficiency. Gradient-free methods, such as the non-dominated sorting genetic algorithm II (NSGA-II), are frequently used to optimize structural configurations and facilitate multi- or single-objective optimization of both discrete and continuous design variables [32,33]. Gradient-based methods are used in other studies to perform sensitivity analyses in addition to mass minimization [26,31]. In preceding research, many configurations of IsoTruss structures with inner longitudinal members have been analyzed by manufacturing experimental specimens and performing physical testing [34,35]. Implementing numerical methods such as FE analysis and optimization studies has expedited the design process, facilitating the preliminary assessment of alternative configurations [24,26,36-39]. This study is part of a broad research initiative to develop and implement numerical and optimization methods for the preliminary design of IsoTruss structures. The following studies by Opdahl et al. preceded the current work in developing numerical techniques for IsoTruss structures with inner longitudinal members: a linear eigenvalue buckling FE model was validated with experimental testing and verified with analytical expressions [24]; an analytical expression was derived to predict the local/shell-like buckling mode [26]; trends between design parameters (i.e., outer radius, radius of longitudinal members, radius of helical members, and bay length) and the shell-like buckling mode were characterized in a dimensional analysis [39]; and the mass of an inner longitudinal configuration was minimized in an optimization study using both gradient-based and gradient-free optimization algorithms [26].
The purpose of the current study is to adapt the aforementioned numerical, dimensional, and optimization methods (developed for inner longitudinal configurations [26]) to the design of IsoTruss structures with outer longitudinal members. The outer longitudinal configuration (OLC) possesses the same geometric characteristics as the inner longitudinal configuration (ILC), except that the longitudinal members are placed at the outer diameter of the structure, spanning between the nodes. Figure 1 shows the end view of an IsoTruss structure, and a side view of the OLC is shown in Figure 2; refer to the works presented by Kesler and Opdahl for further explanation of IsoTruss orientation and geometry [26,40]. OLC and ILC structures of equal bay length, outer diameter, and member radii are equivalent in mass. By pushing the longitudinal members to the outer diameter, the global moment of inertia of the structure is increased without increasing the mass; hence, the OLC is inherently more resistant to global buckling than the ILC of equal dimensions. On the other hand, the placement of the longitudinal members in the OLC increases the span of the longitudinal struts, thereby increasing the susceptibility to local buckling. Owing to the inherent manufacturing complexity, experimental testing has not been widely performed on the OLC, so there is limited physical data demonstrating its structural performance and buckling behavior. The current study produces data from dimensional analysis (akin to that performed by Opdahl and Jensen [39]), FE modeling, and optimization techniques (based on the framework presented in [26]) to explore four subtopics. First, the data are used to characterize trends between the OLC design parameters and the buckling capacity. Second, FE predictions are plotted with analytical predictions to verify the accuracy of an analytical expression presented herein. Third, the relative performance of the OLC with respect to the ILC is analyzed via dimensional analysis. Finally, the OLC and ILC are optimized with respect to mass (via gradient-based techniques) to indicate the distinct advantages of each configuration under the same loading criteria.

Methods

Three methods of analysis are implemented in the current study to analyze the buckling behavior of the OLC and compare its performance to that of the ILC. First, a dimensional analysis is performed to characterize the interrelations between the governing design parameters and the critical buckling load: the parameters are reduced to three non-dimensional independent Π variables via Buckingham's Π Theorem (BPT), and the critical buckling load is likewise reduced to a non-dimensional term via BPT. Next, FE methods are used to predict critical buckling loads for diverse structural configurations; the FE analyses are performed in ANSYS Workbench based on the validated methods discussed by Opdahl and Jensen [24], and the predictions are used to assess the relative accuracy of analytical expressions for local buckling in the OLC. Finally, the optimization techniques presented by Opdahl [26] are implemented to optimize the OLC and ILC with respect to mass. These methods are expounded in the subsequent sections.

Dimensional Analysis

The governing design parameters of the OLC are the same as those identified by Opdahl and Jensen [39] as governing the buckling behavior of the ILC (i.e., longitudinal radius r_L, helical radius r_H, bay length b, outer radius R, and Young's modulus E_z).
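The body of Equation (1) is not reproduced in this copy of the text. Purely as an illustration of how such non-dimensional groups are formed under BPT, the sketch below normalizes the three length-type parameters by the outer radius R and the buckling load by E_z·R²; these particular groupings are an assumption for the sketch, not the paper's Equation (1).

```python
def pi_groups(r_L, r_H, b, R, E_z, P_cr):
    """Illustrative non-dimensionalization in the spirit of Buckingham's Pi
    Theorem. The choice of R and E_z*R**2 as the normalizing quantities is
    an assumption for this sketch, not the paper's Equation (1).
    """
    pi_1 = r_L / R               # longitudinal-member radius, non-dimensional
    pi_2 = r_H / R               # helical-member radius, non-dimensional
    pi_3 = b / R                 # bay length, non-dimensional
    pi_0 = P_cr / (E_z * R**2)   # dependent variable: critical buckling load
    return pi_0, pi_1, pi_2, pi_3

# Ratios such as b/r_H = pi_3/pi_2 then index each trend-analysis set.
pi_0, pi_1, pi_2, pi_3 = pi_groups(r_L=1e-3, r_H=0.5e-3, b=0.10, R=0.03,
                                   E_z=100e9, P_cr=500.0)
print(f"b/r_H = {pi_3 / pi_2:.0f}")  # 200 with these illustrative numbers
```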
Therefore, the three independent Π variables derived therein are used in the current study and are provided in Equation (1) for reference. Kesler and Opdahl provide additional figures and descriptions of these governing design parameters [26,40]. In the current study, both global and local buckling modes are considered; hence, the critical buckling load, P_cr, is selected as the dependent variable of interest in place of the shell-like buckling load used by Opdahl and Jensen. While the global length, L, is not explicitly defined as a design parameter in BPT, it is implicitly incorporated in the FE predictions of the critical buckling load of the global buckling mode.

Trend Analysis

Trend analyses are performed for the OLC in the same manner as those presented by Opdahl and Jensen [39] for the ILC. That is, a trend analysis is performed for each independent Π variable with respect to the dependent Π variable. Each trend analysis consists of three sets of FE analyses, and each set has different design parameters to demonstrate how the interrelations may vary with different geometric dimensions. Each set of geometric dimensions is distinguished by the ratio Π_3-to-Π_2, Π_3-to-Π_1, or Π_1-to-Π_2 for the trend analyses of variables Π_1, Π_2, and Π_3, respectively. The independent Π variables and the Π ratios of each FE set are presented in Table 1. The values of the variables were selected to provide Π ratios that are round numbers within the design space of the long, lightweight IsoTruss structures typical of the Rackliffe et al. [34] specimens. Trend analyses are also used in the current study to compare the relative performance of the OLC and the ILC: a trend analysis is performed for each independent Π variable of the ILC configuration using the design parameters of Set 2, and the results of each ILC analysis are plotted with the corresponding results of the Set 2 OLC analysis.

Finite Element Models

The FE analyses consist of static structural analyses and eigenvalue buckling analyses to predict the critical buckling load and mode of each distinct configuration. The boundary conditions were defined as fixed-free at the ends of the IsoTruss structure, and the compression load was defined as 500 N (112 lb). The density of the FE mesh was 10 m⁻¹ (0.25 in.⁻¹). The fixed design parameters that correspond to each set of trend analyses (i.e., number of tows per longitudinal or helical member, N_t; number of bays, N_b; and overall length, L) are summarized in Table 2. The FE models demonstrate two general buckling modes: global buckling and local buckling. The global buckling mode follows the typical model and expression of Euler buckling of a cantilever column; Figure 3 shows the global buckling mode of an IsoTruss with inner longitudinal members, produced from an FE model. The local buckling mode occurs over the longitudinal members such that the struts buckle either inward or outward symmetrically, with a wavelength of two bays.

Optimization Techniques

The gradient-based techniques presented by Opdahl are implemented in the current study to optimize the OLC with respect to the same bounds and constraints as those imposed on the ILC by Opdahl [26]. The code employs the built-in optimizer 'fmincon' to minimize mass using a gradient-based algorithm. The framework executes the optimization in two stages. First, the optimizer minimizes the mass, treating all design variables as continuous.
Second, the discrete variables (i.e., the number of bays and the number of longitudinal tows) are rounded to integer values, fixed as input variables, and the outer diameter is re-optimized as a continuous variable. Algorithmic differentiation is implemented within the analysis to supply the gradients of the objective and constraint functions to the optimizer, and the sensitivity derivatives and Lagrange multipliers are produced with the optimized solution. The problem definition therein includes constraints on the eigenvalues of the longitudinal strut buckling mode, λ_l, and the shell-like buckling mode, λ_sb, that are typical of the ILC. In contrast to the shell-like buckling mode exhibited by the ILC, the local bay-level buckling of the OLC demonstrates complete radial symmetry, with the longitudinal struts all buckling either outward or inward at a given point along the longitudinal axis. The shell-like buckling equation for the ILC local buckling mode was therefore replaced with an equation that predicts local buckling in the OLC: this local buckling mode, defined by the bay buckling load, P_b, replaces P_l and P_sb of the ILC. The analytical expression used to predict local buckling in the OLC is shown in Equation (2). The boundary constraints imposed by the helical struts are approximated as pinned joints with an effective length factor, μ_b, of one. While the constraining influence of the nodes on the local buckling mode is approximated in the current work as a pinned joint, other studies explore the boundary constraint as a function of design parameters such as member radius, material properties, and/or the inclination angle [34,36,41,42]. Opdahl documents the derivation of a boundary constraint coefficient, μ_sb, for the ILC that is a function of the geometry and material of the longitudinal members [26]. As the stiffness of the longitudinal members increases, the rotational stiffness of the node increases, decreasing the validity of the pinned-joint assumption. The influence of the helical members at the nodes is expounded in the discussion section based on the results of the analyses performed herein; additional exploration should be performed in a subsequent study to enhance the fidelity of the analytical expression in Equation (2). The global buckling load, P_g, is predicted using the Euler-buckling equation for a cantilever column. The moment of inertia coefficient, c, is selected based on the derivation by Winkel [36] for outer longitudinal members (see Equation (3)):

μ_g = 2.0 (for a fixed-free column); c = 4.0 (for an 8-node IsoTruss with outer longitudinal members). (3)

The adjusted problem definition of the optimization analysis performed in the current study is summarized mathematically in Equation (4): the mass is minimized subject to the bay-level and global buckling constraints and to lower and upper bounds on the design variables [N_b, N_t, L, D].
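To make the two buckling criteria concrete, the sketch below implements the standard Euler forms implied by the text. The closed-form body of Equation (2) is not reproduced in this copy, so the pin-ended Euler expression used here, and the circular-section inertia I_L = π·r_L⁴/4, are assumptions for illustration rather than the paper's verbatim equations; the coefficients μ_g = 2.0 and c = 4.0 follow the reconstruction of Equation (3) above.

```python
import math

def bay_buckling_load(E_z, r_L, b, mu_b=1.0):
    """Bay-level buckling of one longitudinal strut (pin-ended Euler form,
    mu_b = 1.0 per the pinned-joint assumption in the text). The solid
    circular inertia I_L = pi*r_L**4/4 is an assumed detail.
    """
    I_L = math.pi * r_L**4 / 4.0
    return math.pi**2 * E_z * I_L / (mu_b * b) ** 2

def global_buckling_load(E_z, A_L, R, L, c=4.0, mu_g=2.0):
    """Global Euler buckling of the fixed-free column (mu_g = 2.0), with
    I = c*A_L*R**2 and c = 4.0 for an 8-node IsoTruss with outer
    longitudinal members, following Equation (3) as reconstructed above.
    """
    I = c * A_L * R**2
    return math.pi**2 * E_z * I / (mu_g * L) ** 2

# Illustrative values only (not specimen data): stiff composite member,
# 1 mm longitudinal radius, 100 mm bay, 30 mm outer radius, 1 m length.
E_z, r_L, b, R, L = 100e9, 1e-3, 0.10, 0.03, 1.0
A_L = math.pi * r_L**2
print(f"bay-level: {bay_buckling_load(E_z, r_L, b):.1f} N")
print(f"global:    {global_buckling_load(E_z, A_L, R, L):.1f} N")
```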
Results

The results from the FE analyses and analytical predictions are presented as four subtopics in the subsequent sections. The first two subtopics focus on characterizing the buckling behavior of the OLC. First, the FE analyses of the OLC are used in trend analyses to assess the interrelations between each independent Π variable and the dependent Π variable. Second, the analytical predictions of the OLC critical buckling loads are plotted with the FE predictions. The plots indicate the extent to which the analytical expression adequately predicts critical buckling with respect to the FE results. The next two subtopics compare the performance of the OLC with that of the ILC. First, data collected for the OLC and ILC trend analyses are plotted together to indicate the relative performance of the configurations within the design space of the trend analyses. Second, the analytical expression for bay-level buckling in the OLC is implemented in the gradient-based optimization routine presented by Opdahl [26] to compare the OLC and ILC structures that are optimized for mass.

Trend Analyses of OLC

Data from the OLC trend analyses are first used to characterize trends between the non-dimensional, independent Π variables. The independent Π variables Π1, Π2, and Π3 are plotted against Π0 in Figures 6-8, respectively. Local buckling loads are represented in the plots by solid markers, whereas global buckling loads are represented by unfilled markers. The dotted lines represent the best-fit curves.

Figure 6 indicates that increasing Π1 induces a quadratic increase in Π0. It follows that increasing the radius of the longitudinal members induces a quadratic increase in the critical buckling load. Figure 6 also indicates that the Π0 vs. Π1 curve shifts downward as the ratio of b-to-r_H increases. The general quadratic expression that relates Π1 to Π0 is provided in Equation (5):

Π0 = α Π1² + β Π1. (5)

The coefficients of the quadratic expressions (i.e., α and β) vary with the ratio of Π3-to-Π2. The coefficients and R-squared values that correspond to the curves shown in Figure 6 are provided in Table 3. The expressions are derived such that the ordinate intercept is set to zero.

Figure 7 indicates that Π0 increases with respect to increases in Π2. It follows that the critical buckling load, P_cr, increases with respect to increases in the radius of the helical members, until global buckling becomes the governing buckling mode. Once global buckling occurs, the curve flattens with respect to Π2, as shown in the curve b/r_H = 100, where Π2 is approximately 0.015. As the b-to-r_L ratio increases, the Π0 vs. Π2 curve shifts downward. The generalized quadratic expression that relates Π2 to Π0 is provided in Equation (6):

Π0 = α Π2² + β Π2. (6)

The coefficients of the expression (i.e., α and β) vary with the ratio of Π3-to-Π1. The coefficients that correspond to the curves shown in Figure 7 are provided in Table 4. The expressions are derived such that the ordinate intercept is zero. The corresponding R-squared values are also provided in Table 4.

Figure 8 presents the interrelations of Π0 and Π3 for three values of the ratio r_L-to-r_H. As the r_L-to-r_H ratio increases, the Π0 vs. Π3 curve shifts downward. The curve of best fit that characterizes the trends between Π0 and Π3 is a power curve, provided in general terms in Equation (7):

Π0 = α Π3^ξ. (7)

The coefficients, α and ξ, of the power curves vary with respect to the r_L-to-r_H ratio. The coefficients are provided in Table 5 with the corresponding R-squared values.
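The zero-intercept quadratic fits of Equations (5) and (6) and the power fit of Equation (7) amount to small least-squares problems; a minimal sketch, with made-up sample data standing in for an FE set, is given below.

```python
import numpy as np

def fit_quadratic_zero_intercept(p_in, p_out):
    """Least-squares fit of p_out = alpha*p_in**2 + beta*p_in (no constant term)."""
    X = np.column_stack([p_in**2, p_in])
    (alpha, beta), *_ = np.linalg.lstsq(X, p_out, rcond=None)
    return alpha, beta

def fit_power(p_in, p_out):
    """Fit p_out = alpha * p_in**xi via linear regression in log-log space."""
    xi, log_alpha = np.polyfit(np.log(p_in), np.log(p_out), 1)
    return np.exp(log_alpha), xi

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat)**2)
    ss_tot = np.sum((y - np.mean(y))**2)
    return 1.0 - ss_res / ss_tot

# Illustrative data standing in for one FE set (not values from the study):
pi1 = np.linspace(0.005, 0.02, 8)
pi0 = 3.0e2 * pi1**2 + 1.5 * pi1 + np.random.default_rng(0).normal(0, 1e-4, 8)

alpha, beta = fit_quadratic_zero_intercept(pi1, pi0)
print(alpha, beta, r_squared(pi0, alpha * pi1**2 + beta * pi1))
```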
Analytical vs. FE Predictions of OLC

In this section, the analytical predictions of the critical buckling loads of the OLC, P_cr,anal, are compared with the FE predictions, P_cr,FE. Figure 9 plots the analytical predictions and FE predictions of Π0 vs. Π1. The corresponding percent deviation (calculated via Equation (8)) of the analytical predictions with respect to the FE predictions is plotted against Π1 in Figure 10. Similarly, Figures 11 and 12 compare the predictions of Π0 vs. Π2 and illustrate the corresponding percent deviation, respectively; and Figures 13 and 14 compare the predictions of Π0 vs. Π3 and illustrate the corresponding percent deviation, respectively. Solid lines represent the FE predictions, whereas the dashed lines indicate the analytical predictions.

Percent Deviation = (P_cr,anal − P_cr,FE) / P_cr,FE · 100 (8)

Figure 9. OLC Π0 vs. Π1: analytical and FE predictions [26].

Trend Analyses of OLC vs. ILC

In this section, the buckling capacities of the OLC and ILC are compared to assess the relative performance of the two configurations. The independent Π variables of the OLC and ILC Set 2 configurations are plotted with respect to Π0 in Figures 15-17.

Figure 15 demonstrates the interrelation of Π0 and Π1 of both the ILC and OLC, where the b-to-r_H ratio is 200. Both configurations demonstrate a quadratic relation between Π1 and Π0. While the ILC curve indicates a greater buckling capacity than the corresponding OLC curve, the difference of the OLC curve relative to the ILC curve decreases dramatically from −60% to −4% as Π1 increases from approximately 0.006 to 0.015.

Figure 16 demonstrates the interrelation of Π0 and Π2 of the ILC and OLC structures, where the b-to-r_L ratio is 115. The plot once again demonstrates that the ILC possesses greater buckling capacity than the OLC for the Set 2 design space. As r_H increases, the critical buckling load of the ILC increases quadratically, whereas the critical buckling load of the OLC increases more proportionally. At approximately Π2 = 0.010, the ILC buckling mode transitions from local to global buckling, and the critical buckling load plateaus. Conversely, the OLC continues to be controlled by local buckling. This can be attributed to the placement of the longitudinal members at the outer diameter. With the longitudinal members at the outer diameter, the unbraced length of the longitudinal struts is increased compared to the ILC equivalent. In addition, the global moment of inertia of the OLC is greater than that of the ILC when both IsoTruss structures have the same number of nodes and outer radius (see Winkel [36]). Thus, the OLC is more susceptible to bay-level buckling and less susceptible to global buckling than the ILC equivalent. The difference of the OLC curve relative to the ILC curve decreases to less than −1% when Π2 approaches 0.014.

Figure 17 demonstrates the interrelation of Π0 and Π3 of the ILC and OLC, where the r_L-to-r_H ratio is approximately 1.75. The critical buckling load of the ILC once again exceeds that of the OLC in each case. The difference between the OLC design point and the ILC design point increases from −2% to −17% as Π3 increases from 0.88 to 1.83.
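The percent deviation metric of Equation (8) vectorizes directly; a small helper with placeholder values is shown below.

```python
import numpy as np

def percent_deviation(p_anal, p_fe):
    """Percent deviation of analytical predictions relative to FE predictions, Eq. (8)."""
    p_anal = np.asarray(p_anal, dtype=float)
    p_fe = np.asarray(p_fe, dtype=float)
    return (p_anal - p_fe) / p_fe * 100.0

# Placeholder values, not data from the study:
print(percent_deviation([950.0, 1980.0], [1000.0, 2000.0]))  # -> [-5. -1.]
```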
Optimization of OLC vs. ILC

This final section incorporates the analytical expressions for the OLC in the gradient-based optimization routine. The OLC structure is optimized for mass with the same bounds as the ILC structure presented by Opdahl [26]. The constraints are likewise kept the same, with the exception of local bay-level buckling. With the longitudinal members placed at the outer diameter of the structure, the OLC is not susceptible to shell-like buckling. The local bay buckling shown in Figure 4 is the same failure mode as longitudinal strut buckling in the OLC. Table 6 presents the dimensions and mass of the optimized OLC and ILC structures. The optimized OLC has about 10.5% less mass than the optimized ILC for the prescribed constraints and bounds. The OLC optimum has more bays than the ILC optimum; however, the outer diameter of the OLC optimum is approximately 16% smaller than that of the ILC optimum. Even though the outer diameter has been reduced, the global moment of inertia is the same between structures. The Lagrange multipliers, λ_m, are shown with respect to the lower bound, upper bound, and structural failure constraints (λ_g, λ_sb, λ_l, and σ_u, respectively).

A local sensitivity analysis of each optimized configuration was performed by calculating the sensitivity derivatives of the mass and the constraints with respect to each design variable. The Jacobian matrices of the optimized OLC and ILC structures are presented in Equations (9) and (10), respectively. Row 1 of Equations (9) and (10) indicates that the design variables of both configurations are positively correlated with the mass of the overall structures. Row 2 of Equations (9) and (10) implies that the outer diameter has the greatest relative effect (i.e., inversely) on the global buckling load, while the effect of the number of bays is negligible. Row 3 of Equation (9) and Row 4 of Equation (10) indicate that the longitudinal buckling load of each configuration (manifest in the OLC as local bay-level buckling) is inversely related to the number of bays and the number of longitudinal tows (i.e., positively correlated with the bay length). The sensitivity of the ILC with respect to the number of longitudinal tows is steeper than that of the OLC. Row 4 of Equation (9) and Row 5 of Equation (10) imply that the ultimate material stress is only affected by the number of longitudinal tows, and is inversely related. The sensitivity of the material stress of the OLC optimum is much steeper than that of the ILC optimum with respect to the number of longitudinal tows (for these particular optima).
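A local sensitivity analysis of this kind can be approximated numerically by central finite differences of the objective and constraint functions around the optimum; the sketch below uses stand-in functions, since the study's actual models are not reproduced here.

```python
import numpy as np

def jacobian(funcs, v0, rel_step=1e-3):
    """Central-difference Jacobian: one row per function, one column per variable."""
    v0 = np.asarray(v0, dtype=float)
    J = np.zeros((len(funcs), v0.size))
    for j in range(v0.size):
        h = rel_step * max(abs(v0[j]), 1e-9)
        vp, vm = v0.copy(), v0.copy()
        vp[j] += h
        vm[j] -= h
        for i, f in enumerate(funcs):
            J[i, j] = (f(vp) - f(vm)) / (2.0 * h)
    return J

# Stand-in objective/constraint functions over [N_b, N_t, L, D] (hypothetical):
funcs = [lambda v: v[0] * v[1],            # "mass"-like function
         lambda v: v[3] ** 2 / v[2]]       # "buckling load"-like function
print(jacobian(funcs, [40.0, 6.0, 1.0, 0.06]))
```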
Influence of Helical Members

One of the most prominent themes from the analyses of the current study is the unprecedented contribution of the helical members to the critical buckling load of the OLC structures. The influence of the helical members can be assessed by the OLC interrelation curves, the plots comparing the analytical and FE predictions, and the percent deviation curves. Figures 9, 11 and 13 demonstrate the extent to which the independent Π variables influence the dependent Π variable predicted from both analytical and FE predictions. The shapes of the curves shown in Figures 9 and 13 indicate that the independent Π variables Π1 and Π3 induce similar effects in Π0 whether predicted using analytical or FE methods. Figure 11, conversely, indicates that changes in Π2 affect the analytical and FE predictions differently. While the analytical predictions are not affected by changes in Π2, the FE predictions indicate that increasing Π2 increases the FE prediction of Π0. Likewise, the percent deviation curves indicate that the radius of the helical members has a significant effect on the deviation between analytical and FE predictions. Figure 10 demonstrates that as the ratio b/r_H decreases, the percent deviation curve is shifted downward, indicating an increase in the percent deviation. Thus, if all design parameters are fixed and the helical radius is increased, the percent deviation will also increase. Figure 12 demonstrates that as Π2 increases along each curve, the percent deviation also increases until global buckling is induced, at which point the critical buckling load no longer changes with respect to the helical radius. These results indicate that the analytical expression in Equation (2) can be improved by incorporating the helical radius. One method would be to include the helical radius in the calculation of the boundary constraint coefficient, µ_b. The analytical expressions for ILC strut buckling and ILC shell-like buckling both include derivations for boundary constraint coefficients, as shown by Opdahl [26]. The strut buckling derivation calculates the flexural rigidity of the helical struts at the nodes, whereas the shell-like buckling derivation incorporates the bending energy from the intersecting helical members.

Figures 18 and 19 are images produced from FE models of OLC Set 2 structures. The figures have the same design parameters except for the helical radius. Figure 18 has two carbon tows in the helical members, whereas Figure 19 has thirteen carbon tows in each helical member. By increasing the number of carbon tows, the rotation at the IsoTruss nodes is noticeably decreased, thereby increasing the flexural rigidity and localizing the deflection to the buckled longitudinal strut. The colors of the figures represent deflection.

Figure 18. Local buckling of Set 2 OLC with two carbon tows in helical members [26].

Figure 19. Local buckling of Set 2 OLC with thirteen carbon tows in helical members [26].

While the boundary constraint of Figure 18 acts similarly to a pinned connection, the boundary constraint of Figure 19 approaches the behavior of a fixed connection. The rotation of the helical constraints at the nodes of the OLC is magnified for clarity in Figures 20 and 21. Note that the helical members with two carbon tows (Figure 20) show enough rotation at the nodes to resemble a smooth inflection point, whereas the nodes of the thirteen-tow helical members (Figure 21) do not rotate as much, and flatten the longitudinal member at the nodes. Figures 20 and 21 were reproduced with legends that indicate the total deformation corresponding to the buckling mode. The main purpose of these images is to indicate the reduced rotation at the nodes due to the increase in helical radius; hence, a color bar for the stress cloud diagram is not included in the current work. It is recommended that a boundary constraint coefficient be derived for bay buckling of OLC structures that incorporates the flexural rigidity demonstrated in the images.
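One hedged way to realize such a coefficient is to interpolate the effective length factor between the pinned (µ_b = 1.0) and fixed (µ_b = 0.5) limits using the ratio of helical joint rotational stiffness to longitudinal strut flexural stiffness; the stiffness model below is an illustrative assumption, not a derived result.

```python
import numpy as np

def mu_b_from_joint_stiffness(E_h, r_h, l_h, E_l, r_l, b):
    """
    Effective length factor for bay-level strut buckling, interpolated between
    the pinned (1.0) and fixed (0.5) limits via the ratio of helical rotational
    stiffness to longitudinal flexural stiffness. Purely illustrative model.
    """
    I_h = np.pi * r_h**4 / 4.0            # helical member second moment of area
    I_l = np.pi * r_l**4 / 4.0            # longitudinal member second moment of area
    k_joint = E_h * I_h / l_h             # rotational stiffness supplied at the node
    k_strut = E_l * I_l / b               # flexural stiffness of the buckling strut
    ratio = k_joint / k_strut
    return 0.5 + 0.5 / (1.0 + ratio)      # ratio -> 0: pinned; ratio -> inf: fixed

# Few helical tows (small r_h) vs. many tows (large r_h), other values illustrative:
print(mu_b_from_joint_stiffness(120e9, 0.4e-3, 0.05, 120e9, 1.2e-3, 0.1))  # near 1.0
print(mu_b_from_joint_stiffness(120e9, 1.5e-3, 0.05, 120e9, 1.2e-3, 0.1))  # toward 0.5
```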
OLC vs. ILC Performance

The relative performance of the OLC and ILC with respect to buckling is assessed from the comparative trend analyses and the optimization study. The trend analyses presented in Figures 9, 11 and 13 each demonstrate that the buckling capacities of the ILC structures exceed those of the corresponding OLC structures that possess the same outer radius and are not independently optimized. Furthermore, Figure 11 demonstrates that the Π0 of the ILC structure increases quadratically with respect to Π2 until it transitions to global buckling. Conversely, the Π0 of the OLC structure increases at a shallower rate. The curves meet at approximately Π2 = 0.014, where the OLC local buckling load corresponds with the ILC global buckling load.

While the design space of the trend analyses favored the ILC, the optimization analysis favored the OLC where both configurations were optimized with respect to mass. The optimized OLC has a shorter bay length, which increases the total mass due to the longer helical member length, but the OLC also has a smaller outer diameter and fewer longitudinal tows. The net result is that the strength-to-weight ratio of the OLC exceeds that of the ILC, in part, by reducing the outer diameter. The outer diameter is approximately 16% smaller than that of the ILC configuration, and the overall weight is reduced by about 10.5%. The influence of the outer diameter on the ILC and OLC buckling behavior could have been made manifest in the dimensional analysis if a trend analysis had been performed with respect to the outer diameter. One such analysis could be performed by plotting the Π variables P_cr/(E·r_L²) versus R/r_L, where R varies for a fixed value of r_L.

Conclusions

The purpose of the current study is to characterize the buckling behavior of 8-node IsoTruss structures with outer longitudinal members. A dimensional analysis is performed to analyze the interrelations between the governing design parameters and the critical buckling load. The critical buckling loads of diverse geometric dimensions are predicted using finite element (FE) modeling in ANSYS WorkBench. The best-fit curves that indirectly relate the longitudinal radius, the helical radius, and the bay length to the critical buckling load are characterized as quadratic and power expressions. The FE predictions are also plotted with analytical predictions to assess the accuracy of the analytical expression for bay-level buckling with respect to FE methods. Changes in the longitudinal radius and the bay length induce similar trends in the FE and analytical predictions. Increasing the helical radius, however, does not induce the same trends in the analytical and FE predictions. While increasing the helical radius increases the FE prediction, there is no change in the prediction from the analytical expression.

Trend analyses are also performed on corresponding 8-node IsoTruss structures with inner longitudinal members. The buckling data of the inner longitudinal configurations (ILC) are plotted with the data of the outer configurations (OLC) to analyze the relative performance of the configurations with respect to buckling resistance. Each plot indicates that the ILC has greater buckling resistance than the outer longitudinal counterpart within the design space of the trend analysis, where the dimensions of the ILC and OLC are equivalent. The relative performance of the OLC and ILC is also analyzed by optimizing both configurations with respect to mass. The optimized structures are subject to the same bounds, and the constraints are defined by analytical expressions that predict the relevant buckling modes of each configuration. The optimized OLC has about 10.5% less mass than that of the optimized ILC.

Recommendations

First, a boundary constraint coefficient should be derived for the analytical expression that predicts local buckling in the OLC. The coefficient should incorporate the flexural rigidity of the helical members at the nodes, thereby capturing the effect of the helical radius on the buckling stability. Once derived, another trend analysis of Π2 can be performed to determine whether the analytical expression and FE model predict similar trends in the local buckling load when varying r_H. The improved analytical expression could be re-implemented in the gradient-based optimization code to improve the accuracy of the bay-level buckling constraint. Second, additional research should be performed to delineate the design spaces where the ILC and OLC are preferred. While the results of the trend analyses indicate that the ILC has greater resistance to buckling than the OLC counterpart, the optimization analysis indicates that the optimized OLC has less mass than the optimized ILC.
The advantage can be attributed to the fact that the OLC has a greater global moment of inertia than the ILC of equivalent outer radius. The design space could be delineated by performing a trend analysis with respect to the outer radius and the bay length.

Data Availability Statement: Some data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.
Ultrasound Signal Processing: From Models to Deep Learning

Abstract

Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions. Conventionally, reconstruction algorithms were derived from physical principles. These algorithms rely on assumptions and approximations of the underlying measurement model, limiting image quality in settings where these assumptions break down. Conversely, more sophisticated solutions based on statistical modelling, careful parameter tuning, or increased model complexity can be sensitive to different environments. Recently, deep learning based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic techniques often rely on generic model structures, and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning as well as exploiting domain knowledge. These model-based solutions yield high robustness, and require fewer parameters and less training data than conventional neural networks. In this work we provide an overview of these techniques from recent literature, and discuss a wide variety of ultrasound applications. We aim to inspire the reader to further research in this area, and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on model-based deep learning techniques for medical ultrasound.

Introduction

Ultrasound (US) imaging has proven itself to be an invaluable tool in medical diagnostics. Among many imaging technologies, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), US uniquely positions itself as an interactive diagnostic tool, providing real-time spatial and temporal information to the clinician. Combined with its relatively low cost, compact size, and absence of ionizing radiation, US imaging is an increasingly popular choice in patient monitoring. Consequently, the versatility of US imaging has spurred a wide range of applications in the field. While conventionally it is used for the acquisition of B-mode (2D) images, more recent developments have enabled ultrafast and 3D volumetric imaging. Additionally, US devices can be used for measuring clinically relevant features such as blood velocity (Doppler), tissue characteristics (e.g. elastography maps), and perfusion through ultrasound localization microscopy (ULM).

While this wide range of applications shares the same underlying measurement steps (acquisition, reconstruction, and visualisation), the signal processing pipelines are often specific to each application. It follows that the quality of US imaging strongly depends on the implemented signal processing algorithms. The resulting demand for high-quality signal processing has pushed the reconstruction process from fixed, often hardware-based implementations to the digital domain (Thomenius, 1996; Kim et al., 1997). More recently, this has led to fully software-based algorithms, as they open up the potential for complex measurement models and statistical signal interpretations. However, this shift has also posed a new set of challenges, as it puts a significant strain on the digitisation hardware, bandwidth-constrained data channels, and computational capacity. As a result, clinical devices, where real-time imaging and robustness are of utmost importance, still mainly rely on simple hardware-based solutions.
A more recent development in this field is the utilisation of deep neural networks. Such networks can provide fast approximations for signal recovery and, owing to their exploitation of parallel processing, can be implemented efficiently after training to facilitate ultra-fast signal processing. However, by inheriting generic network architectures from computer vision tasks, these approaches are highly data-driven and often over-parameterized, posing several challenges. In order to converge to a well-generalised solution across the full data distribution encountered in practice, large amounts of (unbiased) training data are needed, which is not always trivial to obtain. Furthermore, these models are often treated as 'black boxes', making it difficult to guarantee correct behavior in a real clinical setting.

To overcome some of the challenges of purely data-driven methods, an alternative approach is to combine model-based and data-driven methods, in an attempt to get the best of both worlds. The proposition here is that the design of data-driven methods for ultrasound signal processing can likely benefit from the vast amounts of research on conventional, model-based, reconstruction algorithms, informing e.g. specific neural network designs or hybrid processing approaches.

In this review paper, we aim to provide the reader with a comprehensive overview of ultrasound signal processing based on modelling, machine learning, and model-based learning. To achieve this, we take a probabilistic perspective and place methods in the context of their assumptions on signal models, statistics, and training data. While other works (Shlezinger et al., 2020; Monga et al., 2021; Van Sloun et al., 2019a; Al Kassir et al., 2022; Liu et al., 2019) offer an excellent overview of the different aspects of AI applied to ultrasound image processing, the focus of this paper is to put the theory of both signal processing and machine learning under a unifying umbrella, rather than to showcase a general review of deep learning being applied to ultrasound-specific problems. To that end, we cover topics ranging from beamforming to post-processing and advanced applications such as super-resolution. Throughout the paper we will distinguish between three types of approaches that we cover in separate sections.

• Model-Based Methods for US Signal Processing: Conventional model-based methods derive signal processing algorithms by modelling the problem based on first principles, such as knowledge of the acquisition model, noise, or signal statistics. Simple models offer analytical solutions, while more complex models often require iterative algorithms.

• Deep Learning (DL) for US Signal Processing: Deep learning (DL) solutions are fully data-driven and fit highly-parameterized algorithms (in the form of deep neural networks) to data. DL methods are model-agnostic and thus rely on the training data to expose structure and relations between inputs and desired outputs.

• Model-Based DL for US Signal Processing: Model-based DL aims at bridging the gap by deriving algorithms from first-principle models (and their assumptions) while learning parts of these models (or their analytic/iterative solutions) from data. These approaches enable incorporating prior knowledge and structure (inductive biases), and offer tools for designing deep neural networks with architectures that are tailored to a specific problem and setting.
The resulting methods resemble conventional model-based methods, but allow for overcoming mismatched or incomplete model information by learning from data.

In all cases, data is needed to test the performance of (clinical) signal processing algorithms. However, in deep learning based solutions specifically, we observe an increasing need for training data when prior knowledge on the underlying signal model is not fully exploited. A schematic overview of these approaches is given in Figure 1, including examples of corresponding techniques in the case of ultrasound beamforming.

We begin by briefly explaining the probabilistic perspective and notation we will adopt throughout the paper in a preliminaries section, after which we provide background information on the basics of US acquisition, which can be skipped by experts in the field of ultrasound. Following this background information, we will dive into model-based US signal processing, in which we will derive various conventional beamforming and post-processing algorithms from their models and statistical assumptions. Next, we turn to DL methods, after which we bridge the gap between model-based and DL-based processing, identifying opportunities for data-driven enhancement of model-based methods (and their assumptions) by DL. Finally we provide a discussion and conclusion, where we offer a future outlook and several opportunities for deep learning in ultrasound signal processing.

A probabilistic approach to deep learning in ultrasound signal processing

In this paper we will use the language and tools of probability theory to seamlessly bridge the gap between conventional model-based signal processing and contemporary machine/deep learning approaches. As Shakir Mohamed (DeepMind) phrased it: "Almost all of machine learning can be viewed in probabilistic terms, making probabilistic thinking fundamental. It is, of course, not the only view. But it is through this view that we can connect what we do in machine learning to every other computational science, whether that be in stochastic optimisation, control theory, operations research, econometrics, information theory, statistical physics or bio-statistics. For this reason alone, mastery of probabilistic thinking is essential." To that end, we begin by briefly reviewing some concepts in probabilistic signal processing based on models, and then turn to recasting such problems as data-driven learning problems.

Preliminaries on model-based probabilistic inference

Let us consider a general linear model

y = Ax + n, (1)

where y is our observed signal, A a measurement matrix, n a noise vector, and x the signal of interest. As we shall see throughout the paper, many problems in ultrasound signal processing can be described according to such linear models. In ultrasound beamforming for example, y may denote the measured (noisy) RF signals, x the spatial tissue reflectivity, and A a matrix that transforms such a reflectivity map to channel domain signals. The goal of beamforming is then to infer x from y, under the measurement model in (1). Recalling Bayes' rule, we can define the posterior probability of x given y as a product of the likelihood p(y|x) and a prior p(x), such that

p(x|y) = p(y|x) p(x) / p(y) (2)
∝ p(y|x) p(x). (3)

Following (3) we can define a maximum a posteriori (MAP) estimator for (1), given by

x̂_MAP = arg max_x p(x|y) = arg max_x [log p(y|x) + log p(x)], (4)

which provides a single, most likely, estimate according to the posterior distribution. If we assume a Gaussian white noise vector n in (1), i.e. y ∼ N(Ax, σ_n²I), the MAP estimator becomes:

x̂_MAP = arg min_x ‖y − Ax‖₂² − λ log p(x), (5)

where λ is a scalar regularization parameter. Evidently, the MAP estimator takes the prior density function p(x) into account. In other words, it allows us to incorporate and exploit prior information on x, should this be available. Conversely, if x is assumed to be deterministic but unknown, we get the maximum likelihood (ML) estimator. The ML estimator thus assigns equal likelihood to each x in the absence of measurements. As such, (5) simplifies to:

x̂_ML = arg max_x log p(y|x) = arg min_x ‖y − Ax‖₂². (6)
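To make the distinction concrete, the snippet below solves the ML problem (6) and a Gaussian-prior instance of the MAP problem (5) (which reduces to ridge-regularized least squares) for a small random system; all sizes and the noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 64                       # measurements, unknowns (underdetermined)
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
y = A @ x_true + 0.1 * rng.standard_normal(n)

# ML / least squares, Eq. (6): minimize ||y - Ax||^2 (minimum-norm solution here)
x_ml = np.linalg.pinv(A) @ y

# MAP with Gaussian prior x ~ N(0, I/lam), Eq. (5): minimize ||y - Ax||^2 + lam*||x||^2
lam = 0.5
x_map = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

print(np.linalg.norm(x_ml - x_true), np.linalg.norm(x_map - x_true))
```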
Many traditional ultrasound processing methods are of this form, where the output only depends on a set of (finely tuned) hyper-parameters and the input data. This is not surprising, as deriving a strong and useful prior that generalizes well to the entire expected data distribution is challenging in its own right. Data-driven approaches aim to overcome the challenges of accurate modeling by learning the likelihood function, the prior, the entire posterior, or a direct end-to-end mapping (replacing the complete MAP estimator) from data. We will detail on these methods in the following section.

Preliminaries on deep-learning-based inference

Fully data-driven methods aim at learning the optimal parameters θ* of a generic parameterized mapping f_θ(·): Y → X from training data. In deep learning, the mapping function f_θ(·) is a deep neural network. Learning itself can also be formulated as a probabilistic inference problem, where optimized parameter settings for a fixed network architecture are inferred from a dataset D. To that end we define a posterior over the parameters:

p(θ|D) ∝ p(D|θ) p(θ), (7)

where p(θ) denotes a prior over the parameters. Often p(θ) is fully factorized, i.e. each parameter is assumed independent, to keep the learning problem in deep networks (with millions of parameters) tractable. Typical priors are Gaussian or Laplacian density functions. Most deep learning applications rely on MAP estimation to find the set of parameters that minimize the negative log posterior:

θ̂_MAP = arg min_θ [−log p(D|θ) − log p(θ)]. (8)

Note that for measurement (input) and signal (output) training pairs (y_i, x_i) ∈ D, common forms of p(x|f_θ(y), θ) are Gaussian, Laplacian, or categorical distributions, resulting in mean-squared-error, mean-absolute-error, and cross-entropy negative log-likelihood functions, respectively. Similarly, Gaussian and Laplacian priors lead to ℓ2 and ℓ1 regularization on the parameters, respectively. It is worth noting that while most deep learning applications perform MAP estimation, there is increasing interest in so-called Bayesian deep learning, which aims at learning the parameters of the prior distribution p(θ) as well. This enables posterior sampling during inference (by sampling from p(θ)) for (epistemic) uncertainty estimation. Again, these distributions are often fully factorized (e.g. independent Gaussian or Bernoulli) to make the problem tractable (Gal and Ghahramani, 2016).

After training (i.e. inferring parameter settings), we can use the network to perform MAP inference to retrieve x from new input measurements y:

x̂ = arg max_x log p(x|f_θ(y), θ). (9)

The neural network thus directly models the parameters of the posterior, and does not factorize it into a likelihood and prior term as model-based MAP inference does. Note that for Gaussian and Laplace density functions, in which a neural network f_θ(y) computes the distribution mean, this MAP estimate is simply x̂ = f_θ(y). For categorical distributions, f_θ(y) computes the probabilities for each category/class.

Typical deep neural network parameterizations f_θ(·) are therefore model-agnostic, as they disregard the structure of the measurement/likelihood model and prior, and offer a high degree of flexibility to fit many data distributions and problems. However, many such parameterizations do exploit specific symmetries in the expected input data. Examples are convolutional neural networks, which exploit the spatially shift-invariant structure of many image classification/regression problems through shift-equivariant convolutional layers. Similarly, in many applications where the input is temporally correlated, such as time series analysis, recurrent neural networks (RNNs) are employed.
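In code, the negative log posterior of (8) is simply a data-fit loss plus a parameter penalty. The PyTorch sketch below assumes a Gaussian likelihood (MSE loss) and a Gaussian prior on the weights (realized as ℓ2 weight decay); the toy network and data are placeholders.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 16))
# weight_decay implements the Gaussian prior term -log p(theta) ~ ||theta||^2 in (8)
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)

y_batch = torch.randn(128, 16)            # placeholder measurements
x_batch = torch.randn(128, 16)            # placeholder ground-truth signals

for _ in range(100):
    opt.zero_grad()
    # MSE is the negative log-likelihood -log p(D|theta) under a Gaussian model
    loss = torch.nn.functional.mse_loss(net(y_batch), x_batch)
    loss.backward()
    opt.step()
```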
Preliminaries on model-based deep learning

Model-based DL aims at imposing much more structure on the network architectures and parameterizations of f_θ(·). Where standard deep networks aim at fitting a broad class of problems, model-based DL offers architectures that are highly tailored to specific inference problems of the forms given in (1) and (4), i.e. they are aware of the model and structure of the problem. This promises to relax challenges related to generalization, robustness, and interpretability in deep learning. It often also enables designing smaller (but more specialized) networks with a lower computational and memory footprint.

To derive a model-based DL method, one can start by deriving a MAP estimator for x from the model, including assumptions on the likelihood model p(y|x) and prior p(x). Generally, such estimators come in two forms: analytic (direct) and iterative solutions. The solution structure dictates the neural network architecture. One then has to select which parts of the original model-based solution are to be learned from data. Consider, for instance, an iterative proximal gradient solution to (4): it consists of two alternating steps, 1) a gradient step on x to maximize the log-likelihood log p(y|x), and 2) a proximal step that imposes the prior p(x). Also within the field of US imaging and signal processing, model-based DL is seeing increasing adoption for problems spanning from beamforming to clutter suppression (Solomon et al., 2019a) and localization microscopy. Exact implementations of these model-based DL methods for US imaging are indeed highly application-specific (which is their merit), as we will discuss in a later section.

Fundamentals of US acquisition

Ultrasound imaging is based on the pulse-echo principle. First, a pressure pulse is transmitted towards a region of interest by the US transducer, which consists of multiple transducer elements. Within the medium, scattering occurs due to inhomogeneities in density, speed of sound, and non-linear behavior. The resulting back-scattered echoes are recorded using the same transducer, yielding a set of radio-frequency (RF) channel signals that can be processed. Typical ultrasound signal processing includes B-mode image reconstruction via beamforming, velocity estimation (Doppler), and additional downstream post-processing and analysis. Although the focus of this paper lies on these processing methods, which we will discuss in later chapters, we will for the sake of completeness briefly review the basic principles of ultrasound channel signal acquisition.

Transmit schemes

Consider an ultrasound transducer with channels c ∈ C. A transmit scheme consists of a series of transmit events e ∈ E. Different transmit events can be constructed by adjusting the per-channel transmit delays (focusing), the number of active channels (aperture), and, in advanced modes, also waveform parameters. We briefly list the most common transmit schemes.
Line scanning

Most commercial ultrasound devices rely on focused, line-by-line acquisition schemes, as these yield superior resolution and contrast compared to unfocused strategies. In line scanning, a subaperture of channels focuses the acoustic energy along a single (axial) path at a set depth through channel-dependent transmit delays, maximizing the reflected echo intensity in a region of interest (Ding et al., 2014). Some transmit schemes make use of multiple foci per line. To cover the full lateral field of view, many scan lines are needed, limiting the overall frame rate.

Synthetic aperture

In synthetic aperture (SA) imaging, each channel transmit-receive pair is acquired separately (Ylitalo and Ermert, 1994; Jensen et al., 2006). To that end, each element independently fires a spherical wavefront, of which the reflections can be simultaneously recorded by all receiving elements. Typically, the number of transmit events is equal to the number of transducer elements (E = C). Having access to these individual transmit-receive pairs enables retrospective transmit focusing to an arbitrary set of foci (e.g. each pixel). While SA imaging offers advantages in terms of receive processing, it is time consuming, similar to line scanning. Furthermore, single elements generate low acoustic energy, which reduces the SNR.

Plane- and diverging-wave imaging

Recently, unfocused (parallel) acquisition schemes have become more popular, since they can drastically reduce acquisition times, yielding so-called ultrafast imaging at very high frame rates. Plane wave (PW) imaging insonifies the entire region of interest at once through a planar wave field, by firing with all elements and placing the axial focus point at infinity. Diverging wave (DW) transmissions also insonify the entire region of interest in one shot, but generate a spherical (diverging) wavefront by placing a (virtual) focus point behind the transducer array. Especially for small transducer footprints (e.g. phased array probes), DW schemes are useful to cover a large image region. Both PW and DW imaging suffer from deteriorated resolution and low contrast (high clutter) due to strong interference by scattering from all directions. Often, multiple transmits at different angles are therefore compounded to boost image quality. However, this reduces the frame rate. Unfocused transmissions rely heavily on powerful receive processing to yield an image of sufficient quality, raising computational requirements.

Doppler

Beyond positional information, ultrasound also permits the measurement of velocities, useful in the context of e.g. blood flow imaging or tissue motion estimation. This imaging mode, called Doppler imaging (Chan and Perlas, 2011; Routh, 1996; Hamelmann et al., 2019), often requires dedicated transmit schemes with multiple high-rate sequential acquisitions. Continuous wave Doppler allows for simultaneous transmission and reception of acoustic waves, using separate sub-apertures. While this yields a high temporal sampling rate, and prevents aliasing, it does result in some spatial ambiguity: the entire region of overlap between the transmit and receive beams contributes to the velocity estimate. Alternatively, pulsed-wave Doppler relies on a series of snapshots of the slow-time signal, with the temporal sampling rate being equal to the frame rate. From these measurements, a more confined region of interest can be selected for improved position information, at the cost of possible aliasing.
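For pulsed-wave Doppler, the mean axial velocity is commonly estimated from the phase of the lag-one autocorrelation of the slow-time IQ signal (the Kasai autocorrelation estimator); a minimal sketch is given below, with the transducer and flow parameters chosen purely for illustration.

```python
import numpy as np

C_SOUND = 1540.0   # speed of sound (m/s)
F0 = 5e6           # transmit center frequency (Hz), assumed
PRF = 4e3          # pulse repetition frequency (Hz), assumed

def kasai_velocity(iq_slow_time):
    """Mean axial velocity from the lag-1 autocorrelation of IQ slow-time samples."""
    r1 = np.sum(iq_slow_time[1:] * np.conj(iq_slow_time[:-1]))
    return C_SOUND * PRF * np.angle(r1) / (4.0 * np.pi * F0)

# Simulate a scatterer moving at 0.2 m/s (illustrative, below the aliasing limit):
v = 0.2
t = np.arange(64) / PRF
iq = np.exp(1j * 2 * np.pi * (2 * v * F0 / C_SOUND) * t)
print(kasai_velocity(iq))   # approximately 0.2
```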
Waveform and frequency

The resolution that can be obtained using ultrasound is for a large part dependent on the frequency of the transmitted pulse. High transmit pulse frequencies and short pulse durations yield high spatial resolution, but are strongly affected by attenuation. This becomes especially problematic in deep tissue regions. As a general rule, the smallest measurable structures scale to approximately half the wavelength of the transmit frequency, i.e. the diffraction limit. In practice the transmit pulse spans multiple wavelengths, which additionally limits axial resolution to half the transmit pulse length. Design choices such as transducer array aperture, element sensitivity, bandwidth of the front-end circuitry, and reconstruction algorithms also play a dominant role in this.

Array designs

Depending on the application, different transducer types may be preferred, either due to physical constraints or because of desirable imaging properties. Commonly used transducer geometries include linear, convex, and phased arrays. Effectively, the transducer array, consisting of many individual elements, spatially samples the array response. Typically, these array elements have a center-to-center spacing (pitch) of λ/2 or less, in order to avoid spatial aliasing. In general, a higher number of elements yields a better resolution image, but this consequently increases size, complexity, and bandwidth requirements. Especially for 2D arrays (used in 3D imaging), the high number of transducer elements can be problematic in implementation due to the vast number of physical hardware connections. Other than translating into increased cost and complexity, it also raises power consumption. In those cases, often some form of micro-beamforming is applied in the front-end, combining individual channel signals early in the signal chain. Similar reductions in data rates can be achieved through sub-sampling of the receive channels. Trivial approaches include uniform or random subsampling, at the cost of reduced resolution and more pronounced aliasing artifacts (grating lobes). Several works have shown that these effects can be mitigated either by principled array designs (Cohen and Eldar, 2020; Song et al., 2020), or by learning sub-sampling patterns from data in a task-adaptive fashion (Huijben et al., 2020).

Sub-Nyquist signal sampling

Digital signal processing of US signals requires sampling of the signals received by the transducer, after which the digital signal is transferred to the processing unit. To prevent frequency-aliasing artifacts, sampling at or above the Nyquist rate is necessary. In practice, sampling rates of 4-10 times higher are common, as this allows for a finer resolution during digital focusing. As a consequence, this leads to high-bandwidth data streams, which become especially problematic for large transducer arrays (e.g. 3D probes). Compressed sensing (CS) provides a framework that allows for reduced data rates, by sampling below the Nyquist limit, alleviating the burden on data transfer (Eldar, 2015). CS acquisition methods provide strong signal recovery guarantees when complemented with advanced processing methods for reconstruction of the signal of interest. These reconstruction methods are typically based on MAP estimation, combining likelihood models on the measured data (i.e. a measurement matrix) with priors on signal structure (e.g. sparsity in some basis). Many of the signal processing algorithms that we will list throughout the paper find application within a CS context, especially those methods that introduce a signal prior for reconstruction, either through models or by learning from data. The latter is especially useful for elaborate tasks where little is known about the distribution of system parameters, offering signal reconstruction beyond what is possible using conventional CS methods. For further reading into the fundamentals of ultrasound, the reader may refer to works such as Brahme (2014).
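A few of the sampling-related rules of thumb above translate directly into one-line checks; the helper below computes the wavelength, the λ/2 pitch bound, and an oversampled RF rate for an assumed 5 MHz linear probe (all values illustrative).

```python
C_SOUND = 1540.0  # m/s, assumed tissue speed of sound

def design_rules(f0_hz, n_elements, pitch_m, oversample=4):
    lam = C_SOUND / f0_hz                  # wavelength in tissue
    return {
        "wavelength_mm": lam * 1e3,
        "max_pitch_mm": lam / 2 * 1e3,     # lambda/2 rule against grating lobes
        "pitch_ok": pitch_m <= lam / 2,
        "aperture_mm": n_elements * pitch_m * 1e3,
        # 4-10x the center frequency is a common practical RF sampling rate:
        "rf_sample_rate_MHz": oversample * f0_hz / 1e6,
    }

print(design_rules(f0_hz=5e6, n_elements=128, pitch_m=0.15e-3))
```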
Model-based US signal processing

Model-based ultrasound signal processing techniques are based on first principles, such as the underlying physics of the imaging setup or knowledge of the statistical structure of the signals. We will now describe some of the most commonly used model-based ultrasound signal processing techniques, building upon the probabilistic perspective sketched in earlier sections. For each algorithm we will explicitly list 1) inputs and outputs (and dimensions), 2) the assumed signal model and statistics, 3) signal priors, and 4) the resulting ML/MAP objective and solution. Beamforming, the act of reconstructing an image from the received raw RF channel signals, is central to ultrasound imaging and typically the first step in the signal processing pipeline. We will thus start our description with beamforming methods.

Beamforming

Given an ultrasound acquisition of C transducer channels, N_t axial samples, and E transmission events, we can denote Y ∈ R^{E×C×N_t} as the recorded RF data cube, representing back-scattered echoes from each transmission event. With beamforming, we aim to transform the raw aperture-domain signals Y to the spatial domain through a processing function f(·), such that

X̂ = f(Y), (12)

where X̂ represents the data beamformed to a set of focus points S_r. As an example, in pixel-based beamforming, these focus points could be a pixel grid such that S_r ∈ R^{r_x×r_z}, where r_x and r_z represent the lateral and axial components of the vector indicating the pixel coordinates, respectively. Note that, while this example is given in Cartesian coordinates, beamforming to other coordinate systems (e.g. polar coordinates) is also common.

Delay-and-sum beamforming

Delay-and-sum (DAS) beamforming has been the backbone of ultrasound image reconstruction for decades. This is mainly driven by its low computational complexity, which allows for real-time processing and efficient hardware implementations. DAS beamforming aims at aligning the received signals for a set of focus points (in pixel-based beamforming: pixels) by applying time-delays. We can define the total TOF from transmission to the receiving element as

τ_r = (‖r − r_e‖ + ‖r − r_c‖) / v, (13)

where τ_r is the required channel delay to focus to an imaging point r, the vectors r_e and r_c correspond to the origin of the transmit event e and the position of element c, respectively, and v is the speed of sound in the medium. Note that the speed of sound is generally assumed to be constant throughout the medium. As a consequence, speed-of-sound variations can cause misalignment of the channel signals and result in aberration errors. After TOF correction, we obtain a channel vector y_r per pixel r, for which we can define a linear forward model to recover the pixel reflectivity x_r:

y_r = 1x_r + n_r, (14)

where y_r ∈ R^C is a vector containing the received aperture signals, x_r ∈ R the tissue reflectivity at a single focus point r, 1 ∈ R^C a vector of ones, and n_r ∈ R^C an additive Gaussian noise vector ∼ N(0, σ_n²I). In this simplified model, all interference (e.g. clutter, off-axis scattering, thermal noise) is contained in n_r.
Note that (without loss of generality) we assume a real-valued array response in our analysis, which can be straightforwardly extended to complex values (e.g. after in-phase and quadrature demodulation). Under the Gaussian noise model, (14) yields the following likelihood model for the channel vector:

p(y_r|x_r) ∝ exp(−‖y_r − 1x_r‖₂² / (2σ_n²)), (15)

where σ_n² denotes the noise power. The delay-and-sum beamformer is the per-pixel ML estimator of the tissue reflectivity, x̂_r, given by

x̂_r = arg max_{x_r} log p(y_r|x_r) = arg min_{x_r} ‖y_r − 1x_r‖₂². (17)

Solving (17) yields:

x̂_r = (1/C) 1^H y_r, (18)

where C is the number of array elements. In practice, apodization/tapering weights are included to suppress sidelobes:

x̂_r = w^H y_r. (19)

This form can be recognized as the standard definition of DAS beamforming, in which the channel signals are weighted using an apodization function, w, and subsequently summed to yield a beamformed signal. Beamforming can equivalently be carried out in the Fourier domain, where the Fourier coefficients of a beamformed line are computed as weighted combinations of the Fourier coefficients of the channel signals. Here, Q_{k,c,r} are the Fourier coefficients of a distortion function derived from the beamforming delays at r, as in (13). When not all Fourier coefficients are sampled (i.e. in sub-Nyquist acquisition), the desired time-domain signal can be recovered using CS methods such as NESTA (Becker et al., 2011), or via deep learning approaches.

Advanced adaptive beamforming

The shortcomings of standard DAS beamforming have spurred the development of a wide range of adaptive beamforming algorithms. These methods aim to overcome some of the limitations that DAS faces by adaptively tuning the processing based on the input signal statistics.

Minimum Variance

DAS beamforming is the ML solution of (14) under white Gaussian noise. To improve realism for more structured noise sources, such as off-axis interference, we can introduce a colored (correlated) Gaussian noise profile n_r ∼ N(0, Γ_r), with Γ_r being the array covariance matrix for beamforming point r. Maximum (log) likelihood estimation for x_r then yields:

x̂_r = arg min_{x_r} (y_r − 1x_r)^H Γ_r^{-1} (y_r − 1x_r). (22)

Setting the gradient of the argument in (22) with respect to x̂_r equal to zero gives:

x̂_r = (1^H Γ_r^{-1} y_r) / (1^H Γ_r^{-1} 1). (25)

It can be shown that solution (25) can also be obtained by minimizing the total output power (or variance) while maintaining unity gain in a desired direction (the boresight):

w_MV = arg min_w w^H Γ_r w, subject to w^H 1 = 1. (26)

Solving (26) yields the closed-form solution

w_MV = (Γ_r^{-1} 1) / (1^H Γ_r^{-1} 1), (27)

such that x̂_r = w_MV^H y_r, which is known as Minimum Variance (MV) or Capon beamforming. In practice, the noise covariance is unknown, and is instead empirically estimated from data (Γ̂_r = E[y_r y_r^H]). For stability of the covariance matrix inversion, this estimation often relies on averaging multiple sub-apertures and focus points, or on adding a constant factor to the diagonal of the covariance matrix (diagonal loading). Note that for Γ_r = σ_n²I (white Gaussian noise), we get the DAS solution as in (18). Minimum Variance beamforming was shown to improve both resolution and contrast in ultrasound images, and has similarly found application in plane wave compounding (Austeng et al., 2011). However, it is computationally complex due to the inversion of the covariance matrix (Raz, 2002), leading to significantly longer reconstruction times compared to DAS. To boost image quality further, eigenspace-based MV beamforming has been proposed (Deylami et al., 2016), at the expense of further increasing computational complexity. As a result, real-time implementations remain challenging, to the extent that MV beamforming is almost exclusively used as a research tool.
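The closed forms (18), (19), and (25)-(27) map directly onto a few lines of linear algebra. Below, a toy per-pixel example with simulated, TOF-aligned channel snapshots; the apodization choice, covariance estimation with diagonal loading, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 64                                   # number of channels
x_true = 1.0                             # tissue reflectivity at the focus point
ones = np.ones(C)

# TOF-aligned channel snapshots across a few temporal samples, with a
# common-mode component mimicking correlated interference:
noise = rng.standard_normal((C, 32)) * 0.3
noise += 0.5 * rng.standard_normal((1, 32))
Y = x_true * ones[:, None] + noise

# DAS, Eqs. (18)/(19): tapered weights normalized to unity gain (w^H 1 = 1)
w_das = np.hanning(C)
w_das /= w_das.sum()
x_das = w_das @ Y[:, 0]

# MV/Capon, Eq. (27): weights from the (diagonally loaded) sample covariance
Gamma = Y @ Y.conj().T / Y.shape[1]
Gamma += 1e-2 * np.trace(Gamma) / C * np.eye(C)
Gi1 = np.linalg.solve(Gamma, ones)
w_mv = Gi1 / (ones @ Gi1)
x_mv = w_mv @ Y[:, 0]

print(x_das, x_mv)
```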
Wiener beamforming

In the previously covered methods, we have considered the ML estimate of x. Following (4), we can extend this by including a prior probability distribution p(x_r), such that

x̂_r = arg max_{x_r} [log p(y_r|x_r) + log p(x_r)]. (28)

For a Gaussian likelihood model, the solution to this MAP estimate is equivalent to minimizing the mean-squared error, such that

x̂_r = arg min_{x̂_r} E[|x_r − x̂_r|²], (29)

also known as Wiener beamforming (Van Trees, 2004). Solving this yields

x̂_r = σ_x² / (σ_x² + w_MV^H Γ_n w_MV) · w_MV^H y_r, (30)

with σ_x² the signal power, Γ_n the noise covariance at beamforming point r, and w_MV the MV beamforming weights given by (27). Wiener beamforming is therefore equivalent to MV beamforming followed by a scaling factor based on the ratio between the signal power and the total power of the output signal, which can be referred to as post-filtering. Based on this result, Nilsen and Holm (2010) observe that for any w that satisfies w^H 1 = 1 (unity gain), we can find a Wiener post-filter that minimizes the MSE of the estimated signal. As such, we can write

x̂_r = H_W · w^H y_r, with H_W = σ_x² / (σ_x² + w^H Γ_n w). (31)

Assuming white Gaussian noise (Γ_n = σ_n²I) and x ∼ N(0, σ_x²), the Wiener beamformer is equivalent to Wiener post-filtering for DAS, given by:

x̂_r = σ_x² / (σ_x² + σ_n²/C) · (1/C) 1^H y_r. (33)

Coherence Factor weighting

The Coherence Factor (CF) (Mallart and Fink, 1994) aims to quantify the coherence of the back-scattered echoes in order to improve image quality through scaling with a so-called coherence factor, defined as

CF = |Σ_{c=1}^{C} y_r[c]|² / (C Σ_{c=1}^{C} |y_r[c]|²), (34)

where C denotes the number of channels. Effectively, this operates as a post-filter, after beamforming, based on the ratio of coherent and incoherent energy across the array. As such, it can suppress focusing errors that may occur due to speed-of-sound inhomogeneity:

x̂_CF = CF · x̂_DAS. (35)

The CF has been reported to significantly improve contrast, especially in regions affected by phase distortions. However, it also suffers from reduced brightness and speckle degradation. An explanation for this can be found when comparing (35) with the Wiener post-filter for DAS in (33): we can see that CF weighting is in fact a Wiener post-filter where the noise is scaled by a factor C, leading to a stronger suppression of interference, but consequently also reducing brightness. Several derivations of the CF have been proposed to overcome some of these limitations, or to further improve image quality, such as the Generalized CF (Pai-Chi Li and Meng-Lin Li, 2003) and the Phase Coherence Factor (Camacho et al., 2009).

Iterative MAP beamforming

Chernyakova et al. (2019) propose an iterative maximum a posteriori (iMAP) estimator, which provides a statistical interpretation of post-filtering. The iMAP estimator works under the assumption of knowledge of the received signal model, and treats the signal of interest and the interference as uncorrelated Gaussian random variables with variances σ_x² and σ_n², respectively. Given the likelihood model in (15), and x ∼ N(0, σ_x²), the MAP estimator of x is given by

x̂_MAP = σ_x² / (σ_x² + σ_n²/C) · (1/C) 1^H y. (36)

However, the parameters σ_x² and σ_n² are unknown in practice. Instead, these can be estimated from the data at hand, leading to an iterative solution. First, an initial estimate of the signal and noise variances is calculated, initializing with the DAS estimate x̂^(0) = (1/C) 1^H y:

σ̂_x²(t) = |x̂^(t)|², σ̂_n²(t) = (1/C) ‖y − 1x̂^(t)‖₂². (37)

Following (4) and (37), a MAP estimate of the beamformed signal is given by

x̂^(t+1) = σ̂_x²(t) / (σ̂_x²(t) + σ̂_n²(t)/C) · (1/C) 1^H y, (38)

where t is an index denoting the number of iterations. Equations (37) and (38) are then alternated for a fixed number of iterations or until convergence.

A related model-based approach is aperture domain model image reconstruction (ADMIRE), which fits a model of signal and clutter contributions to the aperture-domain data through a regularized regression of the form

x̂ = arg min_x ‖y − Ax‖₂² + λ(α‖x‖₁ + (1 − α)‖x‖₂²), (39)

where α and λ are regularization parameters. This particular form of regularization is also called elastic-net regularization. ADMIRE shows a significant reduction in clutter due to multi-path scattering and reverberation, resulting in a 10-20 dB improvement in CNR.
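Both the coherence factor (34)-(35) and the iMAP recursion (36)-(38) amount to a few operations on a TOF-aligned channel vector; the sketch below uses a random toy snapshot, and the iteration count is a free choice.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 64
y = 1.0 + 0.4 * rng.standard_normal(C)      # toy TOF-aligned channel vector

# Coherence factor weighting, Eqs. (34)-(35):
x_das = y.mean()
cf = np.abs(y.sum())**2 / (C * np.sum(np.abs(y)**2))
x_cf = cf * x_das

# iMAP, Eqs. (36)-(38): alternate variance estimation and MAP scaling
x_hat = x_das                                # initialize with the DAS estimate
for _ in range(2):                           # iteration count is a free choice
    sig2_x = np.abs(x_hat)**2                # signal power estimate, Eq. (37)
    sig2_n = np.mean(np.abs(y - x_hat)**2)   # noise power estimate, Eq. (37)
    x_hat = sig2_x / (sig2_x + sig2_n / C) * x_das   # Eq. (38)

print(x_das, x_cf, x_hat)
```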
Sparse coding

Chernyakova et al. propose to formulate the beamforming process as a line-by-line recovery of back-scatter intensities from (potentially undersampled) Fourier coefficients (Chernyakova and Eldar, 2014). Denoting the axial fast-time intensities by x ∈ R^N, and the noisy measured DFT coefficients of a scan line by ỹ ∈ R^M, with M ≤ N, we can formulate the following linear measurement model:

ỹ = Ax + n, (40)

where A = H F_u, with F_u ∈ C^{M×N} a partial DFT matrix, H a diagonal matrix derived from the spectrum of the transmitted pulse, and n a noise vector. Recovering x (assuming it is sparse) can again be posed as a MAP estimation problem:

x̂ = arg min_x ‖ỹ − Ax‖₂² + λ‖x‖₁, (41)

where λ is a regularization parameter. Problem (41) can be solved using the Iterative Shrinkage and Thresholding Algorithm (ISTA), a proximal gradient method with iterations

x̂^(k+1) = τ_λ(x̂^(k) + µA^H(ỹ − Ax̂^(k))), (42)

where τ_λ(x_i) = sgn(x_i)(|x_i| − λ)_+ is the proximal operator of the ℓ1 norm, µ is the gradient step size, and (·)^H denotes the Hermitian, or conjugate, transpose. It is interesting to note that the first step in the ISTA algorithm (starting from x̂^(0) = 0), given by x̂^(1) = τ_λ(µA^H ỹ) with A^H ỹ = F_u^H H^H ỹ, thus maps ỹ back to the axial/fast-time domain through the zero-filled inverse DFT.

Wavefield inversion

The previously described beamforming methods all build upon measurement models that treat pixels or scan lines (or, for ADMIRE, short-time windows) independently. As a result, the complex interaction of contributions and interference from the full lateral field of view is not explicitly modeled, and often approximated through some noise model. To that end, several works explore reconstruction methods which jointly model the full field of view, and its intricate behavior, at the cost of a higher computational footprint. Such methods typically rely on some form of "wavefield inversion", i.e. inverting the physical wave propagation model. One option is to pose beamforming as a MAP optimization problem through a likelihood model that relates the per-pixel back-scatter intensities to the channel signals (Szasz et al., 2016b,a; Ozkan et al., 2017), and some prior/regularization term on the statistics of spatial distributions of back-scatter intensities in anatomical images. Based on the time-delays given by (13) (and the Green's function of the wave equation), one can again formulate our typical linear forward model:

y = Ax + n, (43)

where x ∈ R^{r_x r_z} is a vector of beamformed data, n ∈ R^{CN_t} an additive white Gaussian noise vector, and y ∈ R^{CN_t} the received channel data. The space-time mapping is encoded in the sparse matrix A ∈ R^{CN_t × r_x r_z}. Solving this system of equations relies heavily on priors to yield a unique and anatomically feasible solution, and yields the following MAP optimization problem:

x̂ = arg min_x ‖y − Ax‖₂² − λ log p_θ(x), (44)

where log p_θ(x) acts as a regularizer with parameters θ (e.g. an ℓ1 norm to promote a sparse solution (Combettes and Wajs, 2005)). Ozkan et al. (2017) investigate several intuition- and physics-based regularizers, and their effect on the beamformed image. The results show benefits in contrast and resolution for all proposed regularization methods, however each yields different visual characteristics. This shows that choosing correct regularization terms and parameters that yield a robust beamformer can be challenging.
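The ISTA recursion (42) is compact in code. The sketch below recovers a sparse fast-time signal from randomly subsampled DFT coefficients, using A = H F_u with a flat pulse spectrum (H = I) as a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 256, 96, 5                       # signal length, kept DFT coeffs, sparsity
F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # unitary DFT matrix
rows = rng.choice(N, M, replace=False)
A = F[rows]                                # A = H F_u with H = I (flat pulse spectrum)

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def soft(z, t):
    """Complex soft-thresholding: proximal operator of the l1 norm."""
    return np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - t, 0.0)

mu, lam = 1.0, 0.01                        # step size (||A|| <= 1 here) and threshold
x = np.zeros(N, dtype=complex)
for _ in range(200):                       # ISTA iterations, Eq. (42)
    x = soft(x + mu * A.conj().T @ (y - A @ x), mu * lam)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```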
Post processing

After mapping the channel data to the image domain via beamforming, ultrasound systems apply several post-processing steps. Classically, this includes further image processing to boost B-mode image quality (e.g. contrast, resolution, de-speckling), but also spatio-temporal processing to suppress tissue clutter and to estimate motion (e.g. blood flow). Beyond this, we see increasing attention for post-processing methods dedicated to advanced applications such as super-resolution ultrasound localization microscopy (ULM). We will now go over some of the model-based methods for post-processing, covering B-mode image quality improvement, tissue clutter filtering, and ULM.

B-mode image quality improvement

Throughout the years, many B-mode image-quality-boosting algorithms have been proposed, with aims that can be broadly categorized into 1) resolution enhancement, 2) contrast enhancement, and 3) speckle suppression. Although our focus lies on model-based methods (to recall: methods that are derived from models and first principles), it is worth noting that B-mode processing often also relies on heuristics to accommodate e.g. user preferences. These include fine-tuned brightness curves (S-curves) to improve perceived contrast.

A commonly used method to boost image quality is to coherently compound multiple transmissions with diverse transmit parameters. Often, a simple measurement model similar to that in DAS is assumed, where multiple transmissions are (after potential TOF alignment) assumed to measure the same tissue intensity for a given pixel, but with different Gaussian noise realizations. As for the Gaussian likelihood model for DAS, this then simply yields averaging of the individual measurements (e.g. different plane wave angles, or frequencies). More advanced model-based compounding methods use MV weighting of the transmits, thus assuming a likelihood model where the multiple measurements have correlated noise:

x̂_r = arg max_{x_r} log p(y_r|x_r, Γ_r) = arg min_{x_r} (y_r − 1x_r)^H Γ_r^{-1} (y_r − 1x_r).

Note that here, unlike in MV beamforming, y_r is a vector containing the beamformed pixel intensities from multiple transmits/measurements (after TOF alignment), x̂_r is the compounded pixel, and Γ_r is the auto-correlation matrix across the series of transmits to be estimated. Compounding can boost resolution and contrast, and suppress speckle.

A popular class of de-speckling methods is non-local means (NLM) filtering, which denoises by averaging similar image patches. A Bayesian interpretation of NLM has enabled ultrasound-specific implementations with more realistic (multiplicative) noise models (Coupé et al., 2008). Other MAP approaches pose denoising as a dictionary matching problem (Jabarulla and Lee, 2018). These methods do not explicitly estimate patch density functions from the image, but instead learn a dictionary of patches.

To achieve a boost in image resolution, the problem can be recast as MAP estimation under a likelihood model that includes a deterministic blurring/point-spread-function matrix A_blur:

y = A_blur x + n,

where x is the (vectorized) high-resolution image to be recovered, and y a (vectorized) blurred and noisy (Gaussian white) observation. This deconvolution problem is ill-posed and requires adequate regularization via priors. As we noted before, the log-prior term can take many forms, including ℓ1- or total-variation-based regularizers.

Clutter filtering for flow

Slow-moving tissue introduces a clutter signal that causes artefacts and obscures the feature of interest being imaged (be it blood velocity or e.g. contrast agents), and considerable effort has gone into suppressing this tissue clutter signal. Although Infinite Impulse Response (IIR) and Finite Impulse Response (FIR) filters have been the most commonly used filters for tasks such as this, it remains very difficult to separate the signals originating from slow-moving blood and fast-moving tissue with such temporal filters alone. Therefore, spatio-temporal clutter filtering is receiving increasing attention. We will here go over some of these more advanced methods (including singular value thresholding and robust principal component analysis), again taking a probabilistic MAP perspective.
We define the spatio-temporal measured signal as a Casorati matrix Y ∈ R^{NM×T}, where N and M are the spatial dimensions and T is the time dimension, and model it as:

Y = X_tissue + X_blood,

where X_tissue ∈ R^{NM×T} is the tissue component, and X_blood ∈ R^{NM×T} is the blood/flow component. We then impose a prior on X_tissue and assume it to be low rank. If we additionally assume X_blood to have i.i.d. Gaussian entries, the MAP estimation problem for the tissue clutter signal becomes:

X̂_tissue = arg min_X ½‖Y − X‖_F² + λ‖X‖_*,    (48)

where ‖·‖_F and ‖·‖_* denote the Frobenius norm and the nuclear norm, respectively. The solution to (48) is:

X̂_tissue = T_{SVT,λ}(Y),    (49)

where T_{SVT,λ} is the singular value thresholding function, which is the proximal operator of the nuclear norm (Cai et al., 2010). To improve upon the model in (48), one can include a more specific prior on the flow components, and separate them from the noise:

Y = X_tissue + X_blood + N,    (50)

where we place a mixed ℓ1/ℓ2 prior on the blood flow component X_blood, and assume i.i.d. Gaussian entries in the noise matrix N, such that:

{X̂_tissue, X̂_blood} = arg min ½‖Y − X_tissue − X_blood‖_F² + λ₁‖X_tissue‖_* + λ₂‖X_blood‖_{1,2},

where ‖·‖_{1,2} indicates the mixed ℓ1,2 norm, i.e., the ℓ1 norm of the row-wise ℓ2 norms. This low-rank plus sparse optimization problem is also termed Robust Principal Component Analysis (RPCA), and can be solved through an iterative proximal gradient method:

X_tissue^{(k+1)} = T_{SVT,λ₁}( X_tissue^{(k)} − µ₁ (X_tissue^{(k)} + X_blood^{(k)} − Y) ),    (51)
X_blood^{(k+1)} = T_{1,2,λ₂}( X_blood^{(k)} − µ₂ (X_tissue^{(k)} + X_blood^{(k)} − Y) ),    (52)

where T_{SVT,λ₁} is the solution operator of (48) (i.e., the proximal operator of the nuclear norm), T_{1,2,λ₂} is the mixed ℓ1-ℓ2 thresholding operation, and µ₁ and µ₂ are the gradient steps for the two terms. Shen et al. (2019) further augment the RPCA formulation to boost resolution of the blood flow estimates. To that end they add a PSF-based convolution kernel to the blood component, A_r ⊛ X_blood, casting it as a joint deblurring and signal separation problem.

Ultrasound Localization Microscopy

We will now turn to an advanced and increasingly popular ultrasound signal processing application: ULM. Conventional ultrasound resolution is fundamentally limited by wave physics to half the wavelength of the transmitted wave, i.e., the diffraction limit. This limit is on the order of a millimeter or less for most ultrasound probes, and is inversely proportional to the transmission frequency. However, high transmit frequencies come at the cost of lower penetration depth. To overcome this diffraction limit, ULM adapts concepts from Nobel-prize-winning super-resolution fluorescence microscopy to ultrasound. Instead of localizing blinking fluorescent molecules, ULM detects and localizes ultrasound contrast agents, microbubbles, flowing through the vascular bed. These microbubbles have a size similar to red blood cells, and act as point scatterers. By accumulating precisely localized microbubbles across many frames, a super-resolution image of the vascular bed can be obtained. In typical implementations, the localization of the microbubbles is performed by centroid detection (Siepmann et al., 2011; Couture et al., 2011; Christensen-Jeffries et al., 2020). Not surprisingly, we can also pose microbubble localization as a MAP estimation problem (Van Sloun et al., 2017). We define a sparse high-resolution image, vectorized into x, in which only the few pixels that contain a microbubble have non-zero entries. Our vectorized measurements can then be modeled as:

y = Ax + n,

where A is a PSF matrix and n is a white Gaussian noise vector. This yields the following MAP problem:

x̂ = arg min_x ½‖y − Ax‖₂² + λ‖x‖₁,

which promotes the expected sparsity of x; the recovered localizations can subsequently be linked across frames according to a motion model.
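A minimal numpy sketch of the RPCA iterations (51)-(52) follows; the thresholding levels, step size, and iteration count are illustrative choices.

```python
import numpy as np

def svt(X, lam):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def mixed_l12_threshold(X, lam):
    # Row-wise soft thresholding: proximal operator of the mixed l1,2 norm.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)

def rpca(Y, lam1=1.0, lam2=0.1, mu=0.5, n_iter=100):
    """Separate a Casorati matrix Y into low-rank tissue and sparse flow."""
    Xt = np.zeros_like(Y)   # tissue (low rank)
    Xb = np.zeros_like(Y)   # blood/flow (row sparse)
    for _ in range(n_iter):
        R = Xt + Xb - Y                              # shared fidelity gradient
        Xt = svt(Xt - mu * R, mu * lam1)             # eq. (51)
        Xb = mixed_l12_threshold(Xb - mu * R, mu * lam2)  # eq. (52)
    return Xt, Xb
```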
Deep Learning for US Signal Processing

Deep learning based ultrasound signal processing offers a highly flexible framework for learning a desired input-output mapping X̂ = f_θ(Y) from training data, overcoming the need for explicit modeling and derivation of solutions. This can be especially advantageous for complex problems in which models fall short (e.g., incomplete models with naive assumptions) or in which their solutions are demanding or even intractable. We will now go over some emerging applications of deep learning in the ultrasound signal processing pipeline. As in the previous section, we will first cover advanced methods for beamforming and then turn to downstream post-processing such as B-mode image quality improvement, clutter suppression, and ULM.

Beamforming

We discern two categories of approaches: neural networks that replace the entire mapping from channel data to images, and those that only replace the beamsumming operation, i.e., that operate after TOF correction (Hyun et al., 2021; Bell et al., 2020, 2019).

Direct channel to image transformation

The authors in Nair et al. (2018, 2020) learn a direct mapping from raw channel data to an output image, bypassing explicit TOF correction altogether. Learning such a full mapping is demanding, however, and most beamforming approaches thus benefit from traditional alignment of the channel data before processing. In addition, a mechanism has been proposed for jointly learning optimal channel selection/sparse array design via a technique dubbed deep probabilistic subsampling.

Beam-summing after TOF correction

While most beamforming methods aim at boosting resolution and contrast, Hyun et al. (2019) argue that beamformers should accurately estimate the true tissue backscatter map, and thus also target speckle reduction. The authors train their beamformer on ultrasound simulations of a large variety of artificial tissue backscatter maps derived from natural images.

Post processing

The application of deep learning methods to general image processing/restoration problems has seen a surge of interest in recent years, showing remarkable performance across a range of applications. Naturally, these pure image processing methods are being explored for ultrasound post processing as well. In this section we will treat the same topics as in the previous chapter, but focus on recent deep learning methods. Vedula et al. (2017), for instance, train a network for speckle reduction, and Ando et al. (2020) attempt to extend this to 3D imaging using a 3D U-Net model. It is an interesting point to note that Vedula et al. (2017) and Ando et al. (2020) use simulated data, while the other works on speckle reduction use in-vivo data gathered from volunteers. There is uniformity, however, in how these works create their target images: through model-based speckle reduction algorithms.

B-mode image quality improvement

Most deep learning methods for image quality improvement rely on supervised learning, requiring ground truth targets which are often difficult to obtain. As an alternative, Huh et al. (2021) present a self-supervised method based on the cycle-GAN architecture, originally developed for un-paired (cycle-consistent) style transfer. This approach aims at transferring the features of a high-quality target distribution of images to a given low-quality image, which the authors leverage to improve elevational image quality in 3D ultrasound.

Clutter filtering for flow and ULM

Deep learning methods are likewise being explored for tissue clutter filtering and super-resolution, for instance to achieve much faster processing rates (Brown et al., 2021). Notably, Youn et al. (2020) perform localization directly from channel data.
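As a toy illustration of the learned beam-summing idea discussed above, the following PyTorch sketch maps TOF-aligned channel vectors to pixel values; the architecture, the sizes, and the stand-in DAS training target are assumptions for illustration only, not a reimplementation of any cited work.

```python
import torch
import torch.nn as nn

C = 64  # number of array elements (assumed)

# A deliberately small network: TOF-aligned channel vector -> pixel value.
net = nn.Sequential(
    nn.Linear(C, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                        # toy training loop on random data
    y = torch.randn(256, C)                 # batch of TOF-aligned channel data
    target = y.mean(dim=1, keepdim=True)    # stand-in target (plain DAS average)
    loss = nn.functional.mse_loss(net(y), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the target would come from simulations (ground-truth backscatter maps) or from a high-quality model-based beamformer, as done in the works above.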
Model-Based Deep Learning for US Signal Processing

We now highlight several works that incorporate signal processing knowledge into their deep learning approaches to improve performance, reduce network complexity, and provide reliable inference models. Generally, these models keep a large part of the conventional signal processing pipeline intact, and replace critical points in the processing with neural networks, so as to provide robust inference. We will discuss methods ranging from iterative solvers to unfolded, fixed-complexity solutions.

Beamforming

Model-based pre-focusing using DL

Pre-focusing (or TOF correction) is conventionally done deterministically, based on the array geometry and an assumed constant speed of sound. Instead, data-adaptive focusing, which calculates delays based on the recorded data, facilitates correction for speed-of-sound mismatches. The work by Nair et al. (2018, 2020) does this implicitly, by finding a direct mapping from the time domain to an output image using DL. However, this yields a black-box solution, which can be difficult to interpret. The authors of Kim et al. (2021) adhere more strictly to a conventional beamforming structure, and tackle the problem in two steps: first the estimation of a local speed-of-sound map, and second the calculation of the corresponding beamforming delays. The speed-of-sound image is predicted from multi-angled plane wave transmissions using SQI-net (Oh et al., 2021), a type of U-net. One then needs to find the propagation path and travel time of the transmitted pulse, i.e., the delay matrix, between each imaging point and transducer element. For a uniform speed of sound this is trivial, since the shortest distance between a point and an element corresponds to the fastest path. For a non-uniform speed of sound this is more challenging, and requires a path-finding algorithm, which adds to the computational complexity. The Dijkstra algorithm (Dijkstra, 1959), for instance, which is commonly used to find the fastest path, has a complexity of O(n² log n), where n is the number of nodes in the graph, or equivalently, the density of the local speed-of-sound grid. The authors therefore propose a second U-net style neural network, referred to as DelayNet, for estimating these delay times. The network comprises 3×3 locally masked convolutions, such that no filter weights are assigned in the direction opposite to the direction of wave propagation. Intuitively, this can be understood as enforcing an increasing delay time the further we get from the transducer, i.e., the wave does not move in the reverse direction. Furthermore, the reduced filter count improves computational efficiency by ∼33%. Finally, the predicted delay matrix is used to focus the RF data, after which the data is beamsummed to yield a beamformed output signal. As such, DelayNet does not need to be trained directly on a target delay matrix, but can instead be trained end-to-end on the desired beamformed targets. Note that in this method the estimation of the speed of sound is done in a purely data-driven fashion, while the pre-focusing itself inherits a model-based structure, by constraining the problem to learning time-shifts from the aforementioned speed-of-sound map.

Model-based beamsumming using DL
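The following PyTorch sketch illustrates the idea behind such directionally masked convolutions; the exact masking pattern used in DelayNet may differ, so this is an assumption-laden illustration rather than a reimplementation.

```python
import torch
import torch.nn as nn

class DirectionallyMaskedConv(nn.Conv2d):
    """3x3 convolution whose kernel is zeroed toward the transducer.

    With depth as the first spatial axis, weights looking 'backwards'
    (toward shallower depths) are masked out, loosely encoding that the
    predicted delay should not decrease away from the probe.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__(in_ch, out_ch, kernel_size=3, padding=1)
        mask = torch.ones(3, 3)
        mask[0, :] = 0.0        # drop the row pointing back toward the probe
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias, padding=1)

layer = DirectionallyMaskedConv(1, 8)
delays = layer(torch.randn(1, 1, 64, 64))   # (batch, ch, depth, lateral)
```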
Luijten et al. (2019, 2020) propose adaptive beamforming by deep learning (ABLE), a deep learning based beamsumming approach that inherits its structure from adaptive beamforming algorithms, specifically minimum variance (MV) beamforming. ABLE specifically aims to overcome the most computationally complex part of the beamforming, the calculation of the adaptive apodization weights, replacing it with a neural network f_θ. The step from the model-based MAP estimator to ABLE is then given by:

x̂(r) = f_θ(y_r)^H y_r,

where θ comprises the neural network weights, and y_r the TOF-corrected RF data. Multiplying the predicted weights with the TOF-corrected data, and summing the result, yields the beamformed output signal. Note that for training we do not need access to the apodization weights as in MV beamforming. Instead, training is done end-to-end towards an MV-generated target:

θ̂ = arg min_θ L( x̂_MV, f_θ(y_r)^H y_r ),

where x̂_MV is an MV training target, and L a loss function. Since the network operates directly on RF data, which has positive and negative signal components as well as a high dynamic range, the authors propose an Antirectifier as activation function. The Antirectifier introduces a non-linearity while preserving sign information and dynamic range, unlike the rectified linear unit or the hyperbolic tangent. Similarly, a signed mean-squared-logarithmic-error (SMSLE) loss function is introduced, which ensures that errors in the RF domain reflect the errors in the log-compressed output image. The authors show that a relatively small network, comprising four fully connected layers, can solve this task and is able to generalize well to different datasets. They report an increase in resolution and contrast, while reducing computational complexity by 2 to 3 orders of magnitude.

Wiacek et al. (2020) similarly exploit DNNs as function approximators, in their case to accelerate the calculation of short-lag spatial coherence (SLSC). Specifically, the authors apply their method to SLSC beamforming, which displays the spatial coherence of backscattered echoes across the transducer array, in contrast to conventional DAS beamforming, in which the recorded pressures are visualized. The authors report a 3.4 times faster computation compared to the standard CPU-based approach, corresponding to a framerate of 11 frames per second.

Luchies and Byram (2018) propose a wideband DNN for suppressing off-axis scattering, which operates in the frequency domain, similar to ADMIRE discussed earlier. After focusing an axially gated section of channel data, the RF signals undergo a discrete Fourier transform (DFT), mapping the signal into different frequency bins. The neural network operates specifically on these frequency bins, after which the data is transformed back to the time domain using the inverse discrete Fourier transform (IDFT) and summed to yield a beamformed signal. The same fully connected network structure was used for different center frequencies, only retraining the weights. An extension of this work is described in Khan et al. (2021a), where the neural network itself is replaced by a model-based network architecture. The estimation of the model parameters β, as formulated in (39), can be seen as a sparse coding problem y = Aβ (where β is a sparse vector), which can be solved using an iterative algorithm such as ISTA. This yields:

β̂_{k+1} = τ_λ( β̂_k + µ A^T (y − A β̂_k) ),    (57)

where τ_λ(·) is the soft-thresholding function parameterized by λ. To derive a model-based network architecture, (57) is unfolded as a feed-forward neural network with input A^T y and output β̂, the predicted model coefficients.
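A generic sketch of such an unfolded network is given below; the per-fold parameterization with two linear maps and a learned threshold follows the standard LISTA template, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Unfolded ISTA with per-fold trainable weights and thresholds."""

    def __init__(self, m, n, n_folds=8):
        super().__init__()
        self.We = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(n_folds))
        self.S = nn.ModuleList(nn.Linear(n, n, bias=False) for _ in range(n_folds))
        self.lam = nn.Parameter(torch.full((n_folds,), 0.1))

    @staticmethod
    def soft(x, lam):
        # Soft thresholding, the proximal operator of the l1 norm.
        return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

    def forward(self, y):
        beta = torch.zeros(y.shape[0], self.S[0].in_features, device=y.device)
        for k in range(len(self.S)):
            beta = self.soft(self.We[k](y) + self.S[k](beta), self.lam[k])
        return beta

model = LISTA(m=64, n=256)
beta_hat = model(torch.randn(32, 64))   # batch of measurements -> sparse codes
```

Training then proceeds end-to-end on (measurement, target-coefficient) pairs, e.g. fits produced by the model-based algorithm being accelerated.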
For each iteration, or fold k, we can then make the weight matrices and the soft-thresholding parameter trainable. This leads to the learned ISTA algorithm (LISTA):

β̂_{k+1} = τ_{λ_k}( W_k^{(1)} y + W_k^{(2)} β̂_k ),

where the W_k represent trainable fully connected layers and λ_k is a (per-fold) trainable thresholding parameter. When contrasted with its model-based iterative counterpart ISTA, LISTA is a fixed-complexity solution that tailors its processing to a given dataset using deep learning; compared to conventional deep neural networks, however, LISTA has a low number of trainable parameters. The authors show that LISTA can be trained on model fits of ADMIRE, or even on simulation data containing targets without off-axis scattering, thereby potentially outperforming the fully model-based algorithm, ADMIRE, due to its ability to learn optimal regularization parameters from data.

The ultrasound forward model is based on a set of differential equations, and mainly depends on three parameters: the acoustic velocity c_0, the density ρ_0, and the attenuation α_0. Such a model can abstractly be written as y = f(x; c_0, ρ_0, α_0) + n. However, due to the complex non-linear nature of this forward model, a simplified linear model was developed (details are given in Almansouri et al. (2018a)), in which a matrix A accounts for time-shifting and attenuation of the transmit pulse. The adjoint operator of the linearized model then gives an approximate estimator for x: x̂ = A^T y. The authors adopt a U-net architecture to compensate for the artifacts caused by the non-linearities. Effectively, the network takes a relatively simple estimate, yet one based on the physical measurement model, and maps it to a desired high-quality estimate:

x̂ = f_θ(A^T y),

where f_θ(·) denotes the neural network and x̂ the high-quality estimate.

Post-Processing and Interpretation

Deep unfolding for B-mode IQ enhancement / PW compounding / compressed acquisition

Chennakeshava et al. (2020, 2021) propose a plane wave compounding and deconvolution method based on deep unfolding. Their architecture is based on a proximal gradient descent algorithm derived from a model-based MAP optimization problem, which is subsequently unfolded and trained to compound 3 plane wave images, gathered at low frequency, into an image gathered using 75 compounded plane wave transmissions at a higher frequency. This encourages a learned proximal operator that maps low-resolution, low-contrast input images onto a manifold of images with better spatial resolution and contrast. Denote by x ∈ R^N the vectorized high-resolution beamformed RF image, and by y ∈ R^{NM} the vectorized measurements of low-resolution beamformed RF images from M = 3 transmitted plane waves. The authors assume the following acquisition model:

y = Ax + n,  with y = [y_1^T, y_2^T, ..., y_M^T]^T and A = [A_1^T, A_2^T, ..., A_M^T]^T,

where y_m is the vectorised, beamformed RF image belonging to the m-th steered plane wave transmission, n ∈ R^{NM} is a noise vector assumed to follow a Gaussian distribution with zero mean and diagonal covariance, and A ∈ R^{NM×N} is a block matrix whose blocks A_1, A_2, ..., A_M are the measurement matrices of the individual PW acquisitions. The authors assume that the measurement matrices (which capture the system PSF for each PW) follow a convolutional Toeplitz structure.
Based on this model, each fold of the unfolded proximal gradient algorithm aimed at recovering the high-resolution image x can schematically be written as:

x̂^{(k+1)} = P_θ( x̂^{(k)} + Σ_m w_m^{(k)} ⊛ y_m ),

where P_θ is a U-net-style neural network replacing the generalised proximal operator, ⊛ denotes a convolution operation, and the {w_m^{(k)}} are trainable convolution kernels that take the role of the (transposed) measurement matrices in the gradient step.

Deep unfolding for clutter filtering

Solomon et al. (2019a) propose deep unfolded convolutional robust PCA for ultrasound clutter suppression. The approach is derived from the RPCA algorithm, given by (51) and (52), but unfolds it and learns all the parameters (gradient projections and regularization weights) from data. Each network layer in the unfolded architecture takes the following form:

X̂_tissue^{(k+1)} = T_{SVT,λ₁^k}( W_1^k ⊛ Y + W_3^k ⊛ X̂_tissue^{(k)} + W_5^k ⊛ X̂_blood^{(k)} ),
X̂_blood^{(k+1)} = T_{1,2,λ₂^k}( W_2^k ⊛ Y + W_4^k ⊛ X̂_tissue^{(k)} + W_6^k ⊛ X̂_blood^{(k)} ),

where W_1, W_2, W_3, W_4, W_5, and W_6 are trainable convolutional kernels. The resulting deep network has two distinct (model-based) non-linearities/activations per layer: the mixed ℓ1,2 thresholding and the singular value thresholding. The authors train the architecture end-to-end on a combination of simulations and RPCA results on real data, and demonstrate that it outperforms a strong non-model-based deep network (a ResNet).

Deep unfolding for ultrasound localisation microscopy

In the spirit of unfolding, Van Sloun et al. (2019a) propose to unfold their sparse recovery algorithm for ULM, to enable accurate localization even at high concentrations of microbubbles. Similar to the previous examples of unfolding, each layer k of the resulting architecture takes the following form:

x̂^{(k+1)} = τ_{λ_k}( W_1^k ⊛ y + W_2^k ⊛ x̂^{(k)} ),

with trainable convolutional kernels W and where λ is a parameter that depends on the assumed noise variance. Iterative optimization of the corresponding objective was performed using gradient descent, and the recovered clean image is given by x̂ = f_θ^{-1}(ẑ).

Discussion

Over the past decade, the field of ultrasound signal processing has seen a large transformation, with the development of novel algorithms and processing methods. This development is driven in large part by the move from hardware-based to software-based reconstruction. In this review, we have showcased several works, from conventional algorithms to fully deep learning based approaches, each having their own strengths and weaknesses. Conventional model-based algorithms are derived from first principles and offer a great amount of interpretability, which is relevant in clinical settings. However, as we show in this paper, these methods rely on estimations, and often on simplifications, of the underlying physics, which result in suboptimal signal reconstructions. For example, DAS beamforming assumes a linear measurement model and a Gaussian noise profile, both of which are very crude approximations of a realistic ultrasound measurement. In contrast, adaptive methods (e.g., MV beamforming) that aim at modeling the signal statistics more accurately are often too computationally expensive to implement in real-time applications. Spurred by the need to overcome these limitations, we see a shift of research towards data-driven signal processing methods (mostly based on deep learning), a trend that started around 2014 (Zhang et al., 2021) and has come with a significant increase in the number of peer-reviewed AI publications. This can be explained by two significant factors: 1) the availability of high compute-power GPUs, and 2) the availability of easy-to-use machine learning frameworks such as TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2019), which have significantly lowered the threshold of entry into the field of AI for ultrasound researchers.
However, the performance of data-driven, and more specifically deep learning, algorithms is inherently bounded by the availability of large amounts of high-quality training data. Acquiring ground truth data is not trivial in ultrasound beamforming and signal processing applications, and thus simulations, or the outputs of advanced yet slow model-based algorithms, are often used as training targets. Moreover, the lack of a clear understanding of the behavior of learned models (i.e., the black-box problem), and of the ability to predict their performance "in the wild", makes implementation in clinical devices challenging. These general challenges associated with fully data-driven deep learning methods have in turn spurred research in the field of "model-based deep learning". Model-based deep learning combines the model-based and data-driven paradigms, and offers a robust signal processing framework. It enables learning those aspects of full models for which no adequate first-principles derivation is available, or complementing/augmenting partial model knowledge. Compared to conventional deep neural networks, these systems often require a smaller number of parameters, and less training data, in order to learn an accurate input-output mapping. As an example of this paradigm, MAP beamforming under unknown non-diagonal-covariance Gaussian channel noise can be augmented with a neural network, with the entire hybrid solution optimized end-to-end. The methods covered here aim to achieve better imaging quality, e.g., temporal or spatial resolution, ultimately aiding the diagnostic process. While a deeper analysis of the clinical relevance is a crucial and interesting topic, it is beyond the scope of this work.

Conclusion

In this review, we have outlined the development of signal processing methods in US, from classic model-based algorithms to fully data-driven DL-based methods. We have also discussed methods that lie at the intersection of these two approaches, using neural architectures inspired by model-based algorithms and derived from probabilistic inference problems. We take a probabilistic perspective, offering a generalised framework with which the multitude of approaches described in this paper can be placed under the same umbrella. This provides insight into the demarcation between components derived from first principles and components derived from data, and affords the ability to combine such components in novel ways, deriving architectures that integrate multiple classes of signal processing algorithms. The application of such novel, DL-based reconstruction methods requires the next generation of US devices to be equipped accordingly: either through fast networking and on-device encoding, or by fully arming them with sufficient and appropriate processing power (GPUs and TPUs), allowing for flexible and real-time deployment of AI algorithms.
A High Stability Time Difference Readout Technique of RTD-Fluxgate Sensors

Abstract: The performance of Residence Times Difference (RTD)-fluxgate sensors is closely related to the time difference readout technique. Noise in the induction signal degrades the quality of the output signal of the subsequent circuit and the time difference detection, which limits the stability of the sensor. Based on an analysis of the uncertainty of the RTD-fluxgate using the Bidirectional Magnetic Saturation Time Difference (BMSTD) readout scheme, the relationship between the saturation state of the magnetic core and the (DC) target magnetic field is studied in this article. It is proposed to combine the excitation and induction signals to obtain the Negative Magnetic Saturation Time (NMST), a detection quantity used to measure the target magnetic field. A mathematical model of the output response relating the NMST to the target magnetic field is established, and the output NMST and sensitivity of the RTD-fluxgate sensor under different excitation conditions are analyzed and compared to the BMSTD readout scheme. The experimental results indicate that this technique can effectively reduce the influence of noise: the fluctuation of the time difference is less than ±0.1 µs over a target magnetic field range of ±5 × 10⁴ nT. The accuracy and stability of the sensor are improved, so an RTD-fluxgate using this high stability time difference readout technique is suitable for detecting weak magnetic fields.

Introduction

The fluxgate sensor has been widely used in geomagnetic observation, space magnetic field measurement, and other fields due to its high sensitivity, small size, and low power consumption [1-5]. The RTD-fluxgate sensor developed by Bruno Andò et al., exploiting the hysteresis saturation of soft magnetic material, detects magnetic fields through the relationship between the residence times difference of the induction pulse signal and the target magnetic field [6-8]; it has the advantages of a simple detection procedure, strong anti-interference ability, and easy miniaturization and digitization, and has attracted increasing attention in the fields of national defense and geomagnetic prospecting [9-11]. However, noise in the induction signal makes the time difference reading uncertain, seriously affecting the accuracy of RTD-fluxgate sensor measurements [12,13]. The quality of the induction signal is closely related to variations in the dynamic permeability of the magnetic core [14,15]. To reduce the effects of noise, an effective approach is to use an annealed 2714A core with a sharp hysteresis loop and a low coercive field [16,17]. Because the induction pulse signal corresponds to the state of magnetic saturation, Bruno Andò et al. read the time difference between the peak points of the induction pulse signal to measure the target magnetic field [18,19]. Owing to the sensitivity of the RTD-fluxgate unit to repetitive magnetization, magnetic core noise, electronic circuit noise, and environmental interference, it is difficult to locate the peak points accurately [20-22]. Wang Y.Z. et al. read the time difference using a threshold set slightly below the peak value of the induction signal [23]. Although the error of locating the peak points can thus be avoided, a threshold has to be set.
Even in this case, magnetic and electrical noise cause transverse instability of the induction signal, resulting in uncertainty of the time difference readout. Lu S.B. et al. fitted the pulse curve using the data near the peaks of the positive and negative output pulses, and used the times of three adjacent peaks to calculate the residence times difference that measures the target magnetic field [24]. This method does not need to consider the influence of a threshold setting on the output performance of the sensor; however, the accuracy of the curve fitting is limited by the noise present in the induction signal. In order to reduce the influence of noise on the time difference reading, several approaches for filtering the induction signal have been introduced [25,26]. Although filtering can reduce the noise intensity, it is mainly aimed at amplitude detection; after filtering, a certain degree of distortion remains in the induction signal, which causes time difference reading errors. The usefulness of induction signal filtering for time difference reading is therefore limited. According to RTD-fluxgate detection theory, noise in the output signal can cause large deviations in the output time difference, and if the induction signal alone is used to read the time difference, these noise effects cannot be avoided.

In order to improve the accuracy and stability of RTD-fluxgate sensors and reduce the noise-induced uncertainty in the estimation of the residence times, the relationship between the state of the magnetic core and the target magnetic field is studied. On the basis of the working principle of the RTD-fluxgate sensor, a new method is proposed in this paper that reads the time difference between the zero crossing of the excitation signal, used as a reference time, and the negative output pulse; that is, the excitation signal and the output pulse signal are combined to read the negative magnetic saturation time ΔT_NMST as the detection quantity for the target magnetic field. A mathematical model of the sensor output response relating ΔT_NMST to the (DC) target magnetic field H_x is established under a triangular excitation signal, and the variation of ΔT_NMST and the sensitivity S_NMST with the amplitude and frequency of the excitation current is analyzed. A theoretical and experimental comparison between the NMST and BMSTD readout strategies is presented and discussed; the results show that this method can reduce the influence of output pulse noise on the readout.

The rest of this paper is organized as follows. Section 2 presents the working principle of the RTD-fluxgate sensor in the case of triangular signal excitation and analyzes the influence of output induction signal noise on the time difference readout strategy. In Section 3, the NMST readout strategy is introduced and the uncertainty of the method is calculated; using the NMST readout strategy, the mathematical model of the sensor output response is established and the variation of ΔT_NMST and S_NMST under different excitation conditions is analyzed. In Section 4, experiments are conducted to check the performance of the NMST readout strategy compared to the BMSTD readout strategy. Section 5 concludes the paper and presents the results.
Working Principle of RTD-Fluxgate Sensors under Triangular Excitation Signal

The magnetic core of the sensor is magnetized by a periodically alternating triangular magnetic field into the states of two-way over-saturation, as is shown in Figure 1a. The ideal hysteresis loop of the magnetic core is shown in Figure 1b, and the magnetization produced in the induction coil is shown in Figure 1c. If a target magnetic field H_x exists along the axis of the sensor, the residence times of the magnetic core in the positive and negative saturation states differ. Because the time interval between a positive and the following negative pulse of the induction signal, T+, is not equal to the time interval between the negative pulse and the next positive pulse, T−, a time difference exists between them. We may obtain the value of H_x by detecting the bidirectional magnetic saturation time difference ΔT = T+ − T− of the output pulse signal, which relates to these states [8,27-29], as is shown in Figure 1d.

In this article, the case of triangular excitation is considered. The triangular excitation is assumed to have amplitude and period equal to H_m and T_e, respectively; written as a standard symmetric triangular waveform with t′ = t − NT_e (N = 0, 1, 2, ...), its expression is as follows:

H_e(t) = (4H_m/T_e) t′,  0 ≤ t′ < T_e/4,
H_e(t) = 2H_m − (4H_m/T_e) t′,  T_e/4 ≤ t′ < 3T_e/4,    (1)
H_e(t) = −4H_m + (4H_m/T_e) t′,  3T_e/4 ≤ t′ < T_e.

As is shown in Figure 1a, the superposed field H_e(t) + H_x drives the magnetic core into saturation at the times t_1, t_2, and t_3, at which it crosses the coercive thresholds ±H_c. Over one period T_e of the induction signal, it is straightforward to calculate the residence times T+ and T−:

T+ = t_2 − t_1,    (2)
T− = t_3 − t_2.    (3)

Substituting the crossing times of the thresholds ±H_c by H_e(t) + H_x, the output response of the RTD-fluxgate under the triangular excitation field is expressed as shown in Equation (4):

ΔT = T+ − T− = (T_e/H_m) · H_x.    (4)

The sensitivity of the RTD-fluxgate can then be estimated:

S = ∂ΔT/∂H_x = T_e/H_m = 1/(f · H_m).    (5)
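The closed-form response (4) can be checked numerically by locating the threshold crossings of a simulated triangular drive; the sketch below does this with illustrative values for H_m, H_c, and T_e (all in arbitrary units).

```python
import numpy as np

def triangle(t, Hm, Te):
    """Symmetric triangular excitation of amplitude Hm and period Te."""
    tp = np.mod(t, Te)
    return np.where(tp < Te / 4, 4 * Hm * tp / Te,
           np.where(tp < 3 * Te / 4, 2 * Hm - 4 * Hm * tp / Te,
                    -4 * Hm + 4 * Hm * tp / Te))

def residence_time_difference(Hx, Hm=1.0, Hc=0.2, Te=1.0, n=400_000):
    """T+ - T- from the crossings of +/-Hc by He(t) + Hx (two periods)."""
    t = np.linspace(0.0, 2 * Te, n, endpoint=False)
    H = triangle(t, Hm, Te) + Hx
    up = np.where((H[:-1] < Hc) & (H[1:] >= Hc))[0]    # positive saturation
    dn = np.where((H[:-1] > -Hc) & (H[1:] <= -Hc))[0]  # negative saturation
    t1, t3 = t[up[0]], t[up[1]]
    t2 = t[dn[t[dn] > t1][0]]
    return (t2 - t1) - (t3 - t2)

# The numerical crossings agree with the closed form dT = Te * Hx / Hm:
for Hx in (0.0, 0.05, 0.10):
    print(Hx, residence_time_difference(Hx), Hx * 1.0 / 1.0)
```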
Stability Analysis of BMSTD Readout Technology

Adopting the detection method that uses the induction output signal's hysteresis shape and timing, that is, counting the low and high levels formed after the signal is amplified and shaped, the RTD-fluxgate can read ΔT. With this method, once the excitation condition and the core material are fixed, the stability of the time difference measurement depends only on the readout technology. Generally, the output signal is not smooth, and there is transverse instability caused by electrical noise, magnetic noise, etc. As is shown in Figure 2, the trigger position of the output signal varies because of the noise, eventually leading to fluctuation of the time difference.

It is assumed that in the ideal case the induction output signal has no noise interference; the first positive pulse then appears at t_1, the first negative pulse at t_2, and the second positive pulse at t_3. The presence of magnetic and electrical noise affects the estimation of these three transition times, so the corresponding actual transition times are t_1', t_2', and t_3', respectively. As is observed in Figure 3, the solid line represents the ideal output residence times, and the dotted line represents the actual output residence times under the influence of noise; t_noise is the uncertainty produced by noise in the estimation of the residence times. ΔT is expressed as follows:

ΔT = (t_2' − t_1') − (t_3' − t_2') = 2t_2' − t_1' − t_3'.    (6)

The magnetic noise is not correlated with the electrical noise in the detection system; therefore, the uncertainty of ΔT caused by noise is described in Equation (7):

γ² = γ²(t_1) + 4γ²(t_2) + γ²(t_3),  with γ²(t_i) = γ_mi² + γ_ei².    (7)

In Equation (7), γ represents the total noise of ΔT, γ_mi represents the magnetic noise of transition time t_i, and γ_ei represents the electrical noise of transition time t_i. Assuming the same uncertainty value γ_t for each t_i, it is possible to write the following expression:

γ = √6 · γ_t.    (8)

By using the BMSTD readout scheme, three transition times need to be estimated. Because each transition time is affected by noise, the uncertainty of ΔT is fairly large. In view of this situation, in order to minimize the influence of noise on the detection and reduce the uncertainty of the time difference, the readout technology needs to be improved.
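A quick Monte-Carlo check of this analysis, assuming independent, equal-variance Gaussian jitter on each detected transition (the transition-time values themselves are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
t1, t2, t3, tT, tP = 0.025, 0.575, 1.025, 0.500, 0.575   # ideal times (a.u.)
sigma = 1e-3                                             # per-transition jitter
n = 100_000

noise = sigma * rng.standard_normal((n, 4))              # t1', t2', t3', tP'
dT_bmstd = 2 * (t2 + noise[:, 1]) - (t1 + noise[:, 0]) - (t3 + noise[:, 2])
dT_nmst = (tP + noise[:, 3]) - tT                        # reference tT noiseless

print(dT_bmstd.std() / sigma)   # ~ sqrt(6) = 2.45, as in eq. (8)
print(dT_nmst.std() / sigma)    # ~ 1, anticipating the NMST analysis below
```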
The NMST Readout Technique and the Mathematical Output Response Model

If a target magnetic field H_x exists, the times at which the soft magnetic material reaches the positive and negative saturation states change: the times at which the material reaches the two steady points of the double-well potential are no longer symmetric. Since H_x directly shifts the instants at which the magnetic core reaches positive and negative saturation, the saturation time itself carries the same information as the bidirectional magnetic saturation time difference ΔT, and can therefore be considered as a detection quantity to measure H_x. In this paper, we accordingly present a different way to process the excitation and induction signals to obtain information on the target field. This readout strategy is quite similar to the BMSTD scheme except for the use of a reference time. The instant at which the excitation signal passes through zero is known, so this transition time is used as the reference time. The method of reading the time difference between the reference time and the negative output pulse is proposed; that is, the excitation signal and induction signal are combined to read the negative magnetic saturation time ΔT_NMST, from which H_x is detected. As is shown in Figure 4, when the magnetic core becomes saturated, an output pulse signal is generated in the induction coil: the transition time at which the excitation signal amplitude is zero is used as the reference time t_T, and when the applied magnetic field exceeds the coercive field −H_c, the induced voltage produces a negative pulse at t_P. On the falling edge of the triangular excitation field H_e(t) given by Equation (1), these two instants satisfy:

H_e(t_T) = 0,    (9)
H_e(t_P) + H_x = −H_c.    (10)

Deduced by Equations (9) and (10), within the period starting at NT_e:

t_T = NT_e + T_e/2,    (11)
t_P = NT_e + T_e/2 + T_e(H_c + H_x)/(4H_m).    (12)
The relationship between the triangular excitation field H e (t) expressed by Equation (1) and the target field H x is as follows: Deduced by Equations (9) and (10): The time difference ∆T NMST between t P and t T is defined to negative magnetic saturation time, which is given by: When using the NMST readout strategy, the output response of the RTD-fluxgate is as follows: The sensitivity expression for the NMST strategy obtained by using similar calculations is shown in Equation (15): When H x and the coercive field H c of the magnetic core are fixed, the relationship between ∆T NMST and H e (t) is as shown in Figure 5. When the amplitude H m and the frequency f of the excitation magnetic field are smaller, the ∆T NMST is greater. 2 The time difference ΔTNMST between tP and tT is defined to negative magnetic saturation time, which is given by: When using the NMST readout strategy, the output response of the RTD-fluxgate is as follows: The sensitivity expression for the NMST strategy obtained by using similar calculations is shown in Equation (15): When Hx and the coercive field Hc of the magnetic core are fixed, the relationship between ΔTNMST and He(t) is as shown in Figure 5. When the amplitude Hm and the frequency f of the excitation magnetic field are smaller, the ΔTNMST is greater. The time difference ΔTNMST between tP and tT is defined to negative magnetic saturation time, which is given by: When using the NMST readout strategy, the output response of the RTD-fluxgate is as follows: The sensitivity expression for the NMST strategy obtained by using similar calculations is shown in Equation (15): When Hx and the coercive field Hc of the magnetic core are fixed, the relationship between ΔTNMST and He(t) is as shown in Figure 5. When the amplitude Hm and the frequency f of the excitation magnetic field are smaller, the ΔTNMST is greater. According to Equation (15), the variation tendency of the sensitivity of the RTD-fluxgate S NMST , obtained by using the NMST readout strategy, is shown in Figure 6. According to Equation (15), the variation tendency of the sensitivity of the RTD-fluxgate SNMST, obtained by using the NMST readout strategy, is shown in Figure 6. From the figure above, when using the NMST readout technology, the SNMST of RTD-fluxgate is inversely proportional to the excitation magnetic field's amplitude Hm and frequency f. Therefore, when the excitation circuit structure does not need changing, the sensitivity of the RTD-fluxgate is improved by reducing the excitation magnetic field amplitude Hm and the frequency f, and at the same time, the power consumption is cut down. But, according to the working principle of RTDfluxgate, the magnetic core of the sensitive unit needs to achieve bidirectional oversaturation, so the excitation magnitude of Hm should at least saturate the core and the excitation frequency f is too low, which will lead to a smaller range of the measured magnetic field and a worse effect of the induction output signal. Therefore, the excitation parameters can be determined according to the actual measurement conditions. Stability Analysis of NMST Readout Technology When using the NMST readout scheme, the excitation signal is generated by the signal generator. The reference time tT does not need measuring, therefore tT can be obtained accurately. 
As is observed in Figure 7, in the ideal condition, because the output signal does not have noise interference, the time of the first negative pulse appears at tp and the corresponding actual transition time is tp'. The solid line represents the ideal output residence times and the dotted line represents the actual output residence times influenced by noise. An expression about ΔTNMST actually measured is as follows: Because of the known tT, the noise affects the ΔTNMST at the transition time tp'. In this case, the presence of noise affects only the estimation of one transition time (the reference time being assumed to be noiseless) instead of three such times in the BMSTD strategy. The uncertainty of ΔTNMST affected by the noise is as follows: Based on analysis of theory, the relationship between the NMST and BMSTD readout schemes affected by noise is shown in the Equation (18). The NMST readout scheme can reduce the influence From the figure above, when using the NMST readout technology, the S NMST of RTD-fluxgate is inversely proportional to the excitation magnetic field's amplitude H m and frequency f. Therefore, when the excitation circuit structure does not need changing, the sensitivity of the RTD-fluxgate is improved by reducing the excitation magnetic field amplitude H m and the frequency f, and at the same time, the power consumption is cut down. But, according to the working principle of RTD-fluxgate, the magnetic core of the sensitive unit needs to achieve bidirectional oversaturation, so the excitation magnitude of H m should at least saturate the core and the excitation frequency f is too low, which will lead to a smaller range of the measured magnetic field and a worse effect of the induction output signal. Therefore, the excitation parameters can be determined according to the actual measurement conditions. Stability Analysis of NMST Readout Technology When using the NMST readout scheme, the excitation signal is generated by the signal generator. The reference time t T does not need measuring, therefore t T can be obtained accurately. As is observed in Figure 7, in the ideal condition, because the output signal does not have noise interference, the time of the first negative pulse appears at t p and the corresponding actual transition time is t p '. The solid line represents the ideal output residence times and the dotted line represents the actual output residence times influenced by noise. An expression about ∆T NMST actually measured is as follows: Because of the known t T , the noise affects the ∆T NMST at the transition time t p . In this case, the presence of noise affects only the estimation of one transition time (the reference time being assumed to be noiseless) instead of three such times in the BMSTD strategy. The uncertainty of ∆T NMST affected by the noise is as follows: Based on analysis of theory, the relationship between the NMST and BMSTD readout schemes affected by noise is shown in the Equation (18). The NMST readout scheme can reduce the influence of noise on reading the time difference. Therefore, reading ∆T NMST to measure H x can improve the stability of the time difference. Experiments and Preliminary Results The experimental instruments are shown in Figure 8. The RTD-fluxgate sensor is made by the Key laboratory of geophysical exploration equipment, Ministry of Education (Jilin University) and included two parts: the sensitive unit and the signal detection circuit. The sensitive unit consists of an excitation coil, magnetic core, and induction coil. 
The core adopts a Co-based amorphous ribbon 0.8 mm in width, 0.025 mm in thickness, and 100 mm in length, placed inside a non-magnetic framework. The excitation coil is wound symmetrically on both ends of the non-magnetic framework with the same number of turns, and the induction coil is wound in the middle. The excitation coil and induction coil have 100 and 1000 turns, respectively, of 0.1 mm enameled copper wire. The induction signal is amplified and rectified in the signal detection circuit, and the resulting rectangular signal is input to the time difference counting and processing part, which is made up of a Field Programmable Gate Array (FPGA) and an STM32 microcontroller. In a magnetic shielding room made of multilayer silicon steel, the Helmholtz coil is placed in the middle of a multilayer electromagnetic shielding cylinder made of permalloy. The RTD-fluxgate is laid in the center of the loop, where the field can be considered homogeneous. Two KEITHLEY 6221 precision current sources are used in the experiment, one to excite the Helmholtz coil for generating the DC target magnetic field and one to drive the excitation coil of the RTD-fluxgate. The experimental measurement schematic diagram is shown in Figure 9. The excitation coil of the sensitive unit generates a triangular excitation magnetic field. The voltage induced in the induction coil passes through the instrumentation amplifier circuit, the second-level amplifier circuit, the addition circuit, and the shaping circuit, yielding a rectangular signal that carries the information on H_x. This signal is input to the CH1 channel of the FPGA logic signal processor. Regulating the excitation current source generates a synchronous triggering pulse, with the trigger point set where the excitation voltage amplitude is zero. The synchronous trigger pulse is input to the CH2 channel of the FPGA. The FPGA uses the two channel signals to count the number of elapsed time points
at a counting frequency f_c of 100 MHz. The count N is converted into the time difference ΔT_NMST = N/f_c, which is sent to the STM32 for storage.
Assuming the linear fitting polynomial is y = ax + b (a = 0) by using n data (x i, y i ) (I = 1, 2, . . . n), the sum of the deviation square between data points and the fitted curve is shown in below: One of the curves in Figure 10 is taken to illustrate the concept. When d 2 = min(d 2 ), the fitting curves between ∆T NMST1 and H x with excitation current I 1 = 80 mA and excitation frequency f = 30 Hz are presented Equation (20). The fitting linear deviations are shown in Figure 11, which shows that the linear deviations are mainly concentrated at both ends and center, so it is in accordance with the regulation of the linear sensor. The sum of the relative deviations square is 0.0573 and the RTD-fluxgate possesses good linearity in the whole range of measurement. From the Equations above, the sensitivities of different excitation currents are SNMST1 = 0.0144 μs/nT, SNMST2 = 0.0191 μs/nT and SNMST3 = 0.0289 μs/nT. When the excitation amplitude Hm is smaller, the sensitivity SNMST is greater. From the Equations above, the sensitivities of different excitation currents are SNMST1 = 0.0144 μs/nT, SNMST2 = 0.0191 μs/nT and SNMST3 = 0.0289 μs/nT. When the excitation amplitude Hm is smaller, the sensitivity SNMST is greater. By using the same method, the fitting curves between ∆T NMST and H x with different excitation currents I 2 = 60 mA and I 3 = 40 mA are as follows: ∆T NMST3 = 0.0289 × H x + 1.67 × 10 4 From the Equations above, the sensitivities of different excitation currents are S NMST1 = 0.0144 µs/nT, S NMST2 = 0.0191 µs/nT and S NMST3 = 0.0289 µs/nT. When the excitation amplitude H m is smaller, the sensitivity S NMST is greater. (b) When the excitation current is 80 mA, the excitation frequency changes from 20 Hz to 60 Hz with a 20 Hz interval. H x is the same as mentioned above. Figure 12 shows the output time difference ∆T NMST of the RTD-fluxgate which is actually measured with different excitation frequencies. Sensors 2017, 17, 2325 11 of 15 (b) When the excitation current is 80 mA, the excitation frequency changes from 20 Hz to 60 Hz with a 20 Hz interval. Hx is the same as mentioned above. Figure 12 shows the output time difference ΔTNMST of the RTD-fluxgate which is actually measured with different excitation frequencies. According to the least square fitting method, the fitting curves between ΔTNMST and Hx with different excitation frequencies, f1 = 20 Hz, f2 = 40 Hz, and f3 = 60 Hz, are as follows: From the Equations above, the sensitivities of different excitation frequencies are SNMST1 = 0.0218 μs/nT, SNMST2 = 0.0109 μs/nT, and SNMST3 = 0.0074 μs/nT. When the excitation frequency f is smaller, the sensitivity SNMST is greater. In Figures 10 and 12, the experimental results validate that the sensitivity SNMST is inversely proportional to the amplitude Hm and frequency f of the excitation magnetic field. (2) Stability Analysis Analysis was performed under the conditions of an excitation magnetic field with parameters I = 80 mA, f = 60 Hz, and Hx = 25,000 nT. To compare the stability of the two readout methods effectively, the time of observation is 60 s. The fluctuations in time difference by using the NMST and BMSTD readout methods are shown in Figures 13 and 14, respectively. Because the observation time is longer, the data of time difference fluctuations are larger. We only present the data of 3 s among 60 s in Table 1. 
(b) With the excitation current fixed at 80 mA, the excitation frequency is varied from 20 Hz to 60 Hz in 20 Hz steps; H_x is swept as above. Figure 12 shows the output time difference ΔT_NMST of the RTD-fluxgate measured at the different excitation frequencies.

According to the least-squares fitting method, the fitting curves between ΔT_NMST and H_x at the excitation frequencies f_1 = 20 Hz, f_2 = 40 Hz, and f_3 = 60 Hz are as follows:

ΔT_NMST2 = 0.0109 × H_x + 1.25 × 10^4    (24)
ΔT_NMST3 = 0.0074 × H_x + 8.34 × 10^3

From the equations above, the sensitivities at the different excitation frequencies are S_NMST1 = 0.0218 µs/nT, S_NMST2 = 0.0109 µs/nT, and S_NMST3 = 0.0074 µs/nT: the smaller the excitation frequency f, the greater the sensitivity S_NMST. Together, the experimental results in Figures 10 and 12 validate that the sensitivity S_NMST is inversely proportional to the amplitude H_m and the frequency f of the excitation magnetic field.

(2) Stability Analysis

The analysis was performed with an excitation magnetic field of I = 80 mA and f = 60 Hz and with H_x = 25,000 nT. To compare the stability of the two readout methods effectively, the observation time is 60 s. The time-difference fluctuations obtained with the NMST and BMSTD readout methods are shown in Figures 13 and 14, respectively. Because the observation record is long, only 3 s of the 60 s of time-difference data are presented in Table 1.

As illustrated in Figures 13 and 14, with the NMST readout scheme the standard deviation of ΔT_NMST is 1.086 µs and the fluctuation of ΔT_NMST is 5.780 µs, whereas with the BMSTD readout scheme the standard deviation of ΔT is 1.465 µs and the fluctuation of ΔT is 7.559 µs. The comparison between the two readout methods indicates that adopting the NMST readout scheme reduces the standard deviation and the fluctuation of the time difference by 36% and 32%, respectively.

To process the time difference dynamically in real time, the variable-coefficient Pauta criterion and equal-weight endpoint smoothing are combined in this paper into a hybrid time-difference processing algorithm, sketched in code below. The specific procedure is as follows:

a. Every n values of ΔT_NMST form an array N_i; the mean N̄_i and standard deviation σ_i of each array are calculated in turn.
b. When |ΔT_NMST,(i−1)n+j − N̄_i| > kσ_i, the value ΔT_NMST,(i−1)n+j is considered a gross error and is replaced by the mean of the array; when |ΔT_NMST,(i−1)n+j − N̄_i| < kσ_i, the value is retained. The coefficient k is chosen so that the amount of effective data after processing is more than 3/4 of the amount of data before processing.
c. The retained sequence is then smoothed with an equal-weight window of length l, yielding the smoothed sequence N′_j for j = 1, 2, ..., n × I − l + 1; in this paper, the sequence N′ is processed twice.
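The following sketch implements the procedure above under stated assumptions: the block size n, coefficient k and smoothing window l are illustrative choices, the input trace is synthetic, and "processed twice" is read here as applying the filter-and-smooth pass two times.

```python
# Hedged sketch of the hybrid processing algorithm (Pauta criterion + smoothing).
import numpy as np

def pauta_filter(samples: np.ndarray, n: int = 50, k: float = 3.0) -> np.ndarray:
    """Within each block of n samples, replace gross errors (|x - mean| > k*sigma)
    by the block mean; values within the band are retained."""
    out = samples.copy()
    for start in range(0, len(out), n):
        block = out[start:start + n]          # view: assignments modify `out`
        mean, sigma = block.mean(), block.std()
        block[np.abs(block - mean) > k * sigma] = mean
    return out

def equal_weight_smooth(samples: np.ndarray, l: int = 10) -> np.ndarray:
    """Equal-weight moving average of window length l (n*I - l + 1 output points)."""
    return np.convolve(samples, np.ones(l) / l, mode="valid")

rng = np.random.default_rng(0)
delta_t = 15.0 + rng.normal(0, 1.0, 6000)     # synthetic Delta-T_NMST trace (us)
delta_t[::500] += 8.0                         # inject occasional gross errors

processed = equal_weight_smooth(pauta_filter(delta_t))
processed = equal_weight_smooth(pauta_filter(processed))   # second pass

for name, x in (("raw", delta_t), ("processed", processed)):
    print(f"{name}: std = {x.std():.3f} us, fluctuation = {x.max() - x.min():.3f} us")
```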
After processing with the hybrid algorithm, the standard deviation of ΔT_NMST is reduced to 0.044 µs and the fluctuation is reduced to 0.170 µs, as shown in Figure 15.

Conclusions

On the basis of the working principle of RTD-fluxgate sensors, the influence of induction-signal noise on the time-difference reading is analyzed, and a readout method is proposed in which the excitation and induction signals are combined to read the negative magnetic saturation time. A mathematical model of the RTD-fluxgate output response relating ΔT_NMST to H_x is established, and the proposed NMST readout scheme is compared with the BMSTD readout scheme. The experimental results validate the effectiveness of the readout method: the standard deviation and the fluctuation of the time difference are reduced by 36% and 32%, respectively. This technique reduces the influence of noise and improves the stability of the time-difference measurement. After ΔT_NMST is processed by the hybrid time-difference algorithm, the fluctuation can be stabilized within ±0.1 µs, so the accuracy of the RTD-fluxgate measurement is improved further. The NMST readout method is therefore well suited to RTD-fluxgate detection of weak magnetic fields.
Hysteresis-free perovskite transistor with exceptional stability through molecular cross-linking and amine-based surface passivation

Introduction

Organo-metal halide perovskites have attracted tremendous interest for high-performance, large-area, low-cost optoelectronic devices. [1][2][3][4][5][6] Besides their great success in photovoltaics (PVs) and light-emitting diodes (LEDs), 7-10 they are also highly promising for field-effect transistors (FETs) owing to their very large theoretically predicted mobility values, which are comparable with those of inorganic semiconductors such as GaAs. 11,12 However, the demonstration of room-temperature-operating perovskite FETs that are free of hysteresis and instability issues remains elusive. [13][14][15][16][17][18][19] The overall performance of perovskite transistors is poor due to several factors that can be divided into intrinsic and extrinsic ones. 20 The intrinsic factors primarily include the migration of ions produced by imperfections in perovskite stoichiometry and film growth/morphology (i.e., structural defects and grain boundaries). The main extrinsic factors causing perovskite degradation are ambient humidity and oxygen, UV irradiation, and elevated temperature. 21,22 The large hysteresis of perovskite FETs arises from ion migration and from modulation of charge transport by carrier trapping and detrapping at deep energetic sites located at the grain boundaries of the polycrystalline film. This also hampers lateral charge transport over long distances between contacts through the semiconductor-dielectric interface, reducing the charge carrier mobility. 23,24 Additionally, the source-drain contacts are usually aggravated by unavoidable deposition-induced defects and chemical imperfections that largely dictate the interface properties. 25 The intrinsic inability of perovskites to resist attack by moisture induces significant instability during FET operation, 26 and no work on stable, hysteresis-free perovskite FETs has been published to date. Many recent efforts to solve these issues have focused on the use of perovskite microplates, 6,8 the optimization of synthetic procedures, 27 the use of single-crystal perovskite layers, 28 the application of solvent vapor annealing, 29 surface passivation using self-assembled monolayers (SAMs), 30 and interface engineering to improve charge injection. 31 Despite the great merits of these works, the room-temperature field-effect mobilities remain as low as 0.18 (0.17) cm^2 V^-1 s^-1 for holes (electrons) in trihalide perovskites and 1.24 (1.01) cm^2 V^-1 s^-1 in mixed-halide 3D perovskites. Note that hole mobilities up to 15 cm^2 V^-1 s^-1 and electron mobilities up to 10 cm^2 V^-1 s^-1 have recently been reported for FETs based on the 2D layered (C6H5C2H4NH3)2SnI4, 32 and the 3D CH3NH3PbI3-xClx perovskites, 33 respectively. However, these transistors were measured under a high vacuum of at least 10^-4 Torr.
For practical device applications in complete circuits, efficient transistors capable of operating in ambient conditions are necessary. More importantly, steady improvements on the device hysteresis and stability issues have not been achieved to date, and these problems continue to puzzle the community. Arguably, the lack of sustained stability and the presence of hysteresis are the biggest obstacles to the further development of perovskite transistors. Recently, our group demonstrated the first perovskite FET with balanced room-temperature ambipolar transport by using a triple-cation perovskite of the chemical structure Cs_x(MA_0.17FA_0.83)_1-xPb(Br_0.17I_0.83)_3, where Cs is caesium and MA and FA are methylammonium and formamidinium, respectively, as the semiconducting channel material and a PMMA film as the gate dielectric. 34 These FETs featured hole and electron mobilities around 2 cm^2 V^-1 s^-1 as the outcome of improvements in crystallinity linked to stabilization of the perovskite crystal structure upon incorporation of an optimized Cs content. 35 However, hysteresis and instability issues still governed the operation of these transistors. Here, we report the first hysteresis-free perovskite FET with exceptional stability under bias stress and upon aging in air. Moreover, we obtained a nearly 100% improvement over our previous work in the measured hole (electron) mobilities, up to 4.02 (3.35) cm^2 V^-1 s^-1, and a low subthreshold swing (SS) of 267 mV dec^-1, which are the best values reported to date for perovskite transistors operating in ambient conditions. This became possible through a molecular cross-linking strategy for the perovskite grains, implemented by the simple addition of a hydrophobic cross-linker, namely diethyl-(12-phosphonododecyl)phosphonate (DPP), to the perovskite precursor, followed by effective surface passivation of the perovskite film with a thin amine-bearing polyethylenimine ethoxylated (PEIE) interlayer. A series of experiments and density functional theory (DFT) simulations confirmed the molecular passivation principle in the form of strong hydrogen-bonding interactions and a high adsorption energy between the cross-linker and the perovskite.

Experimental section

Materials. The organic cations were purchased from Dyesol, the lead compounds from TCI, and CsI from abcr GmbH. The "mixed" perovskite films were deposited from a precursor solution containing FAI (1 M), PbI2 (1.1 M), MABr (0.2 M) and PbBr2 (0.2 M) in anhydrous DMF:DMSO 4:1 (v:v). CsI, pre-dissolved as a 1.5 M stock solution in DMSO, was then added to the mixed perovskite precursor to achieve the desired triple-cation composition.

Fabrication and characterization of FET devices. 1 M DPP was added to the triple-cation composition and the mixture was stirred for 2 h. The DPP-modified CsMAFA was deposited on an n-doped silicon substrate (ρ ~ 2 Ω cm) covered with 300-nm-thick SiO2 by a two-step spin-coating program at 1000 and 6000 rpm for 10 and 30 s, respectively, in a nitrogen-filled glove box. During the second step, 100 µL of chlorobenzene (CB) was poured onto the spinning substrate 15 s before the end of the program to assist crystallization of the perovskite film. The thickness of the modified-CsMAFA layer is approximately 250 nm. For the as-spun transistors, no annealing treatment of the samples was performed; for the annealed transistors, the samples were annealed at 100 °C for 20 min.
Prior to these processes, the silicon substrates were sequentially cleaned in acetone, toluene and isopropyl alcohol for 15 min each, dried in an oven at 120 °C for 15 min, and exposed for 1.5 h to UV light in air, before the surface was finally treated with a 3 mM n-octadecyltrimethoxysilane (OTS) self-assembled monolayer (SAM) for 0.5 h by spin-coating on the piranha-solution-cleaned wafer at 3000 rpm for 30 s. The wafer was then exposed to ammonia vapor for ~12 h, followed by sonication cleaning, sequential washing, and drying. After the perovskite deposition, a thin layer of PEIE was spin-coated at 4000 rpm for 45 s and annealed at 100 °C for 10 min. The source and drain electrodes, made of yttrium, were fabricated by e-beam lithography and evaporation, followed by a standard lift-off process. The yttrium oxide film with a thickness of 60 nm was deposited by reactive e-beam evaporation. Lastly, a Ti/Au (10 nm/30 nm) metal stack was evaporated as the top gate. To check the moisture-resistance ability, two water molecules were adsorbed over the relaxed surfaces of unmodified-CsMAFA and modified-CsMAFA, as shown in Figs. 5b and c. In addition, to study the interaction of O2 with these surfaces, two O2 molecules were adsorbed over the relaxed surfaces of unmodified-CsMAFA and modified-CsMAFA, as shown in Figs. S23a,b.

Device measurements. All electrical measurements were monitored with an Agilent 4156C semiconductor parameter analyzer. The FETs' current-voltage characteristics were measured in the dark at room temperature under ambient pressure, with a voltage scan rate of 5 mV s^-1. The field-effect mobility (µ) was determined from the saturation-regime relation I_DS = (W/2L)·µ·C_i·(V_G − V_T)^2, where I_DS is the drain-to-source current, C_i the gate-dielectric capacitance per unit area, µ the mobility, and V_G and V_T the gate voltage and threshold voltage, respectively. The measured samples had channel lengths L = 50 µm and channel widths W = 1000 µm.

Characterization tools. Scanning electron microscopy was conducted using a Hitachi S-4700 SEM instrument. X-ray diffraction (XRD) measurements were carried out using a PANalytical X'PERT PRO diffractometer with a Cu Kα source (wavelength 1.5405 Å). Photoluminescence (PL) spectra were recorded using a lock-in technique with a JASCO FP-6500 composed of two monochromators for excitation and emission, a 150 W Xe lamp with a shielded lamp house, and a photomultiplier as the light detector. Absorption spectra were collected with a Varian Cary 300 UV-Vis spectrophotometer with an internally coupled integrating sphere. Film thicknesses were measured using a Dektak AlphaStep profilometer. Contact angle measurements were performed using an Attention Theta optical tensiometer with an automated liquid-pumping system; purified (Milli-Q), degassed water was used as the probe liquid. Proton nuclear magnetic resonance (1H NMR) was performed on a Bruker Advance 300 (1H: 300 MHz) spectrometer at 298 K using partially deuterated solvents as internal standards.

Results and discussion

Molecular cross-linking of perovskite grains. For the fabrication of our FETs we used the bottom-contact/top-gate (BC/TG) configuration (Figure 1a), despite the fact that it is generally difficult to achieve high performance using this geometry.
This is because charge carriers injected from the source (S) and drain (D) electrodes into the semiconductor channel typically need to travel several tens of nanometers normal to the substrate before they reach the semiconductor/gate-dielectric interface where controlled charge transport takes place. In addition, the bulk resistance on the path between the semiconducting channel and the S-D contacts is very large, thereby significantly increasing the total resistance. We nevertheless adopted this transistor configuration because it is more realistic for practical applications. The semiconducting channel material consisted of the previously introduced triple-cation perovskite Cs_x(MA_0.17FA_0.83)_1-xPb(Br_0.17I_0.83)_3 (CsMAFA), where x = 0.05, with the stark difference that it was subjected to our molecular cross-linking methodology. In particular, the perovskite film was prepared by spin-coating a mixture of the triple-cation perovskite and a well-known hydrophobic cross-linker, namely diethyl-(12-phosphonododecyl)phosphonate (DPP) (Figure 1b and Figure S1, Supporting Information). Motivated by the molecular passivation and molecular cross-linking approaches successfully applied to perovskite PVs and LEDs, [36][37][38][39][40][41] we hypothesized that it should be possible to address the perovskite FET issues by cross-linking the perovskite grains within a framework of uniform material. To achieve that endeavor, we viewed a DPP molecule bearing phosphonic groups as a potentially suitable cross-linker due to the well-established strong hydrogen-bonding coordination of such groups to unsaturated bonds. 42 The phosphonic groups present on both sides of each molecule are expected to bond to the periphery of the PbX6 octahedra, forming hydrogen bonds with the halide anions of the perovskite (Figure 1c) and therefore cross-linking neighbouring perovskite grains. To scrutinize the formation of hydrogen bonds between the perovskite surface and the cross-linker, we performed 1H NMR measurements. The phosphonic ester group in the pristine DPP solution appears at around 3.95 (CH2) and 1.25 ppm (CH3) (Figure 1d). It is well known that hydrogen bonding produces upfield chemical shifts caused by shielding. The phosphonic signals of DPP appear in the mixed DPP-CsMAFA solution, where they undergo a small upfield shift, suggesting the formation of hydrogen bonds between the molecular additive and the perovskite. Our conjecture that the pendant phosphonic groups are integrated at the surface of the perovskite grains was further probed by Fourier transform infrared (FTIR) spectroscopy. The main stretching bands of DPP in the DPP+perovskite mixed film (Figure S2) exhibit a significant downward shift with respect to those of pure DPP, indicating strong interaction between the perovskite matrix and the cross-linking additive.

Structural and morphological studies. The powder XRD pattern (PXRD) taken on the DPP-modified CsMAFA material (Figure S3) indicates that the nanocrystals derived from the mixed solution are composed of the neat perovskite phase, suggesting that the cross-linker molecules coordinate exclusively at the surface of the perovskite, acting as "glue" that cross-links the neighbouring grains, and do not occupy the perovskite lattice. Such coordination bonding also results in excellent defect passivation, as evidenced by the large enhancement in photoluminescence (PL) of the DPP-CsMAFA film compared to the pristine perovskite (Figure S4).
Our experimental data provide evidence for the coordination of pendant phosphonic groups to the surface of the perovskite grains through the formation of hydrogen bonds. As a result, effective defect passivation was realized. Moreover, molecular cross-linking of neighbouring grains results in a crystallinity enhancement of the DPP-modified perovskite film compared to the unmodified one, as indicated by XRD and differential scanning calorimetry (DSC) measurements (Figures S5 and S6 and Table S1). We also performed elemental analysis of the mixed DPP-perovskite film by scanning transmission electron microscopy (STEM). In Figure 1e the high-angle annular dark-field (HAADF) image and EDS mapping in STEM mode are presented, confirming the presence of DPP in the sample. The 3 mol % DPP used here easily led to the detection of P in our sample, and from STEM-EDS mapping of Pb and P we infer that these two elements exhibit similar distributions in the mixed sample. Top-view (Figure S7) and cross-sectional SEM images (Figure 1f) revealed that the pristine perovskite film exhibits a large number of dispersed grains that are loosely bound and randomly distributed within the film. On the contrary, the modified-CsMAFA films are very uniform and smooth, with tightly connected grains. This also allowed the formation of a high-quality interface between the perovskite film and the gate dielectric deposited directly on top of it (Figure 1g). These findings are of great importance, as the structural uniformity of the perovskite layer dictates the charge transport ability and interface quality that are essential for superior FET operation. As shown in Fig. 2b and 2d, the DPP molecules pack almost perpendicular to the perovskite surface through their phosphonic anchoring groups. The electronic total charge density slices (Figures 2c,d) predict that the anchoring groups of DPP strongly interact with FA+ and MA+, besides the halide groups. The strong hydrogen-bonding interaction between the perovskite and DPP (Figure 2b) not only cross-links the perovskite grains but also improves the charge mobilities as a result of the passivation effect. This can also be concluded from the simulated bandgap, UV-Vis spectra and effective masses of charge carriers (Figure 2c). More specifically, the bandgap of unmodified-CsMAFA decreases from 1.58 to 1.44 eV upon adsorption of DPP molecules (modified-CsMAFA), in accordance with the experiment (Figure S8), and the effective mass of electrons substantially decreases from 2.98 to 0.16 m_e, thus resulting in high charge carrier mobility. The effective masses of electrons and holes are estimated from the conduction band minimum (CBM) and valence band maximum (VBM) of the band structure, respectively, and are listed in Table S3. The lighter these charge carriers, the higher their mobilities and the slower the charge recombination. Our theoretical results (Figure 2e,f) are in good agreement with the experimental data, demonstrating that effective molecular cross-linking of perovskite grains causes desirable changes in the optical and electronic properties of the modified perovskite. Although not all of these changes (e.g., the improvement in UV-Vis absorption) have a direct impact on FET performance, they demonstrate the universal impact of our approach in improving the optoelectronic properties of perovskite materials for several types of device applications (e.g.,
perovskite PVs).

Improvements in mobility and elimination of hysteresis. Besides the semiconducting channel material, the choice of the gate dielectric and source-drain contacts is of paramount importance for satisfactory FET operation. The polymer dielectrics commonly used in perovskite FETs undergo severe physical aging, deteriorating device stability. In addition, their dielectric constants (k) are not sufficiently high (4.9 for PMMA and 2.1 for Cytop) to increase the device capacitance at low thicknesses and thus to scale down the device dimensions. Moreover, the commonly used gold (Au) contacts exhibit chemical interactions with the perovskite film, thus altering the interface properties. We therefore replaced the reactive gold with inert yttrium (Y) contacts and the 550-nm-thick PMMA dielectric with a 60-nm-thick yttrium oxide (Y2O3). The latter possesses a high k value of 16-19, a defect-free electronic structure and an ultra-fine nanomorphology. 43,44 Indeed, Y2O3 has been considered an ideal gate dielectric, especially for transistors using carbon-containing materials, due to the excellent wetting behavior of yttrium on the sp^2 carbon framework. 45 Initially, we performed measurements on a perovskite FET using an unmodified CsMAFA channel. The output characteristics of this unmodified transistor for both hole (Fig. S9a) and electron (Fig. S10a) operation were recorded, and the device performance metrics are summarized in Table S4. There is a significant imbalance between the hole and electron mobilities. The transistor subthreshold swing (SS) values were found to be 532 and 502 mV dec^-1 for hole and electron operation, respectively. These are much lower than the SS values usually reported for perovskite transistors irrespective of the configuration used (above 1000 mV dec^-1 and usually a few V dec^-1). 46 We next fabricated FETs using perovskite channels embedding 3 mol% DPP as the cross-linker. The hole output and transfer characteristics of the modified FET are shown in Fig. S9b and Fig. 3b; the electron output and transfer characteristics are presented in Fig. S10b and Fig. 3d, respectively. The device performance metrics are summarized in Table S5. An increase in hole mobility up to 3.97 cm^2 V^-1 s^-1 and a more remarkable enhancement in electron mobility up to 3.20 cm^2 V^-1 s^-1 are achieved upon the addition of the DPP cross-linker within the perovskite (Table S5). Note that this increase was dependent on the DPP concentration (Figure S11). To understand these improvements in mobility, we have to take into account that, in addition to the externally applied gate field, an internal one is also present in the semiconducting channel due to the mobile ions in the perovskite film. This internal field depends on the intensity and the polarity of the gate voltages applied during previous measurements. Due to the random movement of ions through the lattice, this internal field is also random and, combined with imperfections at the dielectric/perovskite interface (e.g., dangling bonds and interface roughness), causes the formation of a fringing field, which scatters the carriers moving across the channel and thus reduces the measured mobilities. The primary culprit is the movement of ions through the perovskite film; under the application of a negative V_GS, these positive ions gradually move through grain boundaries and accumulate at the perovskite-gate dielectric and perovskite-contact interfaces.
They thereby act as scattering centers and reduce the number of charges within the channel through Coulomb repulsion, resulting in significant hysteresis and loss of stability. The ion movement can also explain the dependence of the threshold voltage (V_t, in both p- and n-channel operation) on the annealing temperature of the perovskite film. Different thermal annealing results in a different degree of structural transformation, which influences the production of mobile ions in the perovskite film, affects the intensity of the internal field and therefore shifts the measured V_t. However, even with the optimized annealing at 100 °C these mobile ions were not eliminated, thus reducing the performance of the unmodified transistor. By applying our cross-linking modification approach to the perovskite, we effectively reduced the density of defects present at grain boundaries, hence gaining a significant enhancement in device performance. Notably, the electron mobility is more strongly affected by the concentration of the cross-linker, verifying that upon the addition of DPP the positive ions that may repel electrons are highly suppressed. Additionally, the SS values were even lower, 288 and 267 mV dec^-1 for the p- and n-channel, respectively. These are the lowest values ever reported for perovskite transistors and the closest to the room-temperature Boltzmann theoretical limit of 60 mV dec^-1. However, from the transfer characteristics in Fig. 3a-d it becomes evident that, even though it decreases compared with the unmodified transistor, hysteresis is still present in the modified FET. Previous reports indicate that hysteresis in perovskite FETs is due to a combined effect of charge trapping and ion transport, and thus decreasing structural and ionic defects in bulk perovskite films could decrease hysteresis. [27][28][29] Sirringhaus's group also demonstrated that perovskite surface defects contribute to hysteretic issues. 17 We therefore applied a perovskite surface passivation approach by spin-coating an amine-containing non-conjugated polymer, namely polyethylenimine ethoxylated (PEIE), to reduce the density of surface defect states and dangling bonds, exploiting the well-established passivation ability of amine groups. Indeed, it has previously been demonstrated that PEIE can effectively modify the surface of perovskites by forming strong hydrogen-bonding interactions with surface defects. 47 Fig. 3e,f presents the hole and electron transfer characteristics of the DPP-modified and PEIE-surface-treated perovskite FET, where complete elimination of hysteresis is observed. In fact, this is the first hysteresis-free perovskite transistor reported in the literature. The complete elimination of hysteretic issues was achieved irrespective of the perovskite post-annealing temperature. Moreover, the room-temperature mobility was further improved to 4.02 (hole) and 3.35 (electron) cm^2 V^-1 s^-1 (Table S6). These are the highest mobilities reported thus far for any perovskite transistor operating under ambient pressure at room temperature (Table S7). This is expected to have a profound impact on the use of perovskite transistors in complementary logic elements requiring n- and p-type semiconductors.
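As an illustration of how such numbers are obtained, the sketch below (not the authors' analysis code) extracts the saturation-regime mobility and threshold voltage from a transfer curve; the synthetic data, the assumed C_i for the 60 nm Y2O3, and all variable names are hypothetical. In practice, the subthreshold swing would additionally be read off as SS = [d log10(I_DS)/dV_G]^-1 in the subthreshold region of the same curve.

```python
# Saturation-regime mobility extraction: I_DS = (W/2L)*mu*C_i*(V_G - V_T)^2,
# so sqrt(I_DS) is linear in V_G and the slope yields mu.
import numpy as np

L, W = 50e-6, 1000e-6        # channel length/width from the text (m)
C_I = 2.5e-4                 # assumed capacitance per area of 60 nm Y2O3 (F/m^2)

mu_true, v_t_true = 4.0e-4, 2.0          # 4 cm^2/Vs (in m^2/Vs); V_T = 2 V
v_g = np.linspace(3, 20, 100)            # above-threshold gate sweep (V)
i_ds = W / (2 * L) * mu_true * C_I * (v_g - v_t_true) ** 2   # synthetic curve

slope, intercept = np.polyfit(v_g, np.sqrt(i_ds), 1)
mu = 2 * L / (W * C_I) * slope ** 2      # extracted mobility (m^2/Vs)
v_t = -intercept / slope                 # extracted threshold voltage (V)

print(f"mu = {mu * 1e4:.2f} cm^2/Vs (true 4.00), V_T = {v_t:.2f} V (true 2.00)")
```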
Stability study. Furthermore, an outstanding increase in both aging and operational stability under stress was obtained for the DPP-modified and PEIE-passivated FET, as seen from the variation of the hole and electron mobilities of the unmodified and DPP-modified FETs over a sixteen-day aging period (Figure S12). Remarkably, the mobility (both hole and electron) of the modified FET remains nearly unchanged. In contrast, a large decrease in hole and especially electron mobility is observed for the aged unmodified FET, which is attributed to the continuous accumulation of positive ions at the surface or grain boundaries of the perovskite film upon aging. The dramatic increase in the intensity of the PbI2 peak in the XRD spectrum of the aged unmodified perovskite (Figure 4a) evidences the well-known degradation of the perovskite film through hydrolysis, which produces PbI2 upon release of gas-phase HI and CH3NH2 species. 48,49 On the contrary, the PbI2 peak does not appear in the XRD pattern of the aged modified perovskite (Figure 4b), evidencing the strong resistance of the molecularly cross-linked perovskite to attack by moisture, as expected from the hydrophobic nature of the DPP-modified film. Indeed, upon addition of the hydrophobic cross-linker, the perovskite film becomes less hydrophilic (Figure S13) and hence more resilient to environmental water molecules that cause the undesired perovskite hydrolysis and device degradation. SEM images of the aged unmodified and modified perovskite films (Fig. 4c,d) reveal the stronger resistance of the modified-CsMAFA film to environmentally induced degradation compared with the unmodified one.

Bias-stress-driven electrical stability of FETs is extremely important, as it dictates the normal operation of an FET-based electronic circuit. Figure 4e presents the channel current of the modified FET under a continuous bias stress of -20 V for over 80,000 s (the inset shows the I_SD-V_GS curve taken before and after bias stress), whereas Figure S14 presents the bias-stress stability of the unmodified transistor. Figure 4f shows the cycling stability of the device, where a train of gate voltage pulses (-20 V) was applied and the device was switched between on and off for 50,000 cycles (1 Hz); the inset shows the I_SD response from the 15,000th to the 30,000th second. Such measurements were not possible for the unmodified FET owing to its large instability, attributed to charge carrier trapping and ion migration inside the device originating from the uncontrolled microstructure of the perovskite film. By forming a cohesive, compact and passivated perovskite layer, charge carrier trapping was successfully eliminated, thereby allowing the achievement of unprecedented bias stability in the DPP-modified, PEIE-passivated device.

DFT calculations on hysteresis and stability. Severe hysteresis is seen for unmodified-CsMAFA devices, which is largely alleviated after inclusion of DPP molecules. As structural defects and ion migration are responsible for the hysteretic behavior, 2% I-defective (vacancy) unmodified-CsMAFA and modified-CsMAFA were employed as models for the theoretical simulations (Figure S15). The simulated formation energy of modified-CsMAFA is 0.46 eV higher than that of unmodified-CsMAFA, thus validating our experimental results (Figure 5a,b and Table S8).
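For orientation, the adsorption-energy convention used in the comparisons below is E_ad = E(surface+molecule) − E(surface) − E(molecule), with more negative values indicating stronger binding. The sketch only illustrates the bookkeeping; the Hartree totals are placeholders, chosen so that the result lands near the −107 kcal/mol quoted for water on unmodified-CsMAFA.

```python
# Bookkeeping sketch for a DFT adsorption energy; the total energies below are
# placeholders, not values from the paper's calculations.
HARTREE_TO_KCAL_MOL = 627.509

def adsorption_energy(e_complex: float, e_surface: float, e_molecule: float) -> float:
    """E_ad = E(surface+molecule) - E(surface) - E(molecule), in kcal/mol."""
    return (e_complex - e_surface - e_molecule) * HARTREE_TO_KCAL_MOL

print(adsorption_energy(-1234.845, -1158.452, -76.222))  # ~ -107 kcal/mol
```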
It is known that water adsorption on the perovskite surface can hardly be prevented and is followed by water infiltration into the subsurface of the perovskite network, which begins the hydrolysis process in the bulk perovskite. A DFT calculation was again used to evaluate the interaction between water molecules and the perovskite lattice. In the modeling of the atomic structures, the surface Cs(MA/FA)(Br/I) units were replaced with DPP, and the stabilized molecular configurations are shown in Figure 5a,b. We placed two water molecules between the DPP molecules and the unmodified-CsMAFA and modified-CsMAFA frameworks. The corresponding formation energy is -0.49 eV per molecule for modified-CsMAFA and -0.01 eV per molecule for unmodified-CsMAFA, confirming the thermodynamic stability of the DPP molecules. To verify whether the molecular cross-linking decreases water adsorption, we calculated the water adsorption energy (E_ad) on the surface of the perovskite films. An adsorption energy of -107 kcal mol^-1 between water and unmodified-CsMAFA (Figure 5c) confirms its poor stability against moisture. Modified-CsMAFA, however, has a much weaker interaction with water (E_ad = -36.44 kcal mol^-1), almost one-third of that of the unmodified-CsMAFA-water system (Figure 5d and Table S9). This lower E_ad further validates the DPP passivation principle in the form of improved moisture resistance, as can be seen from the electron total charge density slices (Figure 5e,f and Table S10), which imply that the DPP molecules efficiently keep water molecules away from the perovskite lattice. Analysis of Figure 5c,d leads us to conclude that unmodified-CsMAFA attracts water molecules through shared electronic cloud density, whereas in modified-CsMAFA the water molecules are repelled. The extrinsic factors causing perovskite degradation include not only moisture but oxygen as well. We therefore also simulated the interaction of oxygen molecules with unmodified- and modified-CsMAFA (Figure S16a,b), along with their electronic total charge density slices (Figure S16c,d). In the case of unmodified-CsMAFA, the oxygen atoms of O2 lie at an inter-atomic distance of about 3.30 Å from the surface atoms; in modified-CsMAFA this distance is about 2.44 Å. The per-O2-molecule interaction (adsorption) energy with unmodified- and modified-CsMAFA is -286.53 and -104.58 kcal mol^-1, respectively (see Table S2). Again, the DPP-passivated CsMAFA repels both water and O2 molecules, which further validates and confirms the stability of our modified perovskite.

Conclusions

In this work, we demonstrate the first hysteresis-free, extremely stable perovskite FET with balanced ambipolar transport under ambient pressure at room temperature. Added to its merits, our transistor exhibits high hole and electron mobilities and the lowest SS value reported to date, 267 mV dec^-1. To achieve those goals, we applied a combinatorial strategy comprising molecular cross-linking of the perovskite grains, which enabled the formation of a cohesive and compact film, and effective surface passivation of the perovskite channel. DFT calculations confirmed the molecular cross-linking and passivation principle.
The high adsorption energy and strong hydrogen bonding between DPP and the perovskite confirm their excellent interaction and stability. This approach successfully addresses the severe hysteresis and instability issues of perovskite transistors and paves the way for the fabrication of enhanced-performance perovskite FETs that would be highly suitable for the future realization of FET-based products and CMOS circuits.
DEVELOPMENTAL MILESTONES OF INFANTS ADMITTED TO FACILITY BASED NEWBORN CARE UNIT: A CASE CONTROL STUDY

Dr Nimali Singh

Introduction: Facility-based newborn care has been identified as important in treating the sick child and providing support to low-birth-weight babies immediately after birth. This care is crucial in determining child survival. Developmental milestones act as checkpoints in a child's development to determine what the average child is able to do at a particular age. Facility-based care includes essential care at birth and care of sick babies, which helps infants grow at a normal rate and live a healthy life. This study was an attempt to compare normal infants and infants admitted to FBNC units.

Results: The results of the study indicate that there was no significant difference between the two groups (FBNC and Normal) with regard to cognitive development. Ninety percent of infants in both groups gave a social smile at one month; neck holding at 3 months was seen in around 37% in both groups; standing with support was seen in 22% of infants at 9 months and 4% at 1 year; similarly, speaking monosyllables was seen in around 20% at 9 months and 4% at 1 year in both groups. Although there was no significant difference between the two groups, the cognitive development indicators were not up to the acceptable standard of development for age.

Introduction

Every year, 70 percent of neonatal deaths take place because simple yet effective interventions do not reach those who need them most. In India, the majority of child deaths occur during the newborn period (within the first 28 days after birth), with a neonatal mortality rate (NMR) of 32 per 1000 live births in 2011 (CHERG, 2012). Thirty-five percent of newborn deaths are caused by complications of premature birth, with surviving newborns facing a lifetime of disability, including learning disabilities and visual and hearing problems. Birth complications and septicaemia contribute 23% each towards newborn mortality; these three causes together account for 80% of total newborn mortality (Ministry of Health and Family Welfare Annual Report 2013-14). Health experts agree that the Millennium Development Goal to reduce child mortality by two-thirds between 1990 and 2015 cannot be reached unless neonatal mortality is halved. As per the executive summary of the Lancet neonatal survival series (2005), intervention coverage is low, progress in scaling up is slow, and inequity is high. The gap is due to poor coverage within the health system, a shortage of health care providers, and issues related to access to referral services. About 70% of the 4 million newborns that die each year could be saved by low-cost, low-tech interventions. During the last two decades, newborn survival has gained global attention and various efforts have been undertaken by governments to improve newborn health and reduce mortality. "Facility based neonatal care" is an attempt by the Government of India to strengthen neonatal care provision. Facility-based care includes essential care at birth and care of sick babies in different facilities, with a stratification of levels devised according to the ability of the units to handle cases. While it is desirable to see babies receiving care in appropriate facilities, designing such a model and operationalizing it within the health system is a challenge.
Facility-based neonatal care consists of newborn care undertaken in a health facility/hospital and includes Essential Newborn Care (ENC) immediately after delivery (drying and wrapping the baby, thermal control, initiating early breastfeeding, resuscitation for babies unable to breathe, and monitoring the baby until discharge from hospital). If danger signs are detected in hospital or after discharge, the baby requires sick newborn care, which includes antibiotics for sepsis and other infections, fluid management, and treatment of jaundice and respiratory distress. Such babies need strict round-the-clock monitoring and have to be referred to a higher centre if there are no signs of improvement (Pandya, 2013). Developmental milestones also act as checkpoints in a child's development to determine what the average child is able to do at a particular age, and they help in identifying potential problems with delayed development. Developmental milestones are behaviours or physical skills seen in infants and children as they grow and develop; rolling over, crawling, walking and talking at the right age are all considered developmental milestones. Facility-based care includes essential care at birth and care of sick babies in different facilities, which helps infants grow normally and live a healthy life. The present study is an attempt to compare normal infants and infants admitted to FBNC, to identify whether these infants develop with normal milestones or lag behind, and to assess child survival.

Methodology

The present study was undertaken to evaluate the status of infants admitted to the Special Newborn Care Unit (SNCU)/Facility Based Newborn Care unit (FBNC) at the time of birth in Sawai Madhopur District of Rajasthan.

Sampling method: Newborns admitted to the SNCU over a period of one year were included in the study. The comparison group comprised normal children born during the same period, selected from the respective villages. An interview schedule was formulated to collect information on birth weight and on developmental milestones such as age of social smiling, neck holding, standing, etc.

Sample selection and sample size: Data on infants born and admitted to the FBNC/SNCU over a period of one year were taken from the district hospital, Sawai Madhopur. ASHA Sahyogini and AWW workers were approached to locate the infants, and the information on each child was then collected from the family. The status of the infant at birth was taken from hospital records, and data on children born during the same period in the same villages were collected from ASHA and AWW workers to serve as controls. The data were matched for age, sex and caste. A total of 2888 infants were tracked: 1444 admitted to the FBNC and 1444 from the normal population of the same villages in Sawai Madhopur District of Rajasthan.

Results and discussion

The FBNC is an approach to improve the status of newborn health in the country and contributes to child survival: Newborn Care Corners (NBCCs) are established at delivery points to provide essential newborn care, while Special Newborn Care Units (SNCUs) at district level and Newborn Stabilization Units (NBSUs) at Primary Health Centres (PHCs) provide care for sick newborns. The results from the present study indicate that, out of 2888 infants, 4% of the FBNC group were very-low-birth-weight babies, whereas there were none in the normal group.
Low-birth-weight babies made up around 55% of the FBNC group and 58% of the normal group; this percentage is very high in both groups. Babies with normal weight were around 41% in both groups. These data are alarming, as only about 40% of babies were born with a normal weight (Table 1). Data on preterm births were also recorded: the number of preterm infants was higher in the FBNC group, where around 32 infants were born prematurely, whereas in the normal group only 0.7% of infants were born preterm. The difference between the two groups was statistically highly significant.

The indicators for developmental milestones are given in Table 2. As this is a follow-up study, it was observed that there was no significant difference between the two groups (FBNC and Normal) with regard to the developmental indicators. Ninety percent of infants in both groups gave a social smile at one month; neck holding at 3 months was seen in around 37% in both groups; standing with support was seen in 22% of infants at 9 months and 4% at 1 year; similarly, speaking monosyllables was seen in around 20% at 9 months and 4% at 1 year in both groups. Although there was no significant difference between the two groups, the developmental indicators were not up to the acceptable standard for age.

Figure 3: In the age group of 10-12 months, 88% of children in the Normal group were able to stand with support, in comparison to 78.97% of children from the FBNC group. In both groups, FBNC (χ² = 44.70, df = 1) and Normal (χ² = 45.13, df = 1), age and the ability to stand with support were found to have a highly significant association at p < 0.001. When the association between the two variables was calculated across the two groups, it was found to be non-significant (χ² = 0.0045, df = 1) at the 1% significance level.

Figure 4: Age-wise percent distribution of infants by ability to speak monosyllables. The ability to speak monosyllables was present in a greater percentage of children in the 10-12 month age group than in the 7-9 month group, and the percentage of children from the Normal group (76.39%) speaking monosyllables was higher. The ability to speak monosyllables was found to be significantly dependent on age (χ² = 0.0045, df = 1) at the 5% significance level.
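For readers who want to reproduce this kind of association test, the sketch below runs a χ² test of independence on a 2×2 table; the counts are made up for illustration and are not the study's data.

```python
# Hypothetical 2x2 table: age group vs. ability to stand with support.
from scipy.stats import chi2_contingency

table = [[310, 90],    # 10-12 months: able / not able (illustrative counts)
         [160, 240]]   # 7-9 months:   able / not able (illustrative counts)

chi2, p, df, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.3g}")
```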
Discussion

Developmental milestones are specific skill attainments occurring in a predictable sequence over time, reflecting the interaction of the child's developing neurological system with the environment. Skills can be grouped into sectors of development: gross motor, fine motor (including self-care), communication (speech, language and nonverbal), cognitive, and social-emotional (Dosman, Andrews, and Goulden, 2012). Developmental screening is both effective and feasible if potential barriers are addressed adequately (Schonwald et al., 2009). The future of human societies depends on children being able to achieve their optimal physical and psychological development. Developmental delay is failure to acquire age-appropriate functionality; it may involve one or more streams of development. Responsive parenting has the potential to promote better development (Aly, Taj and Ibrahim, 2010).

Early identification of children with developmental delays or at risk of delay allows for referral to early intervention services, which have been shown to improve developmental and behavioral outcomes. Barriers to screening that have been identified previously include lack of clinician knowledge and training, lack of adequate reimbursement for conducting screening, and the need to develop clinical workflow plans carefully (Earls and Hay, 2006). Primary health physicians are in the best position to provide this assistance, as they can monitor a child's development longitudinally and understand the child's developmental trajectory better. The current strategy employed by the majority of primary-care providers to monitor this trajectory is termed 'developmental surveillance': "a flexible, continuous process whereby knowledgeable professionals perform skilled observations of children during the provision of health care" (Aly, Taj and Ibrahim, 2010). Comparison of a child's current developmental skills to milestone data remains the most frequently reported method of developmental surveillance for physicians in practice, in conjunction with the clinical assessment and the physical examination of the child (Sices et al., 2003 and Sand et al., 2005). The WHO Multicentre Growth Reference Study collected longitudinal data to describe the attainment of six gross motor milestones by children aged 4 to 24 months in Ghana, India, Norway, Oman and the USA (N = 816) and reported that around 90% of children achieved five of the milestones following a common sequence, while 4.3% did not exhibit hands-and-knees crawling. The six milestone windows have age overlaps but vary in width; the narrowest is sitting without support (5.4 mo), and the widest are walking alone (9.4 mo) and standing alone (10.0 mo). The estimated 1st and 99th percentiles in months are: 3.8, 9.2 (sitting without support); 4.8, 11.4 (standing with assistance); 5.2, 13.5 (hands-and-knees crawling); 5.9, 13.7 (walking with assistance); 6.9, 16.9 (standing alone); and 8.2, 17.6 (walking alone) (WHO, 2006a). According to WHO (2006b), there is little or no relationship between physical growth and motor development in the population studied. The literature indicates that growth retardation is related to delayed motor development, perhaps because of common causes such as nutritional deficiencies and infections, but in healthy children, as found there, size and motor development are not linked.

In the current study, it was observed that infants in this particular region were not faring well on the developmental indicators. The survival data indicated that the largest number of deaths in the FBNC group occurred at 1 month of age (25), with further deaths reported at 3, 6 and 9 months (18, 13 and 13, respectively), while no such deaths were reported in the normal population. This shows that, although infants in the FBNC group presented with equivalent developmental indicators, they were still more vulnerable, and hence deaths were observed among them.

Conclusion

It can be concluded that children admitted to the FBNC are neither ahead of normal children nor lagging behind them in developmental milestones. The FBNC is a step forward that provides critical care during critical stages, which is beneficial for improving child survival.
ERK3/MAPK6 dictates CDC42/RAC1 activity and ARP2/3-dependent actin polymerization

The actin cytoskeleton is tightly controlled by RhoGTPases, actin-binding proteins and nucleation-promoting factors to perform fundamental cellular functions. We have previously shown that ERK3, an atypical MAPK, controls IL-8 production and chemotaxis (Bogucka et al., 2020). Here, we show in human cells that ERK3 directly acts as a guanine nucleotide exchange factor for CDC42 and phosphorylates the ARP3 subunit of the ARP2/3 complex at S418 to promote filopodia formation and actin polymerization, respectively. Consistently, depletion of ERK3 prevented both basal and EGF-dependent RAC1 and CDC42 activation, maintenance of F-actin content, filopodia formation, and epithelial cell migration. Further, ERK3 protein bound directly to the purified ARP2/3 complex and augmented polymerization of actin in vitro. ERK3 kinase activity was required for the formation of actin-rich protrusions in mammalian cells. These findings unveil a fundamentally unique pathway employed by cells to control actin-dependent cellular functions.

Introduction

Actin is one of the most abundant and highly conserved proteins, with over 95% homology among all isoforms (Vandekerckhove and Weber, 1979). In cells, it is present in globular/monomeric form (G-actin), which can polymerize into branched and elongated filamentous forms (F-actin) in a dynamic, spatially and temporally controlled process (Skruber et al., 2018; Lee and Dominguez, 2010; Carlsson, 2006; Carlier et al., 1997). The cytoplasmic actin network constitutes an important part of the cytoskeleton, which not only mechanically supports the plasma membrane and gives the cell its shape but also fulfils a variety of other functions: it regulates the velocity and directionality of cell migration, enables intracellular signaling and transport, supports cell division, and more (Yamaguchi and Condeelis, 2007; dos Remedios et al., 2003). Moreover, by branching and bundling of F-actin filaments, the actin cytoskeleton can form unique structures such as lamellipodia and filopodia, which in epithelial cells regulate cell polarization and are crucial for contact with the environment, while in other cell types these actin-rich structures control motility, chemotaxis, and haptotaxis (Vasioukhin et al., 2000; Raich et al., 1999; Millard and Martin, 2008; Khurana and George, 2011; Bahri et al., 2010).

The small RhoGTPases belong to the Ras homologous (Rho) superfamily of GTP-binding proteins (Hodge and Ridley, 2016). They function as binary molecular switches cycling between an active (GTP-bound) and an inactive (GDP-bound) state. The on-off regulation of the RhoGTPases is under the control of guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (GAPs) (Hodge and Ridley, 2016). GTP-bound active CDC42 and RAC1 promote the formation of filopodia and lamellipodia, respectively, by activating the nucleation-promoting factors (NPFs) WASP and WAVE (Rohatgi et al., 1999; Carlier et al., 1999). Recent studies revealed a role for kinases in directly activating RhoGTPases, mainly through phosphorylation, but the identity of the relevant kinase(s) remained elusive (Tong et al., 2013; Forget et al., 2002; Chang et al., 2011; Tu et al., 2003; Ren et al., 2001). ERK3 (MAPK6) is an atypical member of the mitogen-activated protein kinase (MAPK) family.
Its physiological roles are tissue-specific and can be both dependent on and independent of its catalytic activity: kinase-dependent roles of ERK3 include promoting lung cancer cell migration and invasion, while its function in the regulation of breast cancer cell morphology and migration has been shown to be partially kinase-independent (Al-Mahdi et al., 2015; Bogucka et al., 2021; Bogucka et al., 2020; Elkhadragy et al., 2020). Moreover, ERK3 is a labile protein whose rapid turnover has been implicated in cellular differentiation (Bogucka et al., 2021; Coulombe et al., 2003). Although the specific signaling mechanisms involved in the regulation of ERK3 have been studied by us and others, providing tissue-specific insights into the function of this atypical MAPK (Bogucka et al., 2021; Elkhadragy et al., 2020; El Merahbi et al., 2020; Elkhadragy et al., 2017; Long et al., 2012; Sauma and Friedman, 1996), many aspects of ERK3 biology, such as its role in suppressing melanoma cell growth and invasiveness, remain elusive (Chen et al., 2019). It is, however, known that ERK3 possesses a single phosphorylation site at serine 189 (S189) within the S-E-G activation motif located in its kinase domain, which is phosphorylated by group I p21-activated kinases (PAKs), the direct effectors of the RAC1 and CDC42 GTPases (Déléris et al., 2011; De la Mota-Peynado et al., 2011; Coulombe and Meloche, 2007). ERK3 signaling is required for the motility and migration of different cancer cells, yet evidence for a direct role in the regulation of the polarized phenotype of cells and the actin cytoskeleton has been lacking. Here, we unveil a multilayered role for ERK3 in regulating actin filament assembly. We demonstrate that ERK3 controls bundling of actin filaments into filopodia via activation of CDC42. Mechanistically, ERK3 directly binds RAC1 and CDC42 in a nucleotide-independent manner and is required for the activity of both RhoGTPases in vivo. Furthermore, ERK3 can function as a direct GEF for CDC42 in vitro. In addition, ERK3 stimulates ARP2/3-dependent actin polymerization and maintains F-actin abundance in cells via direct binding and phosphorylation of the ARP3 subunit at S418. While kinase activity is not required for CDC42 activation, RAC1 activity partially depends on ERK3 kinase function. The in vivo relevance of these ERK3 functions was corroborated by analyses of the motility of tumor cells in orthotopic mammary tumors grown in mice. Together, our results establish a hitherto unknown regulatory pathway, directly controlled by ERK3 and involving the major signaling molecules CDC42, RAC1, and ARP2/3, in the modulation of the actin cytoskeleton and cell migration.

Results

ERK3 is required for motility of mammary epithelial cells

ERK3 has been shown to regulate mammary epithelial cancer cell migration and metastasis (Al-Mahdi et al., 2015; Bogucka et al., 2020). To uncover the molecular underpinnings of the role of ERK3 in cell migration, we investigated the intracellular localization of ERK3 by immunocytochemical analyses using a validated antibody (Bogucka et al., 2021; Bogucka et al., 2020). Endogenous ERK3 co-localized with the F-actin-rich protrusions in human mammary epithelial cells (HMECs) (Figure 1A). Reorganization of actin protrusions at the leading edge of the cell initiates cell migration (Yamaguchi and Condeelis, 2007; Affolter and Weijer, 2005); we therefore further analyzed the significance of ERK3 for cancer cell motility by loss-of-function studies.
Depletion of ERK3 reduced the speed and displacement length (the distance between the start point and the end position of the cell along each axis) of MDA-MB231 cells. As shown in Figure 1C, depletion of ERK3 strongly reduced random cell motility, which remained low over time, as indicated by the calculated acceleration (Figure 1D), and concomitantly resulted in a shorter overall displacement length (Figure 1E), with shorter migration track length and average speed (Figure 1-figure supplement 1A and B). Considering that the responses of tumor cells depend on their microenvironment, we also tested the in vivo motility of these cells. Intravital imaging of orthotopic mammary tumors confirmed reduced motility of the ERK3 knockdown MDA-MB231-GFP cells (Figure 1G and Videos 5 and 6). Together, these data suggest that ERK3 likely controls actin cytoskeleton dynamics, thereby influencing cell shape, motility, and polarized migration.

ERK3 is activated and required for EGF-mediated directional migration in epithelial cells

To further elucidate the physiological role of ERK3 in breast epithelial cell motility, we assessed EGF-induced chemotactic responses to evaluate the directional migratory properties of control and ERK3-depleted primary HMECs and metastatic MDA-MB231 cells. Interestingly, knockdown of ERK3 significantly attenuated the EGF-induced chemotaxis of both cell types (Figure 1-figure supplement 1C-F, respectively). Considering that ERK3 participated in the EGF-mediated chemotactic responses of both primary and oncogenic mammary epithelial cells, we tested ERK3 protein kinetics in response to the growth factor. EGF induced serine 189 (S189) phosphorylation of ERK3 at early time points in primary epithelial cells (Figure 1-figure supplement 2A and B) and MDA-MB231 cells (Figure 1-figure supplement 2C and D). A slight increase in ERK3 protein levels was detected in HMECs upon EGF treatment (Figure 1-figure supplement 2A and B). Further cycloheximide (CHX) chase experiments coupled with RT-PCR analyses suggested that EGF treatment did not increase the ERK3 protein half-life (Figure 1-figure supplement 2E-H). Taken together, these data confirmed that ERK3 was activated in response to EGF and required for EGF-mediated directional migration of epithelial cells.

Figure 1. ERK3 localizes to the F-actin-rich protrusions and regulates mammary epithelial cell motility. (A) Confocal analyses of control (shCo) and ERK3 knockdown (shERK3) human mammary epithelial cells (HMECs) co-stained with anti-F-actin (green) and anti-ERK3 antibodies (red). Hoechst staining (blue) was used to visualize cell nuclei. Scale bars 28 µm. Higher magnification images of the boxed areas are shown on the right. Red and white arrows exemplify lamellipodia and filopodia, respectively. (B-F) Single-cell tracking analyses of control (shCo) and ERK3 knockdown (shERK3) MDA-MB231-GFP cells. (B) Representative images of the cell migration assay were taken at the start (top panel) and end point (20 hr) (lower panel) of the full track length. Random fields were selected, and boxed areas were magnified on the right. Arrows indicate exemplified cell tracking at the beginning and at the end of the tracking. (C-E) Violin plots present cell distribution according to the analyzed motility parameters, calculated as described in the 'Materials and methods' section. Plotted are median (solid line) and 25%/75% quartiles (dashed lines).
(C) Speed (µm/s), (D) acceleration (µm/s²), and (E) displacement length (µm) of 14,659 single cells (n = 14,659) are depicted. Significance was determined using the nonparametric Mann-Whitney test. *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001. Analyses of the migration track length and average speed are depicted in Figure 1-figure supplement 1A and B. (F) Cell migration patterns of four randomly selected control and ERK3 knockdown cells were plotted as x-y trajectories over the 20 hr tracking. (G) Quantification of the intravital imaging of the control (shCo) and ERK3 knockdown (shERK3) orthotopic mammary tumors (MDA-MB231-GFP). The number of GFP-positive motile cells was quantified as described in the 'Materials and methods' section and is presented as mean ± SEM from seven animals (n = 7) per condition. Please see representative Videos 5 and 6. Chemotactic responses of the ERK3-depleted cells were assessed in the presence of EGF and are presented in Figure 1-figure supplement 1C-F. ERK3 protein kinetics and stability in response to EGF are depicted in Figure 1-figure supplement 2. (H-J) Depletion of ERK3 alters leading edge protrusions in primary mammary epithelial cells. (H) Confocal F-actin (green)/Hoechst (blue) and SEM image analyses of control and ERK3 knockdown HMECs are presented after 4 hr starvation. For the confocal analyses, exemplified filopodia fitting is included in the middle panel. (I) Western blot analyses presenting ERK3 knockdown efficiency and total levels of β-actin. Ponceau S was used as a loading control. (J) Filopodia number was quantified as described in the 'Materials and methods' section and is presented as mean ± SEM of 17 single cells (n = 17); *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, unpaired t-test. The effect of S189 phosphorylation of ERK3 on F-actin assembly is presented in Figure 1-figure supplement 3. The online version of this article includes the following source data and figure supplement(s) for figure 1: Source data 1. Prism and Excel file for Figure 1C-E. Source data 2. Excel file for Figure 1F. Source data 3. Prism and Excel file for Figure 1G. Source data 4. Full membrane scans for western blot images for Figure 1I. Source data 5. Prism and Excel file for Figure 1J. https://elifesciences.org/articles/85167/figures#video2

We then investigated the significance of S189 phosphorylation of ERK3 for F-actin and the formation of actin-rich protrusions. The ERK3-depleted HMECs (shERK3, 3′ UTR) presented in Figure 1A were transfected with V5-tagged wild-type ERK3 (ERK3 WT) or S189 mutants (ERK3 S189A/ERK3 S189D), and the expression of exogenous ERK3 was visualized by V5-tag staining. Co-staining of the cells with green phalloidin did not reveal any significant differences in F-actin organization upon overexpression of the S189 phosphorylation mutants compared to the wild-type ERK3-transfected cells (Figure 1-figure supplement 3). Moreover, we observed that exogenous ERK3 localized predominantly in the cytosol, in contrast to endogenous ERK3, which we detected at the cell edges and F-actin-rich protrusions (Figure 1A). These results prompted us to prefer ERK3 knockdown cells for the study of ERK3-mediated phenotypes.
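The motility parameters reported in Figure 1C-E (speed, acceleration, and displacement length) reduce to simple arithmetic on the tracked x-y coordinates. A minimal sketch of one way to compute them in Python, assuming a per-cell trajectory sampled at a fixed frame interval; the function name, the synthetic track, and the 30 min interval are illustrative and not taken from the tracking software used in the study:

    import numpy as np

    def motility_metrics(xy, dt):
        # xy: (T, 2) array of centroid positions in µm; dt: frame interval in s
        steps = np.diff(xy, axis=0)                # frame-to-frame displacement vectors
        step_len = np.linalg.norm(steps, axis=1)   # µm travelled per frame
        speed = step_len / dt                      # instantaneous speed (µm/s)
        accel = np.diff(speed) / dt                # change in speed (µm/s²)
        displacement = np.abs(xy[-1] - xy[0])      # start-to-end distance along each axis (µm)
        return speed.mean(), np.abs(accel).mean(), displacement, step_len.sum()

    # example: a synthetic 4-frame track imaged every 30 min (1800 s)
    track = np.array([[0.0, 0.0], [5.0, 2.0], [9.0, 7.0], [12.0, 7.5]])
    print(motility_metrics(track, dt=1800.0))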
ERK3 is required for actin-rich protrusions

In order to migrate, cells acquire morphological asymmetry to drive their locomotion. One of the first steps in breaking the spatial symmetry of the cell involves its polarization and the formation of membrane extensions (Theriot and Mitchison, 1991). In response to extracellular guidance, cells polarize and extend F-actin-rich protrusions at the leading edge to direct cell migration (Affolter and Weijer, 2005; Ridley et al., 2003; Lauffenburger and Horwitz, 1996). As shown in Figure 1A, ERK3 co-localized with F-actin at the edge of the cell and at the protruding filopodial spikes. Moreover, knockdown of ERK3 clearly reduced F-actin content and actin-rich protrusions in human primary mammary epithelial cells (Figure 1A). Further analyses of control and ERK3 knockdown HMECs at the single-cell level confirmed that ERK3-depleted cells had limited, diffuse F-actin concentrated in the cortical areas of the cells, and a significantly decreased number of filopodia (Figure 1G-J).

Videos 1-4 represent the motility of control (Videos 1 and 2) and ERK3-knockdown (Videos 3 and 4) MDA-MB231 cells: https://elifesciences.org/articles/85167/figures#video3, https://elifesciences.org/articles/85167/figures#video4. Videos 5 and 6 follow the motility of control (shCo) and ERK3 knockdown (shERK3) GFP-expressing MDA-MB231 cells: https://elifesciences.org/articles/85167/figures#video5.

ERK3-dependent regulation of RAC1 and CDC42 activity

RhoGTPases are the major membrane signaling transmitters that drive polarized cell asymmetry by locally regulating the formation of F-actin-rich protrusions (Hall, 1998; Nobes and Hall, 1995). EGF signaling triggers activation of RAC1 and CDC42, which link the signal to the actin cytoskeleton, leading to the formation of lamellipodia- and filopodia-like protrusions, respectively (Figure 2A; Kurokawa et al., 2004). Considering that loss of ERK3 significantly affected F-actin distribution and filopodia formation in primary mammary epithelial cells (Figure 1A and G-J), we tested the activity of the two major regulators of actin assembly, CDC42 and RAC1 (Figure 2B-G). ERK3 knockdown significantly decreased the levels of both basal and EGF-induced GTP-bound CDC42 and RAC1 in primary (HMEC) (Figure 2B, D, and F) and triple-negative breast cancer (MDA-MB231) mammary epithelial cells (Figure 2C, E, and G), respectively. HMECs were cultured in growth factor/hormone-enriched medium that also contained EGF. We therefore withdrew the supplements to test the effect of EGF alone on the activity of RAC1 and CDC42 in these cells and compared the responses with those of the cancer cells. Interestingly, we detected high levels of active RAC1 and CDC42 at steady state in HMECs, which underwent a rapid activation-inactivation cycle after starvation and restimulation with EGF alone. The inhibitory effect of ERK3 knockdown on RAC1 was not as striking as that on CDC42 (Figure 2D-G). Intriguingly, we observed co-precipitation of ERK3 protein with the active RhoGTPases at endogenous levels, suggesting that these proteins exist in a multimeric complex (Figure 2B and C). We expanded these analyses to different cell types and consistently found that depletion of ERK3 reduced the activity of RAC1 and CDC42 and that the active RhoGTPases specifically co-precipitated endogenous ERK3 (Figure 2-figure supplements 1 and 2).
Interestingly, the effect of ERK3 on RAC1 and CDC42 activity is either EGF-dependent (MCF7 cells [Figure

ERK3 directly binds to RAC1 and CDC42

In light of these observations, we tested whether any direct interaction existed between these RhoGTPases and ERK3. Employing purified full-length recombinant proteins, we found that ERK3 directly bound to RAC1 and CDC42 in a nucleotide-independent manner (Figure 2H and I). The nucleotide-loading status of the purified RAC1 and CDC42 proteins was simultaneously assessed by GST-PAK1-PBD pull-down assay followed by immunoblots (Figure 2-figure supplement 3). To further assess the affinity of the binding, we performed concentration-dependent protein binding assays with ELISA as described in the 'Materials and methods' section. Binding of full-length ERK3 to RAC1 and CDC42 could be detected in the low nanomolar range (5 nM) (Figure 2J and K). Next, we tested whether the kinase domain of ERK3 (amino acids [aa] 9-327) would be sufficient for this interaction. In vitro GST pull-down assays confirmed that the kinase domain of ERK3 was sufficient for its binding to RAC1 and CDC42 (Figure 2L and

ERK3 functions as a GEF for CDC42

The intrinsic GDP-GTP exchange of RhoGTPases is a slow process, which is stimulated in vivo by GEFs (Figure 3A; Hodge and Ridley, 2016). To examine the ability of the ERK3 kinase domain to stimulate GDP-GTP exchange, we incubated GDP-loaded RhoGTPases with non-hydrolysable GTPγS in the presence or absence of the ERK3 kinase domain and quantified the final amount of GTP-bound CDC42 (Figure 3B and C) and RAC1 (Figure 3D and E) using GST-PAK1-PBD fusion beads. The ERK3 kinase domain failed to interact directly with the GST-PAK1-PBD protein but bound to active CDC42 in the same assay, confirming that conformationally stable protein was employed in these assays (Figure 2-figure supplement 4). These experiments showed that although ERK3 regulated the activity of both RhoGTPases in cells, its kinase domain stimulated nucleotide exchange on CDC42 (Figure 3B and C). We further corroborated these results by measuring the GEF activity of the ERK3 kinase domain in vitro with a second, fluorophore-based assay. The results confirmed those of the previous assay, as addition of ERK3 stimulated GTP-loading on CDC42 (Figure 3F) but not on RAC1 (Figure 3G). Notably, ERK3 exerted a more potent and longer-lasting effect than Dbl's Big Sister (Dbs), a well-established GEF for CDC42 and RhoA (Figure 3F; Baumeister et al., 2006). To assess whether full-length ERK3 could potentiate the activation of CDC42 further, we compared the GEF activity of the ERK3 kinase domain with that of the full-length protein. Interestingly, full-length ERK3 exhibited weaker GEF activity in vitro than the kinase domain (Figure 3-figure supplement 1A). This in vitro discrepancy between the kinase domain and the full-length protein could be attributed to differences in posttranslational modifications arising from the different production methods of the two recombinant proteins, which might affect conformation and/or activity. The full-length protein employed in these assays was purified from Sf9 cells, while the kinase domain was purified from bacterial cells and therefore lacked, among other things, Ser189 phosphorylation and thus kinase activity in vitro (Figure 3-figure supplement 1B).
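Fluorophore-based exchange assays of this kind read out as fluorescence time series, and relative GEF activity is commonly summarized by fitting a single-exponential association to each trace and comparing the observed rate constants. A minimal sketch under that assumption; the synthetic traces and rate values are illustrative, and this is not the vendor's analysis routine:

    import numpy as np
    from scipy.optimize import curve_fit

    def exchange(t, f0, amp, kobs):
        # single-exponential association: F(t) = F0 + A * (1 - exp(-kobs * t))
        return f0 + amp * (1.0 - np.exp(-kobs * t))

    def fit_kobs(t, rfu):
        # initial guesses: starting fluorescence, total amplitude, slow rate
        p0 = (rfu[0], max(rfu.max() - rfu[0], 1e-6), 1e-3)
        popt, _ = curve_fit(exchange, t, rfu, p0=p0, maxfev=10000)
        return popt[2]  # observed exchange rate constant (1/s)

    t = np.arange(0.0, 1800.0, 30.0)  # one reading every 30 s
    rng = np.random.default_rng(0)
    basal = exchange(t, 100.0, 40.0, 5e-4) + rng.normal(0.0, 0.5, t.size)
    plus_gef = exchange(t, 100.0, 40.0, 4e-3) + rng.normal(0.0, 0.5, t.size)
    print(fit_kobs(t, basal), fit_kobs(t, plus_gef))  # fold stimulation = ratio of rates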
We next examined whether ERK3 is required for the localization and activity of CDC42 at the plasma membrane (PM). Cellular fractionation assays revealed that knockdown of ERK3 does not disrupt the localization of CDC42 and RAC1 to the PM (Figure 4A-C). On the contrary, the PM fraction from ERK3-depleted HMECs had more total CDC42 and RAC1 than that from control cells. Interestingly, a subsequent pull-down of active CDC42/RAC1 from the PM fraction revealed that although both RhoGTPases are present at the plasma membrane in the absence of ERK3, their activity is significantly reduced (Figure 4B and C). These results were corroborated by the colocalization of ERK3 with CDC42 at the protrusions of the leading edge of the cell (Figure 4D-G), with a mean Pearson's correlation coefficient (PCC, r) of 0.6638 ± 0.02946 and a Spearman's rank correlation coefficient (SRCC, ρ) of 0.7122 ± 0.02586 (Figure 4F and G, respectively). As ERK3 protein co-precipitated with active RAC1 and CDC42 (Figure 2B and C), we further investigated whether components of the ARP2/3 complex also precipitate with RAC1/CDC42. Interestingly, we readily detected the ARP2/3 complex subunits ARP3, ARP2, and ARPC1A, as well as ERK3, by immunoblots in active RAC1/CDC42 pull-downs (Figure 5B). Consistent with the reduction in active RAC1 and CDC42 levels, we observed that knockdown of ERK3 by CRISPR/Cas9 or with si/shRNAs led to a reduction in F-actin staining in mammary epithelial cells, as shown in Figure 5C and Figure 5-figure supplement 1A.

Figure 2 legend (continued): ERK3 with active RAC1 and CDC42. Relative levels of (D, E) active CDC42 and (F, G) RAC1 were calculated with respect to the total protein levels and are presented as mean ± SEM from a minimum of three (n = 3) independent experiments; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, one-way ANOVA, Tukey's post-test. ERK3-dependent regulation of RAC1 and CDC42 activity was assessed in multiple cell types, and the data are presented in Figure 2-figure supplements 1 and 2. (H, I) The in vitro interaction between ERK3 and GDP/GTP-bound (H) RAC1 and (I) CDC42 was assessed; GST was used as a negative control. Pull-down efficiency was assessed with a GST antibody, and the levels of bound ERK3 were verified. The nucleotide-loading status of the purified RAC1 and CDC42 proteins is presented in Figure 2-figure supplement 3. (J, K) Concentration-dependent binding affinity of ERK3-GST protein to (J) RAC1 and (K) CDC42 was determined. Interacting ERK3/GST proteins were used at 5, 10, 20, and 40 nM concentrations. Data from three independent experiments run in triplicate are presented as mean ± SEM. (L) The ERK3 kinase domain (aa 9-327) binds to RAC1 and CDC42. Representative analysis of the in vitro GST pull-down of RAC1 and CDC42 and the interaction with the ERK3 kinase domain recombinant protein. Binding of ERK3 (aa 9-327) to PAK1-PBD was verified and is presented in Figure 2-figure supplement 4. The online version of this article includes the following source data and figure supplement(s) for figure 2: Source data 1. Full membrane scans for western blot images for Figure 2B, C, H, I, and L. Source data 2. Prism and Excel file for Figure 2D. Source data 3. Prism and Excel file for Figure 2E. Source data 4. Prism and Excel file for Figure 2F. Source data 5. Prism and Excel file for Figure 2G. Source data 6. Prism and Excel file for Figure 2J. Source data 7. Prism and Excel file for Figure 2K.
Figure 3 legend (continued): blot analyses of active RAC1 and CDC42 pull-down, respectively, using PAK1-PBD fusion beads. Levels of active RhoGTPases were detected using RAC1- and CDC42-specific antibodies. Levels of PAK1-PBD protein were detected by Ponceau S staining and used for the quantification presented in (C) and (E). Fold change in GTP-loading was calculated by normalizing the signal obtained for the samples with GTPγS in the presence of ERK3 to samples incubated with GTPγS alone and is presented as mean ± SEM from (D) five (n = 5) independent experiments; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, unpaired t-test. (F, G) An in vitro RhoGEF activity assay was performed to assess guanine nucleotide exchange activity on (F) CDC42 and (G) RAC1 in the presence and absence of recombinant ERK3 protein (aa 9-327). After six initial readings, Dbs-GEF or ERK3 protein was added at 0.5 µM final concentration. GEF activity is expressed as mean relative fluorescence units (RFUs) from at least three independent experiments. RhoGEF Dbs protein was used as a positive control. The in vitro GEF activity of full-length ERK3 versus the ERK3 kinase domain towards CDC42 was compared and is presented in Figure 3-figure supplement 1A. The online version of this article includes the following source data and figure supplement(s) for figure 3: Source data 1. Full membrane scans for western blot images for Figure 3B and D. Source data 2. Prism and Excel file for Figure 3C. Source data 3. Prism and Excel file for Figure 3E. Source data 4. Prism and Excel file for Figure 3F. Source data 5. Prism and Excel file for Figure 3G.

To further quantify these effects and corroborate the role of ERK3 in the polymerization of actin filaments, we evaluated the ratio of cytoskeleton-incorporated filamentous (F-) and globular/monomeric (G-) actin in control and ERK3-depleted mammary epithelial cells by ultracentrifugation. Knockdown of ERK3 significantly shifted total F-actin towards G-actin in both primary HMECs (Figure 5D and E) and MDA-MB231 cancer cells (Figure 5-figure supplement 1B and C). Moreover, in the absence of CDC42, we could still detect colocalization between endogenous ERK3 and the ARP3 subunit of the ARP2/3 complex (Figure 5-figure supplement 2A-D). These data prompted us to test whether ERK3 directly controls ARP2/3-dependent actin polymerization in the absence of RAC1 and CDC42. In vitro, the ARP2/3 complex has low intrinsic actin-nucleating activity; in vivo, its activity is tightly controlled by NPFs, which stimulate conformational changes (Mullins et al., 1998; Welch et al., 1998). To determine whether ERK3 had any direct effect on the assembly of actin filaments, we used a pyrene-actin polymerization assay to monitor fluorescence kinetics over time as actin filaments assembled in the presence of the purified ARP2/3 complex. We used the VCA domain of WASP protein as a positive control. Indeed, recombinant ERK3 stimulated ARP2/3-dependent actin polymerization at a nanomolar concentration (5 nM) in the presence of 10 nM of the ARP2/3 complex (Figure 5F). Moreover, the nucleation-promoting activity of ERK3 was comparable to that of WASP (VCA), a well-known NPF. ERK3 protein alone did not exert any stimulatory effect on actin nucleation (Figure 5F). These data suggest that ERK3 could function as a nucleation-promoting factor that stimulates ARP2/3-dependent actin polymerization.
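Pyrene traces such as those in Figure 5F are background-subtracted and normalized to the first time point of each group; one common additional summary is the maximal polymerization rate, i.e., the steepest slope of the normalized curve. A minimal sketch of such processing; the sigmoidal trace and the window size are illustrative assumptions:

    import numpy as np

    def normalize_trace(rfu, buffer_rfu):
        # background-subtract, then express as fold change over the first time point
        corrected = np.asarray(rfu, float) - np.asarray(buffer_rfu, float)
        return corrected / corrected[0]

    def max_rate(t, trace, window=5):
        # steepest local slope of the curve, from linear fits over a sliding window
        slopes = [np.polyfit(t[i:i + window], trace[i:i + window], 1)[0]
                  for i in range(len(t) - window + 1)]
        return max(slopes)

    t = np.arange(0.0, 3600.0, 60.0)                             # 1 hr at 60 s intervals
    raw = 100.0 + 300.0 / (1.0 + np.exp(-(t - 1500.0) / 300.0))  # illustrative sigmoidal trace
    fold = normalize_trace(raw, buffer_rfu=20.0)
    print(max_rate(t, fold))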
Additionally, we measured the effect of having both ERK3 and WASP (VCA) present on ARP2/3-dependent actin polymerization. Using the same concentrations as for the initial screening (ERK3 at 4.8 nM and WASP [VCA] at 400 nM), we did not detect any additive effect when both proteins were combined (Figure 5-figure supplement 3).

ERK3 directly binds to ARP3

The ARP2/3 complex is composed of seven subunits: the actin-related proteins ARP2 and ARP3 and five associated proteins (ARPC1-5) that sequester ARP2 and ARP3 in the complex's inactive conformation (Robinson et al., 2001; Nolen et al., 2004; Figure 6A). In vitro, NPFs such as WASP stimulate actin polymerization by directly binding to and activating ARP2/3. Considering that ERK3 exerted a similar effect on ARP2/3-dependent actin polymerization as NPFs, we further investigated the mode of interaction between ERK3 and the ARP2/3 protein complex by employing purified components. Indeed, using quantitative ELISAs we found that full-length ERK3 bound to the ARP2/3 complex with high affinity in vitro (Figure 6B). ARP2 and ARP3 are essential for the nucleating activity of the ARP2/3 complex and occupy the central position in actin nucleation. Structural modeling has suggested that although ARP2 and ARP3 interact with each other in the inactive conformation, they are not in the right configuration to act as a seed for actin nucleation. After activation by NPF binding, the heterodimer mimics two actin subunits. Binding of an actin monomer then initiates the formation of the actin nucleus and the polymerization process (Mullins et al., 1998; Gournier et al., 2001; Kelleher et al., 1995). Extensive characterization of the individual subunits also revealed that ARP3 is crucial for the stimulation and activity of the ARP2/3 complex and for nucleation in vitro (Gournier et al., 2001). Considering that ARP3 is a structural component regulating the nucleating properties of the ARP2/3 complex, we further determined whether ERK3 can directly bind ARP3. Through GST pull-down experiments with purified recombinant proteins, we could indeed detect a concentration-dependent direct binding between full-length ERK3 and ARP3 (Figure 6C). These results were further confirmed by ELISA analyses, suggesting a high-affinity binding between these two proteins (Figure 6D). The ERK3 kinase domain also binds directly to the ARP3 protein, albeit with lower affinity than the full-length protein (Figure 6-figure supplement 1A). Finally, we also detected ERK3 co-precipitating with ARP3 and ARP2 in HMECs at endogenous levels (Figure 6E).
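Concentration-dependent ELISA signals like those in Figures 2J-K, 6B, and 6D are often summarized by fitting a one-site specific-binding model to absorbance versus ligand concentration to estimate an apparent Kd. A minimal sketch under that model assumption; the absorbance values are illustrative, not the measured data:

    import numpy as np
    from scipy.optimize import curve_fit

    def one_site(conc, bmax, kd):
        # one-site specific binding: Abs = Bmax * [L] / (Kd + [L])
        return bmax * conc / (kd + conc)

    conc = np.array([5.0, 10.0, 20.0, 40.0])      # nM, the concentrations used for ERK3/GST
    abs450 = np.array([0.21, 0.35, 0.52, 0.68])   # illustrative absorbance readings
    popt, _ = curve_fit(one_site, conc, abs450, p0=(1.0, 20.0))
    print(f"Bmax = {popt[0]:.2f} AU, apparent Kd = {popt[1]:.1f} nM")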
Figure 4 legend (continued): instructions (Cat# 16118/19, Thermo Fisher) and as described in the 'Materials and methods' section. (B) Cellular fractionation was performed. Expression levels of ERK3, RAC1, and CDC42 were assessed in nuclear (N), cytosolic (C), and PM fractions. Na/K ATPases and GAPDH were used as controls for the analyzed fractions. (C) Active RAC1/CDC42 pull-down was performed from the isolated PM fraction. (D-G) Colocalization of ERK3 and CDC42 in polarized cells. Human mammary epithelial cells (HMECs) were seeded and cultured on cover slips. When cells became around 70% confluent, scratch wounds were introduced to the cover slip using a 200 µl tip, the medium was exchanged to supplement-free, and cells were cultured for an additional 6 hr. Afterward, cells were fixed and subjected to IF staining as described in the 'Materials and methods' section with anti-CDC42 (red) (secondary antibody: anti-mouse Cy3; Cat# A10521, Thermo Fisher Scientific) and anti-ERK3 (green) (secondary antibody: Alexa Fluor 488; Cat# A11008, Thermo Fisher Scientific) antibodies. Merged images show the colocalization of both proteins in yellow. Magnification of the boxed regions is shown on the right for better visualization. In (D), a group of cells at the scratch site is presented, and ERK3-CDC42 colocalization is marked with red arrows at the cell protrusions and with white arrows at the cell body. (E) Images representing ERK3-CDC42 colocalization at the single-cell level at the scratch site. Scale bars 28 µm. (F, G) Colocalization of ERK3 and CDC42 was analyzed as described in the 'Materials and methods' section, and values for the (F) Pearson's correlation coefficient as well as the (G) Spearman's rank correlation coefficient are presented for eight randomly selected cells (n = 8). Scores above 0 indicate a tendency towards colocalization, with perfect colocalization at a score of 1. The online version of this article includes the following source data for figure 4: Source data 1. Full membrane scans for western blot images for Figure 4A-C. Source data 2. Prism and Excel file for Figure 4F and G.

ERK3 phosphorylates ARP3 at S418

Conformational changes activating ARP2/3 are induced by binding to NPFs, actin, and ATP. Additionally, phosphorylation of various ARP2/3 residues has previously been reported to play a crucial role in the conformational rearrangements within the ARP2/3 complex (Vadlamudi et al., 2004; Singh et al., 2003; Narayanan et al., 2011; LeClaire et al., 2008; Choi et al., 2013). Most of these studies focused on phosphorylation of the ARP2 subunit, which induces a conformational repositioning of this subunit toward ARP3, further allowing binding of NPFs and full activation of the ARP2/3 complex, leading to the formation of the ARP2-ARP3 heterodimer (Narayanan et al., 2011; LeClaire et al., 2008; Choi et al., 2013). Of the seven components of the ARP2/3 complex, ARP3, ARP2, and ARPC1A purified from Acanthamoeba castellanii were shown to be phosphorylated (LeClaire et al., 2008). Since ERK3 is a kinase, we assessed phosphorylation of the ARP2/3 complex by ERK3 by employing in vitro kinase assays and subsequent analysis of phosphopeptides by mass spectrometry. Initial analyses detected phosphorylation of the ARP3 subunit at serine 418 (S418) (peptide sequence: HNPVFGVMS; please see Supplementary file 1). We further validated the significance of the detected phosphorylation in vivo by overexpressing the non-phosphorylatable (S418A) and phospho-mimicking (S418D) mutants of ARP3 in primary mammary epithelial cells and analyzing cell morphology and F-actin abundance. Cells transduced with wild-type (WT) ARP3 or empty vector (EV) were used as reference controls (Figure 6F).

Figure 5 legend (continued): the V (verprolin-like) motif binds the actin monomer (G-actin); the C (central) and A (acidic) domains bind and activate the ARP2/3 complex. Conformational changes induced by the binding of the ARP2/3 complex promote its binding to the actin filament, which is strengthened by the additional interaction of the ARP2/3 complex with WASP (VCA)-G-actin. Further conformational changes secure the ARP2/3 complex on the filament and allow its binding to the actin monomer and the polymerization of the newly nucleated filament. Actin polymerizes at the fast-growing/barbed end, elongating toward the plasma membrane, and the ARP2/3 complex cross-links the newly polymerizing filament to the existing filament.
(B) ERK3 co-precipitates with active RAC1 and CDC42 in complex with ARP2/3. Active RAC1/CDC42 pull-down was performed using control and ERK3 knockdown human mammary epithelial cells (HMECs). Levels of active RAC1 and CDC42 were assessed, as well as the co-immunoprecipitation levels of ERK3, ARP2, ARP3, and ARPC1A. Total protein expression was evaluated in the total cell lysates (TCL), and Ponceau S staining was used as a loading control. (C-F) ERK3 regulates F-actin levels in vitro and in vivo. (C) Western blot analyses of control (CRISPR Co) and ERK3-depleted (CRISPR ERK3) HMECs are presented alongside representative confocal images of F-actin staining. (D, E) In vivo analysis of F- and G-actin levels in HMECs upon ERK3 knockdown. (D) Representative western blot analyses of the enriched F- and G-actin fractions, as well as the ERK3 knockdown validation and total actin levels in the TCL, are presented. (E) F- and G-actin levels were quantified, and ratios were calculated from five (n = 5) independent experiments and are presented as mean ± SEM; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, unpaired t-test. Analyses of the ERK3-dependent regulation of F-actin levels in cancerous MDA-MB231 cells are presented in Figure 5-figure supplement 1. Cellular colocalization between endogenous ERK3 and ARP2/3 was assessed in the absence of CDC42 and is presented in Figure 5-figure supplement 2. (F) The effect of full-length ERK3 on ARP2/3-dependent pyrene actin polymerization was assessed using a pyrene actin polymerization assay. Polymerization induced by the VCA domain of WASP, which served as a positive control (green), as well as by ARP2/3 (orange) and ERK3 protein alone (blue), is shown for reference. Actin alone (black) was used to establish a baseline of polymerization. Fluorescence at 360/415 was measured over time and is presented as mean fold change from at least three independent experiments after normalization to the first time point within the respective group. ARP2/3-dependent actin polymerization was measured in the presence of both ERK3 and the WASP (VCA) domain, and the results are depicted in Figure 5-figure supplement 3. The online version of this article includes the following source data and figure supplement(s) for figure 5: Source data 1. Full membrane scans for western blot images for Figure 5B and C and Figure 4D. Source data 2. Prism and Excel file for Figure 5E. Source data 3. Prism and Excel file for Figure 5F.

We were able to validate the phosphorylation of ARP3 at Ser418 in ARP3-overexpressing cells (Figure 6G) and detected a decrease in S418 phosphorylation in ERK3-depleted cells (Figure 6H) and in kinase-dead (KD) ERK3-overexpressing cells (Figure 6-figure supplement 1B). Of note, we observed that expression of exogenous ARP3 reduced the protein levels of endogenous ARP3 (Figure 6G), possibly due to disruption of the protein complex, thereby affecting the stability of the endogenous protein. Strikingly, expression of the phosphorylation-mimicking ARP3 mutant (S418D) under these settings predominantly induced actin filament formation, as indicated by the intense phalloidin staining (Figure 6F), and enhanced the F/G-actin ratio in primary mammary epithelial cells (Figure 6I and J). Moreover, S418D-overexpressing cells exhibited F-actin-rich protrusions (Figure 6F, lower panel). In contrast, most of the S418A-expressing cells were smaller in size and had a round morphology (Figure 6F, upper panel). Quantification of F-actin levels revealed that, despite the morphological distortion, expression of non-phosphorylatable ARP3 (S418A) had no significant effect on overall F-actin levels (Figure 6I and J).
To further determine the relevance of S418 phosphorylation for ERK3-regulated, ARP2/3-dependent actin polymerization in cells, we depleted ERK3 in the ARP3 S418D-overexpressing HMECs using shRNA (Figure 6K-M).

Figure 6 legend (continued): by ELISA as described in the 'Materials and methods' section, and mean absorbance (Abs) ± SEM from three independent experiments is presented. (E) Co-immunoprecipitation (IP) of the ARP2/3 protein complex and ERK3 was performed in HMECs using an ARP3 antibody. Levels of precipitated ARP3 as well as the co-IP of ARP2 and ERK3 were assessed. An IgG control was included to determine the specificity of the interaction. Total cell lysate (TCL) was included to present expression levels of the verified interacting partners. Ponceau S staining was used as a loading control. (F, G) The actin phenotype of human mammary epithelial cells (HMECs) was validated upon stable overexpression of the non-phosphorylatable (S418A) and phospho-mimicking (S418D) ARP3 mutants, respectively. Wild-type (WT) ARP3 was used as a control for the mutants, and empty vector (EV) served as a negative control for the overexpression itself. (F) F-actin expression and organization in the negative (S418A) and phospho-mimicking (S418D) ARP3 mutants were visualized by green phalloidin and merged with Hoechst staining of the nuclei. Four representative confocal images are presented. Images of EV-transfected and ARP3 WT-overexpressing HMECs are presented as controls. (G) Western blot validation of the overexpression efficiency and phosphorylation of ARP3 at S418. An anti-V5-tag antibody was used to detect levels of exogenous ARP3 WT, S418A, and S418D. Expression levels of endogenous ARP3 were assessed, as well as phosphorylation at S418; total actin was validated. Ponceau S staining was used as a loading control. (H) Detection of S418 phosphorylation of ARP3 in the CRISPR ERK3 HMECs presented in Figure 5C and D. (I, J) The effect of ARP3 mutant overexpression on F-actin levels was quantified using an in vivo F/G actin assay. (I) Representative western blot analyses of F- and G-actin levels detected in fractions obtained from EV, ARP3 WT, S418A, and S418D HMECs. (J) Quantification of the F/G actin ratios was performed for three (n = 3) independent experiments and is presented as mean ± SEM; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, one-way ANOVA, Tukey's post-test. (K-M) Effect of ERK3 depletion on the dense F-actin phenotype of ARP3 S418D-overexpressing HMECs. HMECs stably overexpressing ARP3 S418D were transduced with lentiviral particles targeting ERK3 (shERK3), and a stable knockdown was established as described in the 'Materials and methods' section. Cells were further subjected to analyses of F-actin levels. (K) IF staining with Oregon Green Phalloidin 488 to visualize F-actin levels and organization. Scale bars 28 µm. (L, M) The effect of ERK3 knockdown on F-actin levels was quantified in the ARP3 S418D-overexpressing HMECs using an in vivo F/G actin assay. (L) Representative western blot analyses of F/G actin levels. ARP3 S418D (V5-tagged) overexpression and ERK3 knockdown efficiency were validated in TCL. Actin and Ponceau S staining were used as loading controls. (M) Calculated ratios of F/G actin are presented as mean ± SEM from three (n = 3) independent experiments; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, paired t-test.
Colocalization of endogenous ERK3 with the endogenous and exogenous ARP3 mutant (S418D) was verified, and the effect of ERK3 depletion on RAC1 and CDC42 activity was further assessed in ARP3 S418D-overexpressing HMECs; the results are presented in Figure 6-figure supplement 2. The online version of this article includes the following source data and figure supplement(s) for figure 6: Source data 1. Full membrane scans for western blot images for Figure 6A, C, E, G, I, and L. Source data 2. Prism and Excel file for Figure 6B. Source data 3. Prism and Excel file for Figure 6D. Source data 4. Prism and Excel file for Figure 6J. Source data 5. Prism and Excel file for Figure 6M.

Knockdown of ERK3 led to a significant reduction of F-actin levels in the ARP3 S418D-overexpressing cells (Figure 6K-M). These data suggested that although phosphorylation of ARP3 at S418 promotes actin polymerization, and thus the F-actin content, it is, unlike ERK3, not absolutely essential. A concomitant active RAC1/CDC42 pull-down revealed a significant decrease in RAC1 activity, with almost no effect on CDC42 (Figure 6-figure supplement 2A and B). We could still detect ARP3 and ARP2 in complex with CDC42 under these settings (Figure 6-figure supplement 2A). Using immunofluorescence analysis, we observed significant colocalization between endogenous ERK3 and ARP3 in HMECs (Figure 6-figure supplement 2C) and between endogenous ERK3 and overexpressed ARP3 S418D (Figure 6-figure supplement 2D-F), respectively. Further, depletion of ERK3, as expected, reduced the migration of ARP3 S418D-expressing cells (Figure 6-figure supplement 2G and H).

Kinase activity of ERK3 is necessary for membrane protrusions in primary mammary epithelial cells

ERK3 controls F-actin levels both by regulating new filament assembly via ARP3 binding and by regulating filament branching and bundling into actin-rich protrusions via binding to CDC42/RAC1. We therefore investigated whether the kinase activity of ERK3 is required for the formation of actin-rich protrusions and for F-actin levels in mammary epithelial cells. Control (shCo) and ERK3-depleted (shERK3, 3′ UTR) HMECs were reconstituted with WT or kinase-dead (KD) (K49A/K50A) ERK3, with EV introduced as a control (Figure 7A). Knockdown of ERK3 led to a decrease in actin-rich protrusions and overall F-actin staining (Figure 7B), with a concomitant decrease in F-actin abundance (Figure 7C and D). Complementation with ERK3 WT rescued overall F-actin levels and the protrusive phenotype of the HMECs (Figure 7B and D). Interestingly, KD ERK3 recovered cytoskeletal F-actin levels to the same extent as ERK3 WT (Figure 7C and D). However, although the abundance of F-actin was rescued upon KD ERK3 overexpression, the cells formed a dense meshwork of actin membrane ruffles that did not protrude into filopodia but rather curved around the edges of the cell, forming a tangled web (Figure 7B). These results indicate that the kinase activity of ERK3 is not required for the polymerization of actin, but rather for the bundling and/or branching of the rapidly polymerizing actin filaments. To further corroborate these observations, the WT and KD ERK3 proteins were expressed in the rabbit reticulocyte lysate (RRL) system and then employed to stimulate ARP2/3-dependent actin polymerization in vitro. The kinase activity of ERK3 did not appear to affect ARP2/3-dependent actin polymerization in vitro (Figure 7-figure supplement 1).
We performed active RAC1/CDC42 pull-downs from shERK3 (3′ UTR) HMECs reconstituted with either WT or KD ERK3 and found that the kinase activity of ERK3 was not required for the interaction with or activation of CDC42. However, RAC1 activation was partially dependent on the kinase activity of ERK3 (Figure 7E-I).

Discussion

The regulation of the actin cytoskeleton is an intensely investigated area because of its fundamental role in many basic cellular functions. The signaling machinery that controls actin cytoskeleton dynamics is deregulated in cancer, contributing to metastasis. Kinases are major drivers of signaling events and form the largest part of the druggable genome. Several drugs have been successfully developed to target deregulated kinases in cancer. Of the more than 500 kinases encoded by the human genome, many studies have focused on the conventional MAPKs, while the pathophysiological significance of the atypical MAPKs remains underexplored. We have recently demonstrated that ERK3 directly contributes to AP-1/c-Jun activation, IL-8 production, and chemotaxis (Bogucka et al., 2020). Emerging studies have further shown that ERK3 functions in a context- and tissue-specific manner in controlling tumorigenesis and metastasis (Al-Mahdi et al., 2015; Bogucka et al., 2021; Elkhadragy et al., 2020; Elkhadragy et al., 2017; Choi et al., 2013; Elkhadragy et al., 2018; Alshammari et al., 2021). Mice expressing a catalytically inactive ERK3 mutant survive to adulthood and are fertile, but the kinase activity of ERK3 is necessary for optimal postnatal growth. In this study, we aimed to elucidate how ERK3 influences cell shape and the actin cytoskeleton and unexpectedly revealed a pivotal role for this atypical MAPK in the regulation of RhoGTPases as a GEF, in ARP2/3-dependent polymerization of actin probably as an NPF, and in cell migration in mammalian cells (Figure 8). Further, our studies unveiled an evolutionarily conserved role for ERK3 in the formation of actin-rich protrusions.

ERK3 is a GEF for CDC42

The activation of both RAC1 and CDC42 was compromised in the absence of ERK3 in primary mammary epithelial cells and oncogenic MDA-MB231 cells. While the activation of RAC1 was partially rescued in HMECs upon EGF stimulation in ERK3-depleted cells, this was not the case in MDA-MB231 cells (Figure 2B, C, F, and G). Furthermore, while ERK3 regulated the activity of both RAC1 and CDC42 in vivo (Figure 2B-G), the kinase domain of ERK3, which lacks kinase activity, directly facilitated GDP-GTP nucleotide exchange specifically for CDC42 (Figure 3B, C, and G), which is implicated as a main regulator of filopodia assembly. In some cell types, we often detected a reduction in the total protein levels of RAC1 and CDC42 after depletion of ERK3. Whether inactivation of these RhoGTPases marks them for proteasomal degradation under these settings deserves further analysis. Using several experimental approaches, we demonstrated that the interaction between ERK3 and RAC1 or CDC42 is direct and nucleotide-independent, which suggests that ERK3 probably targets the C-terminus of these RhoGTPases. Interestingly, it has been reported that RAC1 possesses a C-terminal docking site (D site) for the canonical ERK (residues 183-192, KKRKRKCLLL) (Tong et al., 2013). Whether the same site is exploited by ERK3 for its binding to RAC1 or CDC42 deserves further study. It is interesting to note that while the kinase domain of ERK3 promoted GTP binding to CDC42, it slightly attenuated the same process for RAC1 (Figure 3B-G).
While the kinase activity of ERK3 is not required for CDC42 activation, RAC1 activity partially depends on it. Further structural, biochemical, and biophysical studies are clearly warranted to unveil the molecular basis of this interaction and its functional implications. In cells, the regulation of RhoGTPases and of actin assembly is a much more complex and integrative process (Kurokawa et al., 2004). For example, a cell type-dependent, hierarchical crosstalk between RhoGTPases has been observed in the regulation of actin-rich protrusions, and CDC42 activity was shown to initiate RAC1-dependent lamellipodial protrusions (Nobes and Hall, 1995; Zamudio-Meza et al., 2009; Nobes and Hall, 1999). While ERK3 might activate RAC1 through its GEF-like function toward CDC42, it could also function as an intermediate link between RAC1 and its GEF by scaffolding both proteins and/or activating the GEF, thus indirectly contributing to RAC1 activity. It is highly interesting that PAK family kinases, the effector kinases of RAC1 and CDC42, directly phosphorylate the SEG motif in the activation loop of ERK3 (Déléris et al., 2011; De la Mota-Peynado et al., 2011). The observations presented here strongly suggest a positive feedback loop, in which ERK3 is required for the GTP loading of CDC42 and RAC1 to induce their interaction with PAKs, which in turn phosphorylate and activate ERK3. We show that active ERK3 kinase is required for the eventual formation of actin-regulated membrane protrusions. The functional and physical uncoupling of this loop could possibly be controlled by a phosphatase or by inducing the rapid turnover of the ERK3 protein. The constituents of this dynamic complex need further evaluation and characterization.

Figure 7 legend (continued): representing cell phenotype, with magnified regions of the cell edges on the right showing the actin distribution. (C, D) Levels of F-actin in HMECs analyzed by phalloidin staining were quantified using an in vivo F/G actin assay. (C) Calculated ratios of F/G actin are depicted as mean ± SEM from three (n = 3) independent experiments; *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001, one-way ANOVA, Tukey's post-test. (D) Representative western blot analyses of F- and G-actin levels. (E-I) Active RAC1 and CDC42 pull-down assays were performed on control (shCo) and ERK3-depleted (shERK3, 3′ UTR) HMECs reconstituted with either WT or KD ERK3, as described in the 'Materials and methods' section. PAK1-PBD was used to capture the active forms of CDC42 and RAC1 in the respective cell lysates. Levels of active (GTP-bound) CDC42 and RAC1, as well as total protein expression, were assessed; ERK3 knockdown efficiency and overexpression were verified in the total cell lysate (TCL) using an ERK3 antibody, with a V5-tag antibody used to detect the exogenous WT and KD versions of ERK3. (E) Western blot analyses are presented. (F, G) Relative levels of active CDC42 and RAC1 were calculated with respect to the total protein levels and are presented as the ratio of GTP-bound to total RhoGTPase. (H, I) Additionally, the ratio of GTP-bound to total RAC1 and CDC42 was normalized to the control cells (shERK3+EV) and is presented as fold change in activation. The online version of this article includes the following source data and figure supplement(s) for figure 7: Source data 1. Full membrane scans for western blot images for Figure 7A, D, and E. Source data 2. Prism and Excel file for Figure 7C.
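The activation measures in panels F-I above reduce to ratios of densitometry values: GTP-bound over total GTPase, normalized to a chosen control condition. A minimal sketch of that arithmetic; the band intensities are illustrative:

    import numpy as np

    def fold_activation(gtp, total, control=0):
        # densitometry: GTP-bound pull-down band over total lysate band,
        # then normalized to the chosen control condition
        ratio = np.asarray(gtp, float) / np.asarray(total, float)
        return ratio / ratio[control]

    # illustrative band intensities for shCo+EV, shERK3+EV, shERK3+WT, shERK3+KD
    print(fold_activation([1.00, 0.35, 0.90, 0.80], [1.0, 1.0, 1.1, 1.0]))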
Figure 8. Schematic summary of the ERK3-dependent mechanisms regulating the actin cytoskeleton and cell motility. ERK3 directly binds and activates the ARP2/3 protein complex as well as the CDC42 and RAC1 RhoGTPases. Activation of the ARP2/3 complex and RAC1/CDC42 is required for the nucleation of new actin filaments and their elongation and branching into lamellipodia and filopodia. ERK3 regulates actin-rich protrusions, which play a direct role in cell motility.

ERK3 binds ARP3 to regulate its function in actin polymerization

Cells rely on several mechanisms to regulate the assembly of actin filaments. To complicate matters further, the regulatory pathways involved in filopodia formation appear to be cell type-specific, and CDC42-, WASP-, and ARP2/3-independent mechanisms of membrane spike formation have been proposed (Czuchra et al., 2005; Snapper et al., 2001; Steffen et al., 2006). In this study, we demonstrate that ERK3 directly binds the ARP2/3 complex and contributes to actin polymerization (Figures 5 and 6). Conformational changes in the ARP2/3 complex assembly are crucial for the initial heterodimer formation between the ARP2 and ARP3 subunits (Machesky et al., 1999; Gournier et al., 2001; Welch et al., 1997). In the unstimulated complex, the ARP2 and ARP3 subunits exist in a splayed conformation, and upon stimulation by NPFs the two proteins align into a side-by-side position, creating a surface that mimics the first two subunits of a new actin filament (Robinson et al., 2001; Mullins and Pollard, 1999). Binding of these two crucial subunits by NPFs could ensure an active conformation. Interestingly, phosphorylation of certain subunits of the ARP2/3 complex has been shown to be indispensable for the destabilization of its inactive conformation prior to full activation by NPFs (Vadlamudi et al., 2004; Narayanan et al., 2011; LeClaire et al., 2008; Choi et al., 2013). LeClaire et al. further reported that Nck-interacting kinase (NIK) regulates ARP2 phosphorylation and primes the ARP2/3 complex for activation by NPFs. Phosphorylation events within the ARP2/3 complex could regulate nucleation activity by affecting several properties, such as complex formation itself, conformational changes in the complex assembly, or the affinity of ARP2/3 for NPFs. Further structural simulation studies are needed to test whether the interaction of ERK3 with the ARP2/3 complex and ARP3 phosphorylation at S418 induce key conformational changes that support actin polymerization. While we uncover a role for ERK3 in the formation of filopodia and in ARP2/3-dependent actin polymerization, a possible role for ERK3 in the regulation of other actin nucleation factors, including formins, cannot be ruled out. While the kinase activity of ERK3 is clearly required for the formation of actin-rich protrusions, expression of KD constructs in ERK3-depleted cells failed to inhibit F-actin enrichment. These data suggest that ERK3 binding to ARP2/3, and possibly to other hitherto unknown factors, contributes per se to actin polymerization, while its kinase activity is probably still required for ARP2/3-dependent and CDC42-induced filopodia formation, polarization, and migration of mammalian epithelial cells. Overall, our studies shed new light on the multilayered regulation of the actin cytoskeleton by an understudied MAPK (Figure 8). ERK3 is druggable, and kinase inhibitors have already been developed for clinical use, which could also serve as tools to decipher its kinase-dependent and -independent functions.
Our observations will not only enhance the current understanding of actin cytoskeleton regulation, but also instigate new lines of investigation into the molecular, structural, and functional characterization of RhoGTPases and the ARP2/3 complex.

Materials and methods

Cell culture

Medical Center of the Johannes Gutenberg University Mainz) and were cultured on 0.2% gelatin coating in Endothelial Cell Growth Medium (Cat# C-22010, PromoCell). HT-29 cells were purchased from ATCC (ATCC HTB-38) and cultured in McCoy's medium supplemented with 10% heat-inactivated FBS. Calu-1 cells were obtained from Sigma and cultured in DMEM supplemented with 10% heat-inactivated FBS. All cell lines used in this study were authenticated cell lines obtained from ATCC or DSMZ. All cells were periodically tested for Mycoplasma contamination, with negative results.

Stimulation of cells

HMECs and MDA-MB231 cells were seeded in 12-well plates at an initial density of 2 × 10⁵ cells/well. After cells reached 70% confluence, the medium was exchanged to MEGM with no supplements for HMECs and to DMEM without FBS for MDA-MB231 cells, 4 hr prior to treatment with recombinant human epidermal growth factor (EGF) (Cat# RP-10927, Invitrogen) for 5, 10, 15, and 30 min. Afterward, cells were subjected to western blot analyses.

Cycloheximide (CHX) chase experiments

To determine the ERK3 half-life in mammary epithelial cells and its alteration upon EGF treatment, HMECs were seeded in 12-well plates at an initial density of 2 × 10⁵ cells/well and cultured until 70% confluent. The medium was exchanged to MEGM with no supplements 4 hr before cells were treated with human recombinant EGF at 100 µg/ml for 15 min, followed by treatment with the protein synthesis inhibitor CHX (Cat# C-7698, Sigma, stock 100 mg/ml in DMSO) for 0.5, 1, 2, 4, and 6 hr. Cells were subjected to western blot analyses and quantification. Fold change in ERK3 protein levels was calculated with respect to the untreated cells (-EGF, 0 hr) or to the respective control in each group (0 hr) for unstimulated (-EGF) and EGF-stimulated (+EGF) cells using ImageJ software.
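Half-life estimates from a CHX chase of this kind are commonly obtained by fitting first-order decay to the quantified band intensities. A minimal sketch of that calculation; the intensity values are illustrative, not the measured ones:

    import numpy as np

    def half_life(hours, intensities):
        # first-order decay: ln(I) = ln(I0) - k * t, so t1/2 = ln(2) / k
        k = -np.polyfit(hours, np.log(intensities), 1)[0]
        return np.log(2) / k

    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])               # chase time (hr)
    rel_erk3 = np.array([1.00, 0.78, 0.60, 0.37, 0.14, 0.05])  # illustrative band intensities
    print(f"estimated half-life: {half_life(t, rel_erk3):.2f} hr")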
(Figure 6G). ARP3 (Cat# ab151729, Abcam), ARP2 (Cat# ab128934, Abcam), and a beta-actin HRP-conjugated antibody (Cat# ab49900, Abcam) were used. HRP-conjugated secondary antibodies against rabbit and mouse IgG were obtained from Invitrogen (Cat# A16096 and A16066, respectively).

siRNA transfection

Cells were seeded 1 d before transfection at an initial density of 2 × 10⁵ cells/well in 12-well plates or 3 × 10⁵ cells/well in 6-well plates. Cells were transfected using SAINT-sRNA transfection reagent (SR-2003, Synvolux) according to the manufacturer's instructions. Cells were analyzed 48 hr post-transfection, and knockdown efficiency was verified by western blot using target-specific antibodies. If EGF stimulation was included in the experimental settings, the medium was exchanged 48 hr post-transfection for serum/supplement-free medium for 4 hr prior to EGF (100 ng/ml) treatment for 15 min.

The images of both GFP fluorescence and transmission of the live cells were acquired with a Leica SP8 confocal microscope (Leica, Mannheim, Germany) using a 10×/0.3 NA objective, with 488 nm excitation (at approximately 150 µW), an emission window of 500-590 nm for GFP detection, and scanning differential interference contrast transmission imaging in a 1500 µm × 1500 µm frame format at 400 lines per second, 0.71 µm/pixel (2048 × 2048 pixels per frame), with two-times line averaging and frame acquisition every 30 min per selected position within the chamber. Phosphorylation of the ARP2/3 protein complex subunits by ERK3 was detected by mass spectrometry analyses after an in vitro kinase assay. Proteins were in-gel-digested and subsequently enriched for phosphopeptides using TiO₂ beads.

RNA isolation, cDNA synthesis, and RT-PCR analysis

For gene expression analyses, cells were washed with cold PBS and total RNA was extracted using Trizol (Cat# 15596018, Ambion) according to the manufacturer's instructions. Isolated RNA was then used as a template for cDNA synthesis with the RevertAid First Strand cDNA Synthesis Kit (Cat# K1621, Thermo Fisher Scientific) and random hexamer primers. Protein size and purity were assessed throughout the purification procedures.

Synthesis of WT and KD ERK3 proteins using RRL

ERK3 WT or ERK3 KD (K49A/K50A) in the pcDNA3/V5-Dest40 vector was expressed in T7 RRLs using the TNT T7 Quick Coupled Transcription/Translation System (Cat# TM045, Promega) following the manufacturer's protocol. Expression levels were assessed by western blot using a V5-tag-specific antibody. Lysates were further used in the actin polymerization assay in place of purified recombinant proteins.

Endogenous pull-down of active RAC1/CDC42

Levels of GTP-bound RAC1 and CDC42 were determined using either the active RAC1/CDC42 pull-down and detection kit (Cat# 16118/19, Thermo Fisher Scientific), according to the manufacturer's instructions and buffer compositions, or purified GST-PAK1-PBD fusion beads. HMECs and MDA-MB231 cells were seeded in 6-well plates at an initial density of 3 × 10⁵ cells/well. After cells reached 70% confluence, the medium was exchanged to MEGM (no supplements) for HMECs or DMEM without FBS for MDA-MB231 cells, 4 hr prior to a 15 min stimulation with 100 ng/ml recombinant human EGF. Afterward, cells were subjected to active RAC1/CDC42 pull-down. The precipitated samples and total cell lysates were subjected to western blot analyses and ImageJ quantification. Relative levels of active RAC1 and CDC42 were determined by calculating the ratio of active (GTP-loaded) RhoGTPase to the respective total protein level in the TCL.

Endogenous pull-down of ARP3

HMECs were seeded in a 6-well plate at an initial density of 2 × 10⁵ cells per well and cultured until 80% confluent. For the immunoprecipitation (IP) of endogenous ARP3, cells were washed with ice-cold PBS and lysed with ice-cold IP buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 1% Triton X-100, plus protease inhibitor cocktail Set I-Calbiochem 1:100 [Cat# 539131, Merck Millipore], 1 mM Na₃VO₄, and 1 mM NaF). Cell lysates were incubated with Protein A/G-Agarose beads (Cat# 11 134 515 001/11 243 233 001, Roche) and either an ARP3 or a normal rabbit IgG antibody for 2 hr at 4°C with rotation. After the incubation, beads were washed with IP buffer and analyzed by immunoblot. Levels of the immunoprecipitated ARP3 as well as the co-immunoprecipitation of ARP2 and ERK3 were assessed. Total cell lysates were used as a control.
Binding interaction - GST pull-down assays

Purified recombinant GST-fusion RAC1 and CDC42 proteins immobilized on beads were used for in vitro GST pull-down experiments. To verify the relevance of nucleotide binding for the interaction of the GTPases with ERK3 protein, RAC1 and CDC42 were loaded with non-hydrolyzable GTPγS or GDP (components of the active RAC1/CDC42 pull-down and detection kit; Cat# 16118/19, Thermo Fisher Scientific). Recombinant GST-fusion GTPase beads were incubated with gentle shaking for 15 min at 30°C in 100 µl of binding/wash buffer (25 mM Tris-HCl, pH 7.2, 150 mM NaCl, 5 mM MgCl₂, 1% NP-40, and 5% glycerol) containing 0.1 mM GTPγS or 1 mM GDP in the presence of 10 mM EDTA pH 8.0 to facilitate nucleotide exchange. The reaction was terminated by placing the samples on ice and adding 60 mM MgCl₂. The GTPγS-/GDP-loaded RAC1 and CDC42 beads were centrifuged and the supernatants removed. Beads were then subjected to the GST pull-down ERK3 binding assay. GTPγS-/GDP-loaded GST-RAC1 and GST-CDC42 protein beads were incubated with recombinant human ERK3 protein (Cat# OPCA01714, Aviva Systems Biology) in 100 µl of binding/wash buffer supplemented with protease inhibitor cocktail Set I-Calbiochem 1:100 (Cat# 539131, Merck Millipore) and phosphatase inhibitors (1 mM sodium orthovanadate [Na₃VO₄] and 1 mM sodium fluoride [NaF]) for 1 hr at 4°C with rotation. Recombinant GST protein bound to glutathione beads was used as a negative control. After the incubation, beads were washed three times with 400 µl binding/wash buffer, with centrifugation at 2000 rpm each time. Samples were eluted with 4× Laemmli buffer supplemented with 100 mM DTT and boiled for 5 min at 95°C. Samples were then subjected to western blot analyses. For the in vitro interaction between GST-RAC1/GST-CDC42 and the ERK3 kinase domain, RAC1/CDC42 WT beads and an Escherichia coli-purified ERK3 (amino acids [aa] 9-327) protein (Crelux) were employed. Additional information on the Crelux protein can be found in Supplementary file 1. For ERK3-ARP3 binding studies, human recombinant GST-fusion ERK3 protein (SignalChem) and E. coli-purified ARP3 were used. Glutathione beads were incubated with 37 nM GST-ERK3 or GST alone and the indicated concentrations of ARP3 for 2 hr at 4°C in binding buffer (5 mM Tris-HCl, pH 8.0, 0.2 mM CaCl₂) supplemented with protease and phosphatase inhibitors. Afterward, beads were washed three times with 400 µl binding buffer supplemented with 1% NP-40. Samples were eluted with 4× Laemmli buffer supplemented with 50 mM DTT and boiled for 5 min at 95°C. Immobilized protein complexes were detected with GST- and ARP3-specific antibodies.

Determination of GTP loading on RhoGTPases

In vitro active RAC1/CDC42 pull-down assay

The efficiency of nucleotide loading under the chosen conditions, i.e., the GTP-loading status of the purified RAC1 and CDC42 proteins, was determined using GST-PAK1-PBD fusion beads. Recombinant RAC1 and CDC42 proteins (no tag) were loaded with GTPγS or GDP by incubation for 15 min at 30°C in 100 µl of binding/wash buffer (25 mM Tris-HCl, pH 7.2, 150 mM NaCl, 5 mM MgCl₂, 1% NP-40, and 5% glycerol) containing 0.1 mM GTPγS or 1 mM GDP in the presence of 10 mM EDTA pH 8.0. The reaction was terminated by placing the samples on ice and adding 60 mM MgCl₂. GTPγS-/GDP-loaded RAC1 and CDC42, as well as the native (WT) proteins, were subjected to the active RAC1/CDC42 pull-down assay using GST-PAK1-PBD fusion beads.
Levels of GTP-loaded GTPases were detected using RAC1- or CDC42-specific antibodies.

GDP/GTP nucleotide exchange assays
The GDP/GTP exchange assay was performed using recombinant RAC1 and CDC42 proteins in binding buffer containing 20 mM Tris-HCl (pH 7.5), 100 mM NaCl, and protease/phosphatase inhibitors. First, the RhoGTPases were stripped of nucleotides by incubation with binding buffer containing 10 mM EDTA for 10 min at RT. Afterward, reactions were supplemented with 50 mM MgCl2 and 500 µM GDP (active RAC1/CDC42 pull-down kit, Thermo Fisher, Cat# 16118/19) and incubated for 15 min at 37°C with gentle agitation. Samples of GDP-loaded RhoGTPases were kept on ice for further analyses as controls. The remaining GDP-loaded RAC1 and CDC42 were then incubated with 500 µM GTPγS (Thermo Fisher kit Cat# 16118/19) in the presence or absence of 2 nM of recombinant ERK3 protein (aa 9-327) (Crelux) for 30 min at 37°C with gentle agitation. All reactions were terminated by the addition of 60 mM MgCl2 on ice. To isolate the active (GTP-bound) RAC1 and CDC42, GST-PAK1-PBD fusion beads were used. Nucleotide exchange reactions were incubated with the beads for 1 hr at 4°C with rotation. Afterward, beads were washed three times with binding/wash buffer (25 mM Tris-HCl, pH 7.2, 150 mM NaCl, 5 mM MgCl2, 1% NP-40, and 5% glycerol). Samples were eluted with 4× sample buffer supplemented with 100 mM DTT. Levels of active RAC1 and CDC42 were detected using RAC1- and CDC42-specific antibodies from the active RAC1/CDC42 pull-down kit (Cat# 16118/19, Thermo Fisher). Levels of GST-PAK1-PBD protein were detected using Ponceau S staining of the membrane. The efficiency of GDP-GTP exchange was calculated as the ratio of GTPγS+ERK3 samples to GTPγS-alone samples and presented as fold change, with the GTPγS-alone samples as the reference. Guanine nucleotide exchange factor (GEF) activity of ERK3 (aa 9-327) was further evaluated using the RhoGEF Exchange Assay (Cat# BK100, Cytoskeleton). The assay was performed according to the manufacturer's instructions and included Dbs GEF as a positive control.

In vivo F-actin/G-actin assay
HMECs and MDA-MB231 cells were seeded in 12-well or 6-well plates for the F/G actin assay and parallel western blot analyses. When cells reached about 70% confluency, plates were subjected to either western blot analyses or the G-actin/F-actin in vivo assay kit (Cat# BK037, Cytoskeleton) according to the manufacturer's instructions. Briefly, cells were washed with 1× PBS and lysed in an appropriate volume of lysis and F-actin stabilization buffer supplemented with 1 mM ATP and a protease inhibitor mixture for F-actin stabilization (Cytoskeleton). Lysates were centrifuged at 100,000 × g at 37°C for 1 hr to separate the G-actin fraction in the supernatants from the pelleted F-actin. F-actin pellets were further depolymerized for 1 hr on ice using F-actin depolymerizing buffer. Both sets of samples were mixed with 4× SDS-PAGE sample buffer. Equal volumes of G-actin and F-actin lysates were analyzed by SDS-PAGE and immunoblotting with an anti-actin rabbit polyclonal antibody (Cat# AANO1, Cytoskeleton). Densitometric analyses of the G-actin and F-actin levels were performed using ImageJ software, and ratios of F-actin with respect to G-actin levels were calculated.
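Both fold-change readouts described in this subsection, the GDP-GTP exchange efficiency and the F/G-actin ratio, amount to simple ratios of densitometry values; here is a hedged sketch with hypothetical numbers only:

```python
# Hypothetical densitometry values illustrating the two ratios above.

# GDP-GTP exchange efficiency: GTPγS + ERK3 relative to GTPγS alone.
gtpgs_alone = 2100.0          # active GTPase band, GTPγS only
gtpgs_plus_erk3 = 3900.0      # active GTPase band, GTPγS + ERK3
exchange_fold_change = gtpgs_plus_erk3 / gtpgs_alone
print(f"GDP-GTP exchange fold change: {exchange_fold_change:.2f}")

# F/G-actin ratio from the in vivo fractionation assay.
f_actin = 5600.0              # pelleted, depolymerized F-actin band
g_actin = 8000.0              # supernatant G-actin band
print(f"F/G-actin ratio: {f_actin / g_actin:.2f}")
```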
In vitro actin polymerization assay
The actin polymerization assay was performed using the actin polymerization biochem kit (Cat# BK003, Cytoskeleton) according to the provided protocol. Pyrene-labeled rabbit skeletal muscle actin (Cat# AP05-A, Cytoskeleton) (2.3 µM per reaction) was diluted with general actin buffer (5 mM Tris-HCl pH 8.0, 0.2 mM CaCl2) supplemented with 0.2 mM ATP and 1 mM DTT and incubated for 1 hr on ice to depolymerize actin oligomers. The actin stock was centrifuged for 30 min at 14,000 rpm at 4°C. Pyrene-labeled actin was incubated with the recombinant ARP2/3 protein complex alone (Cat# RP01P, Cytoskeleton) or together with the human recombinant proteins WASP-VCA domain protein (Cat# VCG03, Cytoskeleton) or ERK3 protein (M31-34G, SignalChem). The ARP2/3 protein complex, WASP-VCA domain protein, and full-length ERK3 recombinant protein were used at final concentrations of 10 nM, 400 nM, and 4.8 nM, respectively. Actin alone was used to establish a baseline polymerization rate. Actin polymerization was induced by the addition of 1.5× actin polymerization buffer (Cat# BSA02) (10× buffer: 20 mM MgCl2, 500 mM KCl, 10 mM ATP, 50 mM guanidine carbonate in 100 mM Tris-HCl, pH 7.5). Actin polymerization was measured by fluorescence emission at 415 nm (360 nm excitation) over 1 hr (30-60 s interval time) at RT using a Tecan SPARK multimode microplate reader. The buffer background signal at each interval was subtracted, and relative fluorescence units (RFUs) are depicted.

Immunofluorescence (IF) and confocal analyses
Cells cultured on coverslips were fixed in 3.7% formaldehyde (Cat# CP10.1, Roth) for 15 min, followed by washing with PBS pH 7.5 and 3 min permeabilization using 0.1% Triton X-100 (AppliChem). After washing twice with PBS, cells were blocked with 1% BSA in PBS for 15 min and washed once with PBS. Filamentous actin was labeled with Oregon Green 488 Phalloidin (Cat# O7466, Invitrogen) or, alternatively, Rhodamine Phalloidin (Cat# R415, Invitrogen) in blocking buffer for 1 hr at RT in the dark. Nuclei were stained with 10 µg/ml of DNA dye (Hoechst 33342; Cat# H3570, Invitrogen). For co-staining of endogenous ERK3 and F-actin, cells were incubated in a 1:250 dilution of anti-ERK3 antibody (mouse Cat# MAB3196, R&D, or rabbit Cat# ab53277, Abcam) in blocking solution for 1 hr at RT. For ARP3 and CDC42 staining, the anti-ARP3 antibody (Cat# ab151729, Abcam) and anti-CDC42 antibody (Cat# 610929, BD Transduction) were used in blocking buffer for 1 hr at RT in the dark. Afterward, cells were washed with PBS and incubated with secondary anti-mouse IgG-Cyanine3 (Cat# A10521, Thermo Fisher Scientific) at 5 µg/ml, DNA dye (Hoechst 33342), and Green Phalloidin in blocking solution for 1 hr at RT in the dark. Samples were washed twice with PBS and cells were mounted onto glass slides using Mowiol (+DABCO) (Sigma). For exogenous ERK3 detection upon overexpression of ERK3 WT or the serine 189 phosphorylation mutants (S189A/S189D) in shERK3 HMECs, a V5-tag-specific antibody was used at 1:100 dilution (Cat# R960-25, Invitrogen). Cells were imaged using a Leica DMi8 confocal microscope (×63, oil immersion objective). ERK3-depleted cells were used as a control for the ERK3 staining. For the colocalization analyses, a mask was created to exclude nuclei, where no colocalization was observed. The ARP3 and ERK3 images were scaled and thresholded in the same way to maximize the dynamic range, and colocalization was then quantified as Pearson's correlation and Spearman's rank correlation coefficients (PCC and SCC) on a pixel-wise basis with the 'Coloc 2' plugin in ImageJ.
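Outside of the Coloc 2 plugin, the same pixel-wise PCC/SCC computation can be sketched as follows; the arrays and mask threshold are hypothetical stand-ins for the thresholded, nucleus-masked ARP3 and ERK3 channels, not the study's images:

```python
# Pixel-wise colocalization sketch (PCC and SCC), assuming two
# pre-thresholded, nucleus-masked channel images of equal shape.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
arp3 = rng.random((512, 512))                     # hypothetical ARP3 channel
erk3 = 0.7 * arp3 + 0.3 * rng.random((512, 512))  # partially colocalized ERK3

mask = arp3 > 0.1  # stand-in for the nucleus-excluding intensity mask
pcc, _ = pearsonr(arp3[mask], erk3[mask])
scc, _ = spearmanr(arp3[mask], erk3[mask])
print(f"PCC = {pcc:.2f}, SCC = {scc:.2f}")  # +1 colocalized, 0 random, -1 separated
```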
For both the Pearson and Spearman correlation coefficients, a value of +1 indicates perfect correlation or colocalization, a value of 0 means that the two signals localize randomly with respect to each other, and a value of -1 indicates that the two signals are perfectly separated.

Scanning electron microscopy (SEM)
Control and ERK3-depleted HMECs were seeded in 12-well plates. Cells were fixed overnight with 2.5% glutaraldehyde, rinsed with PBS, and then post-fixed and stained with 2% osmium tetroxide. Afterward, samples were rinsed with distilled water, frozen, and then freeze-dried with a Christ Alpha LSC Plus freeze drier. The SEM scans were performed with a Philips ESEM XL30 scanning electron microscope.

Filopodia quantification and analyses
To detect, quantify, and analyze the number and lengths of filopodia in the fixed, actin-phalloidin-stained control and ERK3 knockdown HMECs, the FiloQuant open access software and routines (Jacquemet et al., 2017; Jacquemet et al., 2019) were applied in ImageJ and, in some cases, in Imaris version 9.3.1 (Imaris, RRID:SCR_007370) for additional verification. Briefly, the raw cell images were linearly adjusted for brightness and contrast for optimal filopodial observation, and a mask was applied to each cell in the same way to analyze only the filopodial region of interest. Additionally, the following parameters were initially applied to each cell in the same way: Cell Edge Threshold = 20, Number of Iterations = 10, Number of Erode Cycles = 0, Fill Holes on Edges = checked, Filopodia Threshold = 25, Filopodia Minimum Size = 10 pixels. The detected filopodial filaments were then overlaid in white on the green actin-phalloidin fluorescence of the filopodia for visual verification in ImageJ and, in some cases, in Imaris version 9.3.1.

The images of both GFP fluorescence and transmission of the live cells were acquired with a Leica SP8 confocal microscope using a 10 × 0.3 NA objective, with 488 nm excitation (at approximately 150 µW), an emission window of 500-590 nm for GFP detection, and scanning differential interference contrast transmission imaging in a 1500 µm × 1500 µm frame format with 400 lines per second, 0.71 µm/pixel (2048 × 2048 pixels per frame), two times averaging per line, and a frame acquisition every 30 min per selected position within the chamber. The images were first acquired in multiple regions of each cell type for 20 hr. The image sequences were imported into Imaris version 9.3.1 and cells were detected automatically by fluorescence with both the whole-cell spot and whole-cell surface analysis. As the surface and spot analysis center positions differed in control analyses by less than 1%, the whole-cell spot automated analysis was applied to all images in the same way with a 16 µm per cell diameter estimate. Automated tracking was also performed in the same way for all image sequences within Imaris using the autoregressive motion algorithm, with a maximum average distance of 60 µm per step and zero step gap applied. The speed (or cell speed) was determined by dividing each x, y step by 1800 s (30 min). The acceleration (or cell acceleration) was determined by subtracting the previous step speed from the current step speed and dividing by 1800 s (30 min). The displacement length was determined by subtracting the initial x, y position from the final x, y position and taking the length of the difference vector for each track over 20 hr of acquisition. The track length summed each absolute x, y vector step for an entire track over 20 hr of acquisition, and the track mean speed averaged the step speeds of the individual track.
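These per-track kinematic definitions translate directly into array operations; here is a minimal sketch for a single hypothetical track (positions in µm, one step every 30 min), not an export of the study's Imaris data:

```python
# Sketch of the per-track kinematic measures defined above, for one
# hypothetical track of (x, y) positions sampled every 30 min (1800 s).
import numpy as np

dt = 1800.0                                    # seconds per step (30 min)
xy = np.array([[0.0, 0.0], [12.0, 5.0],
               [20.0, 14.0], [26.0, 30.0]])    # hypothetical positions, µm

steps = np.diff(xy, axis=0)                    # x, y displacement per step
step_lengths = np.linalg.norm(steps, axis=1)   # µm per step
speed = step_lengths / dt                      # µm/s per step
acceleration = np.diff(speed) / dt             # change in step speed per second
displacement = np.linalg.norm(xy[-1] - xy[0])  # straight-line start-to-end, µm
track_length = step_lengths.sum()              # total path length, µm
mean_speed = speed.mean()                      # track mean speed, µm/s

print(f"mean speed {mean_speed:.5f} µm/s, displacement {displacement:.1f} µm, "
      f"track length {track_length:.1f} µm")
```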
The graphs were created with GraphPad Prism (RRID:SCR_002798), and the cell distribution for each analyzed parameter was visualized with violin plots. Significance was determined using the nonparametric Mann-Whitney test.

Transwell cell migration assay
The migratory properties of control and ERK3-depleted HMECs and MDA-MB231 cells were assessed using a two-chamber Transwell system (Cat# 3422, Corning). Cells were seeded in 12-well plates. HMECs were deprived of medium supplements 24 hr prior to migration. HMECs and MDA-MB231 cells were then trypsinized and resuspended in supplement-free or serum-free medium, respectively. 500 µl of supplement-free/serum-free medium mixed with 100 ng/ml of EGF was added to the bottom chamber, and 1 × 10^5 cells in 120 µl of supplement-free/serum-free medium were added to each insert. Plates were incubated at 37°C for 24 hr. To quantify the migrated cells, cells were removed from the upper surface of the insert using cotton swabs and the inserts were washed with PBS. Afterward, cells that had migrated to the lower side were fixed with 3.7% formaldehyde for 10 min and stained with Hoechst solution in PBS for 20 min at 37°C. Cells were then visualized using a Leica DMi8 microscope (×5 dry objective), and images of 3-4 regions per experimental condition were taken. The number of cells was quantified with Fiji/ImageJ software (Fiji, RRID:SCR_002285) using particle analysis for each field of view and averaged for each membrane. Directional migration is presented as the percentage of the respective control (set to 100%).

Phosphorylation site identification on ARP3
Phosphorylation of the ARP2/3 protein complex subunits by full-length ERK3 was detected by in vitro kinase assay and in-gel digestion of the respective ARP2/3 complex proteins, followed by mass spectrometry analyses.

Phosphopeptide enrichment
Dried tryptic peptide samples were dissolved in loading buffer (1 M glycolic acid, 6% trifluoroacetic acid, 5% glycerol, and 80% acetonitrile) under continuous shaking. TiO2 beads (Titansphere, TiO2, GL Sciences Inc) were washed in loading buffer three times before being transferred to the dissolved tryptic peptide samples. After 1 hr of continuous shaking, the supernatant was collected and transferred to a new tube containing freshly washed TiO2 beads for a second incubation. The TiO2 beads were collected separately and gently washed with 200 μl of loading buffer, 200 μl of 80% acetonitrile/2% trifluoroacetic acid, 200 mM ammonium glutamate, and 200 μl of 50% acetonitrile/1% trifluoroacetic acid, respectively. The TiO2 beads were dried, and bound peptides were eluted sequentially for 10 min each, first with 50 μl of 10% ammonium hydroxide, pH 11.7, then with 50 μl of 15% ammonium hydroxide/60% acetonitrile, and finally with 50 μl of 1% pyrrolidine. Eluted peptides were acidified by adding 75 μl of 50% formic acid and cleaned up using OMIX C18 10 µl SPE tips (Agilent).

LC-MS analysis
The samples were dissolved in 10 µl of 0.1% formic acid, and 5 µl was analyzed by LC-MS using a timsTOF Pro (Bruker Daltonik, Bremen, Germany) coupled online to a nanoElute nanoflow liquid chromatography system (Bruker Daltonik) via a CaptiveSpray nanoelectrospray ion source. The peptides were separated on a reversed-phase C18 column (25 cm × 75 µm, 1.6 µm, IonOpticks; Fitzroy, VIC, Australia).
Mobile phase A contained water with 0.1% (vol/vol) formic acid, and acetonitrile with 0.1% (vol/vol) formic acid was used as mobile phase B. The peptides were separated by a gradient from 0 to 35% mobile phase B over 25 min at a flow rate of 300 nl/min at a column temperature of 50°C. MS acquisition was performed in DDA-PASEF mode.

ERK3-dependent tumor cell motility in vivo
All in vivo experiments were performed in accordance with the Swiss animal welfare ordinance and approved by the cantonal veterinary office Basel-Stadt. Female NSG mice were maintained in the Department of Biomedicine animal facilities in accordance with Swiss guidelines on animal experimentation (license number 2464). NSG mice were from in-house colonies. Mice were maintained in a sterile, controlled environment (a gradual light-dark cycle with light from 7:00 to 17:00, 21-25°C, 45-65% humidity).

Intravital imaging
Mice were anesthetized with Attane Isofluran (Provet AG), and anesthesia was maintained throughout the experiment with a nose cone. Tumors were exposed by skin flap surgery and imaged on a Nikon Ti2 A1plus multiphoton microscope at 880 nm with an Apochromat ×25/1.1 NA water immersion objective at a resolution of 1.058 µm per pixel. Cell motility was monitored by time-lapse imaging over 30 min in 2 min cycles, where a 100 µm Z-stack at 5 µm increments was recorded for each field of view starting at the tumor capsule. Three-dimensional time-lapse videos were analyzed using ImageJ. Images were registered using the demon algorithm in MATLAB to correct for breathing movement (Kroon, 2023). Tumor cell motility was quantified manually. A tumor cell motility event was defined as a protrusion of half a cell length or more over the course of a 30 min video.

Statistical analyses
All experiments were repeated at least three times, and the exact replicate number (n) is specified for each figure. GraphPad Prism 9 was used to analyze data. The analyses performed for each data set, including the statistical test and post-test, are specified for each figure. Where applicable, data are presented as mean ± SEM of at least three independent experiments. Significance levels are displayed in GP (GraphPad) style: *p<0.0332, **p<0.0021, ***p<0.0002, ****p<0.0001.
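As an illustration of the nonparametric testing and GraphPad-style significance labeling described above, here is a hedged sketch in Python (the study used GraphPad Prism; the two groups below are hypothetical):

```python
# Mann-Whitney comparison of two hypothetical measurement groups, with
# significance labeled in the GP (GraphPad) style used in the text.
from scipy.stats import mannwhitneyu

control = [0.8, 1.1, 0.9, 1.0, 1.2, 0.7]    # hypothetical values
knockdown = [1.6, 1.9, 1.4, 2.1, 1.7, 1.8]

stat, p = mannwhitneyu(control, knockdown, alternative="two-sided")

def gp_stars(p: float) -> str:
    """Map a p-value to GraphPad-style asterisks."""
    for threshold, label in [(0.0001, "****"), (0.0002, "***"),
                             (0.0021, "**"), (0.0332, "*")]:
        if p < threshold:
            return label
    return "ns"

print(f"U = {stat}, p = {p:.4f} ({gp_stars(p)})")
```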
Clinical outcomes and complications of cementless reverse total shoulder arthroplasty during the early learning curve period

Background: Reverse total shoulder arthroplasty (RTSA) is a treatment option for patients with severe osteoarthritis, rotator cuff arthropathy, or massive rotator cuff tear with pseudoparalysis. We aimed to determine not only the early functional outcomes and complications of cementless RTSA during the learning curve period but also the complication-based and operation time-based learning curves of RTSA.
Methods: Between March 2010 and February 2014, we retrospectively evaluated 38 shoulders (6 male, 32 female). The average age of the patients was 73.0 years (range, 63 to 83 years), and the average follow-up was 24 months (range, 12-53 months). The visual analog scale (VAS), University of California Los Angeles (UCLA) score, and Constant score were used to evaluate the clinical outcomes. We evaluated patients radiographically at 2 weeks, 3 months, 6 months, 1 year, and then annually thereafter for any evidence of complications.
Results: The VAS score improved from 4.0 to 2.8 (p = 0.013). The UCLA score improved from 16.0 to 27.9 (p = 0.002), and the Constant score improved from 41.4 to 78.9 (p < 0.001); these improvements were statistically significant. While active forward flexion, abduction, and internal rotation improved (p = 0.001, p < 0.001, and p = 0.015, respectively), external rotation did not show significant improvement (p = 0.764). Postoperative complications included acromion fracture (one case), glenoid fracture (one case), periprosthetic humeral fracture (one case), axillary nerve injury (one case), infection (one case), and arterial injury (one case). Our study identified an intraoperative complication-based learning curve of 20 shoulders and an operation time-based learning curve of 15 shoulders.
Conclusions: The clinical outcomes of RTSA were satisfactory, with an overall complication rate of 15.7%. An orthopedic surgeon within the learning curve period for RTSA should be cautious when selecting patients and performing the operation.
Trial registration: Retrospectively registered.

Introduction
Reverse total shoulder arthroplasty (RTSA) was first introduced by Grammont et al. in 1987 as a treatment for patients with cuff tear arthropathy. Other indications include revision of a failed arthroplasty, malunions of proximal humeral fractures, and pseudoparalysis of the shoulder [1,2]. The advantage of the RTSA design is based on the concept of reversing the shoulder joint by fixing a metal ball to the glenoid and introducing a spherical socket into the proximal part of the humerus [3,4]. This approach lowers the humerus and medializes the center of rotation of the shoulder joint, which increases the deltoid muscle moment arm, allowing recruitment of more deltoid muscle fibers for arm flexion and abduction [5]. In Europe, RTSA has been performed for more than 20 years, and Favard et al. have reported satisfactory results at long-term follow-up of more than 10 years [6,7]. However, it was not approved for use in the USA until 2004, owing to high reported complication rates ranging from 0 to 68% [8]. The most frequent complication is scapular notching, followed by complications with the humeral or glenoid component (e.g., loosening) [5]. The rate of humeral loosening is considered to be high for RTSA compared with conventional total shoulder arthroplasty [9]. To avoid the risk of loosening, many surgeons have used cemented components for humeral fixation in RTSA.
On the other hand, Michael et al. reported that cementless fixation of a porous-coated RTSA humeral stem yielded clinical and radiographic outcomes equivalent to those of cemented stems at a minimum 2-year follow-up, and noted several advantages of cementless fixation: (1) no risk of cement-related complications, (2) decreased operative time, (3) a simplified operative technique, and (4) greater ease of revision [10]. Currently, convertible modular RTSA systems are available, which make revision between total shoulder arthroplasty and RTSA easier, with decreased surgical time, no removal of a well-fixed humeral stem, and excellent post-conversion functional outcomes [11]. The purpose of this study was to analyze the results and complications during the learning curve of cementless RTSA and to describe complication-based and operation time-based learning curves for RTSA.

Materials and methods
We retrospectively reviewed the charts of 38 consecutive patients who underwent reverse total shoulder arthroplasty performed by a single surgeon between March 2010 and February 2014 and who had at least 12 months of follow-up. The implant used was the Comprehensive® reverse shoulder system (Biomet Inc., Warsaw, IN, USA) with a cementless cobalt chrome humeral component. All surgical procedures were performed by a single orthopedic shoulder surgeon. The indications for reverse total shoulder arthroplasty were the following: rotator cuff tear arthropathy, massive irreparable rotator cuff tear with chronic loss of elevation that failed to respond to physical treatment, posttraumatic glenohumeral arthritis, and primary osteoarthritis of the shoulder with a massive irreparable cuff tear (Table 1) [12]. Exclusion criteria were poor deltoid function on preoperative electromyography (EMG), a C-spine problem of related origin, or failure to follow up. Six males and 32 females were enrolled. Twenty-six prostheses were placed in the right shoulder, and 12 were placed in the left shoulder. The average age of the patients was 73 years (range, 63 to 83), with an average follow-up of 24 months (range, 12 to 53 months). On preoperative MRI, rotator cuff tears were distributed as follows: 3 cases of one-tendon tears (8%), 10 cases of two-tendon tears (26.3%), and 25 cases of three-tendon tears (65.7%). IBM SPSS (IBM Co., Armonk, NY, USA) was used for all data analyses. The paired t test was used to compare the preoperative and postoperative clinical scores and range of motion. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. As this study was performed in a retrospective manner, formal consent was not required.

Clinical and radiographic evaluation
All patients were examined preoperatively and postoperatively by two independent orthopedic surgeons. The visual analog scale (VAS), University of California Los Angeles (UCLA) score, and Constant score were used to evaluate the clinical outcomes. Clinically, the range of motion (ROM) of the shoulder was measured preoperatively and postoperatively to evaluate the functional outcomes.
Patients were asked to perform the following motions: (1) forward flexion, lifting the arm in front of the body with the palm facing the side of the body and the arm held straight; (2) abduction, swinging the arm out from the side of the body with the palm facing the side of the body and the arm held straight; (3) external rotation, with the elbow bent to 90° and the forearm swinging away from the body; and (4) internal rotation, with the elbow bent to 90° and the forearm swinging toward the body. We defined the learning curve as the point in the series after which there was the lowest risk of complications, or a leveling out of operative time, in subsequent shoulders compared to earlier shoulders. We evaluated patients radiographically at 2 weeks, 3 months, 6 months, 1 year, and then annually thereafter for any evidence of complications, including changes in the humeral or glenoid component position, osteolysis, or scapular notching.

Operative technique
The surgery was performed with patients in the beach chair position. The deltopectoral anterior approach was used in all cases, and when possible the cephalic vein was protected. The upper portion of the pectoralis major tendon was released, and the medial border of the deltoid muscle was retracted laterally and partially released from its distal insertion by subperiosteal dissection. A longitudinal incision was made through the tendinous portion of the subscapularis muscle and capsule. The subscapularis tendon was tagged with nonabsorbable sutures for easy identification during closure. To expose the humeral head, the humerus was externally rotated and extended. Using a trocar-pointed reamer and ratcheting T-handle, a pilot hole was bored through the humeral head along the axis of the humeral shaft, just lateral to the articular surface of the humeral head and just posterior to the bicipital groove. The tapered humeral reamer was inserted up to the engraved line above the cutting teeth. The resection guide boom was placed onto the reamer shaft. The prosthesis was implanted at approximately 20° of retroversion. A saw blade was placed through the cutting slot in the guide, and the humeral head was resected. The calcar planer was used to refine the resected surface. A 3.2-mm Steinmann pin was inserted into the glenoid at the desired angle and position. The cannulated baseplate reamer was positioned over the top of the Steinmann pin, and the glenoid was reamed to the desired level. After seating the glenoid baseplate, appropriate peripheral screws were inserted. The appropriate glenosphere trial was then selected and assembled onto a trial taper adaptor; after trialing, the assembly was removed from the glenoid baseplate. The glenosphere implant was placed into the impactor base using the glenosphere forceps. After the humeral stem was assembled onto the humeral stem inserter, the stem was inserted into the humeral canal, and the appropriate humeral tray and bearing were assembled.

Postoperative rehabilitation
An abduction brace was applied immediately after surgery and worn for 4 to 6 weeks, and pendulum exercises and early passive wrist and elbow range of motion exercises were initiated on postoperative day 2. Passive shoulder motion exercises were started 2 weeks postoperatively via a continuous passive motion machine (ARTROMOT-K1, Ormed GmbH & Co. KG, Germany). After 4 to 6 weeks, the abduction brace was removed, and activity was allowed as tolerated.
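The pre- versus postoperative comparisons reported in the results rely on the paired t test named in the statistical methods; as an illustration only, here is an equivalent computation in Python (the study itself used SPSS, and the scores below are hypothetical):

```python
# Paired t test comparing hypothetical preoperative and postoperative
# clinical scores for the same patients, as described in the methods.
from scipy.stats import ttest_rel

pre = [40, 38, 45, 42, 36, 44, 41, 39]    # hypothetical preoperative scores
post = [76, 81, 79, 74, 70, 85, 78, 80]   # hypothetical postoperative scores

stat, p = ttest_rel(pre, post)
print(f"t = {stat:.2f}, p = {p:.4f}")     # p < 0.05 -> significant change
```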
Functional and clinical outcomes
The average VAS score improved from 4.0 points before surgery to 2.8 points (p = 0.013) at the time of follow-up. The average UCLA score improved from 16.0 to 27.9 (p = 0.002), and the Constant score improved from 41.4 to 78.9 (p < 0.001); these improvements were statistically significant. Mean forward flexion, abduction, and internal rotation improved from 99.9°, 69.2°, and L5 to 135.4°, 124.8°, and L3, respectively (p = 0.001, p < 0.001, and p = 0.015). However, there was no statistically significant improvement in external rotation postoperatively (p = 0.764) (Table 2).

Radiologic outcomes and complications
The 38 patients were followed for 12 to 53 (mean, 24) months, and 6 complications occurred (Table 3). Three patients sustained fractures in the postoperative period after slipping and falling; two of them were treated conservatively (Fig. 1), and one required revision. One patient with a superficial infection was treated successfully with IV antibiotics without implant removal. One patient sustained an intraoperative injury of the axillary artery and underwent immediate arterial repair. One patient's axillary nerve palsy resolved spontaneously over time without surgical intervention. In our study, all complications occurred within 2 years after RTSA (Table 3). On radiographic evaluation, there was no evidence of humeral component loosening, osteolysis, or scapular notching. According to Kaplan-Meier survival analysis, the survivorship of the RTSA implant was approximately 76% throughout the follow-up period (Fig. 2).

Learning curve
Across the 38 consecutive cases of reverse total shoulder arthroplasty, cutoff points were plotted at every 3 shoulders. The complication rate was 15.7% (6 of 38 patients). Only 2 of the 6 complications occurred intraoperatively, both within the first 20 shoulders, and 4 occurred at least 2 months postoperatively. Comparing the operation times of the earlier and later parts of the series, the mean was 108.6 min (range, 71-147 min) for the former 18 cases and 87.6 min (range, 61-121 min) for the latter 18 cases, implying that operation time decreased once a certain amount of experience had been gained. A significant decrease in operation time was noted after the 15th RTSA (Fig. 3).
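The survivorship figure above comes from a Kaplan-Meier analysis; the underlying product-limit estimator can be sketched in a few lines, using hypothetical follow-up data rather than the study's actual cohort:

```python
# Hand-rolled Kaplan-Meier product-limit estimator, illustrating the
# survivorship analysis above. Follow-up data are hypothetical.
import numpy as np

# (months of follow-up, event) where event=1 is implant failure/revision
# and event=0 is censoring at last follow-up.
follow_up = [(6, 0), (9, 1), (12, 0), (14, 1), (20, 0),
             (24, 0), (24, 1), (30, 0), (41, 0), (53, 0)]

times = np.array([t for t, _ in follow_up], dtype=float)
events = np.array([e for _, e in follow_up])

surv = 1.0
for t in np.unique(times[events == 1]):      # each distinct failure time
    at_risk = np.sum(times >= t)             # patients still under observation
    failures = np.sum((times == t) & (events == 1))
    surv *= 1.0 - failures / at_risk         # product-limit update
    print(f"t = {t:4.0f} months: S(t) = {surv:.3f}")
```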
Discussion
The introduction of RTSA represents a new era in shoulder surgery [13]. It is a treatment option for patients with cuff tear arthropathy or for patients in whom conventional total shoulder arthroplasty has failed. Multiple studies have reported highly variable complication rates for RTSA, ranging from 14 to 75% [14]. When selecting patients and deciding to perform RTSA, it is important to consider these complication rates. We therefore described the types and rates of early complications of cementless RTSA during the learning period, characterized a learning curve for our RTSA series to establish where the greatest reductions in operation time and complication rates occurred, and evaluated the clinical and functional outcomes of RTSA. Some authors have reported that the short-term clinical and functional outcomes of RTSA appear promising [15]. Sirveaux et al. reported an increase in the mean Constant score from 22.6 points preoperatively to 65.6 points postoperatively, with 96% of patients having little or no pain, and an increase in mean active forward flexion from 73° to 138° [16]. Bryan et al. reported that patients managed with an RTSA for posttraumatic arthritis or a revision arthroplasty had less improvement and higher complication rates than patients with cuff tear arthropathy or primary osteoarthritis associated with a massive cuff tear [12]. In our study, the VAS, UCLA, and Constant scores, as well as range of motion apart from external rotation, all improved. Since the reverse total shoulder arthroplasty system relies on the deltoid muscle, rather than the rotator cuff, to power and position the arm, we believe that rotator cuff condition is not only unlikely to influence postoperative ROM but also should not be included as a covariate.

The most frequently reported complication of RTSA is scapular notching, followed by glenoid and humeral loosening, periprosthetic fracture, acromial fracture, neurological injury, and infection [5]. Several studies have reported variable rates of scapular notching, ranging from 0 to 97% [8,14]. According to Mollon et al. [17], patients with scapular notching present poorer clinical outcomes, less strength, less range of motion, and significantly higher complication rates. Roche et al. [18] also showed that scapular notching plays a role in initial glenoid baseplate instability. Considering the consequences of scapular notching, the importance of preventing it cannot be overstated, and every shoulder surgeon who performs RTSA should be cautious and meticulous when preparing the glenoid and placing the baseplate. There were no complications related to humeral component loosening or scapular notching in our study. Large series with long-term follow-up are necessary to properly evaluate scapular notching. However, several studies have reported that the safest methods to prevent scapular notching are inferior positioning of the glenoid baseplate and larger implants with shallow concave components [19,20]. Zumstein et al. reported a combined incidence of acromial and scapular spine fracture of 1.5% (12 out of 782) [21]. Postoperatively, increased deltoid tension and medialization of the center of rotation could increase the load across the acromion [20,22]. Most acromial fractures can be treated conservatively; however, if a scapular fracture accompanies an acromial fracture, surgical treatment may be required [4]. The overall incidence of postoperative acromial fracture in our series was 3.0%; these fractures were treated nonoperatively with satisfactory outcomes. As our experience with RTSA accumulated, we adjusted the cutting level of the humeral head to reduce muscular tension around the implant, which may cause stress fracture of the acromion. After placement of the trial, the tension on the conjoint tendon of the shoulder is checked; if the tension is excessive, an additional 1-2 mm cut of the humeral head is performed. The incidence of infection after RTSA is reported to be 0 to 4%, and the prevalence of neurologic injury after RTSA is approximately 1 to 4.3%. A commonly injured nerve is the axillary nerve, which can be injured by direct damage during surgery, stretch injury from retractors, or postoperative compression by a hematoma. In most neurological cases, surgical intervention is not required [13,23]. In our study, one patient's axillary nerve palsy resolved spontaneously over time without surgical intervention. Gilot et al. reported the incidence of radiographic aseptic loosening of the humeral component in RTSA when comparing cemented and press-fit groups: no loosening occurred in the press-fit group, and no statistically significant difference was found in humeral stem loosening [24]. Wiater et al. reported the clinical and radiographic results of cementless RTSA.
They concluded that there was no significant difference, clinically or radiographically, between the cemented and cementless groups, and mentioned several advantages of cementless fixation, including no risk of cement-related complications, decreased operative time, a simplified operative technique, and greater ease of revision [10]. Bogle et al. reported that cementless trabecular metal porous-coated implants for RTSA are associated with secure glenoid fixation and minimal radiographic evidence of humeral stem loosening or subsidence at short-term follow-up [25]. Additionally, trabecular metal (TM) porous-coated ingrowth implants have shown good results and reliability in total hip arthroplasty and have the potential to provide stable long-term fixation in the shoulder [10,26]. Because of these advantages, cementless RTSA could be a good option for a surgeon who is still becoming familiar with the operation. Sershon et al. reported a 14% complication rate, including 3 revisions within 4 years, after reverse shoulder replacement of 36 shoulders, with a total survival rate of 91% in patients with a mean age of 54 years [27]. Sirveaux et al. reported that survivorship of the prosthesis was 88% (84 to 92) at 5 years, 71.9% (63 to 81) at 7 years, and 28.8% (7 to 50) at 8 years postoperatively [16]. Previous studies have shown higher complication rates and longer hospital stays for shoulder arthroplasties performed by less experienced surgeons [28,29]. Rockwood et al. emphasized that only an experienced shoulder surgeon can successfully perform the procedure and be aware of alternative procedures such as hemiarthroplasty [30]. Numerous studies suggest that surgeon experience can affect pre- or postoperative clinical results [1,28,31]. Wierks et al. reported that the learning curve for an experienced shoulder surgeon appeared to be seven patients, after which the complication rate decreased, and that there were more complications in the first ten procedures performed than in the second ten [8]. In the currently published literature [8,14,32], the learning curve has been described only by comparing the complication rates of earlier and later reverse total shoulder arthroplasties to identify the point at which the complication rate decreases. In this article, we analyzed 38 cases of reverse total shoulder arthroplasty and attempted to define not only the complication-based learning curve but also the operation time-based learning curve. For the complication-based learning curve, we identified the point at which the complication rate decreased across the series of 38 cases. The overall complication rate was 15.7% (6 of 38 patients). Only 2 of the 6 complications occurred intraoperatively, within the first 20 shoulders, and 4 occurred at least 2 months postoperatively; however, there were no discernible patterns among the intraoperative and postoperative complications. For the operation time-based learning curve, operation time stabilized and decreased significantly after the 15th RTSA. In summary, the intraoperative complication-based learning curve and the operation time-based learning curve verified by our study are 20 cases and 15 cases, respectively.

[Fig. 3: Operation time for each of the 38 consecutive reverse total shoulder arthroplasties. Cutoff points are shown at every three shoulders; operation time stabilized and decreased significantly after the 15th RTSA. No discernible pattern was seen among cases with intraoperative or postoperative complications.]

This study has several limitations. First, the sample size was small. Second, the follow-up period is quite short (24 months), and the complication rate might increase with time; previous studies with long-term follow-up showed increased complication rates. Third, the study was conducted using only the Comprehensive® reverse shoulder system (Biomet Inc., Warsaw, IN, USA) with a cementless humeral component, and it has not been compared to its contemporaries and their survivorship, nor to cemented RTSA systems. Lastly, we did not compare the clinical outcomes and complications of RTSA between procedures performed during the learning curve phase and the expert phase.

Conclusion
The short-term follow-up of cementless RTSA showed satisfactory early clinical and functional outcomes; however, given the relatively high complication rate, further study with long-term follow-up is required. The orthopedic surgeon must be cautious when deciding to perform RTSA unless he or she is familiar with the anatomy and function of the shoulder. Acquired experience will help surgeons refine patient selection with greater confidence in the procedure and decrease operation time, yielding more satisfactory and promising clinical outcomes.
AR-Signaling in Human Malignancies: Prostate Cancer and Beyond

In the 1940s Charles Huggins reported remarkable palliative benefits following surgical castration in men with advanced prostate cancer, and since then the androgen receptor (AR) has remained the main therapeutic target in this disease. Over the past couple of decades, our understanding of AR-signaling biology has dramatically improved, and it has become apparent that the AR can modulate a number of other well-described oncogenic signaling pathways. Not surprisingly, mounting preclinical and epidemiologic data now support a role for AR-signaling in promoting the growth and progression of several cancers other than prostate, and early phase clinical trials have documented preliminary signs of efficacy when AR-signaling inhibitors are used in several of these malignancies. In this article, we provide an overview of the evidence supporting the use of AR-directed therapies in prostate as well as other cancers, with an emphasis on the rationale for targeting AR-signaling across tumor types.

AR Targeting in Prostate Cancer
In 1941, Charles Huggins published his seminal paper describing the remarkable palliative effects of surgical castration in men with advanced prostate cancer [15]. We now understand that the beneficial effects of castrating therapy are a direct result of inhibiting AR-signaling, and as such targeting the AR has remained the backbone of prostate cancer therapy since the 1940s. As it stands, androgen deprivation therapy (ADT) is most often achieved through the use of luteinizing hormone releasing hormone (LHRH) agonists/antagonists as opposed to surgical castration; however, both achieve the same effect of lowering testosterone levels to the castrate range (i.e., <20-50 ng/dL) [16]. While ADT is initially highly effective, it does not represent a cure, and the vast majority of men with advanced prostate cancer will progress on ADT, developing castration-resistant prostate cancer (CRPC) [17,18]. Work over the last decade has shown that the AR remains a viable therapeutic target even in the castration-resistant setting. This was borne out of the observation that AR target genes (e.g., PSA) are often expressed at high levels in patients with CRPC, and that expression of AR goes up in response to ADT [19,20]. It has also come to light that alternative sources of androgens, including those generated intratumorally, may drive tumor growth in this setting [21,22]. As such, a number of next-generation AR-directed therapies have been developed to further inhibit AR-signaling, with abiraterone and enzalutamide both approved on the basis of Phase III data demonstrating improved overall survival compared to controls [23-27]. Abiraterone is a CYP17 inhibitor that targets extragonadal androgen biosynthesis in the tumor microenvironment and adrenal glands. Enzalutamide is an AR antagonist that is more effective than the first-generation non-steroidal antiandrogens (e.g., bicalutamide, nilutamide). Because both of these agents target the ligand-AR interaction (abiraterone through ligand depletion and enzalutamide through antagonizing the AR ligand-binding domain), it is not surprising that numerous groups have documented evidence of cross-resistance between these drugs [28-35].

AR in Breast Cancer
Like prostate cancer, breast cancer is a hormonally regulated malignancy.
Indeed, shortly following the discovery that surgical castration was effective in men with advanced prostate cancer, Charles Huggins began exploring oophorectomy and adrenalectomy (with hormone replacement) as treatments for advanced breast cancer [53]. It is worth noting, however, that the German surgeon Albert Schinzinger was first credited with proposing oophorectomy as a treatment for breast cancer in the late 19th century [54]. While most hormonal-based therapies for breast cancer involve inhibiting estrogen receptor (ER)-signaling in hormone receptor positive subtypes, it has recently come to light that AR-signaling is likely an important modulator of breast cancer cell survival and may also be a viable target [55,56]. Several lines of clinical data support the biologic importance of AR-signaling in breast cancer, although AR positivity has been found to have variable prognostic impact across studies. Vera-Badillo et al. conducted a systematic review of 19 studies that assessed AR immunohistochemistry (IHC) in 7693 patients with early stage breast cancer and found AR staining present in 60.5% of patients; interestingly, AR positivity was associated with improved overall survival (OS) [57]. The authors also found that AR positivity was more common in ER positive compared to ER negative tumors (74.8% vs. 31.8%, p < 0.001). However, it should be noted that the AR antibodies used across studies were not consistent, nor was the cutoff defining "positivity," making it difficult to draw firm conclusions regarding the overall prevalence of AR positivity across breast cancer subtypes. Another study, analyzing AR expression from tissue microarrays (TMAs) of 931 patients, reported that 58.1% stained positive for AR and that the association of AR with improved OS was only true for patients with ER positive tumors [58]. Apocrine tumors (ER negative, AR positive) with HER2 positivity were associated with poorer survival, while AR did not appear to impact OS in triple negative breast cancer (TNBC) cases. A study by Choi and colleagues, which focused specifically on TNBCs (n = 559), found that AR was expressed in 17.7% of these cases and that AR positivity was a negative prognostic feature. Two subsequent meta-analyses found that AR expression was associated with better outcomes across tumor subtypes (i.e., ER positive, ER negative, and TNBC), however [59,60].

Targeting AR in Breast Cancer
As mentioned, AR and ER are both nuclear hormone transcription factors and share a number of similar biologic features [55]. Upon binding their respective ligands, they undergo conformational changes, dissociate from heat shock proteins, dimerize, and bind to DNA response elements where they promote transcription of target genes [3,61]. A number of studies have documented mechanisms whereby crosstalk between AR and ER exists, with most evidence supporting a model in which AR inhibits ER signaling through a variety of mechanisms, providing a biological basis for why AR positivity may be associated with improved outcomes in ER positive breast cancers. AR is able to compete with ER for binding at estrogen response elements (EREs), and transfection of MDA-MB-231 breast cancer cells with the AR DNA binding domain has been shown to inhibit ER activity [13]. Because the transcriptional machinery of both ER and AR involves a number of shared coactivator proteins, AR also likely inhibits ER activity by competing for binding of these cofactors [62,63].
Interestingly, there is also evidence that AR and ER can directly interact, with the AR N-terminal domain binding to the ERα ligand binding domain, leading to decreased ERα transactivation [64]. The biologic action of AR in ER-negative breast cancers may differ significantly. AR is expressed in 12% to 36% of TNBCs, and in contrast to ER-positive breast cancers, data suggest that AR may be able to drive progression in some ER-negative cell lines [65-71]. Supporting the biologic importance of AR, and its viability as a therapeutic target, preclinical data have shown that AR antagonists (e.g., bicalutamide, enzalutamide) exert an anti-tumor effect in a number of ER-negative breast cancer models [65,67,72]. AR-positive TNBCs are generally referred to as molecular apocrine tumors; however, more recent work has defined TNBCs on the basis of their molecular phenotype [73,74]. Work by Lehmann and colleagues has defined six subtypes of TNBC on the basis of their gene expression profiles: basal-like 1 and 2, immunomodulatory, mesenchymal, mesenchymal stem-like, and luminal androgen receptor (LAR) [74]. Interestingly, in spite of being ER-negative, the LAR subtype shares a gene expression signature similar to that of the luminal, ER-positive breast cancers. Chromatin immunoprecipitation (ChIP)-sequencing studies demonstrate that AR-binding events are similar to those of ERα in ER-positive breast cancer cell lines, indicating that AR may be able to substitute for ER in this context [14]. It should be noted that, in addition to LAR tumors, other ER-negative, AR-positive breast cancer subtypes are sensitive to the effects of androgens [65,67]. Ni and colleagues have shown that in HER2-positive, ER-negative cell lines, AR mediates activation of Wnt and HER2 signaling in a ligand-dependent manner [67]. Further speaking to the importance of AR across breast cancer subtypes, Barton and colleagues reported that the next-generation AR antagonist enzalutamide is effective in several non-LAR TNBC subtypes. Interestingly, it has been shown that constitutively active AR splice variants (AR-Vs), a well-described resistance mechanism in prostate cancer, are present in a large subset of breast cancer tumors, and that treatment of MDA-MB-453 cells (ER/PR-negative, HER2-negative, AR-positive) with enzalutamide can lead to the induction of AR-Vs [75]. The fact that a well-known resistance mechanism to AR-directed therapy appears relevant to breast cancer provides further support for the importance of AR-signaling in this disease.

Clinical Trials Targeting AR-Signaling in Breast Cancer
Early clinical data reported by Gucalp and colleagues supported AR as a therapeutic target in AR-positive, ER-negative/PR-negative breast cancers [76]. They conducted a single-arm, Phase II study testing bicalutamide 150 mg daily in patients with >10% nuclear AR staining. The primary endpoint was the clinical benefit rate (CBR), defined as complete response (CR), partial response (PR), or stable disease for >6 months. Overall, 51 of 424 (12%) screened patients were AR-positive as defined by the study. Twenty-eight patients were treated per protocol, with only 26 being evaluable for the primary endpoint. The study reported a clinical benefit in five patients (all with stable disease), which exceeded the predefined threshold (CBR in 4 of 28 patients) needed to justify further study. A single-arm Phase II study testing enzalutamide in AR-positive TNBCs was more recently reported [77].
The primary endpoint was the CBR in "evaluable" patients, defined as those with ≥10% AR staining and a response assessment. After testing 404 patient samples, 55% were found to have AR staining in ≥10% of cells. In total, 118 patients were treated with enzalutamide, and 75 were "evaluable." Of the evaluable patients, the CBR at 16 and 24 weeks was 35% and 29%, respectively. The median progression free survival (PFS) in this group was 14 weeks. In patients with an AR gene signature (n = 56), clinical outcomes were numerically improved compared to the overall "evaluable" group and to those lacking the gene signature (n = 62), suggesting that further refinement of predictive biomarkers beyond AR IHC is necessary. Abiraterone, an inhibitor of extragonadal androgen biosynthesis, has also been tested in breast cancer [78]. In a randomized Phase II trial, abiraterone was compared to the aromatase inhibitor exemestane and to the combination of the two. In contrast to the aforementioned studies, this study focused on ER-positive patients and did not require positive AR staining for enrollment. The authors cited two reasons for not mandating AR-positivity: (1) upwards of 80% of ER-positive breast cancers are also positive for AR; and (2) inhibition of CYP17 will also decrease estrogen levels. The primary endpoint was PFS. A total of 297 patients were randomized between treatment arms, with 102 receiving exemestane, 106 receiving exemestane plus abiraterone, and 89 receiving abiraterone. Of note, enrollment to the abiraterone monotherapy arm was discontinued early after a pre-specified analysis determined that futility conditions had been met. After a median follow-up of 11.4 months, there was no difference in median PFS when abiraterone was compared to exemestane (3.7 vs. 3.7 months, p = 0.437), or when abiraterone plus exemestane was compared to exemestane (4.5 vs. 3.7 months, p = 0.794). Of note, there was also no difference in PFS in the subset of patients with AR-positive disease. Given that some studies have shown signs of activity for AR-signaling inhibitors, a number of additional trials are either planned or underway testing AR-directed therapies in breast cancer patients (Table 1). However, it seems likely that these agents will only be effective in a subset of patients, and as such, the development of predictive biomarkers will be critical. Whether the AR will prove to be a clinically important target in breast cancer remains to be seen, but the evidence to date does support further testing of drugs designed to inhibit this oncogenic pathway.

Other Tumor Types
In addition to prostate and breast cancer, there are a number of other malignancies in which AR-signaling appears to play a role in driving tumor growth. As such, there are several ongoing clinical trials testing AR-directed therapies across an array of cancer types (Table 2). A brief overview of the rationale for targeting AR in these malignancies is provided below.

Bladder Cancer
In 2016, it was estimated that 58,950 American men would be diagnosed with bladder cancer, compared to only 18,010 women [79]. Even after controlling for environmental risk factors (e.g., tobacco exposure), men still have a 3-4-fold increased risk of developing bladder cancer [80-82]. The observed epidemiologic differences in bladder cancer risk between the sexes point to the potential for sex steroid pathways to play a role in the pathogenesis of this disease [83].
Women have also been found to have a worse prognosis than men after adjusting for stage at presentation, further bolstering the case that underlying biologic differences between the sexes influence outcomes [84]. Androgen receptor has been found to be variably expressed in urothelial carcinoma specimens, with AR staining present in 12% to 77% of patients [85-89]. In general, AR expression appears comparable in men and women [85,86]. There is no clear relationship between AR expression and clinical outcomes, and gene expression profiling studies do not demonstrate a clear relationship between AR expression levels and The Cancer Genome Atlas (TCGA) subtype [86,90,91]. Preclinical studies evaluating the effect of androgens and AR-signaling on urothelial carcinoma tumorigenesis have found that AR-signaling may promote tumor formation. In vitro siRNA studies have found that AR knockdown can lead to decreased tumor cell proliferation and increased apoptosis, possibly mediated through AR's effect on cyclin D1, Bcl-x(L), and MMP-9 gene expression [92]. In a separate set of experiments, mice engineered not to express AR in urothelial cells were found to have a lower incidence of bladder cancer following exposure to the carcinogen BBN [N-butyl-N-(4-hydroxybutyl)-nitrosamine] [93]. In vitro experiments found that this effect may be due to modulation of p53 and DNA damage repair. Studies have also implicated AR in modulating various other oncogenic signaling pathways (e.g., EGFR, ERBB2, β-catenin), offering more evidence for the importance of AR-signaling as it pertains to bladder cancer biology [94,95]. Kawahara and colleagues recently published a paper describing a series of in vitro and in vivo experiments in AR-positive and AR-null bladder cancer models [96]. They found that DHT increased the viability and migration of AR-positive bladder cancer cell lines in culture, while AR antagonists (i.e., hydroxyflutamide, bicalutamide, and enzalutamide) inhibited viability and migration. Similarly, apoptosis was decreased following exposure to DHT, and anti-androgens had the opposite effect. Importantly, enzalutamide was found to inhibit AR-positive bladder cancer xenograft growth in vivo. On the basis of these findings, two clinical trials have opened to test enzalutamide in patients with bladder cancer. One is testing enzalutamide monotherapy as a chemoprevention strategy in patients with non-muscle invasive bladder cancer [clinicaltrials.gov: NCT02605863], and the other is testing it in patients with advanced bladder cancer in combination with gemcitabine plus cisplatin [clinicaltrials.gov: NCT02300610].

Renal Cell Carcinoma
Androgen receptor is expressed in the distal and proximal tubules of normal kidneys and in approximately 15% to 42% of renal cell carcinomas (RCC) [97-99]. IHC studies correlating AR expression with clinical outcomes have not been consistent, with some reporting an association with decreased survival, while others have found that AR expression correlated with a favorable pathologic stage and an overall favorable prognosis [97,100,101]. In a study evaluating AR transcript levels using real-time PCR, AR mRNA expression levels were found to correlate with pathologic T stage and cancer-specific survival. Multivariate regression analysis found that AR transcript levels were independently associated with cancer-specific survival. Of note, AR mRNA levels did not differ between sexes.
A more recent analysis of the TCGA data revealed that high AR protein and transcript levels were associated with improved overall survival in patients with clear cell RCC (the most common pathologic subtype), but not in other histologic subtypes of RCC (i.e., papillary or chromophobe) [102]. Interestingly, in clear cell RCC cases they found that AR mRNA expression did not differ between men and women, but that AR protein expression was significantly higher in men. The authors concluded that AR might function as a tumor suppressor in this context. In vitro experiments have reported that exposure to DHT causes proliferation in AR-positive RCC cells, while enzalutamide can reduce cell viability [103]. Other groups have found that AR may mediate tumor growth through activating HIF-2α/VEGF-signaling [104]. Preclinical studies have shown that enzalutamide can inhibit RCC cell migration and invasion by modulating HIF-2α/VEGF expression at the mRNA and protein levels. A neoadjuvant pilot study testing enzalutamide in RCC patients is currently underway, with the primary goal of determining the effects of enzalutamide on RCC apoptosis and cellular proliferation [clinicaltrials.gov: NCT02885649].

Pancreatic Cancer
Although the incidence of AR expression is not well defined in pancreatic cancer, AR does appear to be expressed [105]. A number of in vitro/in vivo studies have tested the effects of antiandrogens and/or androgen deprivation in pancreatic cancer models and have, for the most part, shown that inhibiting AR-signaling exerts an anti-tumor effect [106-113]. Preclinical work has demonstrated that this effect may be mediated through IL-6, with a model whereby IL-6 activates AR-signaling via STAT3 and MAPK. Importantly, IL-6 has been shown to enhance pancreatic cell migration, an effect that is blocked through AR knockdown with an AR siRNA [114]. Greenway reported the results of a randomized trial comparing flutamide (a non-steroidal antiandrogen) vs. placebo (n = 49) in patients with both localized and metastatic pancreatic cancer [115]. It should be noted that histologic confirmation of pancreatic cancer was not required, and 32 included subjects were diagnosed on the basis of clinical presentation/imaging studies. This trial reported a median survival of 226 vs. 120 days in the flutamide and placebo groups, respectively (p = 0.079, Wilcoxon; p = 0.01, log-rank). Several other studies in patients with pancreatic cancer have not shown hormonal therapies to be beneficial, however [116-121]. Preliminary results from an ongoing Phase I study testing enzalutamide in combination with gemcitabine and nab-paclitaxel in patients with metastatic pancreatic cancer have recently been reported [122]. The investigators have treated 19 patients and report that 37% had tumor tissue positive for AR. Among 15 evaluable patients, two had a partial response and 13 had stable disease. Pharmacokinetic (PK) analyses did not find any evidence that enzalutamide altered the PK of either chemotherapeutic agent. Whether enzalutamide will prove to be an effective treatment for pancreatic cancer remains to be seen.

Hepatocellular Carcinoma
Androgen receptor appears to be expressed in a subset of hepatocellular carcinomas (HCC), although, as in pancreatic cancer, the incidence has not been well defined [123-126].
The majority of studies show that AR-positivity is associated with worse outcomes, including decreased progression-free and overall survival as well as increased tumor size [126][127][128][129]. Studies have also linked AR-signaling with an increased risk of developing hepatitis B- and C-related HCC [130][131][132][133]. AR has been found to promote HCC growth, migration and invasion in several preclinical studies, possibly through increasing oxidative stress and DNA damage, as well as suppressing p53 [134][135][136]. In vitro and in vivo studies targeting AR with either AR-siRNA or ASC-J9 (an AR protein degrader) resulted in decreased tumor growth [134]. A randomized Phase II study testing enzalutamide vs. placebo in HCC is currently underway [clinicaltrials.gov: NCT02528643].

Ovarian Cancer

In 1998, Risch hypothesized that epithelial ovarian cancers may develop as a result of androgens stimulating epithelial cell proliferation, and as it stands, a number of lines of evidence support a role for AR-signaling in the pathogenesis of the disease [137,138]. AR is highly expressed in ovarian cancers, with approximately 44% to 82% of tumors staining positive for AR [139][140][141]. Polycystic ovarian syndrome (PCOS), and its resultant hyperandrogenic state, are associated with hyperplastic and metaplastic changes in the surface epithelium of the ovaries, and women with ovarian cancer are more likely to have a history of PCOS compared to controls [142,143]. The use of exogenous androgens (i.e., danazol, testosterone) has been associated with a >3-fold increased risk of developing ovarian cancer [144]. Preclinical models also support the hypothesis that androgens play a role in the development of epithelial ovarian cancers, with a number of oncogenic signaling pathways implicated in this process (e.g., TGF-β, IL-6/IL-8, EGFR) [138][145][146][147]. However, as it stands, the prognostic impact of AR expression in epithelial ovarian cancers is not clear [138]. A handful of clinical trials testing AR-signaling inhibitors in women with ovarian cancer have been completed, with no clear signs of activity. A single-arm Phase II study testing flutamide in ovarian cancer patients progressing on platinum chemotherapy has previously been reported [148]. Out of 68 women enrolled, only two objective responses (one complete and one partial response) were observed. In a second single-arm Phase II study, flutamide was given to 24 ovarian cancer patients who had failed chemotherapy, and only one partial response was observed [149]. Finally, in a single-arm Phase II study, Levine and colleagues treated 35 women with ovarian cancer who were in second or greater complete remission with bicalutamide and goserelin (an LHRH agonist) [150]. This trial failed to meet the pre-specified metric to justify further studies testing this regimen, which was arbitrarily set at a median PFS >13.5 months. More recent preclinical work has shown that enzalutamide is able to significantly inhibit the growth of ovarian cancer xenografts [151]. On this basis, a Phase II study has been launched to test enzalutamide in women with AR-positive, advanced ovarian cancer [clinicaltrials.gov: NCT01974765].

Endometrial Cancer

Similar to prostate and breast cancer, endometrial cancers are hormonally dependent, and hormonal agents targeting ER-/PR-signaling are options for select patients [152].
Given the similarities to breast and prostate cancer, Tangen and colleagues sought to explore the potential for targeting AR-signaling in advanced endometrial cancer [153]. They found that the majority of hyperplastic endometrial specimens evaluated (93%) had evidence of AR expression. This number decreased in primary tumors, and high-grade tumors (i.e., grade 3) were found to express less AR than low-grade tumors (i.e., grade 1) (53% vs. 74%). Metastatic specimens from 142 patients revealed AR expression in 48% of samples. On multivariate analyses, however, AR status did not provide additional prognostic value. Short-term cell culture experiments demonstrated that cell proliferation was inhibited by enzalutamide and stimulated by the synthetic androgen R1881, providing justification for a Phase II study testing enzalutamide in combination with carboplatin and paclitaxel [clinicaltrials.gov: NCT02684227].

Mantle Cell Lymphoma

Mantle cell lymphoma shows a male predominance, and interestingly, male sex appears to be associated with higher mortality based on a retrospective SEER analysis [154]. While it is not clear what underlies the poor outcomes in men with mantle cell lymphoma, AR is expressed across an array of hematopoietic cells and may account for sex differences in the function of platelets and the immune system [155][156][157]. Furthermore, in contrast to other lymphomas, AR appears to be hypomethylated in mantle cell lymphoma, indicating that epigenetic silencing of AR gene expression may not be present in this disease [158,159]. To our knowledge, large studies examining AR protein expression in mantle cell lymphoma samples have not been conducted. On the basis of these observations, a pilot study was recently launched to assess the clinical effects of enzalutamide in patients with mantle cell lymphoma [clinicaltrials.gov: NCT02489123].

Salivary Gland Cancer

AR is expressed in the majority of salivary gland ductal carcinomas, and as a result AR staining is often used as part of the workup to confirm the diagnosis [160][161][162][163][164][165][166]. To date, there have been a handful of case reports/series documenting favorable outcomes in patients with salivary gland cancers treated with AR-directed therapies. A small case series (n = 10) reported a clinical benefit when ADT (most often single-agent bicalutamide) was given to patients with salivary ductal carcinoma, with 50% of patients experiencing clinical benefit (i.e., stable disease, n = 3; partial response, n = 2) [167]. A case report has also described favorable outcomes when ADT was combined with radiation therapy in a patient with AR-positive salivary gland cancer [168]. A single-arm Phase II study testing enzalutamide in AR-positive salivary gland cancers is ongoing [clinicaltrials.gov: NCT02749903].

Conclusions

AR signaling is involved in a number of normal physiologic processes, and there are varying levels of evidence for its role in promoting cancer growth and progression across an array of malignancies. To date, prostate cancer remains the only malignancy with Level 1 evidence supporting the use of AR-directed therapies as an integral part of its treatment paradigm. However, mounting preclinical, epidemiologic and early phase clinical trial data support the further exploration of these drugs in diseases as varied as breast and salivary gland cancers, and it is likely that in the ensuing decade next-generation AR-directed drugs will extend their reach beyond prostate cancer.
Short cervical lengths initially detected in mid-trimester and early in the third trimester in asymptomatic twin gestations: Association with histologic chorioamnionitis and preterm birth

Objective

To determine whether short cervical lengths (≤20 mm) that were initially detected in mid-trimester and early in the third trimester are independently associated with increased risks of subsequent histologic chorioamnionitis and spontaneous preterm birth (SPTB, defined as delivery before 34 weeks) in asymptomatic women with twin pregnancies.

Material and methods

This was a prospective study including 292 consecutive asymptomatic women with twin gestations. Cervical length measurements were carried out at 20 to 24 weeks' gestation and at 28 to 32 weeks' gestation. Both placentas of each twin pair were examined histologically after delivery. Generalized estimating equations (GEE) models and logistic regression analysis were used for the statistical analyses.

Results

Multivariable GEE analysis revealed that a short cervical length at mid-trimester was independently associated with an increased risk of subsequent histologic chorioamnionitis, whereas a short cervical length initially detected early in the third trimester was not. Using the likelihood of SPTB as the outcome variable, multivariable logistic regression analysis indicated that a short mid-trimester cervical length and histologic chorioamnionitis were independently associated with a greater risk of SPTB. Similarly, on multivariable analysis, a short third-trimester cervical length was independently and significantly associated with a greater risk of SPTB.

Conclusions

In asymptomatic women with twin pregnancies, a short mid-trimester cervical length is independently associated with an increased risk of both subsequent histologic chorioamnionitis and SPTB, whereas a short cervical length initially detected early in the third trimester is independently associated with preterm delivery, but not with subsequent histologic chorioamnionitis.

Introduction

Twin pregnancies have increased during the last decade, reaching 33.2 per 1000 births in 2009, and carry a six- to ten-fold increased risk of preterm birth compared with singleton pregnancies [1]. Although the causes of preterm birth are multiple, in the context of twin pregnancies uterine overdistention and intra-uterine infection/inflammation are traditionally recognized as the major potential mechanisms of preterm birth [2,3]. From a clinical perspective, ultrasonographic assessment of cervical length can be utilized as an effective tool that simultaneously reflects these two mechanisms of preterm delivery [4][5][6][7]. An association between a shortened cervical length at 20 to 24 weeks' gestation and preterm delivery has been established in asymptomatic women with twin pregnancies [8,9]. A short cervical length at mid-trimester is also associated with an increased risk of intra-uterine infection/inflammation [5][6][7][10]. However, it is unclear whether the increased risk of preterm delivery associated with a short mid-trimester cervical length in asymptomatic twins is related to the presence of intra-uterine infection/inflammation, as intra-uterine infection/inflammation also carries a higher risk of preterm delivery [10,11].
On the other hand, several studies of asymptomatic twin pregnancies have highlighted that the relationship between a short cervical length and the risk of preterm delivery is affected by the gestational age at which the cervical length is measured: the lower the gestational age at the diagnosis of a short cervix, the higher the risk of preterm birth [12,13]. In contrast to a short cervical length at mid-trimester, the effect of a short cervical length initially detected early in the third trimester on intra-uterine infection/inflammation and preterm delivery is unknown, despite the fact that cervical shortening in the third trimester is a relatively common finding in twin gestations [14]. Indeed, a short cervical length initially detected early in the third trimester is generally considered a normal physiologic change during pregnancy without clinical implications, although evidence is lacking. The purpose of the study was to determine whether short cervical lengths that were initially detected in the mid-trimester and early in the third trimester are independently associated with increased risks of subsequent histologic chorioamnionitis and spontaneous preterm birth (SPTB) in asymptomatic women with twin pregnancies.

Materials and methods

This study was a single-center prospective cohort study conducted at Seoul National University Bundang Hospital (Seongnam-si, Korea) between June 2008 and January 2015. The institutional review board (IRB) of the Seoul National University Bundang Hospital approved this study (project No. B-0804/056-001) and written informed consent was obtained from all study subjects. Women with twin pregnancies who attended routine antenatal clinics to undergo an anomaly scan and cervical length measurements were consecutively recruited into the study at 20 to 24 weeks of gestation. Women were excluded for the following conditions: singleton and triplet or higher-order multiple pregnancy; prior or subsequent cervical cerclage; loss to follow-up after the mid-trimester cervical length measurements; symptomatic preterm labor; preterm premature rupture of membranes; major congenital anomalies; and fetal death. The women underwent an initial cervical length measurement at the time of the routine ultrasound examination between 20 and 24 weeks' gestation, with follow-up measurements every 4 weeks until 28 weeks and then every 2 weeks until 32 weeks. Except for the women who delivered before 28 weeks' gestation, cervical length measurements were carried out during each of the following time periods: 20 to 24 weeks' gestation and 28 to 32 weeks' gestation. The primary outcome measures were histologic chorioamnionitis and SPTB at <34 weeks' gestation. Additionally, we analyzed the data for SPTB at <32 weeks' gestation. Transvaginal ultrasonography to measure cervical length was performed by Maternal-Fetal Medicine faculty or fellows using either a Voluson 730 Expert (GE Healthcare, Milwaukee, WI, USA) or an Aloka SSD 5500 (Aloka Co. Ltd., Tokyo, Japan) ultrasound machine equipped with a 6.0-MHz transducer. A detailed description of the cervical length measurements was published elsewhere [15]. Cervical length was measured by placing the electronic markers at the furthest points between the internal os and the external os, measuring it as a straight line. The shortest of three measurements obtained was taken as the cervical length. The women and their responsible obstetricians were not blinded to the cervical length measurements, and any interventions were instituted at the obstetricians' discretion.
Immediately after delivery, the placentas were collected and labeled as Ⅰ (one cord clamp) or Ⅱ (no cord clamp) according to birth order. The placental specimens were processed in the pathology department according to the protocol of the College of American Pathologists [16]. Both placentas of each twin pair were used in the histopathologic analysis. After the maternal and fetal surfaces were grossly examined, the placental plate was sectioned at 1-cm intervals and examined for any focal lesions. A full-thickness section of the placenta (including the maternal and fetal surfaces) was taken from the mid-zone of the placenta for further histological evaluation. A membrane roll was sampled, from the point of rupture to the edge of the placental disc, and a section of the umbilical cord (which was also sectioned at 1-cm intervals) was submitted for histologic evaluation. Consequently, specimens for microscopic examination were taken for each case: one from the umbilical cord, one from a roll of fetal membranes, and one from a full-thickness section of the placental disk parenchyma. Additional specimens were taken from any gross lesions. The presence of acute inflammation was noted and classified in each placenta as grade 1 or 2 according to previously published criteria [17]. Acute histologic chorioamnionitis was defined as the presence of acute inflammatory change in any tissue sample (amnion, chorion-decidua, umbilical cord, or chorionic plate). Funisitis was diagnosed by the presence of neutrophil infiltration into the umbilical vessel walls or Wharton's jelly. For each placenta, the total grade of histologic chorioamnionitis was calculated as the sum of the histologic grades in the amnion (0-2), chorion-decidua (0-2), umbilical cord (0-2), and chorionic plate (0-2). Clinical chorioamnionitis was defined according to the criteria proposed by Gibbs et al. [18]. A short cervical length was defined as a cervical length of ≤20 mm. When multiple scans were performed on a subject, the shortest cervical length was used for defining a short cervical length in the study. SPTB was defined as delivery before 34 weeks' gestation after spontaneous onset of preterm labor or premature rupture of membranes. The early third trimester was defined as 28 to 32 weeks of gestation. Statistical analyses were performed using SPSS version 22.0 for Windows (IBM SPSS Statistics, Chicago, IL, USA). The Shapiro-Wilk test was used to assess whether the data were normally distributed. Comparisons of continuous variables were performed with Student's t test or the Mann-Whitney U test, and proportions were compared with the χ2-test or Fisher's exact test, as appropriate. For determining the associations of histologic chorioamnionitis and funisitis with the explanatory variable (i.e., short cervical length), we used a generalized estimating equations (GEE) model to account for correlated binary responses from the same twin pair; thereafter, a multivariable GEE model was used to examine the relationship of histologic chorioamnionitis and funisitis to short cervical length after adjusting for baseline variables. On the other hand, the usual uncorrelated logistic regression analysis was performed to assess the association between pregnancy characteristics that did not produce cluster-correlated data (gestational age at measurement, short third-trimester cervical length, and IVF) and SPTB. A minimal sketch of how such a cluster-correlated model can be fit is shown below.
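The following sketch, with assumed column names and illustrative values (not the study's dataset), fits a GEE logistic model with an exchangeable correlation structure to account for the two correlated twins per mother:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative rows only: one row per twin, clustered by mother_id.
df = pd.DataFrame({
    "mother_id":        [1, 1, 2, 2, 3, 3, 4, 4],
    "chorioamnionitis": [1, 0, 0, 0, 1, 1, 0, 0],
    "short_cl_mid":     [1, 1, 0, 0, 1, 1, 0, 0],
    "ga_at_scan":       [21.0, 21.0, 22.3, 22.3, 20.6, 20.6, 23.1, 23.1],
    "bmi":              [24.5, 24.5, 28.1, 28.1, 31.0, 31.0, 22.9, 22.9],
})

model = smf.gee(
    "chorioamnionitis ~ short_cl_mid + ga_at_scan + bmi",
    groups="mother_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())  # exponentiate coefficients to obtain odds ratios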
In these analyses, wherein a single outcome is measured in the same subject, acute histologic chorioamnionitis and funisitis were entered as their presence in either or both twins, respectively. Only variables with P values of <0.1 in the univariate analysis were entered in the multivariable analysis. All statistical analyses were performed using a two-sided test with a significance level of 0.05.

Results

During the study period, a total of 378 consecutive women with twin pregnancies were recruited at 20 to 24 weeks' gestation for this study. Of these 378 women, 1 had a huge cervical myoma; 17 underwent cervical cerclage because of a history of cervical incompetence or a short cervix; 61 delivered outside of our hospital and were lost to follow-up; and 2 had an incomplete data set. Five women with medically indicated preterm birth at <34 weeks' gestation were further excluded from the analysis (preeclampsia [n = 4] and twin-twin transfusion syndrome [n = 1]). Thus, 292 women were suitable for evaluating the relationships among a short mid-trimester cervical length, acute inflammatory lesions in the placentas, and SPTB. The mean (SD) cervical length in mid-trimester was 36.1 (8.7) mm at a mean gestational age of 21.3 (1.2) weeks. The cervical length in mid-trimester was ≤20 mm in 12 women (4.1%). Histologic evidence of chorioamnionitis and funisitis was present in 14.0% (41/292) and 3.1% (9/292) of first-born twins and 11.0% (32/292) and 2.7% (8/292) of second-born twins, respectively. The prevalence of histologic chorioamnionitis and funisitis present in either or both twins was 17.5% (51/292) and 4.8% (14/292), respectively. The mean (SD) gestational ages at birth were 33.3 (5.4) weeks for women with histologic chorioamnionitis present in either or both twins and 36.2 (2.0) weeks for those in whom histologic chorioamnionitis was not present in either twin (P < 0.001). Table 1 describes the clinical characteristics of the study population according to the presence or absence of a short mid-trimester cervical length (≤20 mm). Women with short cervical lengths tended to have their cervical lengths measured at a later gestational age, although this did not reach statistical significance (P = 0.067). However, there were no differences in maternal age or in the rates of nulliparity, prior preterm births, chorionicity, and IVF. Women with short cervical lengths delivered significantly earlier, had a significantly higher median body mass index (BMI) at the time of the mid-trimester ultrasound, and had higher risks of SPTB before 32 and 34 weeks' gestation than those with normal cervical lengths. Moreover, in the univariate analyses, a short cervical length at mid-trimester was significantly associated with the development of histologic chorioamnionitis, funisitis, a higher total grade of histologic chorioamnionitis, and clinical chorioamnionitis (histologic chorioamnionitis, OR = 6.790, 95% CI = 2.185-21.096, P = 0.001; funisitis, OR = 8.405, 95% CI = 2.040-34.619, P = 0.003; total grade of histologic chorioamnionitis, OR = 15.739, 95% CI = 3.977-62.288, P < 0.001; GEE models).
These associations remained significant after adjustment for potential confounders, such as gestational age and BMI at the mid-trimester ultrasound (histologic chorioamnionitis, OR = 8.151, 95% CI = 2.568-26.869, P < 0.001; funisitis, OR = 6.303, 95% CI = 1.781-22.306, P = 0.004; total grade of histologic chorioamnionitis, OR = 628.967, 95% CI = 1.061-372888.866, P = 0.048; multivariable GEE models; clinical chorioamnionitis, OR = 31.109, 95% CI = 6.347-152.466, P < 0.001). Using the likelihood of SPTB as the outcome variable, multivariable logistic regression analysis was performed to estimate the independent associations of a short mid-trimester cervical length and histologic chorioamnionitis with SPTB. Only four variables with P < 0.1 shown to be associated with SPTB in the bivariate analysis were included in the multiple logistic regression analysis: short mid-trimester cervical length, clinical chorioamnionitis, histologic chorioamnionitis, and funisitis. As shown in Table 2, a short mid-trimester cervical length and the presence of histologic chorioamnionitis were independently and significantly associated with a greater risk of SPTB at <32 weeks and <34 weeks. To analyze the relationship between a short cervical length (≤20 mm) first detected in the early third trimester, acute inflammatory lesions in the placentas, and SPTB, we further excluded 23 women (12 because of short cervical lengths (≤20 mm) first detected in mid-trimester, 9 because of preterm delivery before 28 weeks, and 2 because of missed follow-up scans). Thus, a total of 269 women were suitable for this evaluation. The prevalence of a cervical length of ≤20 mm first detected in the early third trimester was 28.6% (77/269). The clinical characteristics of the study population stratified according to the presence or absence of short cervical lengths (≤20 mm) first detected in the third trimester are shown in Table 3. There were no differences in demographic and clinical characteristics, except for a lower rate of cesarean delivery in the short third-trimester cervical length group (57% vs. 70%). Also, a short cervical length first detected in the third trimester was not associated with the subsequent risk of clinical chorioamnionitis. Multivariable logistic regression analysis results, including covariates with P < 0.1 shown to be associated with SPTB in the bivariate analysis, are shown in Table 4. A short cervical length first detected in the third trimester was significantly associated with a greater risk of SPTB at <34 weeks after adjusting for potential confounders (i.e., gestational age at the third-trimester ultrasound, BMI at the third-trimester ultrasound, and clinical chorioamnionitis). Table 5 presents the diagnostic indices of short cervical lengths initially detected in the mid-trimester and early in the third trimester for predicting SPTB <34 weeks' gestation, histologic chorioamnionitis, and funisitis. For these outcome variables, a short mid-trimester cervical length showed high specificity and negative predictive value but was not sensitive and had a low positive predictive value.
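As a sketch of how diagnostic indices like those in Table 5 are derived from a two-by-two contingency table, consider the following; the counts are placeholders rather than the study's data:

def diagnostic_indices(tp, fp, fn, tn):
    # Standard screening-test metrics from true/false positives and negatives.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Illustrative counts for a short mid-trimester cervix vs. SPTB <34 weeks.
print(diagnostic_indices(tp=6, fp=6, fn=30, tn=250))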
Discussion

The principal findings of this study are as follows: (1) in asymptomatic women with twin pregnancies, a short mid-trimester cervical length (≤20 mm) is independently associated with an increased risk of subsequent histologic chorioamnionitis and is strongly associated with SPTB, independent of the presence of histologic chorioamnionitis; (2) a short cervical length initially detected early in the third trimester is associated with SPTB, but not with subsequent histologic chorioamnionitis. These findings suggest that the role of a short cervical length in the risk of intra-uterine infection/inflammation may vary according to the gestational age at the time of diagnosis, and that cervical length assessment early in the third trimester, as well as at mid-trimester, is useful for identifying women at the highest risk of SPTB. This is the first study to examine the relationship between short cervical lengths according to trimester of pregnancy, chorioamnionitis, and SPTB [searches of PubMed (January 1966 through July 2016), EMBASE (January 1966 through July 2016), and The Cochrane Library were conducted using the following search terms: 'histologic chorioamnionitis', 'preterm birth', 'twins', and 'short cervix' or 'short cervical length']. We found that, in asymptomatic women with twin pregnancies, women with short mid-trimester cervical lengths were more likely to have subsequent histologic chorioamnionitis than those with normal cervical lengths. This observation is in line with the results of previous studies on multiple pregnancies by Guzman et al. and Pelaez et al. [10,19] and suggests that the association between a short mid-trimester cervical length and the risk of SPTB may be potentially mediated by the present or subsequent development of intra-uterine infection/inflammation. Similarly, with respect to intra-amniotic inflammation, a significant correlation between the degree of cervical shortening and amniotic fluid cytokine levels was previously noted in women with singleton pregnancies and short cervical lengths in the mid-trimester [6]. Indeed, these findings are not unexpected, because a short cervical length may predispose to ascending infection by the vaginal microbial flora, leading to intra-amniotic infection/inflammation and acute inflammatory lesions of the placenta, or vice versa. However, contrary to a short cervical length at mid-trimester, we found a lack of association between a short cervical length first detected early in the third trimester and subsequent histologic chorioamnionitis. This discrepancy is not clearly explained, but may be due to differences in immunity to ascending infection from the lower genital tract according to gestational age; the higher the gestational age, the higher the immunity to the microorganisms in the lower genital tract and the lower the risk of infection (fetoplacental, intra-amniotic, and neonatal infection) [2,17,20]. Previously published data support this view of gestational age-dependent immunity [21]. In line with previously published data [8,9,22,23], a significant association of a short mid-trimester cervical length with SPTB was observed in the current study. We also found a significant association between a short cervical length first detected early in the third trimester and a greater risk of SPTB, which is similar to the findings of Goldenberg et al. and Vayssiere et al. [22,23].
They demonstrated that cervical length data obtained between 26 and 28 weeks correlate with the risk of preterm delivery in twin pregnancies [22,23]. However, in terms of study design, our study differs from both of these studies [22,23] in that we excluded women with short cervical lengths (≤20 mm) at mid-trimester, in order to evaluate the direct effect of a shortened cervical length first detected in the third trimester on chorioamnionitis and SPTB. To date, no effective interventions have been shown to reduce the risk of SPTB in mothers of twins with a short cervix [24]; thus, the implementation of routine cervical length screening may be limited in clinical practice. However, our data and those of other groups [22,23] may provide evidence justifying serial cervical length measurements early in the third trimester in twin pregnancies, as this information can be used in the antenatal preparation (i.e., antenatal corticosteroid administration and transfer of the mother to a tertiary facility) of targeted pregnancies identified as high risk for SPTB. Further studies are needed to investigate whether the subset of women in whom a shortened cervical length is first identified in the early third trimester may also be appropriate candidates for the potential benefits of the interventions (i.e., cervical pessary, progesterone supplementation, and bed rest) that have limited evidence of effectiveness in twins with a mid-trimester short cervix, because the mechanisms of preterm delivery arising from a short cervix could differ between the second and third trimesters. Our finding that a short mid-trimester cervical length in asymptomatic women with twin pregnancies is independently associated with a high likelihood of subsequent clinical chorioamnionitis and funisitis is in accordance with the results of previous studies on singleton gestations [25,26]. These studies demonstrated a significant link between a short mid-trimester cervical length and clinical chorioamnionitis, early-onset sepsis, and neonatal morbidity and mortality [25,26]. Collectively, these findings suggest that a short mid-trimester cervical length may play an important role in the development of both fetal and maternal inflammatory responses, potentially through ascending infection [27,28]. Our finding that a higher BMI at the mid-trimester ultrasound was associated with a short mid-trimester cervical length is consistent with the results of previous studies on singleton gestations [29,30]. Although we cannot fully explain these findings, they may be related to the possibility that a higher BMI induces uterine contractions and cervical ripening, leading to shortening of the cervix, by increasing factors involved in obesity, such as chronic low-grade inflammation and metabolic and hormonal alterations. In fact, several previous studies have reported a significant association between higher maternal BMI and SPTB [31,32]. The current study has several limitations. First, the results of the cervical length measurements were reported to the women and their managing obstetricians, which may have led to the initiation of various clinical practices to reduce preterm birth, although no intervention (i.e., progesterone, cerclage, or bed rest) has been shown to be beneficial in women with twin pregnancies and a short cervix at mid-trimester [24,33,34]. Second, our data were analyzed at a cut-off cervical length of 20 mm to define a short cervical length, which may affect the outcomes.
Although the cut-off values used to define a short cervical length in twin gestations were previously reported to range from 20 to 35 mm, a recent meta-analysis has suggested 20 mm as the best cut-off for the prediction of SPTB at <32 and <34 weeks' gestation [8]. Third, the analysis in the current study was limited by the small number of cases (n = 12) of a short mid-trimester cervical length. The main strengths of our study are 1) the prospective nature of the data collection; 2) the relatively large sample size; 3) the examination of 100% of the placentas, distinguishing the placenta of the first baby from that of the second baby; 4) the application of the GEE approach for correlated binary data to avoid misleading conclusions; and 5) the serial measurement of cervical lengths by transvaginal ultrasound from the second to the third trimester.

Conclusions

Our study has shown that in asymptomatic women with twin pregnancies, a short mid-trimester cervical length is independently associated with an increased risk of both subsequent histologic chorioamnionitis and preterm delivery, whereas a short cervical length initially detected early in the third trimester is independently associated with preterm delivery, but not with subsequent histologic chorioamnionitis.
The impact of geology on the migration of fluorides in mineral waters of the Bukulja and Brajkovac pluton area, Serbia

One of the hydrogeochemical parameters that classify groundwater as mineral water is the content of fluoride ions. Their concentration is both important and limiting for bottled mineral waters. Hydrochemical research on the mineral waters in the surrounding area of the Bukulja and Brajkovac pluton, in central Serbia, was conducted in order to define the chemical composition and genesis of these waters. They are carbonated waters, with contents of fluoride ranging from 0.2 up to 6.6 mg/L. Since the hydrochemical analyses showed variations in the major water chemistry, it was obvious that, apart from the hydrochemical research, some exploration of the structure of the regional terrain would be inevitable. For these purposes, additional geological research was performed, creating an adequate basis for the interpretation of the genesis of these carbonated mineral waters. The results confirmed the significance of the application of hydrochemical methods in the research of mineral waters. The work tended to emphasize that the "technological treatment" for decreasing the concentration of fluoride in mineral waters also occurs in nature, indicating the existence of natural defluoridization.

Key words: fluorides, hydrogeochemistry, mineral waters, Bukulja and Brajkovac granitoid pluton, defluoridization.

Introduction

Research on mineral waters is of great importance due to the wide variety of their utilization and consumption. Some of them are used for balneotherapeutic purposes, others as medicinal waters, or in the form of bottled mineral water. It is significant to know the content of trace elements. Sets of norms and regulations on natural mineral waters define the minimum as well as the maximum allowed values of the content. Fluoride ions have an important place among trace elements; low values cause dental caries, while high values produce dental fluorosis or even skeletal fluorosis. The optimal values are between 0.5 and 1.5 mg/L (FORDYCE). The impact of fluorides on the physiological functions of the human body is manifold. Fluorides affect normal endocrine function, as well as the function of the central nervous system and the immune system (Committee on Fluoride in Drinking Water, US National Research Council 2006). The overall assumption is that the fluoride content in some mineral waters is important because of the hyperactivity of this ion in the biological balance of elements in the human body. As was already mentioned, the emphasis is put on the content of fluoride ions in waters which can be used as bottled mineral waters. In this case, hydrogeochemical methods play an important role within hydrogeological investigations. Namely, defining the hydrogeological conditions favorable for the migration of these ions aids greatly in recognizing the hydrogeological conditions required for the formation of mineral waters with an optimal content of fluoride. Lithology is definitely regarded as one of the key factors for defining the presence of a certain element. This kind of approach allows for the recognition of the main issues of hydrochemistry and hydrogeology, for example mineral water genesis, establishing the conditions and forms of migration of fluoride in groundwater, etc. Based on previous investigations, the basic principles have been defined in reference to the migration of this important trace element in the mineral waters of Serbia (PAPIĆ 1994), and in later hydrochemical investigations attention was paid to the interdependence of lithology and the presence of fluoride in mineral water. Different fluoride-containing minerals are the main sources of fluorides in soil and groundwater (TIRUMALESH 2006; SHAJI 2007).

Methods

Samples of mineral waters were collected during the investigation period in 2010-2011. Water samples were taken from eight representative localities in the area of the Bukulja and Brajkovac granitoid pluton, and 16 physico-chemical parameters were determined in these samples, following standard and official methods of analysis. The groundwater samples were filtered through a 0.4 µm membrane on site. Unstable hydrochemical parameters were measured on site, immediately after collection of the sample, by potentiometry (pH-meter, WTW) and conductometry (EC, WTW). The major anions and fluoride were measured by ion chromatography (IC Dionex ICS 3000 DC). The major cations were determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES, Varian). The Schlumberger water quality analysis software AquaChem and the USGS software Phreeqc were used for processing the hydrogeochemical data. The packages were used for the determination of the mineral saturation indexes and for the construction of charts.

Results

In the following text, eight characteristic localities of mineral waters, with different fluoride contents, are described. They are located in the area of Bukulja Mountain and Brajkovac Village in central Serbia, 60 km south of Belgrade (Fig. 1).
Geology

The region of Bukulja is dominated by a horst structure, in the form of an elongated block that stretches ESE-WNW and can be clearly discerned. It is composed of Paleozoic psammite-pelite sediments, which, due to regional and contact metamorphism, first transformed into sericite schists and phyllite, then into micaschists, and finally into sericite schists and gneisses which form a contact aureole of the Tertiary pluton bodies. The immediate cover of the Bukulja crystalline rock is composed of Cretaceous basal clastic limestones and flysch sediments, which, in the course of the intrusion of the Bukulja granite monzonite and the Brajkovac granodiorite, underwent some contact metamorphic changes.

Hydrogeochemistry

From the hydrochemical viewpoint, there are three types of mineral waters, as indicated on the Durov diagram (Fig. 3 I, II and III).

The first type is sodium hydrogencarbonate water (Čibutkovica, Rudovci, Darosava, Arandjelovac). These are mineral waters (TDS 1.7-3.8 g/L) with a carbon dioxide content of 0.6-1.05 g/L. They have rather high contents of strontium, lithium, silicon and fluoride. The fluoride content ranges from 0.7 to 6.6 mg/L. Among the other macrocomponents, it is worth mentioning the content of calcium ions, which ranges from 60 to 204 mg/L. The values of the genetic coefficient rNa/(rCa+rMg) (where r is the reacting concentration in % eqv.) range from 2.3 to 10. These mineral waters are genetically confined to Paleozoic schists and granite gneisses. The favorable migration of fluorides is promoted by the slightly acidic environment (pH around 6.5), carbon dioxide in the gas composition, the sodium hydrogencarbonate composition and the relatively low calcium ion values (Table 2).

The second hydrochemical type of mineral water is the sodium hydrogencarbonate-calcium water (Garaši, Brajkovac and Onjeg), with high contents of strontium, lithium and silicon. The fluoride content ranges from 0.2 to 1 mg/L. Among the macrocomponents in their chemical composition, the high calcium ion content, which ranges from 240 to 400 mg/L, is worth mentioning. The genetic coefficient rNa/(rCa+rMg) ranges from 0.4 to 1.3. These mineral waters occur at the contacts of Paleozoic schists with Cretaceous sediments. As a result of the extremely high calcium values, the fluoride ion contents are an order of magnitude lower compared to the previous type of mineral water.

The third type of mineral water is calcium hydrogencarbonate water (Kruševica). The mineralization is about 1.55 g/L, with a carbon dioxide content of about 0.7 g/L. This type has higher strontium and silica contents, but the contents of the other micro components are not elevated. The value of the genetic coefficient rNa/(rCa+rMg) is about 0.3. The calcium content is extremely high and reaches 460 mg/L; consequently, the fluoride ion contents are as low as 0.36 mg/L.

Discussion and conclusions

Correlation diagrams (Fig. 4) show a positive correlation between the fluoride content and TDS, as well as between the fluoride and sodium contents. It is also obvious from these diagrams that high concentrations of fluoride are present in waters with high values of the genetic coefficient (rNa/(rCa+rMg)). This was generally expected, considering that decomposition processes of silicate and aluminosilicate minerals occur in the majority of these waters (in the presence of CO2), resulting in a carbonated, sodium hydrogencarbonate composition of the water (Fig. 3).
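A minimal sketch of the genetic coefficient computation used above, converting major-cation concentrations in mg/L to equivalent (reacting) concentrations; the sample values are illustrative, not measurements from Table 2:

# Equivalent weights in g/eq: atomic or ionic weight divided by charge.
EQ_WT = {"Na": 22.99, "Ca": 40.08 / 2, "Mg": 24.31 / 2}

def genetic_coefficient(na, ca, mg):
    # na, ca, mg in mg/L; dividing by the equivalent weight gives meq/L,
    # and the normalization to % eqv. cancels in the ratio.
    r_na = na / EQ_WT["Na"]
    r_ca = ca / EQ_WT["Ca"]
    r_mg = mg / EQ_WT["Mg"]
    return r_na / (r_ca + r_mg)

# Illustrative sodium hydrogencarbonate water: the coefficient exceeds 1.
print(genetic_coefficient(na=700.0, ca=100.0, mg=30.0))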
Calcium ions are negatively correlated with fluoride ions, because the content of fluoride in water is limited by the solubility product of calcium fluoride (the more calcium, the less fluoride in the water). It is obvious from Fig. 4 that low fluoride concentrations (<0.5 mg/L) appear in waters where the concentrations of calcium ions are elevated (>200 mg/L).

Saturation indexes (SI) of fluorite and calcite were calculated using chemical thermodynamics, and the obtained values indicate that the mineral waters are mainly unsaturated with respect to fluorite and oversaturated with respect to calcite (Table 3 and Fig. 4). There are two exceptions: the mineral water from Darosava, which is mildly saturated with respect to fluorite, and the mineral water from Arandjelovac, which is in equilibrium with fluorite. The fact that these two mineral waters differ from the rest of the analyzed waters can be observed on every correlation diagram: numbers 3 (Darosava) and 4 (Arandjelovac) are always significantly separated from the rest of the symbols, i.e., mineral waters, on the diagrams.

The fact that the majority of the analyzed waters are unsaturated with respect to fluorite is explained by the elevated concentrations of calcium (and consequently low concentrations of fluoride). The conclusion is that precipitation of fluorite is not possible under these hydrochemical conditions.

By comparing the geological and tectonic characteristics with the results of the hydrochemical research, it was established that there is an evident connection between the geological structure of the Bukulja substrate and the genesis of the hydrogencarbonate mineral waters. It was concluded that, apart from lithology, joint fabrics and larger dislocation structures are of crucial importance for the water chemistry in the studied region. In addition, it should be stated that smaller ruptures determine the type of porosity that enables the accumulation of groundwater in the rock mass and its chemical transformation, while larger dislocation forms determine the stream flows of the regional water circulation. For a better perception of the correlation between certain spring areas, a hydrochemical map was constructed with the major geological structures along with the hydrochemical properties of the spring locations (Fig. 1).

Table 2. Representative localities of carbonated mineral waters in the investigated area - macro and micro components.

In order to clearly present the correlation between the geological and hydrochemical parameters, transversal and diagonal cross sections were drawn, displaying the basic structures and lithologic properties of the rocks (Fig. 2). Associated with them are the following spring areas:

- Čibutkovica-Kruševica-Rudovci,
- Brajkovac-Onjeg-Darosava, and
- Garaši-Arandjelovac.

In accordance with the previous conclusions, it was established that the main spring areas of sodium hydrogencarbonate mineral waters (having a dominant sodium content) occur along the complex regional fault which borders the Bukulja block on its north-eastern side, whereas mineral waters with a dominant calcium content appear along the dislocation which borders its northern side.

It is obvious that the northeastern dislocation (which connects Arandjelovac, Darosava and Rudovci) and the sets of joints that accompany it cut muscovite granite, gneiss, igneous and clastic flysch rocks, which in turn influences the formation of the sodium waters.
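Returning to the thermodynamics above, the following sketch shows the form of the fluorite saturation-index calculation. A full treatment, as performed here with Phreeqc, also corrects activities for ionic strength; the log Ksp value and inputs below are illustrative assumptions:

import math

LOG_KSP_FLUORITE = -10.60  # CaF2 <-> Ca2+ + 2 F-; assumed log solubility product

def si_fluorite(ca_mg_l, f_mg_l):
    # Convert mg/L to mol/L and form the ion activity product (IAP),
    # ignoring activity coefficients for simplicity.
    ca = ca_mg_l / 40.08 / 1000.0
    f = f_mg_l / 19.00 / 1000.0
    iap = ca * f ** 2
    return math.log10(iap) - LOG_KSP_FLUORITE  # SI = log10(IAP / Ksp)

# Illustrative input only; SI < 0 means unsaturated, SI = 0 equilibrium.
print(si_fluorite(ca_mg_l=100.0, f_mg_l=1.0))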
In the spring area of Čibutkovica, the hydrogencarbonate mineral waters have distinctly sodium characteristics, which proves that the southern dislocation does not act as a groundwater recharge path. Recharge is most probably realized in the metamorphic complex that forms the northern hinterland of the spring area. In contrast, along the southern dislocation, the Bukulja crystalline rocks are in many places in contact with Upper Cretaceous clastic-carbonate flysch, which increases the amount of calcium in the spring areas of Garaši and Brajkovac. The Onjeg locality belongs to this group, its water having a higher content of calcium due to the dissolution of the thick limestone layers that form a tectonic block between the two reverse faults.

The water of the Kruševica spring is characterized by a high content of calcium, but the contents of the micro components are not elevated, except for strontium and silica. This is due to a shallower zone of groundwater formation in the sandy Tertiary sediments.

It should be emphasized that two of the mineral waters belonging to the first type are bottled, as the mineral water "Knjaz Miloš" from Arandjelovac (Bukovička spa) and "Dar voda" from Darosava. The fluoride concentrations in these waters are higher than 1 mg/L; hence, they are called fluoride waters. Due to the biological activity of fluoride, its content is limited to 5 mg/L for bottled mineral waters. If the level is higher than 1.5 mg/L, the statement "contains more than 1.5 mg/L of fluoride: not suitable for regular consumption by infants and children under 7 years of age" should appear on the label in close proximity to the name of the product. The European Directive on the exploitation and marketing of natural mineral waters and spring waters sets standards for excluding harmful elements such as fluoride ions, iron, manganese, sulfur and arsenic. It is obvious from the obtained results that some mineral waters in Serbia should be subjected to water treatment, which seems to be difficult in practice, and sometimes nature itself plays the role of a "technologist". Two possibilities are offered here: the right choice of locations for the abstraction of mineral water with a satisfactory chemical composition, which is a hydrogeologist's task, and the application of artificial defluoridization by means of aluminum oxide, lime, ion exchange resins or similar methods, which is a technologist's task. It is important to emphasize the impact and application of hydrochemical methods throughout hydrogeological research, which includes defining the conditions and factors of the migration of fluoride ions in mineral waters, defining the basic hydrochemical types of waters with high and low levels of these ions and their gas composition, as well as the thermodynamic conditions in aquifers with accumulated mineral waters.

Fig. 2. Geological cross sections of the Bukulja and Brajkovac granitoid massifs (legend as for Fig. 1).

Table 1. Description of representative localities of carbonated mineral waters in the investigated area.

Table 3. Representative localities of carbonated mineral waters in the investigated area - water type, genetic coefficients and saturation indexes (SI).
Quantum Entanglement and Spin Control in Silicon Nanocrystal

Selective coherence control and electrically mediated exchange coupling of a single electron spin between triplet and singlet states using numerically derived optimal control of proton pulses is demonstrated. We obtained spatial confinement below the size of the Bohr radius for the proton spin chain FWHM. Precise manipulation of individual spins and polarization of electron spin states are analyzed via proton-induced emission and controlled population of energy shells in a pure 29Si nanocrystal. Entangled quantum states of channeled proton trajectories are mapped in the transverse and angular phase space of the 29Si axial channel alignment in order to avoid transversal excitations. The proton density and proton energy as functions of the impact parameter are characterized in the single-particle density matrix via discretization of the diagonal and nearest off-diagonal elements. We combined high field and low densities (1 MeV/92 nm) to create an inseparable quantum state by superimposing the hyperpolarized proton spin chain with the electron spin of 29Si. Quantum discretization of the density of states (DOS) was performed by the Monte Carlo simulation method using numerical solutions of the proton equations of motion. A distribution of Gaussian coherent states is obtained by continuous modulation of the individual spin phase and amplitude. The obtained results allow precise engineering and faithful mapping of spin states. This would enable effective quantum key distribution (QKD) and the transmission of quantum information over remote distances between quantum memory centers for a scalable quantum communication network. Furthermore, the obtained results give insights into the application of channeled-proton subatomic microscopy as a complete and versatile scanning-probe system capable of both quantum engineering of charged-particle states and characterization of quantum states below the diffraction-limited linear and in-depth resolution.

PACS numbers: 03.65.Ud, 03.67.Bg, 61.85.+p, 67.30.hj

Introduction

Major progress in experimental techniques as well as theoretical models during the last few decades has made possible the comprehensive analysis of ion beam collision dynamics [1,2]. The obtained results have facilitated the development of versatile analytical instruments which can provide material characterization, modification and analyses [3,4] over a wide range of scientific disciplines. In addition, focused ion beam techniques beyond the sub-nanometer scale [5][6][7][8][9] have gained an important role as silicon-based nano-domain engineering [10,11,12] has become one of the most important tools in materials research, low-dimensional-system electronics, semiconductor manufacturing and nanotechnology overall. Recent experimental investigations of quantum information processing via single-electron devices in gate-defined quantum dots [13,14] confirm the silicon-based spin quantum-information processor as a promising candidate for future quantum computer architectures [15]. In that context, a series of investigations of electrically [16,17,18] and optically [19] induced ion kinetics in solid-state quantum systems reveal that the focusing of coherent ions through an oriented crystal may enhance the precise confinement and manipulation of individual spins in quantum information processing [20,21].
The most prominent recent results relating spin dynamics control to ion channeling techniques in thin crystals, presented in a series of theoretical studies [22][23][24][25][26] for the case when the ion differential cross section is singular [27], open the possibility of precise manipulation of the intrinsic properties of charged particles. The logarithmic singularity appears under the continuum approximation for transverse energies $E_\perp < E\Theta^2$ and beam incident angles $\Theta \leq \psi_c$, where $\Theta$ and $\psi_c$ denote the incident angle and the critical angle for channeling in the effective ion-atom potential, respectively. The effective potential area $\chi(A_{l0}(E_\perp))$, which corresponds to the maximal enhancement of the ion flux density, includes strictly harmonic terms under the continuum approximation, where $A_{l0}$ denotes the equipotential surface closed by the field contour in the central part of the axial channel. The corresponding integration boundaries, $A_0 = \pi S_0^2 a^{-1}$ and $A_i = \pi r_c^2 a^{-1}$, denote the demarcation lines of the channel cross-section area. The central part of the axial channel is then represented by an annulus whose inner radius is determined by d, $r_c$ and a, which represent the mean spacing between the atomic rows, the ion impact parameter and the ratio of the total number of axial channels to the number of atomic rows forming the channel, respectively. According to equation (1.31), when the ion beam incident angle is close to zero ($\Theta \approx 0$), the anisotropy of the central part of the axial channel is induced only by the harmonic component of the interaction potential. This implies that the first equipotential circle represents the dominant effective potential area for the ion flux density, denoted as $\chi(A_{l0}(E_\perp))$. Hence, the area of maximal enhancement of the ion flux density is confined to the central equipotential curve of the axial channel, $\chi(A_{l0}) \approx \ln(A_0 k / \pi E \Theta^2)$, and converges to zero as $A_{l0} \to 0$ if the incident angle, i.e., the tilt angle of the beam, satisfies the condition $\Theta = \sqrt{A_{l0} k / (\pi E)}$. The results obtained for MeV proton beam energies show the nonequilibrium density of states across the central part of the channel as a nonuniform flux redistribution. This reveals the strong effect of the anharmonic components of the effective continuum interaction potential even in the vicinity of the low-index ⟨100⟩ Si crystal axis.

In this paper we present a theoretical study of the localization and coherent control, by superfocused channeled protons, of CP-beam-induced polarization of individual electron spins in a pure 29Si nanocrystal. We analyze the precise control of entangled proton trajectories and discrete quantum states of phase space in connection with selective spin manipulation. The harmonic motion of the highly correlated channeled protons is tuned by an external RF field, by varying the CP energy and the tilt angle relative to the main ⟨100⟩ crystal axis. The calculations include the quasiharmonic approximation as well as the effect of multiple scattering by valence electrons, and assume the anharmonicity of the interaction potential [28,29]. Quantum entanglement of the focused ion trajectories in the final states corresponds to the central part of the ⟨100⟩ Si axial channel. It is analyzed in phase space by the convoluted transfer matrix method [30]. According to Liouville's theorem [31], the ensemble of channeled particles (for large impact parameters) experiences a series of correlated, small-angle collisions in the initial stage of elastic interaction with the atoms of the crystal lattice. Therefore, the proton flux distribution can be calculated via the probability function of quantum trajectory reversibility, i.e.,
the probability for backscattered particles to appear along the initial propagation direction. The resultant flux distribution is further considered as an unnormalized probability map of the trajectories of channeled particles in phase space. We have analyzed the nonequilibrium state of the channeled-proton density profiles in the configuration and scattering-angle planes in connection with the anharmonic expansion terms of the proton-crystal effective potential. The calculation assumes an initial state of static equilibrium, considering a crystal length of 92 nm and channeling conditions which correspond to infinitesimal crystal tilt angles, from zero up to 20% of the critical angle for channeling. The degree of correlation between separate trajectories of channeled protons was calculated by two separate mapping procedures between the configuration and angular phase planes. Thus, the nonharmonic, higher-order terms of the continuum interaction potential were analyzed via the distribution functions of the channeled protons in the transverse position plane and the scattering-angle plane.

The subsequent parts of the paper are organized as follows. Section 2, following the recent experimental attempts to realize an electron spin processor in silicon capable of quantum information processing, introduces a quantum model for the excitation and coherent control of electron spin states via entangled proton trajectories. The exchange coupling is analyzed under the quasiharmonic approximation of the interaction potential, taking into account the constraint of a singular proton flux density. The theoretical model is further explained by Molière's approximation of the Thomas-Fermi interaction potential. This formalism employs Liouville's theorem to give a simple explanation of the mapping procedures for the proton beam transformation matrix in configuration and angular phase space. Section 3 compares and discusses the profiles of the proton density distributions in the transverse position plane (configuration space) and the scattering exit-angle plane, or angular space (figure 1), gives the evolution of the proton fluxes with various tilt angles, and further illustrates (figures 2, 3) a comparative analysis of proton trajectories mapped in six-dimensional phase space, considering several L and Θ variables of the effective ion-crystal anharmonic potential. The mapping procedure for entangled proton trajectories is further developed considering the localization, selective excitation and unitary transformations of singlet/triplet spin states in quantum phase space (shown in figures 4, 5). Section 4 explains the numerical model and the simulation parameters.

Results

Coherent manipulation and precise control of single electron spin rotations represent the first step toward quantum information processing (QIP) [20,32]. In order to achieve a high level of precision of single electron spin unitary rotations, we propose a highly correlated spin chain of superfocused protons as a direct probe method for the induction of local electron spin excitations in silicon. In this context, the propagation of a single spin excitation as a procedure for quantum state entanglement [33] can be mediated via a mixed quantum state between the channeled proton (CP) spin system and the induced coherent oscillations of the electron spin system in silicon. In the spin-lattice system, the condition of conservation of the transverse energy, under which CPs have equal probability to access any point of the physical area corresponding to the channeling conditions, i.e., to reach the state of statistical equilibrium, has been modified by the Barrett factor [34].
This constraint explains the simultaneous existence of an equilibrium particle distribution and a population enhancement in different fractions of the phase-space volume in the process of ion transmission through media of sufficiently small length. As a result, the phase-space distributions of CP in separate non-equipotential areas of the channel exhibit fractal characteristics over the total phase-space volume, as shown in figure 6. We have investigated the proton flux profile in the scattering-angle plane and the transverse position plane. The boundary conditions of the nonuniform density distributions are analyzed for small impact parameters along the main ⟨100⟩ Si crystal axis. The obtained results show that the enhancement effect of the channeled-proton flux corresponds bijectively to the flux maxima in coordination space. In that sense we have analyzed the degree of anisotropy including the anharmonic, higher-order terms $k_i$, $i \le 4$, of the effective continuum interaction potential. As a result, the channeled-proton-induced transition frequency $\nu$ between two electronic states includes the higher-order contributions

$\hbar\nu \to \hbar\nu - \tfrac{1}{2}\,|E_\perp(r,\Theta,L)|^2\,[a_2(\lambda,\Theta) - a_1(\lambda,\Theta)] + O(n \le 4)$,    (2.1)

where $a_1$ and $a_2$ denote the hyperfine coupling terms, $\hbar$ is the reduced Planck constant and $B_0$ is the static magnetic field along the z-axis. Under a static magnetic field the singlet $|S\rangle$ and the polarized triplets $|T_{0,\pm}\rangle$ are degenerate and nearly independent. As a result, the quantum state of the system $\rho$ in the rotating frame corresponds, for the spin-down polarized axis (bottom half of the Bloch sphere), to $\rho \otimes |0\rangle\langle 0| \otimes U$, and likewise the spin-up axis position (top half of the Bloch sphere) is denoted $\rho \otimes |1\rangle\langle 1| \otimes U$, where $U$ (Eq. (10)) couples additional degrees of freedom to the initial quantum state, i.e. it represents the transformation matrix of the mixed quantum state under CP polarization. In a finite magnetic field, the CP-perturbed electron Zeeman frequency for external fields up to 1 MeV allows decorrelation of the longitudinal Overhauser field $B_Z$ and shifts the level of the singlet spin-down configuration from the ground state into the excited $|T_0\rangle$ and $|T_+\rangle$ states. This coherent superposition of the system energy levels (1, 0) and (0, 1) with the triplet state (1, 1) is consistent with dipole-dipole-mediated nuclear diffusion and leads to a periodic superposition of spin states with a precession period on a 1 s time scale, assuming that the Overhauser fields are Gaussian-distributed on long time scales. Thus, an external field close to 1 MeV is large enough to cause a strong spin dependence of the tunneling effect. A large field produces a strong asymmetry between the spin-up and spin-down charge energies. It is important first to establish a non-zero external magnetic field B so that each of the nuclear-spin principal-axis orientations can be effectively optimized. This produces efficient coupling of the longitudinal component of the electron spins to the quantized transverse component of the nuclear spins. Thus, the quantized nuclear spin states are mediated via the anisotropic part of the hyperfine interaction, i.e. universal control of the nuclear spin state is achieved via the unitarily transformed term $B S_z I_z$ [35]. Namely, up to 1 MeV the external field induces coupling of the nonparallel nuclear-spin quantization axes to the electron spin states, and it enables the anisotropic pseudosecular term for universal control; otherwise the pseudosecular term is suppressed. Hence, adding a stronger external field to the quartic potential of Eq.
(2.1) alters the potential minima and changes the confinement energies of the orbital wave states, which in turn induces a DOS transition from an absolute equilibrium to a saddle point in phase space. Instead of applying an oscillating RF field to spatially resolve and manipulate the spin resonance frequencies (or measuring the response of the quantum dot via the current flowing through the dot or through a nearby quantum point contact), the transition can be generated upon CP excitation of the spin system. The excited spin system displaces the center of the electron wavefunction along the direction of the oscillating superfocused CP field and changes its potential depth. As a result, the electron wavefunction frequency can be spatially distorted so that it coincides or shifts with the applied CP field. A single spin excitation is then polarized along the z axis, coinciding with the proton-beam alignment. In addition, the resultant mixed state conserves the total angular momentum of the exchange Hamiltonian along the z axis. This allows diagonalization of the system Hamiltonian into subspaces of excited spins, i.e. the spin ensemble along the $s_z$ basis corresponds to degenerate Z eigenvalues. Effective single-spin readout [36] can then be realized by electrical detection of spin-recharge events in tunneling proximity to a metal, by adjusting the Fermi level between two initially split electron eigenstates (corresponding to spin-up and spin-down orientations). Excitation of an electron spin localized below the Fermi threshold causes the electron to tunnel out of the initially occupied eigenstate. The discharged, empty spin state below the Fermi level is then filled by an electron with oppositely oriented spin. In the present case, the numerical solutions for entangled proton trajectories, for different reduced crystal thicknesses and tilt angles, correspond to a short-range-correlated proton-lattice interaction potential in the vicinity of the ⟨100⟩ Si axis.

General considerations

The interaction between the proton and the crystal's atoms comprises elastic collisions, assuming the classical, small-angle model of channeling [1,28]. For zero tilt angle $\Theta$, the z-axis coincides with the ⟨100⟩ Si crystallographic axis, while the atomic strings that define the channel span the x and y axes. The initial proton velocity vector $v_0$ is collinear with the z axis. We have modeled the system using the Lindhard continuum approximation for axial channeling [1]. The crystal interaction potential comprises the continuum potentials of the separate atomic strings, and we have included the thermal vibrations of the crystal's atoms via

$U_{\mathrm{th}}(x,y) = \sum_i \left(1 + \tfrac{1}{2}\sigma_{\mathrm{th}}^2 \Delta\right) U_i(x,y)$,

where $U_i(x,y)$ represents the continuum potential of the $i$th atomic string, $x, y$ are the transverse components of the proton position, and $\sigma_{\mathrm{th}}$ is the one-dimensional thermal vibration amplitude. The specific electronic energy loss is determined by the equation

$-\dfrac{dE}{dz} = \dfrac{4\pi Z_1^2 e^4 n_e}{m_e v^2}\,\ln\dfrac{2 m_e v^2}{\hbar\omega}$,

where $v$ is the proton velocity, $m_e$ is the electron mass, $n_e = \Delta U_{\mathrm{th}}/4\pi$ is the density of the crystal's electrons averaged along the z axis and $\Delta \equiv \partial_{xx} + \partial_{yy}$. The angular frequency of the electron oscillation induced by the channeled proton is $\omega = (4\pi e^2 n_e/m_e)^{1/2}$. The mean-square angular deviation of the proton scattering angle caused by its collisions with the electrons is included as

$\dfrac{d\Omega_e^2}{dz} = \dfrac{m_e}{m_p}\,\dfrac{1}{E}\left(-\dfrac{dE}{dz}\right)$,

where $m_p$ denotes the proton mass and $E$ is the proton energy. Further calculations take into account the proton-beam divergence before its interaction with the crystal [24,25]. The Monte Carlo simulation method has been used for the parameterization of entangled proton trajectories.
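To make the continuum model above concrete, the following minimal Python sketch (our illustration, not code from the paper; the string spacing and the standard Moliere screening parameters are assumptions taken from the general channeling literature) evaluates the Moliere continuum potential of a single atomic string together with its thermal-vibration correction, using the fact that in two dimensions the Laplacian of $K_0(kr)$ equals $k^2 K_0(kr)$ away from the origin:

```python
# Minimal sketch (not from the paper): Moliere continuum potential of one
# atomic string, with the thermal averaging U_th = (1 + sigma^2/2 * Lapl) U
# applied term by term.
import numpy as np
from scipy.special import k0

E2 = 1.44                    # e^2 in eV*nm (Gaussian units)
Z1, Z2 = 1, 14               # proton and silicon atomic numbers
D = 0.543                    # assumed atom spacing along the string, nm
A0 = 0.0529                  # Bohr radius, nm
A = (9 * np.pi**2 / (128 * Z2)) ** (1 / 3) * A0   # screening radius
SIGMA = 0.0074               # 1-d thermal vibration amplitude, nm (paper value)
ALPHA = (0.35, 0.55, 0.10)   # standard Moliere fitting coefficients
BETA = (0.3, 1.2, 6.0)

def string_potential(r_nm, thermal=True):
    """Continuum potential (eV) of one atomic string at distance r_nm."""
    u = 0.0
    for a_j, b_j in zip(ALPHA, BETA):
        term = a_j * k0(b_j * r_nm / A)
        if thermal:
            # Laplacian of K0(k r) in 2-d is k^2 K0(k r) away from the origin,
            # so the thermal correction simply rescales each Moliere term.
            term *= 1.0 + 0.5 * (SIGMA * b_j / A) ** 2
        u += term
    return 2.0 * Z1 * Z2 * E2 / D * u

for r in (0.05, 0.10, 0.20):         # distances from the string, nm
    print(f"r = {r:.2f} nm  U_th = {string_potential(r):7.2f} eV")
```

Summing such single-string contributions over the strings that border a given channel yields the $U_{\mathrm{th}}(x,y)$ that enters the equations of motion integrated in the Methods section.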
The obtained numerical solutions of the channeled protons' equations of motion correspond to their angular and spatial distributions. According to Liouville's theorem, the phase-space density cannot be changed in a conservative system, but one can manipulate the form and position of the phase-space elements. We can use the phase-space transformations to improve the channeling efficiency. Discrete maps of the quantum states of channeled-proton trajectories and their point transformations in the spatial (transverse) and angular phase space are presented in the vector basis $(x, y, \theta_x, \theta_y)$. Here L and $\Theta$ denote the reduced crystal length and the tilt angle of the CP beam. Although the phase space is six-dimensional, we consider four subspaces of the transverse and angular phase space. Correspondingly, the mapping of the beam parameters and the quantum discretization obey the symplectic condition $\tilde{M} J_{2D} M = J_{2D}$, where the tilde denotes the transpose operation over the transfer matrices and $J_{2D}$ refers to the unit symplectic matrix in the 2-d phase-space volume. According to Liouville's theorem, the conservation of the phase-space volume results in the statement $\det M = 1$, following equation (9.1). A complete characterization of the phase-space volume is achieved through the second-order moments of the beam transfer matrices, where the matrix trace Tr and the value $\sigma^4$, i.e. the phase-space volume occupied by the proton beam, determine the two invariants of the beam transfer matrix. We reduce the system dimensionality by decoupling the 4-d phase space into 2-d subspaces: the configurational $(x, y)$ and the angular $(\theta_x, \theta_y)$ phase space. The transformation matrix describing the mixed quantum ensemble then couples the diagonal $S_X$ basis of the electron spin system to the diagonal $S_Z$ basis [37] of the fully polarized superfocused CP beam and forms a non-orthonormal basis. In order to determine the matrix elements of the CP-lattice confinement potential for the singlet and triplet functions, we use two single-electron eigenstates denoted by the spatial electron wave functions $|X\rangle$ and $|X'\rangle$, with the singlet $|S\rangle \sim |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle$. The energy splitting between the triplet $|T_0\rangle$ and the ground singlet state $|S\rangle$ is denoted by the exchange interaction $J(\epsilon)$. The Hamiltonian is further diagonalized in the singlet and triplet subspaces. In order to overcome the high-level truncation of the basis, where linear combinations of two-electron states tend to infinity, we use the constraint that the singlet state is the ground state, according to the Lieb-Mattis theorem [38], in zero magnetic field. Applying the inhomogeneous CP field along the main crystal axis, i.e. involving the x and y phase-space components of the tilted CP beam, the triplet and singlet electron states become strongly mixed if their energy difference is small. The triplet $|T_0\rangle$ can then evolve into the singlet state $|S\rangle$; likewise, $|T_+\rangle$ and $|T_-\rangle$ evolve into the singlet state. As explained in the main text, the mechanism of spin excitation and the energy-separation scheme between the ground singlet state and the polarized triplet state (illustrated in figure 5) is controlled by a combination of the CP initial energy $E(\Theta, L)$ and the tilt angle $\Theta$. Once the excitation energy is applied to the quantum dot inside the Bohr radius, the spin-system energy cost for adding an extra electron starts from the state S(0, 1), as indicated by the dotted black line, where $(n, m)$, $J(\epsilon)$ and $E_{ST}$ denote the charge state with n and m electrons, the exchange energy and the splitting energy, respectively. The energy cost for reaching (1, 1) is (nearly) independent of the spin configuration.
However, the energy cost for forming a singlet state S(0, 2) is much lower than that for forming a triplet state (not shown in the diagram). This difference can be further exploited for spin initialization and detection. Figure 1 (a(1, 2), b(1, 2), c(1, 2)) gives a 3-d representation of the channeled-proton contour plots for a 92 nm ⟨100⟩ Si nanocrystal, L = 0.5 [24,25], for tilt angles $\Theta = 0.05\psi_c$, $\Theta = 0.15\psi_c$ and $\Theta = 0.20\psi_c$, where $\Theta$ is the angle of the external field relative to the symmetry axis of the spin transformation tensor. The external field of 1 MeV is chosen to match the limits of the Bohr radius with an initial CP peak separation at 20% of the critical angle (relative to the tensor principal axis). It allows generation of the final mixed quantum state, controlled by the pseudosecular term $B = 3D\cos\Theta\sin\Theta$, i.e. it allows efficient dipolar coupling D between the electron and nuclear spin states. The spacing between separate peaks along the longitudinal z-direction of the confinement field is calculated accordingly; figure 1(a(1, 2)) shows the area of maximal enhancement in ion flux density [29] in both phase planes. The maximal confinement field is governed by the exchange coupling energy $J(\epsilon)$, where $J(\epsilon)$ is a function of the energy difference $\epsilon$ for the discretized 2-d potential. Figure 1(b(1, 2)) shows that incident tilt angles above 15% of the critical angle for channeling cause faster amplitude and phase attenuation of the angular density profile. This effect induces further splitting of the channeling pattern.

Discussion

A relative change of 5% in the crystal tilt leads to a strong yield redistribution in the angular distribution profiles, and it mostly affects the central parts of the phase profiles. The analysis in configuration space for tilts $\Theta = 0.05\psi_c$ and $\Theta = 0.15\psi_c$ shows a strongly peaked circular cross-section. Only a slight variation in pattern sharpness can be seen at the density-profile edges. The DOS analysis for $0.20\psi_c$ reveals further effects of strong perturbation, as presented in figure 1(c(1, 2)). The angular phase-space profile, figure 1(c(1)), shows a non-homogeneous transition in the charge-state density and a splitting of the channeled-proton distribution pattern. The characteristic splitting shows two pronounced maxima on the $\theta_x$-axis followed by a few nonsymmetrical peaks as lateral satellites. Their spatial positions and amplitudes are correlated via the CP-mediated Zeeman interaction through the $g\beta_e B/2$ term. This non-secular term shifts the energy levels of the singlet spin states and splits the DOS peaks (the shift affects the Fermi level for the electron spin-down and spin-up configurations). Consequently, the spin DOS structure is uniquely described via two mixed quantum states. Figure 2 (a, b) shows the central positions of the angular and spatial density distributions in phase space. The maximal amplitudes correspond to tilt angles $\Theta = 0.05\psi_c$, $0.10\psi_c$ and $0.15\psi_c$. In figure 2(a) the designated plots correspond to reduced crystal thicknesses in the range 0.00-300.0 for L = 1.69 mm and 0.00-0.300 for L = 99.2 nm, respectively. These dependencies determine the focusing region, i.e. they specify the proton-beam full width at half maximum (FWHM). A comparative analysis for the same values of $\Theta$, L and the amplitude maxima in the configuration plane is presented in figure 2(b); it determines the length and the phase-space transformation bond between the scattering-angle plane and the mapped transverse position (configuration) plane. Figure 3 shows that the yield dependence of the harmonic confinement potential (governed by the first two terms in Eq.
(2.1)) becomes zero for tilts above $0.50\psi_c$ in the transverse and angular phase planes. The normalization and boundary conditions are restricted to the effective Bohr radius $a^* = \hbar^2\kappa/(m_e e^2)$. Changing the tilts while keeping the thickness parameter fixed at L = 99.2 nm gives a non-monotonic dependence of the exchange coupling energy $J(\epsilon)$ as a function of the quantum displacement from the harmonic-oscillator stability point. It goes to zero asymptotically and indicates the complete separation of quantum states and the absence of singlet-triplet transitions at higher tilts due to the small orbital overlap (Eqs. (16) and (17)). Figure 4 illustrates the localization of quantum spin waves in accordance with the uncertainty principle. The CP-superimposed electron spin states (spin-wave probability density functions) are positioned inside the Bohr radius. The electron probability densities produce maxima over each nuclear position. The quantum proton-trajectory evolution with various tilt angles is calculated for L = 0.25 in configuration space, i.e. L = 0.50 in (mapped) angular space. We analyze eight characteristic tilt shifts: $\Theta = 0.00$, $0.05\psi_c$, $0.10\psi_c$, $0.15\psi_c$, $0.20\psi_c$, $0.25\psi_c$, $0.35\psi_c$, $0.50\psi_c$. Inside the Bohr radius, at distances $x = \pm d$ around the peak centers, the confinement field is parabolic, so that the ground state of the mixed wave functions coincides with the harmonic-oscillator state. The observed amplitude dependences of the proton yield for tilts $\geq 0.10\psi_c$ are attributed to the stronger influence of the higher anharmonic terms in Eq. (2.1). This is more pronounced for the spatial CP distribution. The decrease in amplitude and the changes of the peak FWHM (positions and spatial symmetry) are indicators of a strongly perturbed system, i.e. of the effect of the quartic anharmonic terms in the exchange interaction, Eq. (2.1). The observed modulation of the DOS states for tilts $\geq 0.20\psi_c$ is disregarded, i.e. the main contribution to the superfocusing effect comes from crystal tilts below 20% of the critical angle for channeling. The analysis of the asymptotic behavior of the axial yield of channeled protons for the distributions $\Theta = 0.00\psi_c$, L = 0.5 (when the FWHM of the generated focused area converges to zero, making sub-nanometer spatial resolution possible) has shown that the only case in which the angular yield is singular corresponds to the zero-degree focusing effect. It is shown that an increase in the crystal tilt angle to 15% of the critical angle for channeling suppresses the zero-degree focusing effect. This leads to a significant change in the amplitude and width of the angular channeled-proton profile, causing a splitting into two lateral non-uniform circular patterns with maxima located along the $\theta_x$-axis, corresponding to the smaller lateral peaks. This behavior is confirmed for the energy range from several eV up to several MeV, which can be employed for PIXE analysis. To facilitate and control close-encounter collision processes in order to induce nuclear reactions, one can also use this method as an intermediate process for a nuclear collision cascade. However, for proton-beam energies above 100 MeV the FWHM of the channeled-proton peaks in the density profiles is significantly narrower (≤ 5 pm), less than the effective Bohr radius, due to the enhanced orbital overlap of the superfocusing effect governed by the higher degree of spatial confinement. Figure 5 represents the scheme of the energy splitting and the exchange coupling energy between the singlet $|S\rangle$ state and the $|T_-\rangle$ triplet-polarized state localized inside the Bohr radius.
Upon modulation of the CP field, unitary spin rotations are performed around two non-commuting axes, h and z. Before manipulation, the discretized proton spin states mediated through the Zeeman interaction include the sublevels $|01\rangle$ and $|10\rangle$. The populations of the quantum states are distributed according to the electron spin polarization at thermal equilibrium: in the electron manifold the nuclear quantization axis $-I_Z$ corresponds to the electron spin in the $|\downarrow\rangle$ state, whereas $I_Z$ applies to the electron spin in the $|\uparrow\rangle$ state. We then impose a pulse sequence of $\pi/n$ tilts relative to the q axes on the Bloch sphere. This lifts the system energy close to the triplet state, where the exchange $J(\epsilon)$ is large. It triggers the coherent transition of the proton eigenstates $|01\rangle \to |{\uparrow}\rangle$, $|10\rangle \to |{\downarrow}\rangle$ and forms the final mixed quantum entangled state containing the superposition of the proton-electron eigenstates $|{\downarrow\uparrow}\rangle$, $|{\uparrow\downarrow}\rangle$. To provide a transition to the triplet state $|{\downarrow\downarrow}\rangle$ upon initialization, the system is rotated by a $\pi$ pulse about the z axis of the Bloch sphere through the angle $\Theta = J(\epsilon)/\hbar$, where $J(\epsilon)$ denotes the exchange coupling as a function of the energy difference $\epsilon$ between the levels. The presented energy diagram shows that the former sequence corresponds to the initial exchange splitting $E_{ST}$, further dominated by the CP-induced $J(\epsilon)$ mixing between energy levels detuned by $\Theta$. While the confinement energy $E(\Theta, l)$ increases, the (1, 1) triplet state hybridizes and produces a tunneling effect, so that different superposition states can be realized. The field dependence caused by the lifted degeneracy of the triplet state further decreases the separation between the energy levels, while the exchange coupling increases the Gaussian orbital overlap (Eq. (16)). Figure 6 (a, b, c) shows the CP simulation patterns in the transverse position plane for a fixed value of L = 0.175. The proton trajectory shifts with tilt angle along the y = 0 axis and spreads from the intersection area of the x-y plane into a cusped, elongated deltoidal pattern, figure 6(a). The shifts are governed by the repulsive potential of the atomic strings. Even a small change of tilt angle causes a strong perturbation of the system, evident in figure 6(b), and therefore activates the higher-power terms in the ion-atom interaction potential. This influences the regularity of the proton trajectories and leads to a gradual reduction of the DOS in the central area of the channel. It causes the nonuniform flux redistribution, now filled with gaps, that can be seen in figure 6(c). Hence, it affects the continuous conservation of the distribution functions in the phase-space volume [28,29]. Consequently, the axially channeled protons cannot reach the state of statistical equilibrium. This effect is resolved within the scope of KAM theory [39]: when the classical integrability of the Hamiltonian is broken by a sufficiently small perturbation, the system nevertheless retains its dynamics in the form of periodic oscillations moving on an invariant phase-space profile. Although these invariants of phase space take the form of an intricate fractal structure in the vicinity of the ⟨100⟩ axis, they still cover a large portion of phase space. In that sense, the reduced crystal thickness can be fully discretized by performing the power-law expansion of random points on the interval $[n_i, n_i + n]$ between two nearest-neighboring fractal points, $L(n) = Ln^{a}$, $n \to \infty$, where a fractal dimension $a < 0$ yields a logarithmic singularity of the proton density distribution.

Methods

The simulation model considers a cubic-unit-cell representation of the isotopically pure ²⁹Si nanocrystal.
It includes the atomic strings on the three nearest square coordination lines of the ⟨100⟩ axial channel [24,25]. According to the diamond lattice symmetry, an orthogonal mesh is projected across the channel, mapping two layers of 2×2 triangular areas of the ⟨100⟩ unit cell. The proton trajectories are generated from sequences of binary collisions via the Monte Carlo simulation method using the screened Moliere interaction potential. The crystal is tilted in angular space along the axis $\theta_x = 0$ ($x = 0$), with the value of the tilt angle ranging up to 20% of the critical angle for channeling. The numerical calculations consider the continuum model in the impulse approximation. The motion of ions in the continuum model is determined by the Hamiltonian

$H = E\,(\psi_x^2 + \psi_y^2) + U_{\mathrm{th}}(x,y)$,

where $\psi_x$ and $\psi_y$ denote the x and y projections of the small scattering angle with respect to the ⟨100⟩ axis. The system's ion-atom interaction potential is obtained by integration of Moliere's approximation of the Thomas-Fermi interaction potential [40],

$U_i(x,y) = \dfrac{2 Z_1 Z_2 e^2}{d}\,\sum_{j=1}^{3} \alpha_j K_0\!\left(\dfrac{\beta_j r}{a}\right)$,

where $Z_1$ and $Z_2$ denote the atomic numbers of the proton and the atom, respectively, $e$ is the electron charge, $d$ measures the quantum displacement of the single-particle wave function relative to the harmonic-oscillator central position in the ground state, $r$ is the distance between the proton and the separate atomic strings, $a_0$ is the Bohr radius and $a$ is the screening radius. Correspondingly, the potential between two $i$, $j$ sites is of the Born form $U_{ij} = B/r_{ij}^{\,n}$, where $n$ is the Born exponent. The coefficients B and n are experimental fitting parameters determined from ion compressibility measurements [41]; likewise, the exponential repulsion between the overlapping electron orbitals within the channel is described by a repulsive term of the same type. The measure of the orbital overlap involves quantities that denote the degree of the confinement field and the discretized energy, respectively. Eq. (16) includes the variation of the charge density of the overlapping area due to the different valence-electron contributions to the interaction: the lattice-induced potential (across the channel) [42,43]. The overlap $l = \exp(-d^2/a^2)$ holds only for zero external field. The one-electron energies [44] for neutral Si correspond to the configuration (1s)²(2s)²(2p)⁶(3s)²(3p)². In the rotating frame, the reduced form of the protons' equations of motion, considering the small-angle approximation in the transverse position plane [40], is

$\dfrac{d\Theta_x}{dz} = -\dfrac{1}{2E}\,\dfrac{\partial U_{\mathrm{th}}}{\partial x}, \qquad \dfrac{d\Theta_y}{dz} = -\dfrac{1}{2E}\,\dfrac{\partial U_{\mathrm{th}}}{\partial y}$,    (18)

where $\Theta_x$ and $\Theta_y$ represent the x and y components of the proton scattering angle. The channeled-proton distributions are mapped in configuration space and angular space in two steps: to the transverse position phase plane, $x'$-$y'$, and to the scattering-angle phase plane, $\theta_x$-$\theta_y$ [24,25], in accordance with the chosen values of the reduced crystal thickness L and the tilt angle $\Theta$. The phase-space transformations are determined via the Jacobian

$J = \dfrac{\partial \theta_x}{\partial x}\dfrac{\partial \theta_y}{\partial y} - \dfrac{\partial \theta_x}{\partial y}\dfrac{\partial \theta_y}{\partial x}$.    (19)

Eq. (19) comprises the proton trajectory components $\theta_x(x, y, \Theta, L)$ and $\theta_y(x, y, \Theta, L)$. It establishes a transformation bond between the differential transmission cross section, $\sigma = 1/|J|$, and the phase-space manifolds in the configuration and angular planes. The one-dimensional thermal vibration amplitude of the crystal's atoms is 0.0074 nm [24,25,29,46,47]. The average frequency of the transverse motion of protons moving close to the channel axis is equal to 5.946×10¹³ Hz.
It is determined from the second-order terms of the Taylor expansion of the crystal continuum potential in the vicinity of the channel axis [48,49], with

$r_c = \sqrt{(x - x_i)^2 + (y - y_i)^2}$,    (20.1)

where $K_1$ denotes the first-order modified Bessel function of the second kind, and $d$ and $M$ represent the distance from the atomic strings and their number, respectively. The components of the proton scattering angle, $\Theta_x = v_x/v_0$ and $\Theta_y = v_y/v_0$, are solved numerically using the implicit Runge-Kutta method of the fourth order [48]. The components of the proton impact parameter are obtained randomly from uniform distributions inside the channel. The transverse components of the final proton velocity, $v_x$ and $v_y$, are presented within the Gaussian distribution of the probability that the quantum spin state is recognized correctly. Since the channeled protons' angular distributions can be easily measured, they are used to reconstruct the quantum information regarding the proton distribution in transverse phase space. In order to quantify the readout fidelity, the information from the entangled quantum trajectories is sampled from the $\theta_x$-$\theta_y$ phase plane from datasets of 550,000 shots [50]. The initial number of protons corresponds to quantum-trajectory spin states obtained from 5×10⁷ traces. To summarize, our calculations and simulation results demonstrate a hybrid proton-electron quantum interface for multipartite entanglement under the constraint metric of the uncertainty principle. We established the correlation between the electronic spin states and the off-diagonal hyperpolarized nuclear spin states under the CP-induced field. We used the axial configuration of the Si ⟨100⟩ channel to initialize and control each electron spin state via a superimposed proton spin chain. Utilizing a dynamically decoupled sequence, we obtained universal quantum control and a controllable coupling between the singlet and triplet-polarized spin states. By calculating the electron-spin and CP-field eigenstates via the full density matrix, we established proof of a non-orthogonal mixed quantum state. Upon the hyperpolarization sequence, the increased sensitivity of the nuclear-spin subspaces to the electron spin states reduces the linear spin entropy and leads to maximized entanglement of the mixed states in the density matrix. We have shown that the stability dependence of the nuclear field results from the anisotropic term of the hyperfine coupling, here regarded as a tunable parameter for unitary spin control. It can be chosen to enhance the feasibility of producing entangled mixed states. The resultant mixed quantum state that we demonstrated in S-T systems represents an important step toward the realization of a scalable architecture for quantum information processing. Complementarily, a scalable network of entangled electron-nuclear states would form the basis for a cluster state of quantum processors integrated in silicon. In addition, an entanglement-generation process comprising a network of such correlated spin states would enhance quantum error correction beyond any separable state and extend the precision of quantum metrology. That would allow the implementation of quantum-error-correcting techniques (QEC codes) directly on perfectly entangled mixed states and the direct protection of quantum states from interaction with the environment, without prior entanglement purification protocols (EPP).
In that context, the off-diagonal electron-nuclear eigenstates, as mixed quantum states, are no longer invariant under unitary spin operations and represent observables in the density matrix. Finally, the controllable addressing of single spins in quantum networks and the individual control of unitary spin precessions (electron-nuclear spin-phase rotations), in combination with local g-factor engineering, would provide a scheme for the deposition of multipartite entangled states and for the manipulation of quantum memory and quantum key distribution (QKD) based on the transmission of Gaussian-modulated individual coherent states. Another possibility for further exploration points toward active control of the channeled-proton-beam properties in the superfocusing effect, revealing the important role of the mutual contribution of the harmonic and anharmonic terms. This emphasizes the importance of carefully selecting the appropriate combination of crystal tilt angle and crystal thickness in order to attain high spatial resolution and localization accuracy. As a result, the implementation of such a nano-scale precision scanning method could produce a detailed map of discrete inter-atom positions and create a highly resolved image, built up through the process of proton-beam focusing.
Materials to Be Used in Future Magnetic Confinement Fusion Reactors: A Review

Abstract: This paper presents the roadmap of the main materials to be used for ITER and DEMO class reactors as well as an overview of the most relevant innovations that have been made in recent years. The main idea in the EUROfusion development program for the FW (first wall) is the use of low-activation materials. Thus far, several candidates have been proposed: RAFM (reduced-activation ferritic-martensitic) and ODS (oxide-dispersion-strengthened) steels, SiC/SiC ceramic composites and vanadium alloys. In turn, the most relevant diagnostic systems and PFMs (plasma-facing materials) will be described, all accompanied by the corresponding justification for the selection of the materials as well as their main characteristics. Finally, an outlook will be provided on future material development activities to be carried out during the next phase of the conceptual design for DEMO, which is highly dependent on the success of the IFMIF-DONES facility, whose design, operation and objectives are also described in this paper.

Introduction

The global energy outlook is currently going through a time of crisis and uncertainty as a result of the excessive use of and dependence on fossil fuels over the last century. This is why a transition to new energy sources must be urgently addressed, with the need to improve efficiency and opt for a decarbonized mix in which nuclear energy seems likely to play a key role. When talking about nuclear energy, one tends to think of present-day technology (fission), but the reality is that a new means of energy production that will completely change the current paradigm is getting closer every day. Nuclear fusion energy offers the prospect of a safe, inexhaustible and waste-free energy source for generations to come. Despite this, it also presents certain science and engineering challenges that, so far, have been insurmountable due to the extreme conditions and plasma instabilities faced by the materials of these future reactors. The premise of this review is to analyze the horizon of new possibilities that the development of nuclear fusion is opening and to contribute, as far as possible, to clarifying what the materials are, and what they could become, that will facilitate the success of ITER and, in the future, of DEMO.

Nuclear Fusion

Nuclear fusion is a reaction involving light atomic nuclei and nucleons, in which two such nuclei join to form a heavier one, releasing energy. It is the basis for the existence of stars like the Sun. This process initially requires the joining of a proton with another proton, an event known as the proton-proton chain, which was described in 1939 by the German physicist Hans Bethe [1]. Naturally, replicating this process on Earth requires in-depth research and development. There is one element of particular interest for fusion due to its simplicity and abundance: hydrogen, which is precisely the one used by the Sun. Specifically, two of its isotopes, deuterium (D) and tritium (T), are of interest. This process is characterized as an exothermic reaction, where the nucleons must be very close (∼1 fm) for the strong nuclear interaction to unite them and thus overcome the electromagnetic repulsion, called the Coulomb barrier. It is also necessary to reach temperatures of millions of degrees for this reaction to take place.
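To put rough numbers on this statement (a back-of-envelope sketch of ours, not a calculation from the review; it assumes bare point charges and a purely classical thermal energy of order kB·T), one can estimate the height of the Coulomb barrier between two hydrogen nuclei and the temperature a classical plasma would need in order to overcome it:

```python
# Back-of-envelope sketch (assumptions: bare point charges, classical
# thermal energy kB*T; not from the review): Coulomb barrier between two
# hydrogen nuclei and the equivalent classical temperature.
COULOMB_MEV_FM = 1.44        # e^2 / (4*pi*eps0) expressed in MeV*fm
KB_MEV_PER_K = 8.617e-11     # Boltzmann constant in MeV/K

def coulomb_barrier_mev(z1, z2, r_fm):
    """Electrostatic energy (MeV) of nuclei with charges z1, z2 at r_fm fm."""
    return COULOMB_MEV_FM * z1 * z2 / r_fm

barrier = coulomb_barrier_mev(1, 1, 1.0)   # D and T nuclei both have Z = 1
t_classical = barrier / KB_MEV_PER_K       # temperature where kB*T = barrier

print(f"barrier  ~ {barrier:.2f} MeV")     # ~1.4 MeV
print(f"T needed ~ {t_classical:.1e} K")   # ~1.7e10 K vs ~1.5e7 K in the Sun
```

A purely classical estimate therefore calls for tens of billions of degrees, roughly a thousand times hotter than the solar core, which is exactly the discrepancy resolved below.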
Almost as soon as physicists learned that solar energy could only be the product of nuclear fusion, they discovered, however, that the temperature at the center of our star (about 15 million degrees) is insufficient for hydrogen nuclei to actually come together at the necessary distance. So how is it possible for nuclear fusion to occur in the Sun? To explain this, we must resort to quantum mechanics and one of its most renowned concepts, the tunnel effect, which allows certain particles to overcome the Coulomb energy barrier without actually reaching its maximum value. Despite this, it is still necessary to reach enormous temperatures. This is why fusion is described as a thermonuclear process. As mentioned above, the most viable reaction for the first generations of nuclear fusion reactors is the one between deuterium (²H) and tritium (³H) (Figure 1), yielding 17.6 MeV of energy shared between an alpha particle (He nucleus) and a fast neutron:

²H + ³H → ⁴He (3.52 MeV) + n (14.06 MeV).    (1)

Since the reaction products must conserve momentum, the lighter neutron carries roughly four fifths of the released energy, which is why it takes about 14.06 MeV while the alpha particle takes about 3.52 MeV. Why use these H isotopes and not others? This is because the cross section (which defines the probability of success of a nuclear reaction between a target and an incident particle; it is commonly expressed in barns, 1 barn = 10⁻²⁸ m²) of this process is very high for relatively low temperatures. Another fundamental reason is the availability of these isotopes. D can be easily found in seawater (an estimated 30 g/m³). T, on the other hand, is a radioactive and unstable element (half-life T₁/₂ = 12.3 years) which is produced naturally in small quantities when cosmic rays (98% protons) strike the H atoms present in the atmosphere. It can also be obtained as a product in CANDU nuclear power plants [2]. There are approximately 40 kilograms of T on the planet, so it is vital to find an alternative method to reproduce it on a large scale:

⁶Li + n → ⁴He + ³H + 4.86 MeV,
⁷Li + n → ⁴He + ³H + n − 2.5 MeV.    (2)

T breeding consists of irradiating a Li blanket with the neutrons produced in the fusion reaction itself (see Equation (2)). This generates the T that will supply the reactor. This is definitely one of the greatest technological challenges that ITER and its related projects will have to face. On the other hand, natural lithium (92.5% ⁷Li and 7.5% ⁶Li) is an abundant element in the earth's crust (30 ppm) and is found in lower concentrations in the sea. The thickness of the blanket is large enough (∼m) to slow down the neutrons produced by the fusion reactions. Upon impacting the walls, their energy is transferred in the form of heat. This heats water that turns into steam, which is then used to turn a turbine and hence generate electricity. To get an idea of the efficiency of this process, the use of 1 kg of D-T fuel is energetically equivalent to about 8000 tons of oil. The breeding blanket (BB) is one of the most complex and important components of future fusion reactors, as it is responsible not only for the extraction of energy but also for T breeding, needed in order to have a self-sufficient facility (tritium breeding ratio, TBR > 1) [2,3]. This parameter is defined as the average number of T atoms bred per T atom burned. It should be higher than 1.15 in order to account for the T losses that cannot be avoided in a real fusion reactor [4]. Tritium transport modelling allows experts to predict how this element will move towards the systems that have to recover it in order to refuel the plasma. In doing so, two fundamental considerations must be taken into account.
Firstly, tritium does not simply go from point A to B, as it is a gas that diffuses easily. This can happen especially at high temperatures, as it can enter and mix with the materials in pipes, valves and other components along the way. Secondly, tritium is radioactive, so it is of great interest in terms of nuclear safety and radiological protection to know where it can accumulate. There are two options for the commercial development of nuclear fusion: magnetic or inertial confinement. This review will look at the former, as it has become the more advanced option with a higher probability of success.

What Is ITER?

ITER is the most ambitious energy project in the world today and is located in the town of Cadarache in southern France (Figure 2; reprinted from Ref. [5], © ITER Organization, 2022). Up to 35 nations (the 27 countries of the EU together with Switzerland, the United Kingdom, China, India, Japan, Korea, Russia and the United States) are collaborating to build the world's largest tokamak (an axisymmetric toroidal chamber characterized by a large toroidal magnetic field, moderate plasma pressure and relatively small toroidal current), a magnetic confinement fusion device designed to demonstrate the viability of fusion as a large-scale, carbon-free energy source. In turn, the technologies, materials and physical regimes required for large-scale commercial electricity production will be tested. Thousands of engineers and scientists have contributed to the design of ITER since the idea of a joint international fusion experiment was first launched in 1985. The participating members have committed themselves over a period of about 40 years to build and operate the experimental device until fusion reaches a point at which DEMO can be launched.

What Are the Main Objectives?

The amount of fusion energy a tokamak is capable of producing is a direct result of the number of fusion reactions taking place inside it. The larger the vessel, the greater the plasma volume and thus the greater the potential fusion energy. This has its trade-offs in terms of cost, which is usually the case in projects based on economies of scale, typical of traditional nuclear energy production. ITER has been specifically designed to: 1. Produce 500 MW of fusion power. 2. Demonstrate the safety features of a fusion device. 3. Test the reproduction of T with TBM (Test Blanket Modules). 4. Demonstrate the integrated operation of technologies for a fusion plant. 5. Achieve a D-T plasma in which the reaction is maintained by internal heating [6,7].

When Will the First Plasma Be Obtained?

The first ITER plasma is scheduled for December 2025. As of 31 July 2022, 77.1% of the work required for the first plasma had been completed [5]. Beyond its symbolic importance, the first plasma will also be a litmus test for the project, as it will be the first occasion to verify the correct alignment of the machine's magnetic fields as well as the correct functioning of key systems (vacuum vessel, magnets and critical plant systems). The first plasmas will use H, He or a mixture of both. This is why the initial processes do not require a D-T fuel. Since many of the heating systems are optimized for D-T type plasmas in order to achieve H-mode (a high-confinement plasma operating regime which is reached when a certain heating threshold is exceeded [8]), they will operate at a reduced intensity that will gradually increase over the years.
The first low-power H plasma, which will last a few milliseconds, will be followed by other "shots" of higher power and longer duration. Finally, the first production of D-T fusion energy will take place during the nuclear phase of the machine, expected around 2035 [9].

Materials Design Requirements

For the range of expected operating conditions (including possible accident scenarios) in ITER, and with even greater relevance to DEMO, a qualified database must be generated to demonstrate that candidate materials meet a number of indispensable design requirements. Some of these are: • Radiation resistance and non-activation properties after irradiation. • Resistance to creep rupture, fatigue cracking and creep-fatigue interactions. • Good resistance of mechanical and physical properties against He embrittlement. • Acceptable chemical compatibility and corrosion resistance with fusion-specific breeder materials (e.g., Be, LiPb) [10].

Plasma Facing Materials

One of the most sensitive processes taking place in a nuclear fusion reactor is the interaction with the hot plasma, which is at temperatures higher than the core of the Sun. PFMs and plasma-facing components (PFCs) are the materials and components that cover almost the entire internal surface of the vacuum vessel (VV) and represent the interface between the plasma and the rest of the tokamak. They are part of two systems: the blanket (which includes the FW) and the divertor, which occupy areas of 610 and 140 m², respectively [11]. The lifetime of a PFM is limited from ∼100 dpa (a dpa, displacements per atom, is the number of times that an atom is displaced for a given fluence; it is a unit for quantifying irradiation damage that is strongly dependent on the material in question). The plasma-wall interaction processes are associated with thermal loads of up to 20 MW/m² in continuous thermal periods and can reach the GW/m² range when an ELM (edge-localized mode) occurs. ITER will not only seek to demonstrate the feasibility of the D-T process but will also be the first test device for PFMs and PFCs in extreme radiation scenarios. Some of the most serious damaging mechanisms to be considered in these materials are: (i) T retention; (ii) high-velocity impacts of dust particles on the PFM; (iii) possible degradation, transmutation and activation; (iv) thermally induced defects due to cracking and melting of the PFM; (v) thermal fatigue damage produced in the joints between the PFM and the heat sink [12,13]. Eighty percent of the enormous amounts of heat and energy that will be produced in the fusion process will escape in the form of fast neutrons (14.1 MeV). Since these particles have no charge, they cannot be redirected by means of a magnetic field to a specific location. This has been one of the greatest engineering challenges from the outset, as the entire FW will be exposed to an intense bombardment of highly energetic neutrons; therefore, the components that will face the plasma must meet several indispensable design criteria: • Be strong enough to withstand such high radiation and temperature. The material chosen should have good thermal conductivity to easily evacuate heat, but at the same time cannot be readily activated, as the components are expected to last at least 20 years before being replaced. • Be capable of effectively dissipating such heat, which, recovered through a cold water circuit, will be the heat that will generate electrical power in a realistic NPP.
• For a material to be considered as a potential PFM component, good compatibility with the hot fusion plasma, i.e., a low atomic number Z, is a must, as well as excellent resistance to sputtering (a physical process in which atoms in a solid target are released and pass into the gas phase through bombardment with energetic ions). • Alternatively, tokamaks with high-Z materials such as tungsten (W) must be operated in such a way as to guarantee that the net impurity influx into the plasma is low enough that the critical impurity concentration is not exceeded. This is due to the fact that, since such a material has a high Z, it cannot be completely ionized, causing some of its electrons to remain bound and radiate energy, thus cooling the plasma [12,14,15]. A brief analysis of the loads that the PFCs will undergo in the upcoming fusion experimental projects is presented in Table 1. It can be seen that the step between ITER and DEMO is much larger than that between DEMO and PROTO (DEMO's successor, expected to become the first commercial nuclear fusion reactor after 2050), hence the need for an intermediate materials-testing facility between ITER and DEMO. The neutron load is the energy of the 14-MeV neutrons from the D-T reaction which pass through the FW. Although they are not deposited in the FW, they can damage it. The neutron load accumulated over the lifetime of each project is the parameter that really matters. This is substantially larger for a reactor than for ITER because a reactor should last for roughly 15 years before it needs to be upgraded; ITER is only an experiment. The longer the material is exposed to a neutron flux, the more frequently one of its atoms will be knocked out of place by a neutron. After many dpa events, the material will swell or shrink and become so brittle as to be useless [14]. By 2050, DEMO is estimated to have the capacity to supply 100 MW of net power to the grid and operate on a closed fuel cycle. However, for this to happen, the materials need to withstand much tougher conditions than those they will face in ITER. The need to include sufficient flexibility in the design of DEMO to accommodate improvements in plasma performance and in the design of core components is indispensable [17]. The main aims of DEMO are to: • Solve all physical and technical issues related to the plant and demonstrate reactor-related technology. • Achieve adequate availability/reliability operation over a reasonable time span (while ITER is expected to work with 400 s pulses and a long dwell time, DEMO will work with long pulses (>2 h) or even at a steady state) [17,18]. Materials for the DEMO reactor must be chosen considering the high doses of irradiation produced by neutrons with a thermonuclear-reaction energy spectrum and the very high heat load on the inner wall of the chamber. This can lead to significant in-vessel material damage. It will be necessary to develop and test new materials for constructing the DEMO thermonuclear reactor and to solve issues related to their commercial-scale manufacture [15,17]. The blanket planned for ITER is in fact unsuitable for the future DEMO thermonuclear reactor: the envisaged materials can only withstand a small neutron flux, and the exit coolant temperature is too low to ensure efficient power generation. The next step after ITER is to elaborate the design of DEMO and of a thermonuclear power plant. Their linear dimensions will be about 50% larger than those of ITER, and their fusion power will be 5 and 7 times higher, respectively [15].
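To connect the accumulated neutron wall load discussed above to displacement damage, the following illustrative Python sketch uses a rule of thumb often quoted for steels, roughly 10 dpa per MW·a/m² of 14-MeV neutron load; both this conversion factor and the operating scenarios are assumptions for illustration, not figures from the review:

```python
# Illustrative sketch (the conversion factor and scenarios are assumptions,
# not values from the review): accumulated displacement damage from a
# 14-MeV neutron wall load, using ~10 dpa per MW*a/m^2 for steel.
DPA_PER_MWA_M2 = 10.0   # assumed rule-of-thumb conversion for steel

def lifetime_dpa(wall_load_mw_m2, years, availability):
    """Accumulated dpa for a given wall load, operating time and duty factor."""
    return wall_load_mw_m2 * years * availability * DPA_PER_MWA_M2

# Hypothetical scenarios: a DEMO-like reactor wall versus a short,
# low-duty-cycle experimental campaign such as ITER's.
print(f"reactor-like:    {lifetime_dpa(1.0, 15, 0.30):5.1f} dpa")  # ~45 dpa
print(f"experiment-like: {lifetime_dpa(0.5, 10, 0.03):5.1f} dpa")  # ~1.5 dpa
```

The order-of-magnitude gap between the two scenarios is what drives the need for the dedicated materials-testing step between ITER and DEMO mentioned above.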
It appears that pilot plants and reactors may experience rates of net erosion and deposition of PFC material in the range of 10³-10⁴ kg/year, values well above those expected in ITER. The deposition of such massive quantities of material has the potential to interfere with pilot plant and reactor operation and to seriously compromise the safety of the D-T cycle. For example, elevated dust levels due to exfoliated and detached deposits can lead to a high risk of dust explosion. Other adverse effects due to the accumulation of unwanted eroded material at critical locations could result in the appearance of cracks in the cooling channels due to thermal stress [19]. The DEMO design and R&D activities will benefit largely from the strong experimental supporting evidence that will be gained from the design, construction and operation of ITER. Due to the differences (in terms of size and, especially, of ITER's wider mission) between the two devices, not all ITER solutions are directly applicable to DEMO [18,20].

Tungsten

Nowadays, tungsten is considered the most efficient material for components facing high heat flux, mainly due to its high melting point (T = 3422 °C), good thermal conductivity (160 W/m·K), excellent high-temperature stability and low T retention [12,21]. However, major concerns regarding the use of W in fusion reactor applications include its inherent brittleness at low temperature and the embrittlement due to recrystallization and neutron irradiation. To overcome these drawbacks, several efforts have been made to modify W through grain refining, alloying, dispersion of secondary phases and formation of composites. Although W and carbon-fibre composites (CFCs) were initially considered the most promising PFMs, at the end of 2013 the decision was made to discard CFCs due to their tendency to retain T and to opt for a fully tungsten-armoured divertor [22]. A single-null divertor (characterised by toroidal symmetry and one X-point or "null") is to be installed in the bottom area of the VV of the ITER tokamak. It will extract the heat and ash produced in the fusion reaction, minimize plasma contamination and protect the surrounding walls from thermal and neutron fluxes. Specifically, the divertor will consist of 54 divertor cassette assemblies (CAs) operated by remote handling (Figure 3). Each of these CAs includes a cassette body (CB) and three PFCs, namely the inner and outer vertical targets (IVT and OVT) and the dome. In addition, each of these modules will house diagnostic components for plasma control, evaluation and optimization [11,12]. The IVTs and OVTs are placed at the intersection of the magnetic field lines, where the particle bombardment will be particularly intense in ITER. The heat flux to which these components are subjected is estimated to be between 10 and 20 MW/m². Materials and cooling methods that cannot be used in the FW may be used in the divertor, mainly due to the presence of coils located at the bottom of the chamber that bend the outermost field lines to enter the divertor [14].

ITER Design

During the last decades, several PFC designs have been developed with W. The most efficient are the monoblock and flat-tile types. These models (Figure 4) consist of modules that have been machined from a PFM and attached to a water-cooled heat sink made of a metallic alloy. The joints between the PFM and the heat sink must acquire very high mechanical strength to tolerate the high temperatures and keep the modules uniformly in position. Each module is equipped with a cylindrical hole necessary for the junction between the PFM and the heat sink, usually made of CuCrZr.
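The consequence of the heat fluxes quoted above can be illustrated with a one-dimensional steady-state conduction estimate across the armour layer, ΔT = q·d/k. A minimal sketch follows (the 6 mm armour thickness is an assumed, illustrative value, while the flux range and the conductivity of W are the figures quoted above):

```python
# Minimal 1-d conduction sketch (armour thickness assumed for illustration):
# steady-state temperature drop across a tungsten armour layer, dT = q*d/k.
K_W = 160.0   # thermal conductivity of tungsten, W/(m*K), as quoted above

def delta_t(q_mw_m2, thickness_mm, k=K_W):
    """Temperature drop (K) across a slab of given thickness under flux q."""
    return q_mw_m2 * 1e6 * thickness_mm * 1e-3 / k

for q in (10, 20):   # divertor heat-flux range quoted above, MW/m^2
    print(f"q = {q:2d} MW/m^2 -> dT ~ {delta_t(q, 6.0):4.0f} K")
```

Even this idealized estimate gives drops of several hundred kelvin between the cooling channel and the plasma-facing surface, which is why the armour is kept thin and why the quality of the W-to-CuCrZr joint is so critical.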
Despite its good performance, the flat-tile design (Figure 4b) presents the possibility of local overheating of the shielding plate due to the incidence of plasma particles. The loss of even a single tile is considered a rather serious event, as it would lead to the degradation of the joints in the adjacent tiles (so-called cascade failure) [12,23]. Therefore, it has been decided that the ITER divertor will be made entirely of W monoblocks due to their greater robustness against possible accident conditions. Thus, both IVTs and OVTs will be completely covered with this water-cooled material (Figure 5a) [24].

Radiation Effects

The much-feared effect that radiation can have on the divertor components in ITER is shown in Figure 6. In this case, quite serious macroscopic damage resulting from cyclic thermal loads simulating ELMs can be observed. Specifically, 10⁵ pulses with a heat flux of 12 MW/m² were applied to a W sample preheated to 700 °C. The sample shows very intense degradation due to the formation of a dense network of cracks on the surface. The 10⁵ pulses of this experiment correspond to an operating time of only 10 standard plasma discharges in ITER, which points out the potential danger that these pulses can represent for this type of material [12].

Smart Alloys

Due to its excellent properties, W has also been chosen as a prime PFM candidate for DEMO. However, certain accident models have revealed several drawbacks related to the use of pure W, namely its intrinsic low-temperature brittleness, neutron-induced embrittlement and limited recrystallization resistance [21]. To prevent this, new variants have been developed and tested under fusion-relevant conditions. These include the modification of its granular structure and the combination with several alloying elements and compounds (Mo, Ti, Y₂O₃, ...) to increase strength and recrystallization resistance. The latter are the so-called smart alloys (SA), which automatically adapt their properties to the environment [25] [12,23]. One of the major concerns of using pure W in DEMO is related to its behavior under a LOCA (loss-of-coolant accident) with air ingress into the VV. Under those circumstances, the temperature of the tungsten cladding could reach 1000 °C and remain at such a high level for several weeks. At such a high temperature tungsten oxidizes, and radioactive, neutron-activated tungsten oxide sublimates into the environment. During an accident, the alloying elements remaining in the bulk will diffuse to the surface and form their own oxides, protecting the tungsten from oxidation and subsequent sublimation into the atmosphere. Recent studies have demonstrated the benefits of systems based on W-Cr-Y, which are produced by mechanical alloying (MA; a ball-milling process in which a powder mixture placed in the ball mill is subjected to high-energy collisions from the balls, usually carried out in an inert atmosphere [26]) and compacted by FAST (field-assisted sintering technology). These SA have demonstrated very high oxidation resistance; they contain Cr as an oxidizing alloying element and Y as an active element stabilizing and regulating the chromium transport in the alloy system [27,28]. Despite all this development, there are still open questions in both the understanding of the physics and the technological development of SA systems.
The role of Y in the stabilization of a W-Cr solid solution still needs to be further understood, as does the technology for joining SAs to the corresponding structural materials. At the same time, the scale of fabrication at the industrial level needs to be improved. Finally, it is of vital importance to carry out a thorough evaluation of the effect of neutrons and impurities, as well as of transmutation, on alloy performance [28].

Tungsten Fiber Reinforced Tungsten (W f /W)

The intrinsic brittleness of W is of great concern during possible transients with high heat loads. To reduce this brittleness, numerous procedures have been investigated to increase the toughness of the material. However, traditional intrinsic toughening methods present limitations for applications in fusion environments, where high-temperature recrystallization phenomena can occur, causing severe internal damage. To increase fracture toughness (the resistance of brittle materials to the propagation of flaws under an applied stress) and thus improve on the intrinsic brittleness of W, tungsten fiber-reinforced tungsten composites (W f /W) are being developed for use in the divertor of future fusion reactors. Thus far, two main fabrication approaches have been established: powder metallurgy (PM) and chemical vapor deposition (CVD) processes (Figure 7). In both cases, improved mechanical properties have been demonstrated. Generally, W f /W composites created by CVD contain 150 µm diameter unidirectional tungsten fibers coated by an interface layer and embedded in a W matrix [23,29-31]. Figure 7. (a) PM W f /W prototype; (b) typical fracture surface of CVD W f /W. Reprinted with permission from Ref. [29]. © IOP Publishing, 2021. The CVD process (see Equation (3)) consists of applying an interface layer (e.g., Y₂O₃) to the fibers and exposing the composite to WF₆ and H₂ at temperatures between 573 and 1073 K, whereby tungsten is deposited via the hydrogen reduction of tungsten hexafluoride:

WF₆ + 3H₂ → W + 6HF.    (3)

The fibers and the CVD matrix have a major influence on the microstructure, potentially leading W f /W composites to present properties different from those of pure W when eventually exposed to a fusion environment. Despite the fact that certain fabrication aspects still need to be further investigated, CVD is potentially one of the most cost-effective processes due to its fast deposition rate and high mass production from a reduced amount of material [31,32]. Currently, the role of K-doping in W fibers is being studied, since it has been shown to delay heat-exposure-induced embrittlement at least up to 1600 °C, although a strong reduction in fiber strength has also been observed in tests conducted at elevated temperatures [33]. This doped W contains nanobubbles (∼nm) that include K atoms (∼ppm) dispersed mainly at grain boundaries (GB; a 2D defect in a crystalline structure that tends to reduce the electrical and thermal conductivity of the material). Because the K bubbles hinder the movement of these boundaries and of dislocations, they are able to improve the thermal shock resistance and the mechanical properties at high temperature, as well as prevent recrystallization. In addition, it is expected that the embrittlement induced by neutron irradiation can be suppressed, since the material contains numerous GB, which act as sinks for the defects produced. On the other hand, the addition of rhenium (Re) is also considered another very promising procedure, namely as a solid-solution alloy reinforcement [34,35]. These materials have been shown to overcome the low-temperature brittleness of W.
However, their main problem is industrial scale-up, which requires sustained effort over time; this has led to the decision not to consider this material for the DEMO start-up application but to treat it as a high-potential material for later applications (e.g., PROTO) [23].

Beryllium

Beryllium (Be) has been on the candidate list as a PFM since the late 1980s. With the decision to use Be as the material for the ITER FW, research on the conditions and aspects most relevant to fusion has accelerated [36,37]. During ITER operation, the FW coating will be subjected to cyclic thermal loads, resulting in fatigue loads that can trigger melting, cracking, evaporation, and surface erosion [38]. Be has been selected due to its low atomic number (Z = 4), which minimizes the radiation losses caused by sputtered atoms in the plasma (these scale with Z²), its good thermal conductivity, and its capacity for oxygen uptake, which contributes to maintaining a high level of plasma purity [11,12]. Beyond ITER, three Be grades are considered candidates for the FW of a future fusion reactor; these differ mainly in chemical composition, the PM process used, or the compaction method [38].

Prototype for Use in ITER

A total of 440 first wall panels (FWP) will provide a protective barrier for all systems beyond the VV. To appreciate the technological challenge and the magnitude of the ITER project, it is sufficient to consider the temperature gradient between the hot plasma (150 × 10⁶ °C) and the superconducting coils (−269 °C) that will confine it, separated by a mere six meters. In essence, the panels consist of a 6-10 mm layer of beryllium bonded to a copper-alloy heat sink mounted on a 316L stainless steel (an austenitic Cr-Ni-Mo steel with a low C content) structure (Figure 8). Europe will be responsible for producing the first 215 FWPs, while China and Russia will provide the rest [39,40]. In turn, Be is one of the reference neutron multipliers for the various TBM designs and is used in the form of pebble beds. The fabrication process at the most advanced stage of development is the rotating electrode process (REP), which allows the production of pebbles of typically 1 mm [11].

Beryllides

Despite being considered one of the most promising materials, the main drawbacks of beryllium's application as a PFM are its relatively low melting point (Tf = 1278 °C) and its high toxicity [11,12]. Beryllium intermetallic compounds (also called beryllides) such as Be₁₂Ti, Be₁₂V, and Be₁₂Zr are the most promising advanced neutron multipliers for DEMO, specifically for the HCPB design [41,42]. The main consequence of neutron irradiation is significant He and T production, resulting in swelling and loss of strength-related properties of beryllium composites. DEMO-oriented R&D has focused on beryllides, as they promise improved long-term material performance as well as a much lower H production rate compared to pure Be [11]. Preliminary studies on the thermal desorption of He and T from titanium beryllide (Be₁₂Ti) have shown that this material has a much lower retention tendency, in addition to a higher melting point. Some of its strengths are:

• Swelling as a result of exposure to neutron irradiation occurs to a lesser extent.
• It has a higher melting point (1593 °C), lower activation, and higher corrosion resistance.
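The plasma-to-coil gradient quoted above can be made concrete with a two-line calculation; the sketch below simply divides the quoted temperature difference by the quoted distance:

# Average temperature gradient between the ITER plasma and the coils,
# using only the figures quoted in the text.
t_plasma = 150e6   # °C, core plasma temperature
t_coils = -269.0   # °C, superconducting coil temperature
distance = 6.0     # m, separation quoted in the text
print(f"Average gradient: {(t_plasma - t_coils) / distance:.1e} °C/m")  # ~2.5e7 °C/m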
All these advantages have opened the door to more extensive studies of the nuclear, physical, and mechanical properties of this material, with a view to further use in nuclear technology and high-temperature instrumentation.

Diamond

Due to its high Z, the presence of W in the plasma significantly affects plasma stability. Diamond, on the other hand, has a low Z, excellent thermal properties, and, thanks to its sp³ bonding structure, a low T retention rate. For all these reasons, and owing to its outstanding thermal conductivity, diamond can be added to the tungsten matrix to reduce the damage to the material. One proposed approach is to form a W/diamond composite via SPS, which would improve on the thermal conductivity of pure W. Another is to form diamond films via MWCVD, whose chemical purity and excellent adhesion would improve the PFM properties under high thermal loading conditions.

Both the interfacial bonding and the thermal conductivity of composites whose diamond particles carry a W coating are strengthened compared to uncoated composites. The volume fraction of diamond particles in the composites is around 10-50%, and an 18% increase in thermal conductivity over pure W is achieved [44,45]. Figure 9 shows the microstructure of the fracture surfaces of diamond particle-based composites. For the uncoated composites, clear cracks appear between the diamond particles and the tungsten matrix; for the coated composites, the size and number of cracks are significantly reduced [45]. Consequently, the addition of diamond to W facilitates the manufacture of materials with outstanding strength and toughness at temperatures above 1200 °C. These composites are considered valid candidates for the FW of future fusion reactors, as they would combine excellent thermal creep and corrosion resistance, toughness, and resistance to neutron radiation damage [46].

Diamond doping may not seem an economically attractive option, but it should be borne in mind that nuclear fusion holds the promise of becoming a highly safe, efficient, and waste-free energy source. To achieve these goals, it is deemed necessary to work with materials (e.g., diamond) and technologies (e.g., the TBM program) that allow these promises to be fulfilled. It should be noted that the estimated budget for the construction of ITER exceeds 25 billion euros [47], a clear indication that no expense is being spared. Diamond is just one of many materials whose beneficial properties are expected to provide a full return on the investment in their use.

However, significant efforts are also being made to develop diamond-like carbon (DLC). This material is seen as a potential low-cost substitute for diamond in certain applications, but little is known about the temperature range over which its desirable properties are maintained. DLC coatings exist in several different forms of amorphous carbon that display some of the unique properties of diamond. These coatings can be amorphous and more or less flexible, hard, and strong, depending on the composition and processing method required. Films can be formed by deposition techniques (e.g., ion beam, sputtering, or RF plasma) [48]. Both DLC and doped DLC films have shown attractive properties, including high hardness, a low coefficient of friction, and high thermal conductivity.
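A simple rule-of-mixtures bound puts the quoted 18% conductivity gain in context. In the sketch below, the conductivity values are round literature-range assumptions, not data from Refs. [44,45]:

# Naive parallel (upper-bound) rule of mixtures for a W/diamond composite.
# k values are assumed round numbers, not measurements from the cited studies.
k_W = 170.0         # W/(m*K), assumed for pure tungsten near room temperature
k_diamond = 2000.0  # W/(m*K), assumed for CVD diamond
for vf in (0.1, 0.3, 0.5):  # diamond volume fractions quoted in the text
    k_upper = (1 - vf) * k_W + vf * k_diamond
    print(f"vf={vf:.1f}: upper bound {k_upper:.0f} W/(m*K), gain {100*(k_upper/k_W - 1):.0f}%")

That the measured gain (~18%) falls far below this idealized bound is consistent with interfacial (Kapitza) thermal resistance dominating heat transport in real composites.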
For some applications, adherent thick DLC coatings (e.g., ∼10 µm) are desired to provide long-term durability and reliability in harsh working environments (such as those of a fusion reactor) [49,50]. By varying the production conditions, such as the bias voltage, the physical properties of DLC can be tuned to obtain coatings as hard as diamond or as soft as graphite [51]. This material has been tested for several functions in ITER. Firstly, it has been used as a solid-lubricant coating for the transmission gears of the ITER blanket maintenance equipment, thus replacing oil lubricant [51]. It was also chosen for qualification tests of the cold mass support (CMS) sliding pads [52], where it proved potentially suitable, as it could make the sliding interface of the CMS meet all functional requirements of the cryostat feedthrough (CFT) feeder system. Finally, it has been used in the port plug handling system, whose purpose is to insert and remove the ITER port plugs installed at the equatorial and upper levels of the tokamak. Since these plugs can become activated, contamination levels prevent manual access, so their safe removal is ensured by the cask and plug remote handling system (CPRHS) between the buildings. This handling process has been reproduced on a physical-scale mock-up in which the test plug is equipped with a set of aluminium-bronze (DLC-coated) features [53].

Structural Materials

It is clear that projects of the nature of ITER or DEMO require materials developed to their maximum potential, with the clear objective of withstanding extreme conditions of temperature, irradiation damage, and production of transmutation elements. According to L. Malerba et al. [54], a structural material is one manufactured for the purpose of withstanding large amounts of stress, whether of mechanical, thermal, vibrational, or other origin. These materials can be divided into two types:

• Replaceable: Designed to be relatively easy to remove from the reactor. An example would be the fuel assemblies in a nuclear power plant.
• Non-replaceable: They constitute the main structure of the reactor, so they are designed to mitigate as far as possible the degradation caused by external agents. An example would be the FW components.

There is evidently a strong overlap with nuclear fission materials research. However, fusion materials present a few additional challenges. The first is the large amount of He that is produced, both in the D-T fusion reaction and by transmutation reactions in the structure. The He bubbles that form at vacancies and GB cause swelling and embrittlement, which extensively degrade the materials. The second effect unique to fusion reactors is associated with the 14 MeV fast neutron. This high-energy particle penetrates deep into the structure and collides with the lattice atoms, creating numerous defects in the material. The accumulation of this damage in the structural and diagnostic materials is one of the main headaches in the design of this type of reactor [55].

Structural materials consist of crystals whose atoms adopt specific lattice arrangements. Metals and alloys usually consist of regions containing many crystals, called grains, whose boundaries are the aforementioned GB. Ionizing radiation deposits more or less energy in the material depending on the type and energy of the particle and on the medium in which it travels.
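For reference, the 14 MeV neutron mentioned above originates in the D-T fusion reaction, whose standard energy partition between the products is:

D + T → ⁴He (3.5 MeV) + n (14.1 MeV)

The uncharged neutron escapes the magnetic confinement and deposits its energy, and its damage, in the surrounding structure.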
This radiation-material interaction is what generates the defects, which depend directly on the initial defects of the unirradiated sample. Atomistic defects originate from:

(i) Transmutation reactions.
(ii) Atomic displacements due to nuclear stopping power.
(iii) Ionization and excitation due to electronic stopping power.

In addition to changing the original composition of the material, the transmutation process generates H or He, elements that tend to collect in cavities or voids (aggregates of vacancies). These cavities can end up coalescing, forming linear defects (dislocations) that eventually propagate, producing cracks or bubbles at the surface that will lead to the fracture of the material. This forces the material to change dimensions, an effect known as swelling, which is extremely detrimental to the structural integrity of the material. The most relevant material properties are determined by the crystal defects, which are classified as:

• Point defects (e.g., Frenkel pairs).
• Line defects (dislocations).
• Planar defects (e.g., the aforementioned GB).

Point defects are of paramount importance for understanding irradiation damage and thermal properties. The movement of dislocations describes plastic deformation and is therefore key to understanding irradiation-induced changes in mechanical properties. GB, for their part, are regions to which impurities can diffuse, hence the need to know their location [56].

The structural materials that make up the cooling pipes of the BB of the reactor, responsible for T production, electrical generation, and radiation protection, will be subject to a severe operating environment, with damage from fast neutron irradiation, high temperature, and high stress. Requirements for these structural materials include low activation, good compatibility with different coolants, and resistance to irradiation and high temperatures, among others. Thus far, three main candidate low-activation structural materials have been proposed for the FW. These are, in order of relevance, RAFM steels, SiC/SiC ceramic composites, and vanadium alloys.

Reduced Activation Ferritic Martensitic Steels

In a conventional nuclear reactor it is common to use stainless steels, but to withstand the extreme conditions of fusion reactions, these must reach a higher level of design. The neutrons bombarding the structure of the materials can lead to their activation, which is why low-activation materials (those that do not give rise to long-lived radioactive isotopes) must be used. This implies that their chemical composition should be based on elements such as Fe, V, Ti, W, or Ta, among others [57,58]. Two reduced-activation ferritic/martensitic (RAFM) steels have been designed for this purpose: EUROFER (Europe) and F82H (Japan). They contain the iron additives that make up the remaining percentages (Table 2).

Table 2. Composition of F82H and EUROFER steels. Adapted from Ref. [16].

These are considered the reference structural materials because they have already reached "technical maturity"; in other words, extensive experience with their manufacturing and processing methodologies has been accumulated. Unlike fission products, these steels are non-volatile and can be reused after storage for a period of 50 to 100 years. The amount of swelling they can undergo under neutron bombardment is much lower than for conventional stainless steel. As with other materials, their brittleness is due to the He and H bubbles trapped in the material [14].
Test Blanket Module Program

The ITER TBM program is one of the most ambitious projects to be undertaken and plays an essential role in the design and construction of DEMO. Its objective is to develop the design that will allow T to be bred efficiently and safely while extracting heat from the blanket to generate electricity. It is therefore of vital importance to acquire all data and information related to the test blanket systems (TBS) to provide the basis for the design, fabrication, and operation of DEMO and subsequent fusion reactors [11].

From a technical point of view, the TBS are located in two equatorial ports that allow four of these TBS to operate simultaneously. Initially, they were to be implemented in three ports, allowing six designs to operate at the same time [11]. However, a reconfiguration was undertaken to reallocate space on the tokamak, a need that had arisen from initial space limitations and integration issues. The selection process for the four designs that will be part of the initial ITER configuration is currently underway, with one possible option involving two water-cooled TBS and two helium-cooled TBS, although these will not start operating until the last non-nuclear phase of ITER [59]. All TBM designs proposed for testing in ITER use RAFM as the structural material, for the following reasons:

1. It ensures that the BB produces very limited volumes of high-level radioactive waste, thereby seeking public acceptance of nuclear fusion.
2. It is currently the only type of material that presents the necessary structural properties and is able to meet, within the timeframe foreseen for the construction of DEMO, the necessary operational requirements. It has a good overall balance of the required mechanical properties (ductility, fracture toughness, creep and fatigue resistance), and there is extensive industrial manufacturing experience. Moreover, its optimized Cr content (8-9 wt.%) minimizes the radiation-induced DBTT shift [11,23,60].

Four of the designs listed in Table 3 appear in Figure 10, which shows an overview of the 3D model of the four TBS and their associated infrastructures. Each TBS is functionally independent of the others. Various breeding blanket concepts are being studied, with the liquid Pb-Li eutectic alloy (i.e., Pb16Li) being one of the most promising (used in DCLL, HCLL, and WCLL). This eutectic composition has been chosen in particular for its low melting temperature compared to other Pb-Li compositions [61]. These three DEMO BBs use EUROFER as the structural material and the eutectic Pb-15.7Li, enriched to 90% in ⁶Li, as breeder, neutron multiplier, and tritium carrier. T is produced inside the VV in the lead-lithium eutectic, transferred outside the VV by the eutectic alloy flow, and then extracted by the tritium extraction and removal system (TERS) [62]. This system is of vital importance, as it allows the T generated in the blanket to be recovered through a loop in which the PbLi circulates; the T is then routed to the tritium plant to finally be re-injected into the plasma. A new facility, CLIPPER, is being constructed at CIEMAT to investigate this extraction, and the design of a PbLi loop for experiments on H extraction from the liquid metal is presented in [63]. Nevertheless, the use of liquid PbLi raises issues such as liquid metal corrosion, the behaviour of He and T in the liquid PbLi, and the effects of magnetic fields on the fluid mechanics.
There are still some unanswered questions, and different designs have been proposed. One of the key aspects for the future operation of liquid metal-based BBs is the eutectic composition of the PbLi alloy. There is a discrepancy regarding the exact eutectic point, with reported Li contents varying from 15 at% to 17 at%. The presence of impurities and the precise content of this element can have a major impact on experimental activities; a crucial point is how they affect neutronic calculations of the TBR. It is assumed that the PbLi does not remove any thermal power, remaining isothermal at a temperature of about ∼330 °C [4,61]. Depending on various parameters of the Pb16Li alloy (e.g., mass flow rate, temperature, or pressure), the He generated will leave the breeder blanket as dissolved gas or, if the solubility limit is exceeded, as gas bubbles within the Pb16Li. The amount of He bubbles generated in different DEMO-like blanket designs is estimated at about 10-40 mL/h, which can accumulate in the system. Studies such as [64] provide valuable data for the design of liquid metal-based BBs and further fundamental understanding of the inherent complexity of bubble behaviour inside liquid metals. From all these designs and options, experts will decide which TBMs will be used in ITER with a view to their future implementation in DEMO; accordingly, a vast number of reviews can be found on each of them [65-72].

Advanced RAFM

In parallel with the validation and planning steps for the use of F82H and EUROFER97 in ITER, developments are ongoing to modify these steels and improve their performance for DEMO. Specifically, a new generation of 9% Cr steels is being developed, known as advanced RAFM [73]. The strategic vision is to retain the basic structure and advantages of RAFM steels while improving their operational performance. Some target requirements are to operate in a higher temperature range and to withstand neutron damage up to 70 dpa (with the possibility of extending this target to about 150 dpa), while keeping their reduced-activation properties [11,23,60,74]. Within EUROfusion, two clear goals have been adopted for EUROFER97:

• DBTT reduction, with the objective of using it in water-cooled designs.
• Enhanced resistance to high temperatures, in particular improved creep strength, with the objective of using it in He-cooled designs.

Modifications are also being made to F82H through subtle changes in chemistry and thermomechanical processing, including:

• Limiting the amount of Ti to avoid loss of toughness.
• Increasing the Ta and N contents to reduce radiation-induced embrittlement and improve creep resistance, respectively [74].

SiC/SiC

SiC/SiC composites have undergone major development for nuclear fusion applications in recent decades. These materials consist of SiC fibers embedded in a high-crystallinity SiC matrix, with a carbon or carbon/SiC multilayer interface. These composites are generally manufactured by CVI (Figure 11) and have been shown to be resistant to neutron irradiation at elevated temperatures in terms of retention of mechanical properties, which is why this material is one of the leading candidates for nuclear applications. In addition, SiC itself has certain advantages over other candidates, including high temperature resistance (∼1600 °C), low neutron absorption, low activation, and excellent chemical stability [56,75].
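To put the quoted atomic fractions in perspective, the short conversion sketch below (using standard atomic masses, which are an assumption of the example rather than data from Ref. [61]) shows how little lithium by weight the eutectic actually contains:

# Convert Li atomic percent in a Pb-Li binary alloy to weight percent.
M_LI, M_PB = 6.94, 207.2  # g/mol, standard atomic masses (assumed)

def li_wt_percent(at_pct_li):
    x = at_pct_li / 100.0
    return 100.0 * x * M_LI / (x * M_LI + (1 - x) * M_PB)

for at_pct in (15.0, 15.7, 17.0):  # range of reported eutectic compositions
    print(f"{at_pct:.1f} at% Li -> {li_wt_percent(at_pct):.2f} wt% Li")
# Pb-15.7Li is only ~0.6 wt% lithium despite the large atomic fraction.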
SiC is a brittle material, but its fracture toughness can readily be improved by modifying the fiber, matrix, and interface. It is composed of tetrahedra of carbon and silicon atoms with strong bonds in the crystal lattice, which produces a very hard and tough material [55,76,77]. Figure 12 presents the predicted radioactivity of EUROFER and SiC/SiC samples irradiated in a fusion reactor after 25 years of operation at full power. After 100 years, the activity of SiC/SiC has been reduced by a factor of almost 10⁶; it can therefore be considered an excellent low-activation material [14]. The system composed of a silicon carbide matrix reinforced with silicon carbide fiber (SiCf/SiC) has undergone continuous evolution thanks to its strong performance in hostile irradiation environments. It is therefore being considered for a variety of areas, both for structural fusion applications and for other high-performance applications such as aerospace engineering.

Flow Channel Insert

Among all available TBS designs (Table 3), SiC is of vital importance for the DCLL. In this design, a liquid PbLi alloy flows through a series of channels, acting as coolant and T breeder. As a result, it reaches high temperatures (∼700 °C) and provides high thermal efficiencies (∼45%). However, the development of this concept requires a high level of R&D in various areas of study [78,79]. Among the challenges to be addressed is the development of the channels themselves, called flow channel inserts (FCIs). These are hollow square channels (∼5 mm) containing the liquid metal flowing at 10 cm/s. Their main objectives are:

• To protect against corrosion and/or infiltration of PbLi during the operating lifetime.
• To provide thermal insulation, protecting the steel structure (RAFM) of the blanket from the high temperatures of the PbLi.
• To provide electrical insulation, avoiding the EM interactions generated between the fluid and the intense magnetic field present in the reactor; this minimizes the MHD pressure drop.

One possibility is the development of a sandwich-type material comprising a porous SiC core (thermal and electrical insulation) and a dense SiC coating (protection against PbLi corrosion and infiltration) [75,78-80]. Figure 13 shows the different geometries of the FCI prototypes produced in [79] by the gel casting method after sintering, oxidation, and CVD-SiC coating. All have a cross-section of 25 × 25 mm² and a porous core 5 mm thick. The method used to create these prototypes, gel casting, is an advanced ceramic composite preparation technique developed at ORNL [81]. It presents numerous advantages over conventional PM techniques, especially for the production of FCIs, where the fabrication of complex shapes of relatively large size is required. Gel casting is a low-industrial-cost technique able to produce uniform bodies with high strength while reducing or limiting defects such as particle agglomeration, pores, and cracks [79,81,82]. This technology paves a new way for the preparation of ceramic parts with potential nuclear applications.

Tritium Permeation Barriers

The permeation of T through structural materials is a crucial issue for both radiological safety and the breeding of this element (TBR > 1).
Structural materials such as 316L and F82H steels cannot meet the service requirements of D-T fusion power plants, because the solubility and permeation rate of H in these materials are quite large, particularly at high temperatures, with serious consequences in terms of embrittlement. A reduction in permeation through the steel can be achieved by using tritium permeation barriers (TPB) [83,84]. These barriers carry demanding performance requirements, such as radiation and corrosion resistance, low activation, high thermomechanical integrity, breeder compatibility, and applicability to large components [83]. The materials available for TPB can be classified into metal oxides, especially Al₂O₃, and non-oxide ceramic composites, such as SiC coatings [85]. For one of the TBM program designs (HCPB), which foresees the use of Li₄SiO₄ and/or Li₂TiO₃ as breeders, certain studies [84] have proposed the use of SiC coatings because they:

• Satisfactorily withstand corrosion tests.
• Reduce the permeability of the steel by up to three orders of magnitude.
• Are inert, so they do not react with the surrounding medium, keeping the Li out of the EUROFER.

Plasma Facing Components

Numerous PFC concept studies are currently underway due to the good compatibility between W and SiC (in terms of thermal expansion, bonding technology, and operating temperature window) [86,87]. The development of technology to join SiC/SiC to itself or to other materials is essential for integration into various nuclear applications. These bonds are intended to provide mechanical robustness, tolerance to neutron irradiation, and a degree of chemical stability in the operating environment [80]. When W is reinforced with a ceramic material such as silicon carbide, a so-called metal matrix composite is produced. SiC-reinforced W metal matrix composites have been fabricated and shown to present favorable properties, such as increased resistance to corrosion and abrasion. Following the Fukushima disaster, SiC-based cladding was proposed to replace the current zircaloy, and it is also one of the leading candidates for use as a structural protective layer for fuel particles in Gen IV reactors. This research has significantly advanced fabrication technology as well as the understanding of material properties; because the technological hurdles are shared, these technologies and insights can be applied to the development of fusion materials [75,76].

Main Disadvantages

As a PFC, SiC will experience high heat fluxes and damage from neutrons and charged particles. Impinging α particles will implant in the surface of the material, increasing its He content. It is therefore pertinent to further investigate the behavior of SiC under fusion-relevant conditions (He-appm, dpa, and irradiation temperature). Since current research is mainly focused on microstructural characterization, knowledge of transmutation effects on macroscopic properties will be essential to evaluate the performance of SiC in fusion reactors [75,76]. In [80], fusion-spectrum neutron irradiation has been reproduced to test whether it is indeed one of the key factors limiting the lifetime of SiC compounds. Production rates of 50-180 appm He/dpa and 20-70 appm H/dpa are predicted for gaseous transmutations, and 10-45 appm Mg/dpa, 5-18 appm Be/dpa, 3-14 appm Al/dpa, and 0.2-1.5 appm P/dpa, depending on the blanket concept.
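To illustrate what a "three orders of magnitude" permeability reduction means in practice, the sketch below applies the standard diffusion-limited permeation law scaled by a permeation reduction factor (PRF); every numerical value is an illustrative assumption, not data from Refs. [83-85]:

import math

# Diffusion-limited H permeation flux J = (Phi / d) * sqrt(p), reduced by a
# permeation reduction factor (PRF) when a TPB coating is added.
phi = 1e-11   # mol H2 / (m*s*Pa^0.5), assumed bare-steel permeability
d = 2e-3      # m, assumed wall thickness
p = 1e3       # Pa, assumed upstream H2 partial pressure
prf = 1e3     # three orders of magnitude, as quoted in the text

j_bare = (phi / d) * math.sqrt(p)
print(f"bare wall: {j_bare:.2e} mol/(m^2*s)")
print(f"with TPB:  {j_bare / prf:.2e} mol/(m^2*s)")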
As with other materials, the high rate of He production tends to cause swelling, owing to the stabilization of interstitial helium at intermediate temperatures during ion irradiation and of vacancy clusters at high temperatures (>1000 °C). One of the main obstacles today is the price of the components, as well as the high degree of compatibility that must be achieved between matrix, fiber, and interface. In turn, cavity swelling within the application temperature window and the effects of large amounts of He at high temperatures are still unknown. Finally, one of the most commonly used fabrication processes (CVI) produces a microstructure with approximately 10% porosity, which is therefore permeable to gases; for these reasons the material has certain limitations, such as a relatively low thermal conductivity and stress limit. As with vanadium alloys, there is some concern about the lack of manufacturing infrastructure and the potential costs. There is therefore a need to develop more efficient fabrication methods and a large-scale joining technology [55,56,80].

Vanadium Alloys

Since being considered as candidates for LMFBR cladding materials in the 1970s, V alloys have always been linked to the nuclear industry. In the 1980s, their use in fusion reactors was considered due to their low-activation properties, and they have since evolved to the point of being regarded, together with RAFMs and SiC/SiC composites, as one of the three most promising structural materials for fusion reactors. Vanadium alloys play a key role, as they are contemplated for most advanced DEMO designs using liquid Li as the breeding and cooling material (Figure 14) due to their good compatibility with this element [88-92]. In addition, these alloys exhibit good resistance to corrosion and irradiation swelling and, in particular, maintain their resistance at high temperature. With a V-alloy structure, blanket designs using liquid Li can increase the coolant temperature and achieve relatively high TBR values without the need to introduce Be as a neutron multiplier. This has a very attractive consequence: without Be, the system is freed from the radiological problems arising from its toxicity. Nevertheless, this concept has two main problems: T recovery from the liquid lithium and the MHD pressure drop [90,93].

The typical vanadium-based alloy is V-4Cr-4Ti. The addition of Cr provides improved strength and creep resistance, while Ti provides good resistance to irradiation-induced void swelling in the vanadium matrix (BCC structure). During the fabrication and processing of these alloys, impurity levels (e.g., C, O, and N) must be carefully controlled to avoid degradation of the mechanical properties. For this reason, a protective or high-vacuum atmosphere is normally used to prepare the alloy, whose maximum operating temperature is 700 °C [58,91,93].

Main Disadvantages

The main drawbacks of this type of advanced material stem from its immature large-scale manufacturing technology, to which must be added the detrimental effects of He transmutation on mechanical properties and of radiation on fracture properties. Again, a conclusive characterization of all these aspects will require the development of a facility equipped with a fusion-spectrum neutron source.
The T retention characteristics of vanadium alloys leave much to be desired, as their H permeability is at least two orders of magnitude higher than that of any other blanket material, and they can form detrimental hydrides [88,94]. In addition, their high diffusivity and solubility coefficients pose a serious problem: H embrittlement must be avoided at all costs, since it contributes to the degradation of the material [89,91]. V-4Cr-4Ti is expected to accumulate a damage level of 50-80 dpa/fpy, which may result in the formation of defects (dislocations, bubbles, etc.). Possible effects of neutron damage include hardening, embrittlement, and swelling. Irradiation studies of structural materials indicate that V alloys show significant resistance to irradiation damage above 400 °C; their hardening below 400 °C, however, is related to the formation of point defects and clusters. Additionally, it has been found that the dissolution of Ti-rich precipitates may affect the hardness of welded joints [93]. The hardening of these alloys is significant at relatively low temperatures, as illustrated in Figure 15.

Near Future

As an advanced choice of structural material for fusion applications, V alloys have seen great advances in manufacturing technology in recent years. Research on coating and corrosion, irradiation damage, and H isotope retention has also progressed considerably. However, critical problems related to high-temperature operation and low-temperature embrittlement of the material remain to be solved [90,93]. Because efforts in recent years have focused more on a mature candidate such as RAFM, progress in the development of V alloys has slowed compared to a decade ago. Despite this, and because advanced options need to be explored to mitigate risk and provide a higher long-term performance option, research into these materials remains crucial and can be pursued through efficient use of the available infrastructure. From this point of view, exploring new advanced materials to improve their performance becomes all the more meaningful. One example may be the coating of V alloys in the FW with shielding materials, such as W layers, by vacuum plasma spraying (VPS) [90].

Finally, for the near future, it is worth mentioning the effort being made to assess high-entropy alloys (HEAs). By creating HEAs from elements with favourable properties in terms of nuclear activation, materials that can withstand the nuclear fusion environment can be manufactured while minimising the radioactive waste produced. Such materials could be used under the extreme thermal and irradiation conditions of a fusion blanket, as they demonstrate impressive combinations of attractive properties such as high strength and high fracture toughness. In addition, these alloys may offer superior irradiation damage tolerance due to the high lattice distortion and sluggish diffusion arising from the multiple principal elements in HEAs [96-98]. These alloys, which are multi-element, equiatomic metallic systems, can crystallize in a single phase despite containing high concentrations (20-25 atomic percent) of various elements with different crystal structures [99]. Fascinating new materials may emerge as this subject develops. Nevertheless, no studies have yet been conducted on FCC alloys containing V, and theoretical conclusions to date have been drawn only for a small range of alloys [97].
Oxide Dispersion Strengthened Steels

As mentioned in Section 4.1, RAFM steels have been chosen as the structural material for the ITER blanket modules. However, these steels present certain limitations for future advanced DEMO systems, and it is here that ODS steels may come to play a key role. The properties of these materials rest on a broad dispersion of oxides distributed almost homogeneously throughout the composite. These precipitates tend to be stable under high-temperature conditions (∼900 °C) and practically inert chemically, providing a high-performance material for nuclear applications. ODS steels are potential candidates for fuel cladding in SFRs, as well as for Gen IV fission and fusion reactors. They consist of fine oxide particles dispersed in the RAFM steel matrix, which trap irradiation-induced defects. The oxide particles also act as obstacles to the movement of dislocations, thereby strengthening the steel at high temperatures. The size of the dispersoids (substances dispersed as microscopic particles in a gaseous, liquid, or solid medium) that impart the necessary properties to the matrix is in the nm range.

Main Constituents

Given their structural variability, ODS steels contain, among others, the following constituents:

• Y₂O₃: The most important component, as it improves creep resistance at high temperatures by pinning mobile dislocations and delays void swelling by acting as a sink for the point defects produced during irradiation. Its content is optimized at 0.35%.
• Cr: Sets the maximum service temperature. Generally, steels whose Cr content is restricted to 8-9% work below 600 °C, while those with Cr above 12% operate at around 800 °C. Finally, those with a content between 14 and 16% can achieve higher temperatures but also show embrittlement due to thermal aging.

Compatibility between RAFM and ODS Steels

The combination of RAFM and ODS steels can be very useful in expanding the design scope of the blanket modules, with ODS placed in the more severe environments and RAFM used for large components in less demanding environments. The attractiveness of ODS lies not only in the oxide particles but also in the ability to control their structural morphology depending on the properties required. A fine particle distribution of Y₂O₃, the most frequent dispersoid, is essential to improve the high-temperature strength of ODS steels [100]. This improvement is achieved by the dissociation of these particles during the MA process, which gives the material an ultrafine microstructure with unique properties [103]. Several studies [100] have shown that ODS steels offer higher operating performance than RAFM steels (Table 4). These promising properties make ODS one of the most promising structural materials for the future, and their operational compatibility between fission and fusion environments makes it possible to unify research and resources. Their potential applications are thus focused on Gen IV fission reactors and on serving as a potential substitute, or companion, for RAFMs in DEMO [100,101]. Specifically, their use in DEMO is proposed for the DCLL blanket design. This concept has limitations of use due to the maximum acceptable temperature at the FW and the compatibility of the structural material with LiPb, which restricts the allowable interface temperature to about 550 °C.
The use of ODS with a temperature limit based on its higher strength would increase operability, but it should not be forgotten that the welding requirements would complicate fabrication [104]. Numerous studies are currently underway to achieve better compatibility between RAFM and ODS. In particular, a new oxide dispersion-strengthened EUROFER steel has been tested using a two-step MA route. Starting from atomized EUROFER powder, batches with average particle sizes of ∼60 µm and ∼120 µm were milled separately after the addition of nanoscale Ti and Y₂O₃. The result is a material whose microstructure is characterized by two distinct regions: zones with a high particle density (HDPZ) and zones with a low particle density (LDPZ). The coexistence of these regions has been shown to significantly improve the mechanical properties of the new EUROFER-ODS compared to other equivalent steels [105].

Problems to Be Solved

The biggest challenge presented by these materials appears to be their anisotropy (the quality of exhibiting properties with different values when measured along different axes), together with the difficulty of distributing the dispersoids uniformly and controlling their stability under irradiation [100,101]. Recently, studies have appeared [57] that reveal a worrying tendency of these steels to retain D. Specifically, the diffusivity of D in ODS is an order of magnitude lower than in RAFM and CNA steels, and consequently the effective solubility of D is 2 to 10 times higher. TDS measurements were carried out to evaluate the deuterium desorption (the release of a fluid previously adsorbed by a material) of these materials after a static thermal loading of deuterium at 723 K for 1 h under a pressure of 1.0 × 10⁵ Pa. ODS steels exhibited the highest D retention and broader desorption peaks, indicating a variety of trapping sites due to the ultrafine grains and high-density oxide nanoparticles characteristic of ODS [57]. Moreover, their complex production by MA entails high fabrication costs as well as low production volumes, the aforementioned likelihood of anisotropic mechanical properties, and low toughness [57].

Diagnostic Materials

Diagnostic systems will play a fundamental role in the control of the fusion process and will allow a better understanding of plasma physics. To accomplish this, the tokamak must be equipped with sensors and instrumentation capable of fully probing the operating environment [106]. For the continued operation of ITER, it is of vital importance to predict the behavior of structural and functional materials under neutron and γ radiation, as radiation will degrade material properties and thus deteriorate performance. There are about two hundred different diagnostic systems in ITER, located at various positions in the machine ranging from the FW to the outer areas of the VV. The greatest radiation damage will be induced in the components closest to the plasma, i.e., the first wall samples (FWS) and retroreflectors (RR) [107]. Moving away from the FW, several magnetic sensors and bolometers (instruments that measure the total electromagnetic radiation coming from an object across all wavelengths) are placed between the blanket modules and the VV wall. Finally, the electronics, cameras, and optical fibers are examples of external diagnostic components, also called "ex-vessel components".
As can be seen in Figure 16, the further the components are from the FW, the smaller the radiation effects they suffer. Two types of effects on their performance can be distinguished:

• Dynamic radiation effects: These substantially influence the performance of components from the onset of exposure to radiation environments.
• Long-term radiation effects: These gradually degrade performance capabilities [107].

Hundreds of diagnostics will be used in ITER, but their number and type will be reduced in DEMO due to the restricted space available for diagnostics and the harsh operating conditions. This is because the requirements for achieving high reliability in DEMO plasma control are much stricter than in any other existing fusion device: operational failures that could lead to disruptions, with their damaging consequences for inner DEMO components, must be strictly avoided. Hence, compared to ITER, the implementation of diagnostics in DEMO is even more limited by the adverse effects that degrade the front-end components (e.g., ionizing radiation, or erosion and deposition on the material). To achieve high reliability and durability, the main diagnostic methods for DEMO have been selected on the basis of robustness, and the front-end components are intended to be mounted in protected locations to reduce loads to acceptable levels [108]. The limited space available and remote maintenance considerably reduce the design freedom for the layout of the control system and its main components; for this reason, the implementation of the diagnostic components in the blanket has to be studied in depth [108,109]. At the same time, the need to retract components into protected locations can only be compensated by integrating a large number of individual channels and sightlines, which in turn represents a huge design effort and will occupy significant space in the tokamak [109]. Finally, it is important to note that not all key features of the DEMO plasma scenario and technology have yet been well defined and simultaneously demonstrated in large-scale experiments under relevant conditions. Therefore, at the current stage of DEMO research, the development of diagnostic and control systems should generally proceed while accounting for the significant uncertainties associated with the plasma scenario and the machine properties [108,109].

First Wall Samples and RetroReflectors

FWS and RR are part of the set of diagnostic materials that interact with the plasma and are located in the FWPs. The FWS are designed to monitor the sputtering of Be under neutral particle bombardment and its possible fuel retention, and they can reach operating temperatures of ∼300-400 °C. The samples have a thin layer of Be on the surface and are made of the structural material CuCrZr. Sputtering resolution of the Be in the range from 1 to 100 µm is obtained with special markers (5 to 10 of them), placed at various depths from the surface [11,107]. A candidate marker material is C, as it would form a strong chemical bond with beryllium (Be₂C), thus reducing diffusion. A conceptual design of a FWS is shown in Figure 17: the sample body allows for remote handling, while the reference point (made of Mo or W) indicates the deposition that has occurred on the sample.
On the other hand, RRs are optical devices that form part of the polarimetry diagnostics (polarimetry measures the optical rotation produced on a beam of polarized light passing through an optically active substance) and reflect light in a direction parallel to the incident beam. Their function is to return the probing laser beam to detectors located several tens of meters away. Temperature differences can introduce thermal distortions that modify the profile of the returning laser beam, introducing errors into the measurements [11]. The reflective parts of the RR are made of W, and the same structural material is considered for them as for the FWS; in addition, both RR and FWS will have several parts made of type 316L steel. The service life of the FWS is expected to be approximately 2 years, and that of the RR somewhat less. Both elements should be able to withstand a neutron fluence of ∼10²⁴ n/m², and the neutron doses (∼1 dpa) are not expected to significantly affect the thermal and mechanical properties of the materials used [107].

Mirrors

Optical components (mirrors, windows, lenses, etc.) are of vital importance for the operational control and safety of a nuclear fusion reactor. They are present in all the diagnostic systems required for the analysis of plasma optical radiation, so that almost half of the ITER operating parameters will be measured with devices of this type [11]. Nevertheless, such components cannot be used directly facing the plasma. That is why every diagnostic system includes a reflective mirror, also called a first mirror (FM). Optical designs for diagnostics in ITER thus include a plasma-facing FM in the high-radiation region, followed by at least one secondary mirror (SM), before the signal finally reaches the lenses. This arrangement is depicted in Figure 18. Single-crystalline materials (Rh and Mo) have demonstrated good optical performance under plasma sputtering conditions and are currently considered the leading materials for FMs [111]. Specifically, the main FM material is monocrystalline Mo, which is characterized by good sputtering resistance, thermal shock resistance, and good compatibility with the RF-wave mirror cleaning method; it can therefore cope with situations dominated by either erosion or co-deposition. Other materials also under consideration, though they have not yet shown this compatibility in all respects, are nanocrystal-coated polycrystalline Mo, W, Au, and other high-reflectivity materials coated with a thin oxide film (Al₂O₃, ZrO₂). The fluence expected for FMs is ∼10²³-10²⁴ n/m² over the entire lifetime of ITER, with temperatures ranging from 50 to 300 °C depending on the location. Stainless steel grade 316LN-IG (a modification of grade 316LN with a reduced concentration of activation-susceptible elements, mainly Co, Nb, and Ta [112]) is the main structural material [107,112,113].

Windows

In order to reach the desired temperature in the plasma, external heating methods are necessary. The question one may ask is: how does this extra energy get into the hermetically sealed VV and stay there? The answer lies in windows that act as a "transparent" but highly resistant barrier made of artificial diamond. Specifically, ITER's diagnostic systems will use more than 100 sets of windows in the primary and secondary vacuum boundaries [113]. F4E has signed a contract with a German company that will be responsible for the production of 60 diamond discs made by CVD.
Each of these transparent barriers will be bonded to the window body through a thin metal foil [114]. Diffusion welding (DFW), a joining process under heat and pressure in which the contact surfaces are bonded by the diffusion of atoms, using either Al or Au, is considered the gold-standard technique for the assembly of ITER windows [115]. Initially, candidate materials for the windows were fused silica, synthetic crystalline quartz, and barium fluoride, among others [107]. The samples in Figure 19 will have a thickness of 1.1 mm and a diameter of 7 cm; but why did the scales finally tip in favor of diamond?

Figure 19. Sample of diamond windows produced by CVD. Reprinted with permission from Ref. [116]. © Diamond Materials, 2020.

Windows are fairly common elements in X-ray and similar machines used in scientific facilities, where they act as a barrier. However, these more common types of windows are not prepared to withstand the conditions of ITER. For this reason, a search was initiated for a synthetic diamond window that would satisfy the required conditions, i.e., transmit the microwave beam of the electron cyclotron heating system while protecting the surrounding systems. In addition, these diamond discs not only have excellent mechanical and thermal properties but also a considerable radiation hardness, up to approximately 10⁻⁴ dpa. This is particularly relevant, as they must comply with strict radiological regulations, since they will act as a barrier against T [113,116]. Despite all these positive aspects, it has also been shown that diamond exhibits a substantial drop in thermal conductivity under irradiation as a consequence of phonon scattering [117]. This has implications for its use in DEMO, given the power transmission requirements, and will require careful design to minimize exposure. While there are some initial tests of windows incorporating new materials, this field will need to be developed for higher fluences in the future. For example, amorphous SiO₂ is currently the candidate for Vis-NIR windows, while for the millimeter-range IR-FIR a selection of other materials is being investigated, such as CaF₂, BaF₂, and ZnS, among others [117].

Bolometers

Bolometric systems placed around the VV provide information on the spatial distribution of the radiated energy in the plasma and in the divertor region through a data tomography process. They can thus measure the total radiated power of the plasma with largely constant sensitivity from the visible range up to a photon energy of ∼25 keV [118,119]. A bolometer consists of a heat-absorbing body connected to a heat sink (an object kept at a constant temperature) through an insulating material or substrate. Bolometers were invented about 150 years ago and have since been used in various branches of physics (e.g., in astronomy, to detect very slight changes in radiation). A basic bolometer consists of a thin strip of metal that absorbs radiation and is thereby heated; the resistance of this strip is then measured to determine how much radiation has been absorbed [118]. In the ITER bolometers (Figure 20), the absorber body is preferably a thin layer of gold up to 20 µm thick. The gold can transmute into mercury (in principle not a concern), and its resistivity is also expected to increase significantly during irradiation. The Au absorber body and Pt meanders will be applied on opposite sides of the substrate, which is made of silicon nitride [11,107].
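As a minimal illustration of the measurement principle just described, the sketch below recovers the absorbed power from the steady-state resistance rise of the metal strip; every parameter value is invented for the example and is not an ITER specification:

# Resistive-bolometer principle: infer absorbed power from the resistance rise.
R0 = 1000.0     # ohm, strip resistance at the reference temperature (assumed)
alpha = 3.9e-3  # 1/K, temperature coefficient of resistance, Pt-like (assumed)
G = 2.0e-5      # W/K, thermal conductance of the link to the heat sink (assumed)

def absorbed_power(R_measured):
    # Steady state: P = G * dT, with dT inferred from dR = alpha * R0 * dT.
    dT = (R_measured - R0) / (alpha * R0)
    return G * dT

print(f"{absorbed_power(1010.0):.2e} W")  # a 10-ohm rise -> ~5e-5 W absorbed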
In order to determine which materials should be used, numerous irradiation tests were carried out; these showed that bolometer sensors with a Si₃N₄ substrate can withstand irradiation doses corresponding to 0.3 dpa, above the 0.1 dpa limit imposed by the ITER project requirements, while their measured resistance after irradiation increased by only about 20-30 Ω [120]. A total of 100 bolometers will be strategically placed around the tokamak to continuously measure the total radiation and the radiation profiles, providing the plasma control system with much of the information necessary for its operation. One of the most demanding requirements is that the bolometers must provide data readings in as little as 1 ms. In addition, they must be robust enough to operate in vacuum under high neutron fluxes and at ambient temperatures above 200 °C. To obtain a complete picture of the radiation profile, the arrangement of the bolometers is optimized for tomographic reconstruction using up to 500 lines of sight crossing the plasma. These lines are hosted in 22 cameras mounted directly on the surface of the VV, distributed over six different sectors [11,118,119]. The complete bolometric system is not needed during the early phases of ITER, given the low-power scenarios that will be run at the beginning, when there will not be much radiated power to measure. However, infrastructure components such as cables and mechanical supports do need to be placed and aligned to be ready for the next assembly phase [119].

Wide-Angle Viewing System

The WAVS is an optical diagnostic intended to monitor the status of the FW and divertor for tokamak protection purposes. To this end, it will provide real-time measurements in the visible (656 ± 1 nm) and infrared (4 ± 0.1 µm) spectral ranges from inside the VV to prevent potential damage to the PFCs. The material used for all hardware is 316L stainless steel, except for the mirrors, which are made of Zerodur (an inorganic, non-porous glass-ceramic made of Si, Al, and Li oxides, characterized by extremely low and homogeneous thermal expansion throughout the entire volume). It will comprise 15 sight lines installed in four equatorial ports (nos. 3, 9, 12, and 17) that cover approximately 80% of the FW surface, each with an approximate length of 10 m [121-123]. In Figure 21, the various components (which share an optical path) transfer the visible and infrared signals from the port-plug through the interspace and bioshield regions to the port-cell [121]. Essentially, for each line of sight, the light emitted by the PFCs is collected and travels more than 10 m through a series of mirrors and lenses to the cameras located at the back end of the port-cell. In total, the WAVS will include over 600 opto-mechanical components, in addition to other non-optical ancillary systems [123]. At the end of the system, inside the shielded cabinet, a beam splitter separates the wavebands into two independent channels, each with two identical cameras. The system has already reached its preliminary design phase, and to reduce costs its assembly will be modular (Figure 22) [121]. Given the harsh conditions of ITER, exhaustive tests (irradiation, steam ingress, etc.) have been carried out in recent years with collaborating entities (CIEMAT, KIT) with the aim of selecting suitable materials for these optical components [123].
IFMIF-DONES

For some years now, the EUROfusion program [124] has called for a materials irradiation project capable of simulating the neutron flux that materials would experience in a fusion reactor. The construction and operation of such a facility is now considered of vital importance for the future of ITER and DEMO. It was decided to opt for a facility based on nuclear stripping reactions (a process in which the nucleus of a projectile grazes a target nucleus, which absorbs part of the projectile while the remainder continues past the target) and to include it in the project launched in the early 2000s known as IFMIF. The objective of this project is to deepen knowledge of the behavior of the materials required for the construction of a future fusion reactor [125-127]. This is how the idea of IFMIF-DONES was born: a facility that will generate a high-intensity neutron source with characteristics similar to those of a future nuclear fusion reactor. Prior to this, another facility known as IFMIF-EVEDA had already been put into operation; together, EVEDA and DONES make up the entire IFMIF project. EVEDA is located in Rokkasho, Japan, and has been in operation since 2007 within the framework of the Broader Approach (BA) agreement between the EU and Japan. Its objective is to validate and provide actual operating data with prototypes of partial DONES systems, i.e., it acts as a precursor facility. The decision to start construction of IFMIF-DONES is expected imminently [126].

Is This Project So Relevant?

DONES is the version aimed at characterizing structural materials. European in scope and coordinated by EUROfusion and F4E, it has been catalogued by ESFRI as a strategic research infrastructure for Europe, and Granada (Spain) has been proposed as its site (Figure 23). It will generate a neutron flux with a wide energy distribution covering the neutron spectrum typical of a (D-T) fusion reactor. As is well known, ITER will be followed by another fusion reactor, in this case a demonstration reactor (DEMO), which will allow the generation of electric power. For this energy production to be possible and profitable, it is necessary to develop materials capable of resisting high-energy neutrons and high heat fluxes for use in the FW and the blanket. Testing materials and different blanket concepts in a fusion environment is thus an indispensable step for the DEMO design [127]. Furthermore, since DONES will be available during the operation of ITER, the possibility that it could assist that project in some aspects of its nuclear operation phase should not be ruled out [125].

PFMs in DEMO and future fusion power plants will be subjected to an unprecedented flux of 14.1 MeV neutrons. The displacement and transmutation effects occurring in the FW due to He and H accumulation will limit the lifetime of components to only a few years at full power; components must therefore be replaced periodically to avoid the dreaded embrittlement. ITER will accumulate only about 3 dpa over its full operational lifetime, while DEMO will accumulate about 30 dpa per year at full power. DONES aims to deliver more than 20 dpa/year in its high-flux test module (HFTM), which is capable of hosting about 1000 small samples [125,127,128].
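A quick sketch based on the dose figures above shows why a dedicated source is needed; the 150 dpa figure is the extended advanced-RAFM target mentioned earlier, and the other numbers are as quoted in the text:

# Comparison of the dpa figures quoted in the text.
demo_rate = 30.0    # dpa/year in DEMO at full power
dones_rate = 20.0   # dpa/year deliverable in the DONES HFTM (lower bound)

print(f"{demo_rate / dones_rate:.1f} DONES-years per DEMO full-power year")
print(f"{150.0 / dones_rate:.1f} years to the extended 150 dpa RAFM target")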
DONES is a particle accelerator, specifically a deuteron accelerator (a deuteron is the nucleus of the deuterium atom), operating at relatively low energies (∼40 MeV) but with a very high current (Figure 24). The particle beam will hit a 25 mm thick lithium target, so that D-Li interactions generate neutrons with energies similar to those faced by the FW of a nuclear fusion reactor [125]. The energy deposited on the Li target is very high, so the lithium must be in a liquid state, running in a closed circuit known as the lithium loop. The technology used in DONES will be pioneering in many respects, largely due to the uniqueness of the accelerator, which is intended to run at a current of 100-125 mA. At such a high intensity, the mutual repulsion of the particles tends to make the beam expand and thus become more difficult to confine. To avoid this, many magnets are necessary, which raises the component density well above that of any existing accelerator. Initially, the beam power to be managed in the DONES project is 5 MW, which will increase to 10 MW with the extension of the project to full IFMIF. Due to the presence of neutrons, some areas of the facility will become significantly activated, which implies that maintenance of the core of the facility must be done remotely with the help of robots and automatic systems. Ensuring the safety and availability of the entire facility, so that interventions in activated areas remain relatively short, is one of the biggest challenges of the project. The neutrons to be generated have characteristics that allow the use of this facility not only in the field of nuclear fusion but also in physics and nuclear medicine studies [130]. At DONES, HFTM development will focus on the irradiation of RAFM steels, W and copper alloys [128].
RAFM Steels Irradiation
Irradiation of RAFM steels (typically 8-10% Cr and 1-2% W) for application as a structural blanket material is considered a priority task for IFMIF-DONES. The irradiation capsules offer a volume of 54 cm³ for the samples and provide quasi-isothermal irradiation conditions (the temperature of the system is kept effectively constant at each step, requiring continuous thermal equilibrium). The irradiation temperature for each capsule varies between 250 and 550 °C. In central regions with considerable volumetric heating, samples will be embedded in liquid sodium to homogenize the temperature. The incident neutron flux density in the HFTM is 5 × 10¹⁴ n·cm⁻²·s⁻¹, while the average structural damage rate in the capsules reaches 12-25 dpa/fpy. The helium production rate is about 13 appm He/dpa and the hydrogen production rate is about 53 appm H/dpa. These ratios are fairly homogeneous throughout the area and are very similar to those expected in the DEMO FW. The overall lifetime of the HFTM (1-2 years) is limited mainly by three factors: the structural damage that its external surface can withstand, the creep damage to the hermetically sealed capsules at high temperature, and the lifetime of the electrical heaters and other instrumentation [128].
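To put the quoted HFTM figures in perspective, the damage rate and the gas-production ratios can be combined into a cumulative exposure over an irradiation campaign. The short sketch below does this arithmetic; the campaign length and the assumption of constant rates are illustrative simplifications, not DONES operating parameters.

```python
# Back-of-envelope accumulation of displacement damage and transmutation gases
# in a RAFM sample irradiated in the DONES HFTM, using the rates quoted above
# (12-25 dpa/fpy, ~13 appm He/dpa, ~53 appm H/dpa).
def gas_accumulation(dpa_per_fpy, fpy, appm_he_per_dpa=13.0, appm_h_per_dpa=53.0):
    """Return (total dpa, appm He, appm H) after `fpy` full-power years."""
    dose = dpa_per_fpy * fpy
    return dose, dose * appm_he_per_dpa, dose * appm_h_per_dpa

# Assumed campaign of 1.5 fpy, roughly the quoted HFTM lifetime of 1-2 years.
for rate in (12.0, 25.0):  # low and high ends of the HFTM damage rate
    dose, he, h = gas_accumulation(rate, fpy=1.5)
    print(f"{rate:4.1f} dpa/fpy -> {dose:5.1f} dpa, {he:6.0f} appm He, {h:6.0f} appm H")
```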
Cu Alloys and W Irradiation
W and Cu alloys are currently considered as reference materials for the FW and divertor (Figure 3). Tungsten has been proposed due to its high melting temperature, and CuCrZr alloys due to their high thermal conductivity together with their good mechanical properties. However, their behavior under the extreme irradiation conditions expected in ITER and DEMO is still unknown. DONES has been conceived as a plant similar to EVEDA but simplified, in order to provide, on a reduced time scale and with a limited budget, basic information on the damage to these materials. The fast neutrons from the fusion reaction activate and damage the divertor and the blanket, so some of their components must be replaced periodically. The divertor impact area will be exposed during normal operating conditions to fluxes of up to 20 MW/m², and even GW/m² during transient events such as disruptions and ELMs. Precipitation-hardened CuCrZr alloys have been chosen as heat sink materials for the divertor and FW. This is due to their good ductility, high thermal and electrical conductivity and wide commercial availability. In addition, they exhibit high fracture toughness and high resistance to radiation damage [131]. The temperature range of W in such applications spans from about the maximum temperatures reached by the heat sinks (e.g., 550 °C for EUROFER and about 350 °C for copper) to almost the melting temperature of tungsten during short transient periods [128]. For temperatures up to 550 °C, irradiation of both materials can be performed in standard HFTM capsules [126]. For a dedicated W irradiation scenario, a structural damage rate of 1-3 dpa/fpy, a helium production rate of 9 to 10 appm He/dpa and a hydrogen production rate of 20 to 29 appm H/dpa can be achieved. On the CuCrZr side, structural damage rates of 5 to 30 dpa/fpy, a helium production rate of 6-8 appm He/dpa and a hydrogen production rate of 48-50 appm H/dpa can be achieved. Thus, for both W and CuCrZr, the values expected in DEMO are even surpassed [131]. The data in Table 5 show that the damage dose rates achievable in DONES for Cu and tungsten alloys, as called for in the fusion roadmap, meet the maximum values calculated for the DEMO divertor region with the DCLL design. The damage dose rate and the H and He production are analyzed at the different locations and compared with the real irradiation conditions in the FW and divertor [131]. These simulations, performed for the HFTM (Figure 25) with the McDeLicious neutron-transport code [131], are intended to evaluate whether tungsten and CuCrZr alloys (planned to be used in the first and second layers of the DEMO divertor, respectively) meet the expected values for a DCLL-type concept according to the damage requirements set out in the EUROfusion roadmap. The objective is to evaluate the amount of irradiated volume subjected to a given damage dose rate (dpa/fpy). Radiation damage limit values are thus obtained, which make it possible to identify the most favorable location for irradiating each material, taking into account the He and H ratios. The W values chosen correspond to the area closest to the plasma surface, and those for CuCrZr alloys correspond to the region behind the W layer. The main conclusion is that, for both tungsten samples and Cu alloys, the established damage requirements (5 dpa for CuCrZr and 1 dpa for W) are achieved in most cases in the irradiation area. It can therefore be deduced that the DONES HFTM will be a suitable place to carry out the corresponding tests of these materials with a view to their subsequent use in DEMO.
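The final bookkeeping step of such an analysis, i.e., determining how much of the irradiation volume meets a given damage threshold, is simple once a dose-rate field is available. In the schematic sketch below, a synthetic random field stands in for the neutron-transport results of Ref. [131], and the cell volume is an invented illustration; only the 1 dpa (W) and 5 dpa (CuCrZr) thresholds come from the requirements quoted above.

```python
# Schematic volume bookkeeping: given a damage dose rate per mesh cell,
# report the volume meeting each material's damage requirement.
# The dose-rate field is synthetic; real values come from neutron transport.
import numpy as np

rng = np.random.default_rng(1)
cell_volume_cm3 = 0.05                                     # assumed uniform cells
dose_rate = rng.gamma(shape=2.0, scale=4.0, size=20_000)   # dpa/fpy per cell

for material, threshold in (("W", 1.0), ("CuCrZr", 5.0)):
    ok = dose_rate >= threshold
    vol = cell_volume_cm3 * np.count_nonzero(ok)
    print(f"{material}: {vol:7.1f} cm^3 ({ok.mean():5.1%} of cells) "
          f"at >= {threshold} dpa/fpy")
```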
Figure 26 supports a number of interesting conclusions (Figure 26: damage dose rate as a function of HFTM volume for the 3 materials to be studied in DONES; reprinted with permission from Ref. [131], © IOP Science, 2017): (i) the available HFTM volume decreases as the damage dose rate increases; (ii) the maximum damage dose rates achieved are 5, 27 and 38 dpa/fpy for W, EUROFER and Cu alloys, respectively; (iii) the volumes that meet the damage requirements for all three materials are sufficiently large for the needs of this type of experiment [131].
Conclusions
The problems of developing, testing, verifying and ultimately qualifying materials for the fusion reactor vessel environment constitute one of the great materials research challenges of recent times. The situation is very complex, due in large part to the urgency of developing fusion reactors to meet the planet's energy and environmental needs. The development of ITER has been a giant step forward. Important measures taken to address key issues include the initiation of multiple irradiation campaigns, advanced high-heat-flux simulations, and the continued development of multi-scale models of materials. The lack of technologically ready materials is clearly a concern for DEMO; recognising this should facilitate more effective planning and targeted materials development in line with EUROfusion's strategic plans. For the range of operating conditions expected in ITER, and with even greater relevance for DEMO, a qualified database must be generated to demonstrate that candidate materials meet a series of indispensable design requirements. Throughout this paper, several materials have been analyzed, from the most studied and most widely used today, such as W, RAFM steels and Be, to the most promising ones for future projects, such as SiC composites, V-alloys and ODS steels, among others. Although the materials for ITER are already defined, long-term development does not stop, and hundreds of studies are being carried out to find new materials or manufacturing techniques that can be used in ITER and to overcome the even more difficult and demanding conditions of DEMO. Numerous promising designs have been discarded, which shows the commitment of the nuclear industry to constantly strive for the highest possible degree of perfection and safety. Technologies such as tritium generation have received special emphasis within the ITER R&D program, as they are expected to have a major impact in the future. For this reason, a review has been made of the different blanket designs under consideration, as well as of the materials that can contribute most to better efficiency. Clearly, there is a strong overlap with nuclear fission materials research; however, fusion materials present a few additional challenges. Structural materials will face a severe operating environment, with damage from fast neutron irradiation, high temperature, and high stress. Requirements for these structural materials include low activation, good compatibility with various coolants, and irradiation resistance, among others. Thus far, three main candidates for low-activation structural materials have been proposed for the FW. These are, in order of relevance, RAFM steels, SiC/SiC ceramic composites and V-alloys. For their part, diagnostic systems will play a key role in the control of the fusion process and will allow us to achieve a better understanding of plasma physics. To achieve this, the tokamak must be equipped with sensors and instrumentation to fully explore the operating environment. Finally, one of the major limitations is the difficulty of reproducing a realistic fusion neutron spectrum to test candidate materials for DEMO.
Fortunately, the development of IFMIF-DONES seems set to solve this problem. In fact, some of the most advanced materials, such as Cu alloys, W and RAFM steels, will be studied in this facility.
Conflicts of Interest: The authors declare no conflict of interest.
Ethnic differences in the effect of environmental stressors on blood pressure and hypertension in the Netherlands
Background Evidence strongly suggests that the neighbourhood in which people live influences their health. Despite this, investigations of ethnic differences in cardiovascular risk factors have focused mainly on individual-level characteristics. The main purpose of this study was to investigate associations between neighbourhood-level environmental stressors (crime, housing density, nuisance from alcohol and drug misuse, quality of green space and social participation), and blood pressure (BP) and hypertension among different ethnic groups. Methods Individual data from the Amsterdam Health Survey 2004 were linked to data on neighbourhood stressors, creating a multilevel design for data analysis. The study sample consisted of 517 Dutch, 404 Turkish and 365 Moroccan participants living in 15 neighbourhoods in Amsterdam, the Netherlands. Results Amongst Moroccans, high-density housing and nuisance from drug misuse were associated with a higher systolic BP, while high quality of green space and social participation were associated with a lower systolic BP. A high level of nuisance from drug misuse was associated with a higher diastolic BP. High quality of green space was associated with lower odds of hypertension. Amongst the Turkish, high levels of crime and nuisance from motor traffic were associated with a higher diastolic BP. Similar associations were observed in the Dutch group, but none of the differences were statistically significant. Conclusion The study findings show that neighbourhood-level stressors are associated with BP in ethnic minority groups, whereas such associations were less evident in the Dutch group. These findings might imply that the higher BP levels found in some ethnic minority groups may be partly due to their greater susceptibility to the adverse neighbourhood environment in which many ethnic minority people live. Primary prevention measures targeting these neighbourhood stressors may have an impact in reducing high-BP-related morbidity and mortality among ethnic minority groups.
Background
Cardiovascular disease (CVD) is the leading cause of death in industrialised countries. High blood pressure (BP) is one of the most important causes of cardiovascular disease, and its importance is set to continue [1]. The risk of cardiovascular disease associated with high BP is consistent and independent of other risk factors [2]. The high prevalence of hypertension is well reflected in the high prevalence of stroke and cardiovascular disease across the globe [3]. In western societies, BP levels and the prevalence of hypertension differ by ethnic group, with most studies showing higher levels and rates in ethnic minority groups than in European populations [4][5][6][7]. The explanations for the higher BP levels and the higher prevalence of hypertension in ethnic minority populations still remain unclear [8]. As in most CVD epidemiology, investigations of high BP in ethnic groups have focused mainly on individual-level characteristics such as obesity, education and genes [9,10]. The environmental effect on BP and hypertension in different ethnic groups has hardly ever been examined. Evidence strongly suggests that the neighbourhood in which people live influences their health, either in addition to or in interaction with individual-level characteristics [11].
A systematic review of multilevel studies [12], for example, showed fairly consistent and modest neighbourhood effects on health despite the differences in study designs, neighbourhood measures and possible measurement errors. More recently, adverse neighbourhood factors have also been shown to be positively associated with coronary heart disease (CHD) [15,16] and the insulin resistance syndrome [17]. There are also indications that the impact of the neighbourhood environment on ill health is greater in ethnic minority population groups than in European populations [13,14]. For example, the study by Cubbin and colleagues showed a stronger neighbourhood deprivation effect on cardiovascular risk factors in African Americans than in White Americans [14]. There are several mechanisms through which the neighbourhood environment may be linked to the development of high BP, for example, through its influence on health-related behaviours or through psychosocial pathways. Recent studies indicate a possible role of neighbourhood environments in influencing physical activity [18][19][20][21] and diet [19,22], both of which may be related to high BP [23]. It has been shown that neighbourhoods characterised by poor physical quality are associated with psychosocial stress [24]. Social participation may also have direct effects on health outcomes by influencing a series of physiologic pathways, or via social influence or supportive functions that shape health-promoting or health-damaging behaviours [25]. Living in a stressful neighbourhood may discourage residents from taking up important lifestyle measures such as physical activity, which, in turn, may lead to the development of high BP. It is also possible that the biological pathway between these neighbourhood factors and BP is mediated by an abnormal neuroendocrine secretory pattern [26] due to stress. Neighbourhood stressors may vary between neighbourhoods, which may lead to differences in the development of high BP. Perception of environmental stressors may differ between ethnic groups due to differences in culture, language, migration history and socio-economic position [23,27]. Neighbourhood stressors may therefore provide important clues for explaining the higher BP levels and hypertension rates in ethnic minority populations, since many of these populations live in disadvantaged neighbourhoods with high levels of stress. The main objective of this paper was to determine whether neighbourhood environmental stressors were associated with BP and hypertension in the Dutch, Turkish and Moroccan ethnic groups in Amsterdam, the Netherlands. The Turkish and Moroccans are two of the largest ethnic minority groups in the Netherlands. They came to the Netherlands in the 1960s and early 1970s as labour migrants. The initial period of labour migration was followed by a period in which many guest workers brought their spouses and children over to the Netherlands. A large percentage of Turkish and Moroccan immigrants, especially those of the first generation, have lower educational levels and poor Dutch language proficiency, and tend to stay within their own culture [28].
Methods
The data in this study were collected at two levels. The individual (first) level included information on demographics, body mass index (BMI), BP and hypertension. The contextual (second) level included information on environmental stressors. These two levels were linked by neighbourhood, creating a multilevel design for data analysis.
Data collection at the individual level
The individual-level data came from the Amsterdam Health Survey 2004. This cross-sectional study was carried out by the Amsterdam Municipal Health Service (GGD Amsterdam) in collaboration with the National Institute for Public Health and Environment (RIVM) to monitor the health of the Amsterdam general population aged ≥18 years. The study sample was drawn from the Amsterdam municipal registers in five city districts in Amsterdam (Figure 1). The population of these districts combined is representative of the total population of Amsterdam. The sample was stratified by ethnicity and five age groups (18-34 years, 35-44 years, 45-54 years, 55-64 years and 65 years or older). Within each stratum a random sample was drawn. The Turkish and Moroccan ethnic groups were oversampled to ensure sufficient numbers of people from these groups. This was necessary because of their relatively low representation in the total population and their lower participation rate in national and local surveys in the Netherlands. In 2004, the people in the sample were invited for an interview and medical examination in a community health centre. All interviews were conducted in the language of choice of the respondent (i.e., Dutch, Turkish, Moroccan-Arabic or Berber). The final response rate was 44% (Dutch 46%, Turks 50% and Moroccans 39%). Data were weighted to correct for oversampling by ethnic group. All participants signed a consent form. The Medical Ethical Committee of the Amsterdam Medical Centre approved the study protocols.
Individual level variables
Ethnicity was classified according to the self-reported country of birth and/or the country of birth of the respondent's mother or father. Ethnicity refers to the group individuals belong to as a result of their culture, which includes language, religion, diet and ancestry [29]. The term 'Moroccan' refers to people who migrated to the Netherlands via Morocco, and their offspring. The term 'Turkish' refers to people who migrated to the Netherlands via Turkey, and their offspring. The term 'Dutch' refers to people of Dutch European ancestral origin. Blood pressure was measured with a validated oscillometric automated digital device (OMRON HEM-711). Using appropriate cuff sizes, two readings were taken on the left arm in a seated position after the subject had been seated for at least five minutes. Trained nurses performed all the medical examinations. The mean of the two readings was used for analysis. Hypertension was defined as SBP ≥ 140 mm Hg, or DBP ≥ 90 mm Hg, or being on anti-hypertensive therapy. Education level was determined during the interview. Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²).
Data collection at the contextual level
All the contextual-level variables originated from three different data sources (Living in Amsterdam Survey 2003, Amsterdam Living and Security Survey 2004, and The Social State of Amsterdam City Survey 2004) and were provided by the Department of Research and Statistics of the Amsterdam Municipality (O+S Amsterdam). The aggregated data were dichotomised (coded low = 0 and high = 1), with low representing the eight neighbourhoods with the lowest scores and high representing the other seven neighbourhoods with the highest scores (Table 1).
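As an illustration of this 8-low/7-high split, the following minimal pandas sketch dichotomises one neighbourhood-level score across the 15 neighbourhoods; the column names and values are invented for illustration, not taken from the O+S Amsterdam data.

```python
# Minimal sketch of the 8-low/7-high dichotomisation of a neighbourhood-level
# stressor across 15 neighbourhoods. Column names and scores are illustrative.
import pandas as pd

hoods = pd.DataFrame({
    "neighbourhood": range(1, 16),
    "crime_score": [12, 30, 25, 8, 40, 22, 17, 35, 28, 10, 45, 19, 33, 15, 26],
})
# Rank neighbourhoods by score; the eight lowest are coded 0, the seven highest 1.
rank = hoods["crime_score"].rank(method="first")
hoods["crime_high"] = (rank > 8).astype(int)
print(hoods.sort_values("crime_score"))
```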
Neighbourhood environmental stressor variables
We calculated the proportion of people in each neighbourhood who reported experience of crime (such as break-ins, theft, aggravated assault, vandalism and stolen purses) in the past 12 months, being bothered by excessive motor traffic, nuisance from frequent alcohol and drug misuse, living in a neighbourhood with cramped housing (housing density), and involvement in at least one of the activities of formal or informal organisations (sports clubs; nature or animal organisations; political, women's or ethnic minority organisations; union meetings; theatre/cinema; arts exhibitions; church; youth organisations; the library; and meetings of other organisations), as well as the quality of green space. Quality of green space was rated on a scale of 1 to 10 (from very ugly to very beautiful), and we calculated the mean score for each neighbourhood. To allow for non-linear effects, and given the relatively small number of neighbourhoods, the neighbourhood-level stressor variables were dichotomised for each ethnic group. In the Netherlands, neighbourhoods are areas with a similar type of building, often delineated by natural boundaries. As a result, they are socio-culturally quite homogeneous [30].
Data Analysis
Because different ethnic groups may differ in their response to the same stressor [31], the analyses were performed separately for each ethnic group. The associations between neighbourhood stressors and BP levels were determined using multilevel linear regression models, with individuals at the first level and neighbourhoods at the second level, using the SAS Proc Mixed procedure [32]. The associations were assessed using beta coefficients (with 95 per cent confidence intervals) in the fixed-effects part of the models. We also performed multilevel logistic regression to determine the associations between neighbourhood stressors and hypertension, using the SAS GLIMMIX macro procedure [33]. The results are shown as odds ratios with 95% confidence intervals. The method of estimation was a restricted maximum likelihood procedure. We fitted two models to determine the associations between neighbourhood stressors and BP and hypertension while adjusting for potential confounding factors. Model 1 included each neighbourhood variable and the individual-level variables age and sex. Model 2 included the same variables with, in addition, the individual-level variables education level and BMI, which are known to be associated with BP. Overweight and obesity are highly prevalent among the Turkish and Moroccan ethnic groups in the Netherlands [34].
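For readers without SAS, the random-intercept structure of Model 1 and Model 2 can be reproduced in open-source software. The sketch below uses Python's statsmodels as a stand-in for Proc Mixed; the data are synthetic and the variable names are hypothetical, so this is an illustrative translation rather than the authors' code.

```python
# Illustrative translation of Model 1 / Model 2 into statsmodels MixedLM:
# random intercept per neighbourhood, fixed effects for the stressor and
# the individual-level confounders, fitted by REML (as in the paper).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 365  # e.g. the size of the Moroccan subsample
df = pd.DataFrame({
    "neighbourhood": rng.integers(1, 16, n),       # 15 neighbourhoods
    "age": rng.uniform(18, 75, n),
    "sex": rng.choice(["m", "f"], n),
    "education": rng.choice(["low", "mid", "high"], n),
    "bmi": rng.normal(27, 4, n),
})
df["housing_high"] = (df["neighbourhood"] % 2).astype(int)   # toy exposure
df["sbp"] = (110 + 0.5 * df["age"] + 5 * df["housing_high"]
             + rng.normal(0, 12, n))                         # toy outcome

# Model 1: stressor + age + sex.
m1 = smf.mixedlm("sbp ~ housing_high + age + C(sex)",
                 data=df, groups=df["neighbourhood"]).fit(reml=True)
print(m1.summary())

# Model 2 additionally adjusts for education level and BMI.
m2 = smf.mixedlm("sbp ~ housing_high + age + C(sex) + C(education) + bmi",
                 data=df, groups=df["neighbourhood"]).fit(reml=True)
print(m2.params["housing_high"], m2.conf_int().loc["housing_high"].values)
```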
Results
About 95 per cent of the Turkish and the Moroccan ethnic groups were first generation migrants. Table 2 shows the characteristics of the study population in each ethnic group. The Turkish and Moroccan ethnic groups were younger, had lower education levels and had higher BMI compared with their Dutch counterparts. Mean systolic and diastolic levels and the prevalence of hypertension were lower in Turkish and Moroccans than in Dutch. These differences remained after adjustment for age and gender, except for diastolic BP in Turkish people [34]. Table 3 shows systolic BP by neighbourhood stressor in each ethnic group. Among Moroccans, high-density housing and nuisance from drug misuse were associated with a higher age- and sex-adjusted systolic BP. In contrast, high quality of green space and high social participation were associated with a lower age- and sex-adjusted systolic BP. These associations persisted after further adjustment for individual-level education and BMI in the full model. No significant associations were noted in the Dutch and Turkish ethnic groups, although the directions of the associations were similar in all ethnic groups.
Neighbourhood stressors and diastolic blood pressure
Among the Turkish group, high crime and nuisance from motor traffic were associated with a higher diastolic BP in both models (Table 4). Among Moroccans, nuisance from drug misuse was associated with a higher diastolic BP in the full model. In contrast, high neighbourhood social participation was associated with a lower diastolic BP, although the association was no longer statistically significant after further adjustment for education and BMI. Similar directions of associations were also noted in the Dutch group, but none of the differences were statistically significant. Table 5 shows the multilevel logistic regression of hypertension by neighbourhood stressor in each ethnic group. Amongst Moroccans, neighbourhoods with a high quality of green space were associated with lower odds of hypertension. Similar directions of associations were also noted in the Dutch and Turkish groups, but none of the differences were statistically significant.
Discussion
Little is known about the effects of neighbourhood-level environmental stressors on BP and hypertension in different ethnic groups in Europe. Our findings show that neighbourhood-level stressors are associated with BP in ethnic minority groups, whereas such associations were less evident in Dutch people living in Amsterdam, the Netherlands. Some limitations within this study should be acknowledged. As in numerous epidemiological surveys, our BP levels were based on two measurements at a single visit, which might have overestimated the BP levels and the prevalence of hypertension. A further limitation was the cross-sectional nature of the study design, which means that causal associations can only be inferred with caution. In addition, our contextual stress variables were based on the overall assessment of the Amsterdam general population. It is possible that the assessment of these contextual variables might vary between the ethnic groups, which might further affect our study conclusions. Other potential sources of bias could have resulted from the relatively low response rate. Nonetheless, the response rate of the survey is comparable to several national surveys in the Netherlands [28,35], indicating that any systematic bias is unlikely. In addition, the number of people who did not receive their invitations because of an incorrect residential address in the municipal registers is likely to be high due to the mobility of the population in Amsterdam. Therefore our actual response rate might be higher. Our contextual factors were based on only fifteen neighbourhoods, and the analysis was therefore relatively underpowered for multilevel modelling. Our contextual factors were dichotomised, which might reduce the power to detect associations. However, dichotomisation was necessary because of the structure of the data, since the small numbers of neighbourhoods and of people per neighbourhood did not permit modelling of between-neighbourhood variability in the outcomes.
Nevertheless, the presence of multiple neighbourhoods did permit adequate estimation of the fixed effects of neighbourhood-level variables (our main research question). Evidence suggests that the health advantage of foreign-born people may be explained by the healthy migrant effect [36]. Nearly 95 per cent of both the Turkish and the Moroccans studied were first generation immigrants. It is possible that the healthy migrant effect might have led to an underestimation of the observed associations in our study. In addition, we were unable to assess factors such as internal migration within the study area, the degree of residential segregation, and multiple dimensions of socioeconomic deprivation over the life course, which might also affect our study conclusions. For example, the impact of internal migration between neighbourhoods within Amsterdam is likely to lead to underestimation of the observed associations in our study. Nevertheless, evidence suggests a weak association between selective migration and health in the Netherlands [37,38]. Despite these limitations, the study findings provide important information on the effect of environmental stressors on BP and hypertension among different ethnic groups. As far as we are aware, this is the first study that has assessed the effect of neighbourhood-level stressors on BP and hypertension among ethnic minority groups. The neighbourhoods considered in our study were socio-culturally rather homogeneous communities [39]. It has been emphasised that contextual or area-bound factors may have a greater impact on health if a neighbourhood relates to a socio-culturally homogeneous community [30]. Our findings of associations between neighbourhood crime, nuisance from alcohol and drug misuse and BP among the ethnic minority groups add to the existing literature documenting associations between neighbourhood factors and cardiovascular risk factors [14][15][16][17][40]. For example, a recent study from Sweden showed a positive association between neighbourhood crime and CHD risk even after controlling for individual-level factors [16]. The associations between housing density, motor traffic, and BP are also consistent with recent reports [24,41]. For example, Galea and colleagues' study found that living in a neighbourhood characterised by a poor-quality built environment was associated with a greater likelihood of depression [24]. Also, in Glasgow, Scotland, the introduction of a traffic calming scheme resulted in improvements in health and health-related behaviours in a neighbourhood with a high level of motor traffic problems [41]. Although the evidence for the associations between neighbourhood environment and cardiovascular risk factors is mounting, the explanations for these associations still remain unclear. Two main interpretations have, however, been proposed for the relatively poor health of people living in disadvantaged neighbourhoods: a neo-material perspective and a psychosocial perspective. According to the proponents of the neo-material theory, the impaired health of residents of some neighbourhoods is the result of an accumulation of exposures and experiences that have their roots in the material world [42]. According to the proponents of the psychosocial theory, neighbourhood stressors make residents feel unpleasant, and this affects their behaviour (inappropriate coping strategies) and biology (psychoneuro-endocrine mechanisms), which, in turn, increases their susceptibility to disease in addition to the direct effects of absolute material living standards [43].
Our findings support the psychosocial perspective and are consistent with other studies that have demonstrated associations between neighbourhood-level psychosocial factors and other health outcomes [43][44][45]. For example, it has been shown that a significant portion of health differentials across neighbourhoods is due to differences in stress levels across neighbourhoods [45]. It is possible that the biological pathway between these neighbourhood environments and BP is mediated by an abnormal neuro-endocrine secretory pattern [26] due to stress, with the effect being greater in ethnic minority groups. It may also well be that ethnic minority people living in neighbourhoods with a high level of crime or nuisance from drug and alcohol misuse feel too vulnerable or unsafe, compared with their European counterparts, to engage in lifestyle measures that are important for hypertension prevention (such as walking). The associations between housing density, motor traffic and BP seem to suggest that living in neighbourhoods characterised by a poor-quality built environment is associated with psychosocial stress which, in turn, may place one at greater risk of developing high BP. The reasons for the stronger associations between neighbourhood stressors and BP in the ethnic minority groups as compared with their Dutch counterparts are unclear. However, it is possible that the ethnic minority groups in this study live in more disadvantaged and stressful parts of the same neighbourhoods, or have less effective coping mechanisms than their Dutch counterparts, which might have contributed to the stronger associations observed in this study. These findings may also be a reflection of a concentration of other deleterious elements of the neighbourhood environment that, through various mechanisms, shape BP. They may also reflect residential segregation, as well as differential exposure to other factors such as racism [46][47][48][49]. Studies have shown that racism is positively associated with high BP in African Americans in the USA [46][47][48][49]. Although information on racism and health is limited in the Netherlands, this possibility cannot be ruled out, and it emphasises the need to explore how racism might contribute to ethnic inequalities in health [50]. In contrast, our findings show that living in a neighbourhood with a high quality of green space and high social participation was associated with a lower systolic BP and lower odds of hypertension in the Moroccan group. Similar non-significant associations were also observed amongst the Dutch and Turkish ethnic groups. It is likely that high-quality features of the neighbourhood built environment, such as green space, provide opportunities for outdoor recreation and encourage healthier lifestyles. Takano et al.'s study also found that living in a neighbourhood with greenery-filled public areas positively influenced the longevity of urban senior citizens [51]. It is widely recognised that good social relationships and affiliation have powerful effects on health, possibly through information exchange and the establishment of health-related group norms [25]. In Johnell et al.'s study, low social participation was associated with poor adherence to antihypertensive therapy [52]. Our study findings are in agreement with these previous reports. Several neighbourhood stressors were strongly associated with BP among Turkish and Moroccan people as compared with Dutch people.
This may reflect a concentration of multiple stressors in the disadvantaged neighbourhoods where many ethnic minority people live. The findings of this study have important public health and clinical implications. It has been estimated that reducing the mean population BP level by even as little as 2-3 mmHg could have a major impact in reducing associated morbidity and mortality [38]. For example, a 2 mmHg reduction of systolic BP at the population level would result in an 8% overall reduction in mortality due to stroke, a 5% reduction in mortality due to CHD, and a 4% decrease in all-cause mortality. A 5 mmHg reduction would result in a 14% reduction for stroke, 9% for CHD, and 7% for all-cause mortality [53]. In the present study, the neighbourhood mean systolic BP was nearly 5 mmHg higher among Moroccan people living in neighbourhoods with high-density housing and nuisance from drug misuse than among their counterparts living in more advantaged neighbourhoods. The mean neighbourhood diastolic BP was also nearly 3 mmHg higher among Turkish people living in neighbourhoods with high crime and motor traffic nuisance than among their counterparts in more advantaged neighbourhoods. Given the effect of these adverse neighbourhood stressors on BP, primary prevention measures targeting these factors may have a major impact in reducing high-BP-related morbidity and mortality, especially among disadvantaged ethnic groups in many industrialised countries. These findings may also indicate that the clinical assessment and management of BP might have to consider both individual-level and neighbourhood-level characteristics, especially among ethnic minority patients.
Conclusion
The findings from this study show associations between neighbourhood stressors and BP among the Turkish and Moroccan ethnic groups, whereas no such associations could be observed among the Dutch group. The findings might indicate that the higher BP levels found in some ethnic minority groups (such as African-descent populations and Hindustani Surinamese) in Europe and elsewhere may be partly due to their greater susceptibility to the adverse neighbourhood environment in which many minority people live. Primary prevention measures targeting these neighbourhood attributes may have an impact in reducing high-BP-related morbidity and mortality, especially among ethnic minority groups.
Growth and reproduction in harbour porpoises (Phocoena phocoena) in Icelandic waters
A total of 1,268 harbour porpoises were obtained from fishing nets in Icelandic coastal waters from September to June in the years 1991 to 1997. Foetal sex ratio was 1.2:1 (male:female). The bias towards males increased further among older animals in the present collection. The modal year classes were 0 and 1 years, but the oldest porpoise was a female estimated at 20 years of age. Length at birth was estimated as approximately 75 cm, and females grew faster and attained larger sizes than males. Asymptotic length was 149.6 cm for males and 160.1 cm for females. Estimated age and length at sexual maturity were 1.9 to 2.9 years and 135 cm for males, and 2.1 to 4.4 years and 138 to 147 cm for females. Immature individuals were significantly shorter than pubertal and mature animals in both sexes in age classes 1 to 3. Testes weight increased only slightly with body size in immature males but increased rapidly around maturity. Pronounced seasonality was also observed in testes weight, indicating a peak in testes activity in summer. Lack of data from the summer makes the exact timing of parturition and mating unknown. Births do, however, most likely peak in June and July, and lactation lasts at least 7 to 8 months. Ovulation and pregnancy rates were 0.98.
Ólafsdóttir, D., Víkingsson, G. A., Halldórsson, S. D. and Sigurjónsson, J. 2002. Growth and reproduction in harbour porpoises (Phocoena phocoena) in Icelandic waters. NAMMCO Sci. Pub. 5:195-210.
INTRODUCTION
Interactions of harbour porpoise (Phocoena phocoena) with commercial fisheries (IWC 1994) and their susceptibility to pollution in coastal areas (Reijnders et al. 1999) have evoked increased concern in recent years. Most studies have put emphasis on investigating population size and structure and life history parameters that may reveal the status of populations and their potential vulnerability to human interactions (Bjørge and Donovan 1995). Harbour porpoises are common in Icelandic waters, and an offshore population estimate of 27,000 obtained from a shipboard survey in 1987 is most likely downward biased (Sigurjónsson and Víkingsson 1997). Saemundsson (1939) discussed seasonal migrations of harbour porpoises into shallow waters in Iceland during summer, but systematic studies on Icelandic harbour porpoises have not been conducted in any research field to date. In order to attain basic biological information on the species in Icelandic waters, a wide-ranging research project was initiated in 1991. The main emphasis was put on studies of feeding ecology (Víkingsson et al. 2003), while studies were also conducted on reproductive biology (Halldórsson and Víkingsson 2003), morphology, genetics (Tolley et al. 2001), energetics and toxicology. In this paper, biological parameters associated with growth and reproduction are discussed.
MATERIALS AND METHODS
Harbour porpoises incidentally entangled and drowned in gillnets (6" to 10" (152 to 254 mm) mesh size) set at depths of 10 to 225 m were collected through fish markets or directly from local fishermen in all coastal areas of Iceland from September to June in the years 1991 to 1997 (Víkingsson et al. 2003). The carcasses were either necropsied fresh or thawed after being stored frozen (Fig. 1). (Fig. 1. Necropsy of bycaught harbour porpoises provides important data on life history parameters, condition and diet. Photo: Institute of Marine Research, Reykjavik.) Life history data (length, weight, sex, and lactating status of females) were obtained from each animal during necropsy. Evidence of lactation in females was observed by pressing and cutting the mammary glands. In case of doubt about lactation, tissue samples were taken from the mammary glands and fixed in neutral buffered formalin for histological examination. Foetuses were removed from the uterus and their length, weight and sex recorded. Ovaries and combined testes (excluding epididymis) weights were recorded before preservation in formalin for later analyses. Lower jaws were stored frozen to obtain teeth for age determination. Sex ratios were compared between age classes using Chi-square tests, where the significance level was set at 95%.
Length distribution of immature vs. pubertal and mature animals within each year class was compared for animals obtained in March and April using Student's t-test.
Age determination
Teeth were removed from the middle region of the lower jaw, cleaned overnight at 37 °C in 0.1 g collagenase D (activity 0.36 U/mg Lyo) in 100 ml of a Tris buffer solution with 0.15 M NaCl at pH = 7.6, decalcified in RDO (Apex Engineering Products Corp.) for about 3 hours, and finally sectioned on a freezing microtome. Sections were mounted on microscope slides and stained with hemotoxylin. The age reading methodology applied was in accordance with the recommendations from a workshop held in Oslo in 1990 (Bjørge et al. 1995). One year's growth was considered to consist of 2 complete layers: one opaque and one transparent. The 2 layers were sometimes closed by the formation of a new opaque layer already in March and April. The date of capture was therefore used to determine the completion of a year's cycle, where a new cycle was considered to begin on June 1. Teeth were read separately by 2 readers and re-examined by both readers upon disagreement.
Growth
Total length (from tip of snout to fluke notch) and weight of foetuses were measured in millimetres and grams respectively, while postnatal animals were measured to the nearest cm and 0.1 kg. von Bertalanffy curves were fitted to the length-at-age and weight-at-age data: L = L_inf (1.0 − exp(−K(t − t_0))), where L_inf, the asymptotic length or weight, is the maximum average length or weight of old individuals, K is the growth rate and t_0 is a theoretical age at zero length (Horwood 1990). Growth curves were fitted by least squares using a non-linear procedure (S-PLUS, MathSoft, Inc.).
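The same fit is easy to reproduce with modern open-source tools. Below is a minimal SciPy sketch on synthetic length-at-age data; the generating parameters are loosely inspired by the female results reported later (asymptotic length of about 160 cm), while K and t_0 are invented for illustration, so this is not a re-analysis of the study's data.

```python
# Least-squares fit of the von Bertalanffy growth curve
#   L(t) = L_inf * (1 - exp(-K * (t - t0)))
# to synthetic length-at-age data (the paper used S-PLUS for this step).
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, K, t0):
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

rng = np.random.default_rng(2)
age = rng.uniform(0, 20, 300)                                   # years
length = von_bertalanffy(age, 160.1, 0.6, -1.2) + rng.normal(0, 5, age.size)

popt, pcov = curve_fit(von_bertalanffy, age, length, p0=[150.0, 0.5, -1.0])
L_inf, K, t0 = popt
perr = np.sqrt(np.diag(pcov))                                   # 1-sigma errors
print(f"L_inf = {L_inf:.1f} +/- {perr[0]:.1f} cm, K = {K:.2f}/yr, t0 = {t0:.2f} yr")
```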
Reproductive status
The basic terminology and methods for estimating reproductive status were adopted from Perrin and Donovan (1984).
Females
Ovaries were weighed and sliced into sections about 1 mm thick. Sections were studied under a dissecting microscope to count and measure the diameters of follicles, corpora albicantia and corpora atretica (see March and Kasuya 1984). The magnitude and development of follicles were categorised as follows: 1) none or few scattered primordial follicles in the ovaries; 2) follicles making up to 50% of the area of the ovary sections; 3) large follicles filling more than 50% of the ovary sections. Female reproductive status was classified as follows: 1) immature: ovaries possess solely primordial follicles and no corpus; 2) pubertal: ovaries contain secondary or third stage follicles but no corpus; 3) mature: ovaries contain at least one corpus luteum (CL) or corpus albicans (CA). Mature females were further classified as: 1) pregnant: embryo or foetus in uterus; 2) lactating: active, milk-producing mammary glands; 3) resting: non-pregnant, mature females. Average age at attainment of sexual maturity (ASM) in females was estimated by 5 methods: 1) one year added to the average age of pubertal animals (pubertal females are expected to reach maturity in the following mating season, when they will be estimated one year older than at the date of capture; see section on age determination above); 2) age when 50% of individuals are mature (Perrin et al. 1977); 3) mean age of first-time ovulators (DeMaster 1984); 4) simple least-squares regression of the combined number of corpora lutea and corpora albicantia against age (DeMaster 1984); and 5) the algorithm described by DeMaster (1978). Methods 1, 2 and 3 were also adopted to estimate the average length at sexual maturity (LSM): 1) average length of pubertal animals; 2) length class (of 5 cm intervals) at which 50% of animals are mature; and 3) average length of first-time ovulators.
Males
Testes were weighed during dissection and fixed in 10% neutral buffered formalin. Two samples, approximately 1 x 1 cm each, were taken from the peripheral (near-surface) part and the central (core) part of one testicle, respectively (see Halldórsson and Víkingsson 2003). For the smallest testes, only one combined peripheral/central sample was taken. The samples were dehydrated, embedded in paraffin, sectioned on a microtome into 10 µm slices, mounted on glass and stained with hematoxylin and eosin. No spermatogonia, spermatocytes or spermatids were visible in the testes, probably due to sampling entirely outside the main mating season. Males were therefore considered to be immature, pubertal or mature according to the appearance of the tubules (Mackintosh and Wheeler 1929, Collet and Saint Girons 1984, Hohn et al. 1985, Halldórsson and Víkingsson 2003). ASM in males was estimated by: 1) adding one year to the average age of pubertal individuals (see section on females above); 2) calculating the age when 50% of individuals are mature (Perrin et al. 1977); and 3) DeMaster's (1978) algorithm. LSM was derived by methods 1 and 2 (see section on females).
RESULTS
Sample composition
Teeth and/or reproductive organs were sampled from 1,268 harbour porpoises obtained from September to June in the years 1991 to 1997 (Fig. 2). Most samples were collected in March and April; fewer porpoises were obtained from other months and no samples were obtained from July and August (for the geographical distribution of samples see Víkingsson et al. 2003). The sampling may reflect porpoise migrations to shallow waters in early spring, but it also seems to mirror the gillnet fishing effort in shallow Icelandic waters, which peaks in early spring and ceases in many places during the summer months.
Age
Age readings were obtained from 1,025 harbour porpoises. Individuals up to 20 years of age were observed in the study, but about 90% of the porpoises were younger than 6 years old. The modal age classes were 0 (calves) and 1 year (Fig. 3). The oldest female was estimated as 20 years old and the oldest male as 16 years old.
Growth
Seventy foetuses were observed from September to mid June. The smallest foetus was collected in September and was 3 cm long and weighed 0.1 g. The largest foetus, obtained in April, was 75 cm long and weighed 7,450 g (Figs 4 and 5). No neonates were obtained, and the size of the youngest animals that were caught in October (98 cm) gives little information on size at birth. The largest foetuses observed in spring, however, indicate a length at birth of about 75 to 80 cm. Length-at-age data for postnatal porpoises were obtained from 497 males and 314 females (Figs 6 and 7).
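Of the maturity estimators listed above, method 2 (the age at which 50% of individuals are mature, Perrin et al. 1977) is easy to make concrete: tabulate the proportion mature per age class and interpolate where it crosses 0.5. A minimal sketch with invented counts (not the study's data):

```python
# Sketch of the "age at 50% mature" estimator: linearly interpolate the
# proportion mature between the two age classes that bracket 0.5.
import numpy as np

ages = np.array([0, 1, 2, 3, 4, 5])
n_total = np.array([120, 150, 90, 60, 40, 25])   # sampled per age class
n_mature = np.array([0, 10, 35, 45, 36, 24])     # mature per age class
p = n_mature / n_total

i = np.argmax(p >= 0.5)                              # first class at/above 50%
asm = ages[i - 1] + (0.5 - p[i - 1]) / (p[i] - p[i - 1])
print(f"estimated ASM: {asm:.2f} years")
```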
Sex ratio
The overall ratio of male to female foetuses was 1.2:1 (n=65), mainly from data obtained in March and April. Foetuses from September and October were not sexed, and the sex was known for only five foetuses from November to February. Lack of information from early pregnancy therefore precludes investigation of potential changes in the sex ratio during the prenatal stage. In postnatal animals the overall M:F ratio was 1.7:1. The sex ratio did not change significantly between age classes, but the male preponderance in the sample declined slightly between age classes 0 and 1 and increased again towards the ages of 10 to 12 years. Thus, the proportion of males aged 0 to 2 years was 57% to 61%, while males comprised 67% to 100% of age classes 3 to 12 (Fig. 3). The two oldest animals, 17 and 20 years old, were, on the other hand, females.
Reproductive cycle
Seasonally biased sampling resulted in a gap in the data on reproductive biology from June to August. Information obtained from samples in spring and autumn shows that parturition, ovulation and conception take place in the summer, but the exact time interval for each event remains unknown. Foetal growth indicates parturition in late spring or summer (Fig. 4). Ten pregnant females were obtained in May and one in June. No newborn calves or post-partum females were, however, collected in these months or earlier. It is therefore very unlikely that parturition begins in April. While it cannot be excluded that some births occur in May, the peak in births is certainly not reached before June. Increased testes weight in mature males in spring and the subsequent reduction in autumn indicate mating in the summer (Fig. 11). Pubertal males were mainly observed in spring, also implying that mating occurs in the summer. The single mature female sampled in September and all 3 sampled in October were pregnant. There are therefore no signs of a prolonged mating season in the autumn, but with such a small sample size it cannot be excluded that some mating may occur later than August. Three out of 4 mature females with 1 or more corpora albicantia that were obtained from September to January were lactating. Information on lactation is missing from February, but none of the 38 mature females obtained in March and April were lactating. Assuming that births occur in mid June, this suggests a duration of lactation of at least 7 months but not longer than 9 months.
Reproductive status
Immature individuals were significantly smaller than pubertal and mature animals of the same age in March and April. The significance held for animals within age classes 0 to 3 years for both females and males (Table 1).
Females
Of the 335 female porpoises sampled, 22% were sexually mature and all but one of these were pregnant (Table 2). The exceptional female was 17 years old, with ovaries possessing only small follicles, 15 old corpora albicantia and no corpus luteum. Microscopical examination was not performed on the ovarian histology, so we do not know whether the female was senescent (see Sørensen and Kinze 1994). (Fig. 7. Weight at age for harbour porpoises from Icelandic waters.) The observed pregnancy rate per year is therefore 0.98, and regression analysis of the number of corpora against age indicated an ovulation rate of about one per year (Fig. 8): number of corpora = 0.9773 × age − 1.103 (SE = 1.763; n = 50). The highest number of corpora was 19, in a 20 year old pregnant female.
Foetuses were observed in 2 females in their second year of life. The oldest immature female was, however, 6 years old (Table 3). The average age of pubertal females was 1.5 years, implying an ASM of 2.5 years (Table 4). The mean length of pubertal females was 138.0 cm, indicating a slightly greater LSM since all pubertal females were obtained in spring, 2 to 3 months prior to the mating season. The average age and length of first time ovulators were 2.81 years and 147.6 cm respectively. The estimated age and length when 50% of the individuals are mature were 3.20 years and 146 cm respectively. Regression of the number of corpora against age gave an ASM of 2.1 years (Fig. 8) and DeMaster's (1978) method one of 4.4 years (s = 0.318, n = 293).

Males

Of the 664 males sampled, 43% were mature, while 51% and 6% were immature and pubertal respectively (Table 2). Mature males were observed late in their second year of age (Table 3). The oldest immature male, on the other hand, was estimated as 5 years old. The average age of pubertal males was 1.9 years, giving an ASM of 2.9 years assuming that these individuals would have reached maturity around the following mating season (Table 5). The mean length of the pubertal males was 135.6 cm. The age and length when 50% of the individuals were mature were 1.9 years and 135 cm respectively. DeMaster's (1978) method estimated ASM as 2.6 years.

The small numbers of animals in the older age classes and the non-random nature of the sampling preclude, however, calculation of age-specific mortality rates from the observed age distribution.

Growth

The Icelandic porpoises reach lengths similar to those observed in the Northeast Atlantic (Bjørge and Kaarstad MS 1996, Benke et al. 1996, Lockyer 1995a, Lockyer and Kinze 2003), but seem to grow slightly larger than harbour porpoises from eastern Canada (Read and Gaskin 1990, Read and Hohn 1995, Palka and Read 1996, Read and Tolley 1997) and West Greenland (Lockyer et al. 2001). The large asymptotic weight of the Icelandic female porpoises (75 kg), compared to 65 kg in Danish waters (Lockyer and Kinze 2003) and 55 kg in British waters (Lockyer 1995a), may be explained by the seasonally biased sampling in the present study. All the largest females, except one, were pregnant, and most were obtained late in the gestation period. Weight is therefore not a reliable parameter when comparing growth in different populations unless seasonal variations in body condition are taken into account.

Sex ratio

The preponderance of males among foetuses and postnatal animals in the present study is in accordance with most studies of harbour porpoises in the North Atlantic (see Lockyer 2003). The overall sex ratio for postnatal animals in the Icelandic sample showed, however, a somewhat higher bias towards males than found elsewhere. The observed sex ratio for foetuses is likely to be unbiased, whereas the sex ratio observed in bycaught postnatal animals may be influenced by spatial or temporal sex segregation (Víkingsson et al. 2003). Males may also be more vulnerable to fishing activity due to size selectivity of fishing gear or differences in diving or other behaviours between the sexes, leading to the observed sex ratio. Consequently, higher mortality of males may eventually lead to the dominance of females in the oldest age classes.
The decline in the proportion of males between year classes 0 and 1 seen in this study is also known from other studies (Lockyer 1995a, Lockyer and Kinze 2003). Lockyer (1995a, b) suggested that the poorer condition of male neonates relative to females in British waters led to higher mortality of male calves and a subsequent decline in the proportion of males in year classes 1 and 2. Female neonates are larger than males in the southern North Sea and German waters (van Utrecht 1978, Benke et al. 1998), which may support this hypothesis. The lack of neonates in the present sample makes it impossible to verify this for Icelandic porpoises.

Reproductive cycle

The summer seems to be eventful in the reproductive cycle of harbour porpoises in Iceland, encompassing the periods of parturition, ovulation and conception. The gap in sampling from June to August is therefore very unfortunate, and the exact timing of the peaks in births and mating remains unknown. The information on foetal growth, pregnancy status and testes weight (present study) and on histological changes in gonads (see Halldórsson and Víkingsson 2003) from the Icelandic porpoises in late spring and autumn, however, indicates a timing of the reproductive cycle similar to that observed in northern European porpoises (van Utrecht 1978, Sørensen and Kinze 1994, Lockyer 1995a, Lockyer and Kinze 2003) and West Greenland (Lockyer et al. 2001), where parturition peaks in June and mating seems to occur in July and August. Histological studies on the testes of the Icelandic porpoises furthermore show that the mating period may be prolonged into the autumn (Halldórsson and Víkingsson 2003). Parturition and conception seem to occur about one month earlier in harbour porpoises from eastern Canada than in those from the central North and Northeast Atlantic (Read and Hohn 1995, Palka and Read 1996).

Females have been found lactating in March in Danish waters, indicating that lactation there lasts up to 9 months (Sørensen and Kinze 1994). No sign of lactation was found in the present study in any of the 10 mature females in March that were not primiparous. Assuming that the date of birth is in mid June, the present data do not support a period of lactation of more than 7 months. One lactating female was sampled in January, in accordance with a lactation period of 7 months or less. Ovulation and pregnancy rates observed for Icelandic porpoises strongly suggest annual reproduction in most females after they reach maturity. Several females had more corpora than their estimated age. This may indicate multiple ovulations in some years, but it cannot be excluded that these findings were affected by inaccurate age estimation. The high pregnancy rate observed in the Icelandic porpoises is even more surprising as most animals were obtained late in spring just before parturition, when the possibility of catching females that have aborted their foetuses should be highest. A high pregnancy rate has also been reported for porpoises from the Gulf of Maine (0.93) (Read and Hohn 1995). Most studies in the North Atlantic have, on the other hand, shown considerably lower values (Møhl-Hansen 1954, Lockyer 2003, Lockyer and Kinze 2003, Lockyer et al.
2001, Read and Gaskin 1990, Sørensen and Kinze 1994, Bjørge and Kaarstad MS 1996). Pregnancy rates in Norwegian and Swedish porpoises were estimated as 0.73 and 0.67, based on the presence of foetuses and corpora lutea respectively (Bjørge and Kaarstad MS 1996). Sørensen and Kinze (1994) reported a pregnancy rate of 0.73 from Danish waters in September, and a pregnancy rate of 0.76 has been reported from eastern Newfoundland (Palka et al. 1996). Ovulation rates from Danish waters (0.64 corpora/year) (Lockyer and Kinze 2003) and West Greenland (0.73 corpora/year) (Lockyer et al. 2001) also indicate lower fecundity in porpoises from these areas than in Icelandic waters.

It cannot be excluded that segregation of mature females by reproductive status may cause the observed lack of resting females. Immature females were, however, frequently obtained, and it is rather unlikely that resting females are totally separated from, or less likely to be entangled than, other female porpoises.

Reproductive status

Combined testes weight was less than 100 g in most immature individuals, and less than 10% of mature males had testes lighter than 200 g. These results support Lockyer's (1995a) assumption that maturity is initiated when the testes reach around 200 g combined weight.

Comparison of the length distributions of immature and mature porpoises within each year class revealed a significant relationship between maturity and body size. Attainment of maturity may thus be only indirectly connected with age, and we would expect LSM to be superior to ASM for comparisons of maturity between populations with different growth curves.

The estimates of ASM and LSM for male and female harbour porpoises obtained using different methodologies in the present study are somewhat variable. Apparently, the seasonally biased sampling affects the estimates obtained by the various methods, and each sex, differently. Pubertal females are observed in spring, but according to the criteria used to define maturity, females first attain sexual maturity at first ovulation. The definition of maturity in males is based on tubule size and appearance, and males may thus be defined as sexually mature before they become sexually active. The large bulk of samples from spring in the present study therefore affects the estimation of ASM differently for males and females. Since both age and maturity stage in females reflect the situation in the last mating season prior to sampling, the estimated ASM for females is likely to be accurate. These animals have, however, attained almost one year's growth between mating and sampling, leading to a relatively high estimated LSM. Mature males observed in spring might have reached maturity only recently; however, they will first become sexually active in the following mating season, when they will be one year older. ASM for males is therefore estimated about 1 year lower than the actual age at which they become sexually active. The continuous attainment of maturity over a longer period in males is likely to lead to an unbiased estimate of LSM.
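The two age-based maturity estimators that recur in this discussion, the age at which 50% of individuals are mature and DeMaster's (1978) mean age, can be written out concretely. The sketch below applies both to hypothetical proportion-mature-at-age values; the numbers are placeholders, not the Icelandic estimates.

```python
# Two ASM estimators applied to hypothetical proportion-mature-at-age data.
import numpy as np

ages = np.arange(6)                                   # age classes 0..5
prop_mature = np.array([0.0, 0.05, 0.45, 0.80, 0.95, 1.0])

# Age at which 50% of individuals are mature (linear interpolation).
asm_50 = np.interp(0.5, prop_mature, ages)

# DeMaster's (1978) algorithm: mean age at attainment of maturity,
# ASM = sum over ages j of j * [f(j) - f(j-1)], with f(j) the proportion
# mature at age j; note it weights all age classes evenly.
f_prev = np.concatenate(([0.0], prop_mature[:-1]))
asm_demaster = np.sum(ages * (prop_mature - f_prev))

print(f"ASM (50% mature): {asm_50:.2f} yr; ASM (DeMaster): {asm_demaster:.2f} yr")
```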
In addition to the effects of the seasonally biased sampling on the estimates of ASM and LSM, the various methods may perform differently in evaluating ASM and LSM from the present data. The mean age deduced by DeMaster's (1978) method is commonly used to assess ASM in marine mammals. This method, however, weights all age classes evenly regardless of sample size. In the present study the oldest age classes containing immature females comprise few individuals, leading to a relatively high estimate of ASM. The ASM of 2.1-3.2 years deduced by the other methods is likely more accurate for the present data.

The differences in estimates of LSM for females using different methods are probably due to the severe seasonality in sampling mentioned above. The LSMs calculated from first time ovulators and from the average length when 50% have reached maturity are obtained from animals that had been sexually mature for almost one year when they were caught and measured. The value of 146 to 148 cm derived by these methods is therefore positively biased. The LSM of 138 cm obtained from pubertal animals in spring is, on the other hand, probably negatively biased, since these animals still have 2 to 3 months of growth before the mating period begins. Consequently, the true LSM for females probably lies between these estimates and is probably similar to the observed values from the British Isles (140-145 cm) (Lockyer 1995a) and the North Sea (143 cm) (Lockyer and Kinze 2003).

The estimates of ASM and LSM for males obtained by the different methods do not vary as much as those observed for females. The number of males was sufficient in all year classes that contained immature individuals, and DeMaster's (1978) method is therefore likely to give a better estimate of ASM for males than for females in the present study.

The relatively low ASM observed for males in the present study, as compared to other areas in the North Atlantic (see Lockyer 2003), is probably again influenced by the seasonally biased sampling. The same argument cannot be adopted for the relatively low ASM of the female porpoises in Iceland compared to other harbour porpoise populations in the North Atlantic.

Concluding remarks

Harbour porpoises in Icelandic waters have a life history similar to that observed in other populations in the North Atlantic. Individuals may reach over 20 years of age, but most die by the age of 10. A slight preponderance of males is observed among foetuses. The increased dominance of males observed in postnatal animals may be a result of sex segregation and/or of selectivity of, and vulnerability to, fishing gear. Alternatively, the natural mortality of males and females may differ. Foetal growth, length at birth and postnatal growth in Icelandic porpoises are similar to those observed in northern European porpoises, which seem to become larger than those from West Greenland and eastern Canada. The timing of the life cycle also seems to be similar to that of the northern European populations. This is inconsistent with the results of studies of the homogeneity between putative populations of harbour porpoises in the North Atlantic, which have shown a closer relationship between Icelandic porpoises and populations in the Northwest Atlantic than in the Northeast Atlantic (Tolley et al. 2001).
The Icelandic porpoises differ from other populations in the North Atlantic in having a slightly lower age at attainment of sexual maturity and high ovulation and pregnancy rates. No historical data are available on Icelandic porpoises, and it is not known whether these life history characteristics are due to recent developments in the population. The age distribution of porpoises in the Icelandic bycatch seems similar to those from other areas in the North Atlantic. There is therefore no reason to presume that fishing mortality of harbour porpoises has caused a higher selective pressure for increased fecundity at lower age in the Icelandic porpoises than in other populations in the North Atlantic. Cetacean populations are generally assumed to be regulated by density-dependent processes (Perrin and Donovan 1984, Fowler 1984). These are often expressed in processes associated with recruitment, and the causes may involve increased resource levels and possibly social or behavioural factors. Evidence of density dependence in reproduction has been observed in some populations of cetaceans with a history of exploitation or fishing gear entrapment (Lockyer 1984, Kasuya 1976, Read and Gaskin 1990). Investigation of the body condition of Icelandic porpoises compared to other populations may reveal whether decreased feeding competition as a consequence of a decline in population size is likely for harbour porpoises in Iceland. The difference in reproductive rates between the Icelandic and other populations may also be associated with differences in pollutant burdens between regions within the North Atlantic.

[Figure and table captions]

Fig. 1. Necropsy of bycaught harbour porpoises provides important data on life history parameters, condition and diet. (Photo: Institute of Marine Research, Reykjavik.)

Fig. 3. Number of life history samples by age. Labels on top of columns show M:F ratio.

Fig. 11. Combined testes weight (g) by month for harbour porpoise males in Icelandic waters.

Table 1. Comparison of length distributions of immature vs pubertal and mature male and female harbour porpoises from March and April in Iceland.

Table 2. Reproductive status of harbour porpoises entangled in sink nets in Icelandic waters in 1991-1997.

Table 3. Proportion of mature animals at age for female and male porpoises entangled in fishing nets in Iceland 1992-1997.

Table 4. Average age (years) and length (cm) at sexual maturity in harbour porpoise females from Iceland.

Table 5. Average age (years) and length (cm) at sexual maturity in harbour porpoise males from Iceland.
WTD-PSD: Presentation of Novel Feature Extraction Method Based on Discrete Wavelet Transformation and Time-Dependent Power Spectrum Descriptors for Diagnosis of Alzheimer's Disease

Alzheimer's disease (AD) is a type of dementia that affects the elderly population. A machine learning (ML) system can be trained to recognize particular patterns to diagnose AD. Developing a feature extraction approach is therefore critical for reducing calculation time. The input image in this article is processed with a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The Time-Dependent Power Spectrum Descriptors (TD-PSD) model is used to represent the subbanded wavelet coefficients, and the principal property vector is made up of the characteristics of the TD-PSD model. The collected characteristics are applied independently to classification algorithms to present AD classifications; the categorization determines the diagnostic class of each subject. The TD-PSD method was used to extract wavelet subband features from three sets of test samples: mild cognitive impairment (MCI), AD, and healthy controls (HC). The outcomes of the classic classification methods KNN, SVM, Decision Tree (DT), and LDA are documented, as well as the final features employed in each. Finally, we show a CNN architecture for AD patient classification. Output assessment is used to show the results; the given CNN and DT outperform the other techniques.

Introduction

The brain is the body's most important organ. The disorders that affect the brain are extremely important to manage since, in most situations, once alterations occur, they are irreversible in extreme circumstances. Dementia is defined as the loss of cognitive and functional thinking abilities, and the most prevalent cause of dementia is AD. AD typically strikes people in their mid-60s, and Alzheimer's disease affects more than 5.5 million individuals worldwide [1]. Memory loss, language problems, and behavioral changes are all indications of AD. Symptoms of the nonmemory part include trouble locating words, eye problems, decreased cognition, and poor judgment. Brain imaging, cerebrospinal fluid, and blood are the biological markers. Normal age-related decline in cognitive function, which is more gradual and associated with less impairment, should be distinguished from AD. The disease frequently begins with mild symptoms and progresses to serious brain damage. Dementia affects people differently; their abilities therefore deteriorate at varying rates. Early and reliable identification of AD is advantageous to disease management. Neuroimaging techniques like magnetic resonance imaging (MRI) and computed tomography (CT), as well as single-photon emission computed tomography (SPECT) and positron emission tomography (PET), can be utilized to rule out other forms of dementia or its subtypes, and they have the potential to forecast the progression of the prodromal stage into AD. Neurologists can use medical image processing and machine learning methods to see if a person is developing AD. Image segmentation and classification are critical tasks in MRI data analysis for detecting AD [2]. Structural MRI (SMRI) provides visual information regarding the atrophic areas of the brain caused by the tissue-level abnormalities that underpin AD/MCI. PET measures cerebral glucose metabolism, which is a reflection of functional brain activity [3]. The quantity of amyloid beta-protein and amyloid tau tangles accumulated in the cerebrospinal fluid (CSF) is an early predictor of AD.
SMRI has already been shown to be sensitive to presymptomatic illness and might be used as a disease biomarker [4]. MRI appears to be the most sensitive imaging examination of the brain in everyday clinical practice. It provides information on gray matter, white matter, and CSF morphology. Structural MRI can record atrophic brain areas noninvasively, allowing us to see anatomical alterations in the brain. As a result, such alterations have been recognized as a possible indicator of illness development, and ML approaches for disease detection are being researched extensively [5]. The MRI scan can be utilized in image processing to evaluate the likelihood of early detection of AD. Intensity adjustment, K-means clustering, and the region growing method are image processing techniques used on MRI to extract white and gray matter [6]. The same approach may be used to compute brain volume. Because the raw MRI brain image is too large to be utilized for classification, the MR images must be preprocessed before feature extraction and classification can be performed for illness diagnosis. Through the warping of a labeled atlas, one of the most generally used approaches is to divide the image into numerous anatomical areas, that is, regions of interest (ROIs); regional measurements, such as volumes, are then calculated as the features for AD classification [7]. To identify the most discriminative features from ROIs for multimodality classification of AD/MCI, a discriminative multitask algorithm has been presented. In ML, each data item should be characterized as a feature vector, and numerous studies have advocated extracting various characteristics from MRI scans and then classifying the resulting vectors. The quality of the produced feature vectors is, nevertheless, reliant on image preprocessing due to registration errors and noise. As a result, domain knowledge is required to extract discriminative features. CNN's layered design has a big influence on its performance, and greater classification accuracy is anticipated to arise from a layer structure that is better suited to MRI images. The input images in this article are decomposed with a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The Time-Dependent Power Spectrum Descriptors (TD-PSD) model is used to represent the subbanded wavelet coefficients, and the primary property vector is made up of the characteristics of the TD-PSD model. Based on classification algorithms, the collected characteristics are applied independently to present AD classifications; the classification determines the diagnostic class of each subject. For feature extraction of wavelet subbands from three sets of mild cognitive impairment (MCI), AD, and HC test data, we employed the TD-PSD technique.

Literature Review

For diagnosing AD, feature vectors must be extracted from MRI images. Several feature extraction techniques have been proposed in the recent decade, since the outcome of ML is determined by the extracted feature vectors. Employing many specified templates, Liu et al. [8] retrieved multiview feature representations for subjects and divided subjects within a particular class into distinct subclasses in each view space; support vector machine-based (SVM) ensemble learning was used. Suk et al. proposed a multitask and multikernel SVM learning approach for a stacked autoencoder with a deep-learning-based feature representation [9]. Due to registration mistakes and noise, the quality of the recovered features is dependent on image preprocessing. As a result, domain knowledge is required while extracting discriminative features.
It takes a long time and a lot of effort to acquire hand-crafted features. More crucially, hand-crafted features seldom generalize well. As a consequence, this study recommends employing deep learning to extract data characteristics. Sadeghipour and Sahragard [10] developed a novel approach for facial identification based on an enhanced SIFT algorithm. Acharya et al. [11] created an ML system that can detect AD symptoms in a brain scan. For classification, the system combined MRI with a variety of feature extraction techniques; the T2 imaging sequence was used to acquire the images. Filtering, feature extraction, Student's t-test-based feature selection, and k-Nearest Neighbor-(KNN-) based classification were among the quantitative approaches used in the paradigm. The findings revealed that, compared to other approaches, the Shearlet Transform (ST) feature extraction methodology provides better outcomes for Alzheimer's diagnosis. With the ST + KNN approach, the suggested tool achieved 94.54 percent accuracy, 88.33 percent precision, 96.30 percent sensitivity, and 93.64 percent specificity. According to Sadeghipour et al. [12], combining the firefly algorithm with intelligent systems leads to breast cancer detection; comparing the performance of the suggested system to other methods shows that it is superior in both performance and accuracy. Sadeghipour and Moradisabzevar [13] investigated the development of intelligent toy cars as a method of screening children with autism; the reported screening accuracy was 100 percent. The study by Zhou et al. [14] investigated probabilistic inflection points for the decomposition of LiDAR hidden echo signals. Yan et al. [15] examined the structure and in vitro test results of waxy and regular maize starches after thermal processing with plasma-activated water. Eslami and Yun [16] developed a novel approach called A + MCNN [23]; related work has addressed scheduling problems for health care systems [24] and the optimization of users based on a clustering method [25]. A new approach to penetration testing based on extended classifier networks has been proposed by Yazdani et al. [26]. A model of an application created for mobile Android systems was provided by Lauraitis et al. [27], which may be used to examine central nervous system movement problems occurring in individuals suffering from Huntington's, Alzheimer's, or Parkinson's illnesses. Specifically, the model detects tremors as well as cognitive deficits through the use of touch and visual stimulation modalities. According to the findings, the adoption of intelligent applications that may assist in the evaluation of neurodegenerative illnesses is a significant advancement in medical diagnostics and should be encouraged. According to Sadeghipour et al. [28], the XCSLA system can be used to develop an intelligent diabetes diagnosis system; according to the results on the PID databases, the proposed technique can detect diabetes more accurately than the conventional XCS system, the Elman neural network, SVM clustering, KNN, C4.5, and AD tree. Farahanipad et al. [29] developed a pipeline for the identification of hand 2D keypoints using unpaired image-to-image translation. Shi et al. [30] investigated the effect of ultrasonic intensity on the structure and characteristics of sago starch complexes and their implications for the quality of Chinese steamed bread. Sadeghipour et al.
[31] developed a new expert clinical method for the diagnosis of obstructive sleep apnea using the XCSR classifier. Rezaei et al. [32] used depth images to automate mild segmentation of hand parts; according to the results, a model without segmentation-based labels may achieve a mIoU of 42%, and quantitative and qualitative findings support the method's efficiency. Yue et al. [33] used the automated anatomical labeling (AAL) template to divide the brain into 90 regions of interest (ROIs). They chose the informative voxels in each ROI with a baseline of their values and arranged them into a vector, setting aside the uninformative data. The first-stage characteristics were then chosen based on the voxel correlation between distinct groups. The fetched voxels were then put into a convolutional neural network (CNN) to learn the deeply hidden properties of each subject's brain feature maps. The testing findings showed that the suggested technique was reliable and had a promising performance compared to other methods in the literature. For increasing classification accuracy and identifying high-order features that potentially provide pathological information, Li et al. [44] used a novel feature extraction approach known as radiomics. They defined ROIs as brain regions mostly dispersed in the temporal, occipital, and frontal areas. A total of 168 radiomic characteristics of Alzheimer's disease were found to be stable (alpha > 0.8). The maximum accuracies for categorizing AD versus HC, MCI versus HC, and AD versus MCI were 91.5 percent, 83.1 percent, and 85.9 percent, respectively, in the classification trial. Silva et al. [46] suggested a model for diagnosing AD based on deep feature extraction for MRI classification, with the goal of distinguishing between AD and HC. For extracting the best characteristics of the selected region, a CNN architecture with three convolutional layers was developed. The model's effectiveness and reliability for the diagnosis of AD were shown by a comparison with previous studies in the literature. Table 1 lists several more techniques.

Methods and Materials

If f(x) is expanded in terms of a wavelet function ψ(x) and a scaling function φ(x), we get [47]

f(x) = Σ_k c_j0(k) φ_j0,k(x) + Σ_{j=j0}^∞ Σ_k d_j(k) ψ_j,k(x),

where the c_j0(k) are scaling coefficients, j0 is a starting scale, and the d_j(k) are wavelet coefficients (see Figure 1). The expansion coefficients are

c_j0(k) = ⟨f(x), φ_j0,k(x)⟩ and d_j(k) = ⟨f(x), ψ_j,k(x)⟩.

If the expansion coefficients form a series of discrete numbers, the expansion is also known as the discrete wavelet transform (DWT) of f(x). The expansion series is represented by equations (2) and (3), the DWT pair [47,48]:

W_φ(j0,k) = (1/√M) Σ_x f(x) φ_j0,k(x),    (2)
W_ψ(j,k) = (1/√M) Σ_x f(x) ψ_j,k(x), j ≥ j0,    (3)

where M is the number of samples to be converted and J is the number of transformation levels, with M = 2^J. To construct the 2D transform from a 1D scaling function φ and its associated wavelet ψ, a 2D scaling function φ(x, y) and the three 2D wavelets ψ^H(x, y), ψ^V(x, y), and ψ^D(x, y) are required [39]. A two-level wavelet transformation creates four subbands per level, as seen in Figure 1, where 2↓ denotes downsampling and ψ^H, ψ^V, and ψ^D capture variations along horizontal, vertical, and diagonal boundaries, respectively. The 2D-DWT can be implemented with digital filters and downsamplers, and the further subbands are produced by applying the discrete 2D scaling functions and the 1D FWT to f(x, y) [49].
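As a concrete sketch of this two-stage decomposition: the code below uses the PyWavelets package and a Daubechies-4 wavelet, neither of which is specified in the paper, so both are assumptions, as is the mapping of PyWavelets' (horizontal, vertical, diagonal) detail triplet onto the LH/HL/HH naming.

```python
# Two-level 2D-DWT of an image, yielding the seven subbands used for
# feature extraction (LL2, LH2, HL2, HH2, LH1, HL1, HH1).
import numpy as np
import pywt

image = np.random.rand(256, 256)          # stand-in for a 256x256 MRI slice

# Level 1: approximation LL1 plus detail subbands.
LL1, (LH1, HL1, HH1) = pywt.dwt2(image, "db4")
# Level 2: decompose LL1 again.
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, "db4")

subbands = {"LL2": LL2, "LH2": LH2, "HL2": HL2, "HH2": HH2,
            "LH1": LH1, "HL1": HL1, "HH1": HH1}
for name, band in subbands.items():
    print(name, band.shape)
```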
Feature Extraction

The discrete Fourier transform (DFT) describes the signal trace as a function of frequency. For a sampled signal x[j], j = 1, 2, ..., N, of length N and sampling frequency fs Hz,

X[k] = Σ_{j=1}^N x[j] exp(−i2πkj/N), k = 1, 2, ..., N.

By Parseval's theorem, the sum of the squares of the function equals the sum of the squares of its transform:

Σ_{j=1}^N x[j]² = (1/N) Σ_{k=1}^N |X[k]|² = Σ_{k=1}^N P[k],

where P[k] is the power spectrum. From this relation we begin the feature extraction procedure. The spectrum is usually regarded as symmetric with respect to zero frequency, with similar sections covering the positive and negative frequencies. The Parseval theorem can be applied directly when n = 0; for nonzero values of n, the time-differentiation property of the Fourier transform is used, according to which taking the n'th derivative corresponds to multiplying the spectrum by k raised to the n'th power. Denoting the n'th difference of a discrete time signal by Δⁿ,

Σ_j (Δⁿx[j])² = (1/N) Σ_k k^{2n} |X[k]|².

Root Squared Zero-Order Moment (m0). This function captures the total power in the frequency domain:

m0 = √(Σ_j x[j]²).

The zero-order moments of all channels can be standardized by dividing them by the sum of the zero-order moments.

Root Squared Second- and Fourth-Order Moments. The second moment uses the power weighted by k², i.e. k²P[k]; by the differentiation property this is

m2 = √(Σ_j (Δx[j])²).

Repeating this approach yields the fourth-order moment:

m4 = √(Σ_j (Δ²x[j])²).

The overall energy of the signal is reduced when the second and fourth differences are taken. To decrease the impact of noise on all moment-based features and to normalize the domains of m0, m2, and m4, we perform the power transformation

m̄ = m^λ / λ,

where the experimental value of λ is set to 0.1, following the TD-PSD literature. With these settings, the first three extracted features are

f1 = log(m̄0), f2 = log(m̄0 − m̄2), f3 = log(m̄0 − m̄4).

Sparseness. This feature quantifies how much of the vector energy is concentrated in a small number of components:

f4 = log(m̄0 / √((m̄0 − m̄2)(m̄0 − m̄4))).

A vector with all elements equal gives a zero sparseness index, since m2 and m4 = 0 due to differentiation and log(m0/m0) = 0; all other sparseness levels should have a value greater than zero.

Irregularity Factor (IF). This metric expresses the ratio of the number of peaks to the number of upward zero-crossings. For a random signal, the number of upward zero-crossings (ZC) and the number of peaks (NP) can be characterized in terms of the spectral moments as ZC = √(m2/m0) and NP = √(m4/m2), so that the feature can be written

f5 = log(NP/ZC) = log(√((m0·m4)/m2²)).

Covariance (COV). The COV function is the standard deviation divided by the arithmetic average:

f6 = log(σ(x)/μ(x)).

Teager Energy Operator (TEO). The TEO depicts the signal amplitude and instantaneous fluctuations and is particularly sensitive to even small variations. It was proposed as a method for modeling nonlinear speech signals and was later widely employed in audio signal processing. It is computed as

f7 = log(Σ_j |x[j]² − x[j−1]·x[j+1]|).

Proposed Feature Extraction Methods

The goal of this research is to apply machine learning algorithms to identify Alzheimer's disease. Figure 2 shows the block diagram of the proposed method. To begin, we employed a two-stage 2D-DWT to break down the input images into wavelet subbands. The obtained wavelet coefficients are utilized to derive classification features: the TD-PSD model is used to extract features, the first stage supplying HH1, HL1, and LH1 and the second stage supplying LL2, HH2, HL2, and LH2. The PCA approach is employed to reduce the number of features, and AD is then categorized by multiple machine learning algorithms using the retrieved features. The pseudocode for the method is given in Algorithm 1.
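The seven features above can be gathered into a single function. The sketch below is a reading of the definitions as reconstructed here, not the authors' MATLAB implementation; absolute values and small constants inside the logarithms are added defensively, since the extracted text leaves some sign conventions ambiguous.

```python
# TD-PSD features of a 1D (pseudo-)time series, computed in the time
# domain via Parseval's theorem and the differentiation property.
import numpy as np

def td_psd_features(x, lam=0.1):
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)                       # Δx  : spectrum weighted by k
    d2 = np.diff(x, n=2)                  # Δ²x : spectrum weighted by k²

    m0 = np.sqrt(np.sum(x ** 2))          # root squared zero-order moment
    m2 = np.sqrt(np.sum(d1 ** 2))         # root squared second-order moment
    m4 = np.sqrt(np.sum(d2 ** 2))         # root squared fourth-order moment
    m0, m2, m4 = (m ** lam / lam for m in (m0, m2, m4))   # power transform

    safe_log = lambda v: np.log(np.abs(v) + 1e-12)        # guard the logs
    f1, f2, f3 = safe_log(m0), safe_log(m0 - m2), safe_log(m0 - m4)
    f4 = safe_log(m0 / np.sqrt(np.abs((m0 - m2) * (m0 - m4)) + 1e-12))  # sparseness
    f5 = safe_log(np.sqrt((m0 * m4) / (m2 ** 2)))         # irregularity (NP/ZC)
    f6 = safe_log(np.std(x) / (np.abs(np.mean(x)) + 1e-12))   # COV
    f7 = safe_log(np.sum(np.abs(x[1:-1] ** 2 - x[2:] * x[:-2])))  # TEO

    return np.array([f1, f2, f3, f4, f5, f6, f7])

print(td_psd_features(np.random.rand(500)))
```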
Data Collection

In AD, structural MR imaging findings demonstrate microscopic neurodegeneration and provide a measure of brain atrophy (loss of synapses, dendritic processes, and neurons). In volumetric or voxel-based assessments of brain atrophy, the degree of atrophy and the extent of cognitive impairment are closely associated; there is a relationship between cognitive decline and brain atrophy. Atrophy does not appear to be exclusive to AD on MR images. The degree of hippocampal atrophy, on the other hand, is highly correlated with autopsy Braak staging [50]. Braak staging of neurofibrillary tangles on antemortem MR imaging and postmortem AD staging correspond to the topographic distribution of atrophy on MR images (medial, basal, and lateral temporal lobes, as well as the medial parietal cortex) [51]. The data collection covers atrophy across the clinical stages of AD: there is negligible atrophy in a cognitively normal control individual, significant atrophy in an AD patient, and an intermediate amount of atrophy in an MCI individual. The dataset is accessible online on Kaggle [52]. The MRI images are 256 × 256 PNG grayscale images that have been utilized to analyze and evaluate AD in three classes: AD, MCI, and an HC group.

Feature Extraction and Reduction

In this section, the process of feature extraction is described. Following the conceptual diagram of Figure 2 and the pseudocode, the first step in the presented method is wavelet decomposition; the results are presented in Figure 3. A two-level decomposition is done for each input image. From the first stage, the three subbands low-high (LH1), high-low (HL1), and high-high (HH1), and from the second stage low-low (LL2), LH2, HL2, and HH2, are used for feature extraction. In the next step, each subband matrix is reshaped to a vector, and all zeros are removed from the vectors. The final vectors are the pseudo-time series used for feature extraction. The properties of the seven subbands are presented in Figure 4. Based on the amplitude and frequency of the subbands, the LL2 subband retains the largest number of points and properties of the input images; however, all subbands are consequential in this diagnosis. Based on the feature extraction results, each image has 49 features (7 subbands with 7 TD-PSD features each). Principal component analysis is then employed to reduce the features: based on Figure 5, the first seven principal components capture almost 100% of the variance of all features. Consequently, the number of features is reduced to 7 based on the scree plot in Figure 5(a); the cumulative eigenvalues are presented in Figure 5(b).
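A sketch of this reduction stage together with the subsequent classification, using scikit-learn (the paper does not name its toolchain, so the library, the train/test split, and the random feature matrix are all stand-ins):

```python
# PCA down to 7 components, then a decision-tree classifier on the
# three classes (0 = HC, 1 = AD, 2 = MCI). X stands in for the
# 600 x 49 matrix of TD-PSD subband features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X = np.random.rand(600, 49)                  # placeholder feature matrix
y = np.repeat([0, 1, 2], 200)                # 200 images per class

X7 = PCA(n_components=7).fit_transform(X)    # keep ~100% of the variance
X_tr, X_te, y_tr, y_te = train_test_split(
    X7, y, test_size=0.3, stratify=y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))
```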
Results of Classification

In this section, the classification is done using different machine learning methods. The input of the classifiers consists of the seven reduced features of the images, and the output is the three-class label AD, MCI, or HC. In total, 600 MRI images are employed for the classification of AD. The confusion matrices of the presented methods are illustrated in Figure 6; the blue balls show the true values and the red balls the false values of the classification, and labels 1, 2, and 3 denote HC, AD, and MCI, respectively. With the KNN method, of the 200 input images in each of the HC, AD, and MCI classes, 193, 141, and 109 are diagnosed correctly; the sensitivity of KNN for diagnosing Alzheimer's disease is thus acceptable only for HC. The SVM and LDA approaches reached weak results for the diagnosis of AD. The results of DT, however, show sensitivities of 94%, 91.5%, and 97.5% for HC, AD, and MCI respectively, meaning that the WTD-PSD features are compatible with the DT approach for this problem: 188, 183, and 195 MRI images from HC, AD, and MCI are detected, respectively, and the precision of the method is 91.70%, 95.30%, and 96.10% for HC, AD, and MCI, accordingly. To validate the presented features, we also applied a CNN architecture to this problem; the architecture of the CNN is presented in Figure 7, and the ROC curves of the classifiers are shown in Figure 8. The horizontal axis of the ROC curve represents the false-positive rate with respect to the HC class, and the vertical axis shows the true-positive rate; the best classifier has the highest rate of true positives and the lowest rate of false positives. Based on the results, the CNN and DT methods are the two best classifiers for the presented features. Moreover, the area under the curve (AUC) is an index for comparing the classifiers; the AUC and the accuracy of the machine learning classifiers are presented in Figure 9. The accuracies of SVM, LDA, KNN, DT, and CNN are 45%, 53.70%, 73.80%, 94.33%, and 98.50%, respectively. The CNN architecture, with the highest accuracy and AUC, is therefore the most accurate and compatible method for diagnosing Alzheimer's disease using the WTD-PSD, and DT is the second-best method with the next-highest AUC.

Discussion

Since each data sample in ML should be defined as a feature vector, several studies have recommended extracting various features from MRI scans and then categorizing the resulting vectors. Image preprocessing, however, is necessary to increase the quality of the recovered feature vectors because of registration mistakes and noise, and domain knowledge is needed to derive discriminative qualities. A two-dimensional discrete wavelet transform is employed on the input image in this study. The subbanded wavelet coefficients are modeled using the Time-Dependent Power Spectrum Descriptors model, implemented here in MATLAB, and each attribute of the TD-PSD model contributes one element of the leading property vector. The collected characteristics are utilized independently, based on classification algorithms, to construct AD classifications. On the basis of the findings, the accuracies of SVM, LDA, KNN, DT, and CNN are correspondingly 45 percent, 53.70 percent, 73.80 percent, 94.33 percent, and 98.50 percent, making CNN the most accurate of the five models. The CNN architecture, with the highest accuracy and AUC, is thus the most accurate and compatible technique for diagnosing Alzheimer's disease utilizing the WTD-PSD, and DT is the second most accurate approach with the next-largest AUC.
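The AUC comparison of Figure 9 can be reproduced schematically as follows, again with scikit-learn and placeholder data; multi-class AUC is computed one-vs-rest from each classifier's predicted class probabilities.

```python
# Compare classifiers by one-vs-rest multi-class AUC on the reduced features.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(600, 7)                   # 7 PCA-reduced features (stand-in)
y = np.repeat([0, 1, 2], 200)                # HC / AD / MCI
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {"KNN": KNeighborsClassifier(),
          "SVM": SVC(probability=True),      # probabilities needed for AUC
          "DT": DecisionTreeClassifier(random_state=0)}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)
    print(name, "AUC =", round(roc_auc_score(y_te, prob, multi_class="ovr"), 3))
```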
Conclusion

Many studies have advised extracting numerous features from MRI scans and then categorizing the resulting vectors, since each data sample in ML should be described as a feature vector. However, image preprocessing is required to improve the quality of the recovered feature vectors due to registration errors and noise, and domain knowledge is required for extracting discriminative characteristics. The Two-Dimensional Discrete Wavelet transform is applied to the input image in this work. The Time-Dependent Power Spectrum Descriptors model is used to model the subbanded wavelet coefficients, and the leading property vector is made up of the characteristics of the TD-PSD model. Based on classification algorithms, the extracted features are applied independently to present AD classifications; the classification determines the diagnostic class of each subject. We extracted wavelet subband features from three sets of MCI, AD, and HC data using the TD-PSD method. With the KNN approach, 193, 141, and 109 images are correctly detected from the 200 input HC, AD, and MCI images respectively; the KNN's sensitivity is thus adequate only for identifying HC. The SVM and LDA approaches yielded poor outcomes for diagnosing AD. The DT findings, on the other hand, demonstrate sensitivities of 94 percent, 91.5 percent, and 97.5 percent respectively, indicating that the WTD-PSD features are compatible with the DT technique for this problem: 188, 183, and 195 MRI images from HC, AD, and MCI are detected, respectively, with precisions of 91.70 percent, 95.30 percent, and 96.10 percent. The CNN classifier performs best of all: out of 200 images per class, 197, 198, and 196 are recognized for HC, AD, and MCI respectively. The CNN architecture, with the greatest accuracy and AUC, is therefore the most accurate and compatible technique for diagnosing AD utilizing the WTD-PSD, and DT is the second approach with the next-highest AUC.

Data Availability

Data are available and can be provided upon direct request to the corresponding author at ali.taghavi.eng@iauctb.ac.ir.
The DNA binding domain of hepatocyte nuclear factor 4 mediates cooperative, specific binding to DNA and heterodimerization with the retinoid X receptor alpha.

We recently showed that hepatocyte nuclear factor 4 (HNF-4) defines a unique subclass of nuclear receptors that exist in solution and bind DNA elements as homodimers (Jiang, G., Nepomuceno, L., Hopkins, K., and Sladek, F. M. (1995) Mol. Cell. Biol. 15, 5131-5143). In this study, we show that the dimerization domains of HNF-4 map to both the DNA binding and the ligand binding domain. Whereas the latter is critical for dimerization in solution, the DNA binding domain mediates cooperative, specific binding to direct repeats of AGGTCA separated by one or two nucleotides. Whereas amino acid residues 117-125 (the T-box/third helix region) are insufficient for cooperative homodimerization and high affinity DNA binding, residues 126-142 (encompassing the A-box region) are required. Finally, in contrast to the full-length receptor, the DNA binding domain of HNF-4 is capable of heterodimerizing with that of the retinoid X receptor α but not with that of other receptors. These results indicate that the HNF-4 DNA binding domain is distinct from that of other receptors and that the determinants that prevent HNF-4 from heterodimerizing with RXR lie outside the DNA binding domain, presumably in the ligand binding domain.

The DNA binding domain (DBD) of the nuclear receptors consists of 60-90 amino acids that form two zinc finger modules followed by a C-terminal extension. The so-called ligand binding domain (LBD) in the C-terminal half of the protein consists of approximately 200 amino acids. This region performs a variety of functions including transactivation, ligand binding, and protein dimerization via heptad repeats of hydrophobic residues (7,8,23). With a few exceptions, the nuclear receptors all have the capacity to bind DNA as dimers. Several, particularly those related to the retinoid receptors, also bind DNA preferentially as heterodimers. Retinoid X receptor (RXR) is the most promiscuous of the receptors, dimerizing with at least 10 different receptors on a variety of DNA elements (23). Full-length RXR, in fact, binds DNA only poorly as a homodimer and not at all as a monomer (42). Dimerization domains in the receptors have been localized to both the DBD and the LBD using mutagenesis studies (5,7,8,15,40,41,43). Dimerization interfaces have also been visualized in the three-dimensional structures of homo- and heterodimeric complexes of DBDs of various receptors bound to DNA: a homodimer of the estrogen receptor (30), a homodimer of the glucocorticoid receptor (21), and a heterodimer of RXRα and the thyroid hormone receptor β (TRβ) (28). The structure of a homodimer of the LBD of RXRα, in the absence of ligand, has also been solved and shown to contain a dimerization interface (3). Nevertheless, the precise role of the different dimerization motifs in DNA binding and other receptor functions has not been clearly defined.

Hepatocyte nuclear factor 4 (HNF-4) is a highly conserved receptor essential for development in organisms ranging from insect to man (4,34,44). Found predominantly in the liver, kidney, and intestine, HNF-4 transcriptionally activates a wide variety of genes including those involved in fatty acid and cholesterol metabolism, glucose metabolism, urea biosynthesis, blood coagulation, hepatitis B infections, and liver differentiation (reviewed in Refs. 32,33). A ligand, however, has not yet been identified for HNF-4.
(The term ligand binding domain (LBD), however, will be used for the sake of consistency with other receptors.) We recently showed that HNF-4, while very similar in DNA binding specificity and amino acid sequence to RXRα, does not heterodimerize with full-length RXR or any other receptor analyzed. In fact, full-length HNF-4 bound DNA only as a homodimer and sedimented as a homodimer during glycerol gradient centrifugation. The strong homodimerization activity, as well as the exclusively nuclear localization of HNF-4, led us to conclude that HNF-4 defines a new subclass of receptors (12). In this study, we analyze in greater detail the mechanism of dimerization of HNF-4 with the goal of deciphering the role and the determinants of homo- versus heterodimerization among the nuclear receptors. The dimerization domains of HNF-4 are localized to both the DBD and the LBD, and the DBD is shown to mediate cooperative, high affinity, specific binding to DNA. The LBD, in contrast, appears to be important for dimerization in solution. It is also shown that, in contrast to the full-length receptor, the HNF-4 DBD is capable of heterodimerizing with the DBD of RXRα, although not with the DBDs of the retinoic acid receptor α (RARα) or thyroid hormone receptor β (TRβ). These results confirm the unique nature of HNF-4 and implicate the LBD as the major determinant of homo- versus heterodimerization of HNF-4.

EXPERIMENTAL PROCEDURES

Plasmid Constructions-Expression vector pMT7 was constructed by inserting the double-stranded oligonucleotide MT7 (top strand: 5′-GTAATACGACTCACTATAGGGCCCCTCGAGGCG-3′; bottom strand: 5′-AATTCGCCTCGAGGGGCCCTATAGTGAGTCGTATTACTGCA-3′) containing the T7 promoter and unique XhoI sites (underlined) into the parental vector pMT2 (13) and verified by dideoxy sequencing. pMT7 was used for in vivo expression of proteins in COS-7 cells and in vitro expression of proteins using reticulocyte lysate.

Protein Expression, Purification, and Electrophoretic Mobility Shift Analysis (EMSA)-Wild-type and truncated HNF-4 proteins were overexpressed in COS-7 cells, and nuclear extracts were prepared as described previously (12). RXR.DBD, RAR.DBD, and TR.DBD were overexpressed in Escherichia coli as glutathione S-transferase fusion proteins using vectors generously provided by T. Perlmann (Karolinska Institute, Stockholm, Sweden). The fusion proteins were purified on glutathione-Sepharose columns (Sigma), and the glutathione S-transferase moiety was removed by cleavage with thrombin as described previously (26). Electrophoretic mobility shift analysis (EMSA) was performed as described previously (12,26) and as indicated in the figure legends. The double-stranded oligonucleotides used as probes in EMSA are shown in Table I.

RESULTS

HNF-4 Contains Dimerization Domains in Both the DBD and the LBD-In order to locate the structural domain(s) responsible for the strong dimerization activity of HNF-4, several constructs were made and are diagrammed in Fig. 1. The constructs were overexpressed in COS cells and examined for DNA binding activity by electrophoretic mobility shift analysis (EMSA) using a variety of probes (see Table I). Since dimerization motifs had been previously mapped to both the DNA binding domain (DBD) and the ligand binding domain (LBD) of other receptors, those two domains were compared in HNF-4. The results, depicted in Fig. 2, indicate that, as seen previously, the full-length HNF-4 (HNF4.wt) yielded only a dimeric complex with the APF1 probe (Fig. 2, lanes 1-3).
In contrast, HNF4.142, which contains the DBD but lacks the LBD, yielded both monomeric and dimeric binding species (lanes 4-5) (see Fig. 3 for verification of the dimer). HNF4.374, on the other hand, which contains both the DBD and the LBD but not the very N or C terminus, bound DNA only as a dimer (lanes 6-7). Furthermore, HNF4.374 dimerized with HNF4.wt, as evidenced by a shift band of intermediate mobility (lanes 9-11), whereas HNF4.142 did not dimerize with either HNF4.wt (lane 8) or HNF4.374 (lane 12). These results indicate that HNF-4 contains at least two dimerization domains, one located between amino acids 45 and 142, corresponding to the DBD, and an additional domain located between residues 143 and 374 (i.e. the LBD), which apparently precludes monomeric binding.

The HNF-4 DBD Is Responsible for DNA Binding Specificity Whereas the LBD Is Responsible for Dimerization in Solution-Our previous results from glycerol gradient sedimentation and order of dilution experiments indicated that the full-length HNF-4 is a homodimer even in the absence of DNA, i.e. in solution (12). In order to compare the dimerization activity of the HNF-4 DBD with that of the full-length protein, the following two experiments were performed. In the first, HNF4.wt and HNF4.142 were compared for cooperative binding to DNA. A saturation curve was calculated by performing EMSA with increasing amounts of COS cell nuclear extract containing either HNF4.wt or HNF4.142 and a constant amount of 32P-APF1 as probe. The results, shown in Fig. 3, demonstrate that the saturation curve of the HNF4.142 dimer is sigmoidal, whereas that of the HNF4.wt dimer is hyperbolic. The sigmoidal shape indicates that the two subunits of the HNF4.142 dimer bind DNA in a cooperative, and thus two-step, fashion. This verifies the shift complex as a protein dimer bound to the probe, as opposed to two monomers that happen to bind to the same DNA molecule. It also indicates that HNF4.142 exists in solution as a monomer. The hyperbolic shape of the HNF4.wt curve, on the other hand, indicates that the two subunits of the dimer bind DNA in a noncooperative, and thus single-step, fashion, verifying that HNF4.wt exists in solution as a homodimer.

[TABLE I. Shift probes used in this study. (a) APF1 is from the human APOCIII gene, and ApoAI site A is from the human APOAI gene. The positions of the sites within the promoters have been previously described (33). Half-site through DR5 are synthetic elements; DR1, direct repeat with one nucleotide spacing, etc. (b) Sequences shown are the top strands of the double-stranded oligonucleotides. Hexameric half-sites are capitalized and underlined. Not shown are 5′-TCGA overhangs on the APF1 and ApoAI site oligonucleotides.]

The second experiment to test for dimerization activity in solution also examined DNA binding specificity. The binding activity of HNF4.wt and HNF4.142 on DNA containing either a single half-site or direct repeats separated by an increasing number of nucleotides (DR0-5) was determined. The rationale was that a protein that exists as a dimer in solution might yield a DNA complex migrating as a dimer on a single half-site, whereas a protein that exists in solution only as a monomer would not. The results, shown in Fig. 4, panel A, indicate that HNF4.wt binds DR1 (lanes 5-6), as expected, as well as DR2 (lanes 7-8). There was also a low level of binding activity on the probe containing a single half-site (lanes 1, 2, 1a, and 2a) but none on any of the other probes, DR3, DR4, or DR5 (lanes 9-14).
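The sigmoidal-versus-hyperbolic distinction drawn from Fig. 3 can be made quantitative by fitting a Hill curve to such saturation data: a Hill coefficient near 2 corresponds to cooperative, two-step binding by monomers (HNF4.142), and a coefficient near 1 to single-step binding by a preformed homodimer (HNF4.wt). The sketch below, with synthetic data rather than the published curves, illustrates the fit.

```python
# Fit a Hill curve to EMSA saturation data; the exponent n distinguishes
# cooperative (sigmoidal, n ~ 2) from noncooperative (hyperbolic, n ~ 1)
# binding. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(p, bmax, k, n):
    """Fraction of probe shifted at protein amount p."""
    return bmax * p ** n / (k ** n + p ** n)

protein = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])   # extract amount (a.u.)
dbd_like = hill(protein, 1.0, 8.0, 2.0)             # HNF4.142-like, sigmoidal
wt_like = hill(protein, 1.0, 8.0, 1.0)              # HNF4.wt-like, hyperbolic

for label, y in (("HNF4.142-like", dbd_like), ("HNF4.wt-like", wt_like)):
    (bmax, k, n), _ = curve_fit(hill, protein, y, p0=[1.0, 5.0, 1.5])
    print(f"{label}: Hill n = {n:.2f}, half-saturation = {k:.1f}")
```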
The low binding activity on the half-site is presumably due to reduced protein-DNA contact and suggests that contacts to both half-sites are important for binding the consensus element (DR1). No monomeric binding of HNF4.wt to the half-site was detected even on very long exposures (not shown). In contrast to HNF4.wt, HNF4.142 bound a single half-site only as a monomer (panel B, lanes 1 and 2). These results, along with those of Fig. 3, confirm that HNF4.wt exists as a homodimer and HNF4.142 as a monomer in solution. The results of Fig. 4 also show that HNF4.142 binds DNA with a specificity similar to that of HNF4.wt. Both HNF4.wt and the HNF4.142 dimer bind significantly only to direct repeats separated by one or two nucleotides (DR1 and DR2, respectively), with an apparent preference for DR1 (lanes 5-8 in panels A and B). Similar results were obtained from a competition experiment in which HNF4.142 and HNF4.wt were subjected to EMSA using 32P-APF1 as probe and a 40-fold molar excess of several different unlabeled oligonucleotides containing HNF-4 binding sites. The results indicated that the oligonucleotides competed the HNF4-APF1 complex in a similar fashion for both the HNF4.DBD dimer and the HNF4.wt dimer. The only difference is that the extent of competition appeared to be somewhat less for the HNF4.142 dimer than for the HNF4.wt dimer (data not shown). Since the construct containing just the DBD and the LBD, HNF4.374, showed a binding specificity identical to that of HNF4.wt, these results, together with those of Figs. 2-4, indicate that the DBD of HNF-4 is primarily responsible for the DNA binding specificity and partially responsible for dimerization activity on DNA. The LBD, on the other hand, is apparently responsible for the strong homodimerization activity seen in the absence of DNA.

Amino Acids 126 to 142 (Encompassing the A-box Region) Are Required for Cooperative Homodimerization and Specific and High Affinity Binding to DNA-In addition to the highly conserved zinc finger motifs, the DBD of many of the receptors also contains an A-box and a T-box (see Fig. 1). The A-box, which contacts the nucleotide bases flanking the core recognition sequence, is important for the high affinity binding of NGFI-B monomers and thyroid hormone receptor (TR) monomers and homodimers but not that of receptors such as RXR and retinoic acid receptor (RAR) (16,28,38,40,41). For these receptors, the T-box appears to be much more important for dimerization and high affinity binding (18,28,40,41). The N-terminal portion of the T-box in RXRα in particular, which forms an α-helix (the so-called third helix), is required for homodimerization of the RXRα DBD on DR1 (18). We therefore wished to determine which was more important for HNF-4 DBD binding, the A-box or the T-box/third helix. Amino acid sequence alignment, shown in Fig. 5A, indicates that the A-box of HNF-4, as is typical of A-boxes, is fairly distinct from that of other receptors (e.g. RXR versus HNF-4) but relatively conserved among the different species of HNF-4 (i.e. rat versus Drosophila HNF-4). The T-box of HNF-4, on the other hand, is very similar to that of several other receptors, particularly in the third helix region. In fact, HNF-4 is closest to retinoid X receptor (RXRα, -β, and -γ) and differs primarily by a single amino acid in this region: a negatively charged glutamic acid (E) in RXR is replaced by an asparagine (N) at residue 123 in HNF-4.
In addition to changing the net charge of the region, this single amino acid difference also appears to change the secondary structure of the region by terminating the putative helix in HNF-4 (Fig. 5B). In order to examine the role of the A-box and the third helix region of HNF-4 in DNA binding, the constructs HNF4.125 and HNF4.N123E were prepared (see Fig. 1). Both end with residue 125, which corresponds to the end of the third helix in RXR. HNF4.N123E also contains a glutamic acid (E) in place of an asparagine (N) at position 123, rendering the region essentially identical to that of RXR. To help distinguish the different shift complexes, the HNF4.125 and HNF4.N123E constructs also both contain an intact N terminus, whereas HNF4.142 does not. (As expected, the N terminus did not affect the DNA binding and dimerization properties of HNF-4; data not shown.) The results of EMSA using HNF4.125 and HNF4.142 are shown in Fig. 6A. Under the conditions used, HNF4.125 binds APF1 as a monomer but not as a homodimer (lanes 1 and 2). In contrast, when HNF4.125 is mixed with extracts containing HNF4.142, a new shift complex appears which is dependent on the presence of both constructs, indicating formation of an HNF4.142-HNF4.125 heterodimer.

FIG. 4. DNA binding specificity of HNF4.wt versus HNF4.142. EMSA was performed as described in Fig. 2 using the probes indicated and as defined under "Experimental Procedures." Shown are the autoradiograms of shift gels (A and B) and the results of gels quantified by phosphorimaging (C). A, HNF4.wt extracts were subjected to EMSA on a 6% polyacrylamide gel in the presence (+) or absence (−) of α445 as described under "Experimental Procedures." Shown are all the 32P-labeled bands, including the endogenous band from the COS extracts, except for the free probe. Lanes 1a and 2a are enhanced images of lanes 1 and 2. B, HNF4.142 extracts were subjected to EMSA in the presence (+) or absence (−) of a 40-fold molar excess of self-competitor as indicated. Shift complexes due to factors endogenous to COS cells are indicated. No HNF4.142 dimer-half-site complex was detected even upon longer exposure.

To determine whether residues 126-142 are also important in the specificity of binding of HNF-4 to DNA, a competition experiment was performed. The results, shown in Fig. 6B, indicate that the HNF4.142 homodimer and the HNF4.142-HNF4.125 heterodimer can both be efficiently competed by unlabeled competitor DNA, whereas neither the HNF4.142 nor the HNF4.125 monomer can be. This reinforces the notion that the dimerization mediated by residues 126-142 is important for the specific binding of the HNF-4 DBD to DNA. To assess the role of the A-box in binding affinity, dissociation constants (Kd) for both HNF4.142 and HNF4.125 were determined on both a DR1 and a DR2 probe and compared with that of the RXRα DBD. Since the RXRα DBD did not homodimerize on any DR2 examined, the RARα DBD and a native ApoAI site A probe, which can be considered a DR2 (see Table I), were used. The results, shown in Fig. 7, indicate that both the HNF4.142 and the RXR.DBD homodimer bind DR1 with high affinity (Kd = 4.6 and 2.2 nM in panels A and C, respectively). The EMSA for Fig. 7 was done at 4 °C in order to detect the RXR and RAR homodimers. Under these conditions, a HNF4.125 homodimer shift complex was also detected, whereas it was not detected when the reaction was performed at RT (see Fig. 6). The HNF4.125 homodimer complex, however, had a very low binding affinity as evidenced by a nonsaturating curve (panel B).
The binding of all the DBD monomers (HNF4.142, HNF4.125, RXR, and RAR) was also nonsaturating, yielding a straight line in each case (data not shown). In comparison to the DR1, binding to ApoAI site A (a DR2) occurred at a lower affinity for the HNF4.142 homodimer but was comparable with that of the RARα DBD (Kd = 32.3 and 49.3 nM in panels D and F, respectively). Interestingly, the affinity of the HNF4.125 homodimer for ApoAI site A was higher than that for the DR1 (66.4 nM in panel E) but still significantly lower than that of HNF4.142. These results show that, unlike the full-length HNF-4, which binds DNA much more efficiently than does either the full-length RXR or RAR homodimer (12), the HNF-4 DBD binds DNA as a dimer with a similar affinity as the RXR and the RAR DBD. The results also show that while amino acid residues 117-125 (the third helix region) are insufficient for high affinity binding of the HNF-4 DBD, the A-box region is required.

The DBD of HNF-4 Heterodimerizes with the DBD of RXRα but Not of Other Receptors-Since the HNF-4 DBD exists as a monomer in solution, we hypothesized that, unlike the full-length receptor, it might be able to heterodimerize with RXRα. EMSA was performed as above using the HNF-4 and RXR DBD constructs (see Fig. 1) and DR1 and DR2 as probes. The results, shown in Fig. 8, demonstrate that the HNF-4 DBD does indeed heterodimerize with the RXRα DBD.

In order to determine whether the third helix/T-box region facilitates HNF-4 dimerization, EMSA was performed with RXR.DBD and extracts containing HNF4.N123E, which theoretically contains a third helix analogous to that of RXRα (see Fig. 5). The results show that HNF4.N123E does indeed form homo- and heterodimers somewhat more readily than does HNF4.125 on DR1 (Fig. 8A, compare lanes 10 and 11 to 5 and 6, respectively). Therefore, the third helix may facilitate but is evidently not required for homo- and heterodimerization. Interestingly, in contrast to the APF1 probe (Fig. 6), HNF4.125 and HNF4.N123E do not bind DR1 and DR2 as monomers, only as dimers (Fig. 8A, lanes 5 and 10; Fig. 8B, lane 4; and data not shown). The significance of this difference is not known but could be due to different reaction conditions and/or the nucleotide sequence of the probes. Heterodimerization of the HNF-4 DBD with other receptors was also examined. However, in contrast to RXR.DBD, neither HNF4.142 nor HNF4.125 nor HNF4.N123E heterodimerized with either RAR.DBD (Fig. 8C) or TR.DBD (data not shown). Of potential interest is the observation that the amount of RAR DBD homodimer complex appears to be reduced by the presence of the HNF4.DBD extracts (lanes 6, 9, and 12). Whereas the reason for this reduction is not known, it is doubtful that it is due to dimerization of the HNF4 and the RAR DBDs in solution, since NMR studies of several different receptor DBDs, including RARβ and RXRα, show the DBDs to exist only as monomers in solution (11, 14, 18, 31).

DISCUSSION

The results of this study show that the DNA binding domain (DBD) of HNF-4 readily heterodimerizes with the DBD of RXRα. The results also show that the A-box region at the C-terminal end of the DBD is critical for the cooperative homodimerization, and thus high affinity binding, of the HNF-4 DBD to DNA. Finally, this is the first report of direct evidence that HNF-4 binds DR2 elements as well as DR1 elements. Although HNF-4 has been previously shown to bind native elements comprised of nonconsensus half-sites separated by two nucleotides (2, 25), this is the first published report of HNF-4 binding a synthetic DR2 element.
Determinants of Exclusive Homodimerization of HNF-4 Reside within the LBD-In contrast to the full-length HNF-4, which exists in solution as a stable homodimer and binds DNA only as a homodimer (12), the HNF-4 DBD exists in solution as a monomer (Figs. 3 and 4B) and efficiently heterodimerizes with the DBD of RXRα (Fig. 8). These results demonstrate that the determinants responsible for the exclusive homodimerization of HNF-4 reside outside the DBD, presumably in the LBD. The conclusion that the LBD is crucial in mediating the exclusive homodimerization activity of HNF-4 both in solution and on DNA is not too surprising, since the LBD is known to be important in determining the homo- and heterodimerization properties of other members of the nuclear receptor superfamily (reviewed in Ref. 10). For instance, the DBDs of the steroid hormone receptors (i.e. estrogen and glucocorticoid) exist in solution as monomers even at millimolar concentrations (11, 31), whereas the full-length receptors exist in solution as homodimers (7, 20, 29, 39). Mutations in the LBD, especially the conserved ninth heptad region, have also been shown to change the homo- and/or heterodimerization properties of nuclear receptors such as TR, RAR, and RXR (1, 35, 43). However, this is the first example of a DBD of a receptor forming heterodimers when the corresponding full-length receptor forms only homodimers. These results unequivocally implicate the LBD in determining homo- versus heterodimerization of HNF-4. Indeed, the crystal structure of a dimer of the LBD of RXRα shows important contacts between amino acids of opposite charge from each of the two monomers at the dimer interface (3). Finally, since HNF-4 defines a new subclass of nuclear receptors (12), we propose that the determinants of the strong homodimerization activity of other receptors that fall into that group, such as germ cell nuclear factor and TAK1, will also reside within the LBD.

HNF-4 DBD Possesses Unique Structural and Functional Properties-Of all the mammalian nuclear receptors, HNF-4 is most similar to RXR in amino acid sequence, especially in the DBD (17). Mouse RXRα is 56% identical to rat HNF-4 in the DBD. HNF-4 and RXRα also share response elements from at least six different genes as well as a consensus site of a direct repeat of AGGTCA separated by one nucleotide (DR1) (reviewed in Refs. 32 and 33). These similarities suggested that the structural and functional properties of the HNF-4 DBD would be similar to those of the RXRα DBD. The results from this study show, however, that this is not the case. First, the A- and the T-boxes appear to play different roles in the homodimerization of the HNF-4 and the RXRα DBDs. For example, the A-box is required for the cooperative homodimerization of the HNF-4 DBD (Figs. 6 and 7), whereas the T-box is necessary and sufficient for the cooperative homodimerization of the RXRα DBD (18, 40, 41). Second, the DBD of HNF-4 heterodimerizes with the RXRα DBD but not with the RAR or the TR DBD, suggesting that the HNF-4 DBD is more like the RAR or the TR DBD than the RXRα DBD (Fig. 8 and data not shown).

FIG. 7 (legend, in part). ... were used as probe. The reactions were incubated on ice for 30 min to reach equilibrium and then loaded in duplicate onto mobility shift gels. The gels were dried and quantified by phosphorimaging. Shown are the average of the duplicate loadings from one representative experiment, the best fit curve, and the corresponding dissociation constant (Kd) calculated by the MicroCal Origin™ program.
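As a side note, the kind of fit described in the Fig. 7 legend (the authors used the MicroCal Origin program) can be reproduced with standard open tools. The sketch below uses SciPy on hypothetical data points, not values from the paper: a fitted Hill coefficient n near 1 corresponds to the hyperbolic, noncooperative curves reported for HNF4.wt, while n near 2 corresponds to the sigmoidal, cooperative binding of the HNF4.142 dimer (Fig. 3), and the fitted Kd is the half-saturation concentration as in Fig. 7.

```python
# Hedged sketch of the kind of fit described above. The data points below
# are hypothetical, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bmax, kd, n):
    """Fraction of probe bound at protein concentration `conc`.
    n ~ 1 gives a hyperbolic (noncooperative) curve; n ~ 2 gives a
    sigmoidal (cooperative, two-step) curve."""
    return bmax * conc**n / (kd**n + conc**n)

protein_nM = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])           # hypothetical
bound_frac = np.array([0.02, 0.07, 0.2, 0.42, 0.65, 0.82, 0.91, 0.95])

(bmax, kd, n), _ = curve_fit(hill, protein_nM, bound_frac, p0=(1.0, 5.0, 1.0))
print(f"Bmax = {bmax:.2f}, Kd = {kd:.1f} nM, Hill n = {n:.2f}")
```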
Since HNF4.125 binding to DR1 could not be saturated, a Kd could not be calculated (Fig. 7, panel B). Third, the requirement of the A-box for cooperative homodimerization makes the HNF-4 DBD homodimer behave similarly to a TR DBD homodimer, even though HNF-4 is much more similar to the RXRα DBD in amino acid sequence, particularly in the T-box region. This similarity, however, is unexpected, since HNF-4 and TR homodimers bind direct repeats with different spacings: whereas HNF-4 binds both DR1 and DR2 elements, TR binds DR4 elements as well as inverted repeats. Binding of receptor dimers to direct repeats with different spacings is expected to involve dimerization interfaces formed between different regions of the receptors (28). Despite the similar role of the A-box in HNF-4 and TR DBD homodimerization, other evidence suggests that the structure of the HNF-4 DBD is different from that of the TR DBD. First, the A-box plays apparently different roles in the monomeric binding of the HNF-4 DBD and the TR DBD: whereas the A-box is absolutely required for the monomeric binding of the TR DBD (41), it appears to facilitate but is not necessary for the monomeric binding of the HNF-4 DBD (Fig. 6). Second, the A-box of TR is required for heterodimerization with RXR, but it is not required for heterodimerization of the HNF-4 DBD (Fig. 8). Third, structural modeling indicates that the A-box of the TR DBD will interfere with the second zinc module of the RXR DBD on DR1 and DR2 elements and therefore prevent the two DBDs from heterodimerizing on those elements (28). In contrast, the HNF-4 DBD construct containing the A-box (HNF4.142) heterodimerizes with the RXRα DBD on DR2 (Fig. 8B). Potential steric hindrance could, however, explain why the same HNF-4 DBD construct does not heterodimerize with RXR.DBD on DR1 whereas the HNF-4 DBD construct lacking the A-box (HNF4.125) does (Fig. 8A). In conclusion, the results of this study show not only that the determinants of homo- versus heterodimerization reside within the LBD of the receptors but also that much remains to be learned about the role of the A-box and the T-box in the nuclear receptors. Furthermore, the results indicate that the dimerization interface of the HNF-4 DBD is different from that of other nuclear receptors, which could explain why HNF-4, which binds DR1 and DR2, has a different DNA binding specificity than other receptors, including RXR, TR, and RAR, which preferentially bind DR1, DR4, and DR5, respectively. Finally, this study provides yet another example of how broad functional diversity among proteins within a highly conserved superfamily can be achieved with a minimal amount of alteration in primary amino acid sequence.
Recycled Aggregate: A Viable Solution for Sustainable Concrete Production

Construction and demolition activities consume large amounts of natural resources, generating 4.5 billion tons of solid waste per year, called construction and demolition waste (C&DW), alongside other wastes such as ceramic, polyethylene terephthalate (PET), glass, and slag. Furthermore, around 32 billion tons of natural aggregate (NA) are extracted annually. In this scenario, replacing NA with recycled aggregate (RA) from C&DW and other wastes can mitigate environmental problems. We review the use of RA for concrete production and draw the main challenges and outlook. RA reduces concrete's fresh and hardened performance compared to NA, but these reductions are often negligible when the replacement levels are kept up to 30%. Furthermore, we point out efficient strategies to mitigate these performance reductions. Efforts must be spent on improving the efficiency of RA processing and on the international standardization of RA.

Introduction

Concrete is the second most used material by volume on Earth. Portland cement is the main binder of concrete, and its production (estimated at 4.3 billion tons/year) accounts for about 8% of global CO2 emissions [1]. Besides the CO2 emissions released by cement production, large amounts of sand and gravel are extracted to serve as aggregates: global consumption of natural aggregate (NA) is estimated at 32 billion tons/year, and this number grows by ~5% per year [2]. The extraction of NA generates environmental liabilities related to exposure of the natural soil layer, erosion processes, deforestation of extraction areas, siltation of rivers, and exposure of the groundwater table, among others [3].

In addition to the high environmental impact caused by concrete production, construction and demolition activities generate large amounts of solid waste called construction and demolition waste (C&DW). It is estimated that this kind of waste accounts for 30-40% of the total solid waste generated worldwide, with a global generation of around 4.5 billion tons/year [4,5]. For instance, the European Union, the US, and China are responsible for the generation of 924 million, 600 million, and 2.36 billion tons of C&DW every year, respectively [4,5]. Despite the efforts to reuse this waste, about 35% of it is disposed of in landfills [6].

Several other wastes can be cited as problematic from an environmental point of view and are studied for application as recycled aggregates (RA). This is the case with glass waste [7], PET and other plastic waste [8], slag waste [9], and ceramic waste [10]. There are also great efforts to reuse these wastes, mainly in the form of aggregates; although some have some binding power, civil construction still does not substantially use recycled waste in the production of concrete and other construction materials.

Figure 1 presents a bibliometric analysis of the research carried out on the subject. The keywords in Figure 1 were obtained from the Scopus database in a search covering the years 2013 to 2022 and considering articles published in the main building materials journals. The input keywords were: recycled aggregate, concrete (or mortar), and sustainable construction. Based on this, 1580 documents were obtained, as indicated in Figure 1.
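As a hedged illustration of how a keyword map like Figure 1 can be tallied, the sketch below counts keyword frequencies and co-occurrences from a Scopus CSV export. The file name and the "Author Keywords" column label follow Scopus's usual export format but are assumptions here, not details given in the paper.

```python
# Illustrative sketch only: tallying keyword frequencies and co-occurrence
# pairs (the basis of a bibliometric map) from a Scopus CSV export.
import csv
from collections import Counter
from itertools import combinations

keyword_counts = Counter()
cooccurrence = Counter()

with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        kws = [k.strip().lower()
               for k in row.get("Author Keywords", "").split(";") if k.strip()]
        keyword_counts.update(kws)
        cooccurrence.update(combinations(sorted(set(kws)), 2))

print(keyword_counts.most_common(10))  # e.g. recycled aggregate, concrete, ...
print(cooccurrence.most_common(10))    # strongest keyword pairs for the map
```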
From the point of view of the construction materials studied, it is observed that the most common are concrete, mortar, geopolymers, and asphalt materials. Regarding the properties studied, the mechanical properties, water absorption, modulus of elasticity, physical properties, electrical resistivity, sorptivity, ultrasonic pulse velocity, flexural strength, durability, and equivalent tests stand out. Regarding the materials used, the most prominent are recycled concrete aggregates (RAC) and construction and demolition waste (or synonyms), followed by glass waste and slag waste.

In the case of glass waste, there are recent studies evaluating the application of the material in cement and geopolymer concrete, such as the study conducted by Siddika et al. [11]. In this study, the authors highlight information on the amount of glass waste generated worldwide and the versatility of the material, which can be used as coarse or fine aggregate, or as a binder if the material's granulometry is appropriate. Another relevant study recently published was conducted by Ferdous et al. [12], where the authors highlight information related to global waste generation, performance, application, and future opportunities for glass waste and other similar materials.

Materials such as ceramic waste and PET waste appear to a lesser extent in Figure 1; therefore, studies with these materials need to be carried out. One of the reasons explaining the low number of studies on ceramic waste is that the material has pozzolanic potential [13]. However, in the case of PET waste, the low number of studies on the material is not justified, mainly because it is a light aggregate whose use around the world is very considerable. Through Figure 1 and the highlighted analysis, it is observed that the main gaps in the literature are: evaluating wastes less used in construction materials, such as PET, ceramic, and glass; and verifying relevant properties and parameters that should be studied in other applications of construction materials, in addition to cementitious materials such as concrete and mortar. Particularly noteworthy are the studies on asphalt pavement and geopolymers.

In this context, the use of RA from C&DW and other wastes as NA replacement for concrete production has drawn the attention of the technical and scientific communities in recent years. This solution can simultaneously mitigate two major problems: (i) reduce the environmental impact associated with aggregate extraction, and (ii) provide a proper destination for solid waste generated on a large scale worldwide, such as C&DW [7] and glass, slag, PET, and ceramic wastes. In this paper, we briefly review the use of RA from C&DW and other wastes for sustainable Portland cement-based concrete production, drawing the main challenges and outlook related to this topic. The main novelties of the manuscript are: (i) detailing viable solutions to mitigate problems related to the use of recycled aggregates; (ii) presenting alternative solutions to the main problems encountered in RA; and (iii) evaluating the potential of other wastes (e.g., ceramic and PET) for application as recycled aggregates. The analysis of these potential wastes was based on a bibliometric study, as highlighted in the previous paragraphs.

Application in Concrete

C&DW can be used in concrete as coarse/fine aggregate and as a finely ground material (i.e., filler). Concerning aggregate, Robalo et al. [14] observed that replacing NA with C&DW progressively increases the water or superplasticizer demand needed to keep the workability of concrete, due to the high porosity and water absorption of C&DW. However, Rashid et al. [15] found that pre-saturating the C&DW avoided this workability loss. This pre-saturation can also improve the resistance to internal steel corrosion [16]. Duan et al. [17] replaced 0-100% of NA with coarse C&DW in self-compacting concrete, observing that 25% replacement did not change the compressive strength, while the flexural strength decreased as the aggregate replacement level increased. In turn, Sahoo and Singh [18] evaluated the punching shear capacity of C&DW concrete slab-column connections, observing that the punching-shear crack angle and the punching strength were independent of the C&DW replacement level for a given strength class.

As for durability, Duan et al. [17] found no significant damage to the concrete's resistance to chloride penetration for up to 50% NA replacement with C&DW. Similarly, Cantero et al. [19] observed equivalent electrical resistivity (associated with the resistance against chloride ion migration) for concretes containing 100% NA and 50% NA + 50% C&DW. Sáez del Bosque et al. [20] evaluated the carbonation depth of concretes with 0-50% replacement of NA with coarse C&DW, finding statistically equivalent values for the control (100% NA) and 25% C&DW-containing concretes.
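To make the replacement levels and pre-saturation discussed above tangible, here is a toy calculation; all quantities are assumed for illustration and are not taken from the paper.

```python
# Toy example (all quantities assumed, not from the paper): aggregate masses
# for 1 m^3 of concrete at a 30% NA-to-RA replacement level, plus the extra
# pre-saturation water implied by the RA's higher water absorption.
COARSE_AGG_KG = 1100.0  # assumed total coarse aggregate per m^3
REPLACEMENT = 0.30      # 30%, the level the review flags as usually safe
RA_ABSORPTION = 0.05    # assumed 5% water absorption of the RA
NA_ABSORPTION = 0.01    # assumed 1% water absorption of the NA

ra_kg = COARSE_AGG_KG * REPLACEMENT
na_kg = COARSE_AGG_KG - ra_kg
extra_water_kg = ra_kg * (RA_ABSORPTION - NA_ABSORPTION)

print(f"NA: {na_kg:.0f} kg, RA: {ra_kg:.0f} kg")
print(f"Extra pre-saturation water: ~{extra_water_kg:.1f} kg per m^3")
```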
Overall, we can observe that the fresh and mechanical properties of concrete are negatively affected by high replacement levels of NA with C&DW, while durability aspects are not so sensitive to it [21]. The worse performance of C&DW can be associated with its surface characteristics, which usually include old mortar (harming the interfacial transition zone, ITZ, with the new cementitious matrix), in addition to microcracks generated during C&DW crushing [4]. These aspects are illustrated in Figure 2. However, concrete properties are marginally affected when the replacement levels are kept relatively low (e.g., up to 30%).

Another negative feature of C&DW is its heterogeneity [24]. Exemplifying this negative aspect, Figure 3 presents the main fractions composing the C&DW obtained in the city of Naples, Italy [25]. It is observed that this C&DW contains 47.37% mixed material, 24.81% soil and stones, 7.03% iron and steel, 6.69% concrete, and 5.25% bituminous mixtures, among other minor components. These fractions vary from one C&DW source to another, but it is important to understand through Figure 3 that the material is very heterogeneous. This aspect represents yet another difficulty in the use of this type of RA. Thus, in addition to trying to solve the problem of mortar adhering to the aggregate grains highlighted in Figure 2, alternatives must be considered to solve the problems related to the heterogeneity of C&DW.

Alternatively, C&DW can be finely ground to a "micro-aggregate", i.e., filler. In fact, recent studies [25] confirmed that carbonated cement paste presents amorphous aluminosilicate gel with pozzolanic properties. De Matos et al. [26] observed that powdered C&DW led to 5% higher compressive strengths than limestone filler for the same incorporation level, and that the residue acted mainly as an inert filler within the first seven days of hydration. This was confirmed later by Frías et al. [27], who observed that after 28 days, extra C-S-H and ettringite were formed in the presence of powdered C&DW, indicating the potential binding activity of the material [28,29]. This potentially improves the ITZ between RA and the new cementitious matrix [30], compensating for the negative effect on the ITZ mentioned earlier. Cantero et al. [19] simultaneously replaced 10-25% of Portland cement with ground recycled concrete (GRC) and 0-50% of NA with C&DW RA, evaluating the mechanical behavior of concrete. The authors found that the concrete containing 10% GRC and 50% C&DW RA achieved the same strength class as the 100% Portland cement concrete. He et al. [31] observed that replacing 30% of the binder with GRC reduced the autogenous shrinkage of ultra-high-performance concrete while leading to comparable compressive strengths (123-128 MPa). In general, we can observe that the use of C&DW powder as filler tends to keep or even improve the properties of concrete.

Application in Other Construction Materials

In addition to the application of C&DW as RA in concrete, highlighted in Section 2.1, these materials can be applied to other building materials, as discussed in this section. Table 1 presents some works that evaluated the use of C&DW as RA in different construction materials other than concrete and mortar. Table 1 was developed considering the database evaluated in the bibliometric study highlighted in Figure 1 and prioritizing publications from more recent years.
From the information presented in Table 1, it is possible to observe that the main applications of C&DW as RA in materials other than concrete are asphalt pavement, pavement subbase, and geopolymers. It is worth noting that in these applications, the concerns regarding the controlled properties are different: while in concrete the main properties are related to mechanical behavior, in the applications illustrated in Table 1 it is important to consider parameters such as water absorption, frost resistance, friction, and hardness. As a result, replacement levels with C&DW RA can reach up to 100%.

In the case of asphalt pavement applications, C&DW is generally used as coarse aggregate. Although it is not the most critical property, it is important to analyze the mechanical properties of asphalt pavement with C&DW as RA, as highlighted by Zhu et al. [41]. Another relevant issue is the transition zone between the RA and the asphalt. This transition zone, studied by Hu et al. [32], also presents adhesion problems through the same mechanisms highlighted in Figure 2. Therefore, another aspect that must be carefully studied is the morphology of the RA, because it directly affects the adhesion with the asphalt. This was the point of the investigation by Guo et al. [39], who verified the direct influence of the morphology of the RA on the mechanical behavior of the asphalt pavement due to the deficiencies of the ITZ. A positive point highlighted by Guo et al. [39] is that the RA is generally irregular and therefore has good adhesion with asphalt.

Two important properties that should be studied in asphalt pavement applications are resistance to high temperatures and adhesion with the pavement. Hu et al. [33] emphasize that the high-temperature stability of asphalt mixtures with RA should be investigated to enable the application of this material; without this property being proven, it is impossible to use RA in this application, since asphalt mixtures are commonly prepared at high temperatures. Regarding adherence, Slabonski et al. [40] highlight that this evaluation must be carried out through a pull-out test, as illustrated in Figure 4. As asphalt pavement is often subjected to intense traffic loads, the material's natural tendency is to lose adhesion. Therefore, this is another property that differentiates the study of C&DW in asphalt pavements from that in concrete.

Another possible application is in pavement subbase. In this case, a property of great relevance that needs to be controlled is the resilient modulus. Corradini et al. [43] evaluated the application of C&DW as coarse aggregate in subbase pavement, studying the differences found in the resilient modulus. The authors observed that the use of C&DW is viable in this type of application, since the values obtained for the resilient modulus are compatible with the application in pavement subbase. These results are consistent with the research carried out by Tefa et al. [42]. In the authors' study, it is highlighted that, although there is still skepticism among designers, contractors, and road agencies, the application of C&DW as an aggregate in pavement subbase should not be neglected, because the material properties are compatible with this application and because of the high environmental gain it provides.

Another highly researched application for C&DW as RA is geopolymers. These are structural materials used in the same applications as OPC concrete [50] but obtained through the geopolymerization reaction between a precursor, usually rich in aluminosilicates, and an alkaline activator solution [51,52]. These materials require fine and coarse aggregates, and C&DW is viable for this function. It is noteworthy that for C&DW in geopolymers, the same properties evaluated in concrete must be considered; that is, the mechanical properties of the material must be evaluated. Saba and Assaad [45], for example, evaluated the effect of using recycled fine aggregates in geopolymers at levels of up to 60%. The authors concluded that the results obtained are compatible with the application proposed in the research (mortar for masonry) and verified that the use of RA helps retain water in the material, which is beneficial given the high absorption of masonry substrates. Positive results were also obtained in the studies by Rahman et al. [46] and Xie et al. [47], where the authors confirmed the possibility of using C&DW as RA in geopolymers.

However, some negative results were also reported for C&DW as RA in geopolymers. In general, these materials resist the effects of high temperature well; with C&DW as RA, however, this performance is degraded, as highlighted by Pawluczuk et al. [49]. The authors evaluated the mass loss of an NA and a C&DW-based RA, as highlighted in Figure 5: while the NA had a maximum mass loss of approximately 6%, the RA had a loss of almost 10%. This is due, in part, to the transformation of quartz from α to β at around 600 °C. Therefore, the use of C&DW as RA in geopolymers subjected to high temperatures is not indicated.
Environmental Benefits of Using RA from C&DW

The environmental benefits of using RA rely on three main aspects. First, it reduces the natural resource demand and the CO2 emissions associated with aggregate extraction and processing, in addition to the impacts related to cement production when C&DW is used as a ground powder in cement replacement. For example, Cantero et al. [19] evaluated the CO2 emissions of the mixes containing GRC and C&DW RA mentioned earlier, observing emission reductions of up to 19.7% compared to plain Portland cement + NA concrete. The second aspect is related to the CO2 "capture" or "sequestration" by C&DW [53]. The Portland cement reaction produces Ca(OH)2 as one of the hydration products, which is converted into CaCO3 in contact with CO2 from the atmosphere, a process known as carbonation [28]. Therefore, cement-based elements (e.g., rendering mortars and concrete elements) capture CO2 from the atmosphere and, when used as C&DW RA, trap it inside new concrete. Finally, considering the amount of C&DW generated worldwide, and that ~35% of it is disposed of in landfills, we can estimate that around 1.5 billion tons of C&DW are disposed of every year. Even a partial replacement of NA with RA in concrete would help to give a proper destination to this residue generated on a large scale, in addition to increasing its added value.
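The ~1.5 billion ton landfill estimate follows directly from the figures quoted earlier in the text; a quick check:

```python
# Quick check of the landfill estimate quoted above, using the figures
# given in the text: 4.5 billion tons of C&DW per year, ~35% landfilled.
annual_cdw_tons = 4.5e9
landfilled_share = 0.35

landfilled_tons = annual_cdw_tons * landfilled_share
# Prints ~1.57 billion tons; the text rounds this to ~1.5 billion.
print(f"~{landfilled_tons / 1e9:.2f} billion tons landfilled per year")
```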
Similarly, Paluri et al. [61] observed 28-day compressive and flexural strength reductions of up to 23 and 18%, respectively, when NA was replaced with 50-100% RA. However, when adding 1% steel fiber, the 50% RA concrete had comparable (5% lower) compressive strength and 31% higher flexural strength than 100% NA concrete. It is worth mentioning that synthetic fibers can be advantageous from a technological point of view; however, in countries with an abundance of natural fibers like Brazil and India, these can be a better solution for the destination of agro-industrial waste [62]. Another central point is the heterogeneity of RA. C&DW can contain several different materials. Souza et al. [40] observed that C&DW was composed of mortar residues (20%), wood (19%), concrete (14%), soil (14%), brick (11%), and steel (9%), besides minor amounts of natural rocks, paper, thermoacoustic tiles, fiber cement sheets, glass, expanded polystyrene, and cement boards. However, the authors demonstrated that this composition varies depending on the construction step (new construction or renovation) as well as the execution phase (structure, envelope, or finishes). Furthermore, Galderisi et al. [63] observed that the petrography of C&DW depends on the geographical area where the waste is produced. In limestone-abundant areas, calcareous materials are found in high contents, while the residue is rich in silica and alumina in limestone-poor areas. In this context, it is hard to predict the mechanical properties of RA and, therefore, its performance in concrete. One strategy to reduce RA heterogeneity is the pre-separation of the waste fractions during the different stages of construction, i.e., during the generation of C&DW. Another approach involves RA processing, removing the contaminants listed above and preferentially the adherent mortar attached to it, besides crushing it to an adequate particle size distribution. For this purpose, various methods are available, such as mechanical (jaw, hammer, rolling, impact, and hand crushers), chemical (acid dissolution), and thermal approaches (freeze-thaw, thermal expansion, heating and rubbing, and microwave heating) [64]. While the mechanical methods facilitate large-scale processing, part of the adherent mortar may only be entirely removed by chemical/thermal methods, which are often complex and more expensive to perform at a large scale. Furthermore, the shortage of international standards addressing RA requirements also impairs their routine use in structural concrete. While EN 206 [65], Annex E, brings specific recommendations for RA properties, many standards, such as Brazilian standard NBR 7211-Aggregates for Concrete: Specification [66], does not even mention RA. This lack of technical support reduces the safety of materials and structures designers to use RA concrete. There is still much misinformation from consumers who associate RA with low-quality construction, which is not true. The stimulus to research and its dissemination become important tools for the use and commercial acceptance of RA. RA from Other Wastes In this section, other wastes with the potential to be used as RA in concrete and other construction materials will be discussed. Table 2 presents a summary of works with these materials, extracted from the same database of the bibliometric analysis of Figure 1. The most recent publications were prioritized to build Table 2. 
Glass Waste as RA

Glass waste is one of the wastes most generated by human beings, owing to the versatility of glass, which is used in the packaging industry, home appliances, and civil construction [83,84]. Annually, around 14 million tons of waste glass are produced in the European Union and around 11.38 million tons in the USA [51]. These numbers highlight the urgency of recycling the material. An important advantage of using glass as RA is that it is easily ground: compared with other RA such as C&DW, the energy required to reduce the particle size of glass waste is much lower [68]. However, this can also be a disadvantage, as the material is more likely to wear through abrasion [85]. Another interesting advantage of glass waste is that it is more uniform than other RA within the same production standard; in other words, glass from the packaging industry is homogeneous [68], which is an important advantage for using this material as RA [73].

Another important point is that glass waste can increase the durability of cementitious materials. Alducin-Ochoa et al. [72] evaluated the durability of mortars produced with glass waste as RA, partially replacing NA at contents of up to 25%. The authors observed that glass waste mortars showed superior performance in salt crystallization, freezing, and thermal shock tests. This is another advantage of using the material as RA. A further possibility enabled by glass waste as RA is highlighted by Xiao et al. [67], who verified the possibility of using glass waste as a luminescent material in decorative mortars. This effect is only possible with glass waste which, if correctly used as RA, allows the emission of visible light (glow) due to its luminescent surface. Furthermore, in the same study, the authors showed that glass waste has high mechanical performance when used as RA.

A worrying disadvantage of using glass waste as RA is the possibility of alkali-aggregate reactions due to the high content of amorphous and reactive silica in the composition of this material, which causes cracks and failures in concrete structures. However, as highlighted by Duan et al. [69], the use of organic waste such as drinking water treatment sludge as a substitute for OPC can reduce this effect. Another possibility is to cure the glass-containing cementitious material under CO2, as highlighted by Whang et al. [71]. This type of curing increases mechanical strength and reduces the pH and permeability of the material, because the treatment reduces the CH content of the OPC. In the case of glass waste, the treatment promotes an alkali-aggregate reaction on the surface of the glass, generating pressure into the glass aggregate. This pressure acts in the opposite direction to the typical expansion of the alkali-aggregate reaction, mitigating the problem. Therefore, this problem should not be enough to rule out the use of glass waste as RA.

Slag Waste as RA

Slag wastes are materials from different metallurgical processes, such as steel production. These materials are known as granulated blast furnace slag and can undergo two types of cooling: abrupt cooling, which gives the material binding characteristics, making it a viable substitute for OPC; or slow cooling, after which the material loses its binding properties and can only be used as RA [52,86].
Since the binding materials have greater added value, the metallurgical industries started to produce slag waste through abrupt cooling, aiming to use it as a binding material. However, there are still studies that use slag wastes from steel production. For example, in the work by Goli [73], 75% steel slag was used to produce asphalt pavements, and the use of steel slag improved the Marshall stability and tensile strength of the material. Therefore, the use of this material as RA is viable, even if this is not its best application. Another interesting work is that of Chandru and Karthikeyan [74], where the authors used recycled steel slag as coarse aggregate to produce self-compacting concrete and observed good mechanical results and good durability parameters. In addition to granulated blast furnace slag, other types of slag can be used as RA. This is the case with ferronickel slags, which do not have the high binding power of granulated blast furnace slag; in this sense, the best application for the material is as RA. This was studied by Luo et al. [75] and Petrounias et al. [76], who demonstrated the feasibility of this application.

Ceramic Waste as RA

Ceramic waste is a material obtained by grinding blocks and bricks from construction and demolition, or from the ceramic industry [77]. Depending on the firing range of the blocks in the manufacturing stage, the material may have a crystalline or amorphous structure. This happens due to the transformation of clay minerals, mainly kaolinite, as shown in Figure 6 [87]. This clay mineral can transform into metakaolin, which has an amorphous structure, if firing occurs in the range of 500-750 °C; in this case, the material has pozzolanic potential and can be used as a substitute for OPC or as a precursor in geopolymers [10]. If the firing temperature exceeds 950 °C, the material is transformed into mullite, which has a crystalline structure; in this case, it is possible to use the material as RA [87].

Liu et al. [77] evaluated the use of ceramic waste as filler in the production of concrete. The authors observed that the energy consumption required to grind the material is very high and should be considered a major disadvantage in its use as RA. However, the mechanical properties obtained are very relevant and enable the use of ceramic waste as RA. In addition to the research by Liu et al. [77], other authors also verified the feasibility of applying ceramic waste as RA: Yang et al. [78] evaluated the properties of foam concrete containing ceramic waste as RA, and Aldemir et al. [79] evaluated the shear behavior of concrete beams containing ceramic waste as RA. In both studies, the results obtained were very satisfactory.

PET Waste as RA

PET waste is a material with great potential for use as RA. However, one of the great difficulties is grinding the material, owing to the surface area of the waste; furthermore, the energy used for grinding is higher than for other wastes, which is a major difficulty in using the material [88]. Another problem is the large water absorption promoted by PET waste, which makes it difficult to use the material at high contents. Campanhão et al. [8] and Silva et al. [80] evaluated the use of PET waste as RA replacing NA in cementitious materials.
The authors evaluated substitution levels of up to 30% and observed that the use of the residue reduces the workability and the mechanical strength of the material. However, at contents of up to 10%, they observed an increase in compressive strength due to improved particle packing. Other authors have confirmed this pattern: for example, Perera et al. [81] and Alfahdawi et al. [82] evaluated the use of PET waste at levels of 5 and 2.5%, respectively. This pattern of incorporation at low levels is typical of the material. In view of this, it is observed that the main difficulty in using the material is the high water absorption of the PET waste, which limits the amount of work done with the material.

Other Waste as RA

Other wastes can also be used as RA. An example is the primary sludge waste from the paper industry, an organic material that can have binding potential; as an aggregate, it should be used at low levels due to its high leaching potential [89]. Ornamental stone waste is also worth mentioning. These materials are generally inert and used as a filler fraction. Their main limitation is high water absorption, which can impair the mechanical performance of the material; however, they can promote greater mechanical strength by improving packing [90,91].

Another waste that can be used as RA is aggregate obtained from geopolymers. As with C&DW, aggregates obtained from geopolymers are a viable option for the development of RA replacing NA. This was highlighted in the research by Xu et al. [92], where the authors evaluated the possibility of producing artificial lightweight coarse aggregate from geopolymer concrete. Although the mechanical performance obtained with the geopolymer aggregate was lower than that of natural aggregates, the scarcity of NA makes this RA class a viable alternative. These results are consistent with other studies, such as the works by Xu et al. [93] and Qian et al. [94], in which the authors evaluated the use of geopolymer fine aggregates to produce geopolymer materials. In these studies, the geopolymer fine aggregates showed reactivity with the cement matrix, resulting in strong interfacial bonding, which is a promising result. It is noteworthy, however, that due to the complexity of the interaction between geopolymer fine aggregates and the geopolymer matrix, the resulting ITZ is not clearly understood. This highlights the need for new research in the area to enable the application of geopolymer aggregates as RA.

Conclusions

Construction and demolition activities use large amounts of natural resources and account for a considerable portion of global CO2 emissions. In addition, they generate about 4.5 billion tons of solid waste (i.e., C&DW) every year, of which around 35% is disposed of in landfills. The use of RA from C&DW for concrete production can simultaneously mitigate these environmental problems. This article benefits other researchers in the field of recycled aggregates because it presents solutions to mitigate the main problems related to their use. For example, the current literature shows that RA tends to reduce concrete's fresh and hardened performance compared to NA; however, these performance reductions are often negligible when the replacement levels are kept low, e.g., up to 30%, and durability-related properties are not much affected by RA incorporation. There are some efficient strategies available to mitigate these performance reductions, mainly: (i) reducing the content of old mortar in RA; (ii) treating the RA's surface before its application in concrete; and (iii) compensating for the performance loss through fiber addition. Non-structural concrete is another good candidate for RA application, even at 100% NA replacement. Owing to the feasibility of using RA for sustainable concrete production, efforts must be spent on improving the efficiency of RA processing, thus allowing large-scale production with lower heterogeneity, in addition to international standardization for this type of aggregate.
In addition, this literature review highlighted pertinent information for understanding the behavior of C&DW and the main properties that need to be evaluated in other building-material applications. Using C&DW as RA in asphalt pavement, pavement subbase, and geopolymers requires greater control of durability properties but allows the use of 100% RA as both coarse and fine aggregate. Finally, other wastes that can be used as RA were evaluated, namely glass, PET, slag, ceramic, and artificial aggregates. Some of these materials face obstacles to their use as RA, as is the case for PET and ceramic waste. As a result, further research evaluating the behavior of these materials as recycled aggregates is needed.
Two-Year Health Outcomes in Hospitalized COVID-19 Survivors in China

Key Points
Question: What are the 2-year health outcomes among patients hospitalized for COVID-19 in China?
Findings: In this longitudinal cohort study that included 1864 patients, the most common symptoms at 2 years after SARS-CoV-2 infection were fatigue, chest tightness, anxiety, dyspnea, and myalgia, and most symptoms resolved from 1-year to 2-year follow-up, although the incidence of dyspnea showed no significant change. Patients with severe disease during hospitalization, especially those who required intensive care unit admission, had higher risks of persistent symptoms and higher chronic obstructive pulmonary disease assessment test scores.
Meaning: These findings suggest that prolonged symptoms may persist in a proportion of COVID-19 survivors for 2 years after SARS-CoV-2 infection.

Introduction
By May 6, 2022, the global pandemic of COVID-19 had resulted in more than 500 million confirmed cases and 6.1 million deaths.1 With the emergence of new SARS-CoV-2 variants of higher transmissibility (eg, Omicron and Delta), the number of confirmed cases continues to increase.2,3 Although most SARS-CoV-2-infected patients recover from the acute phase, some may experience long-lasting health problems, including physical, cognitive, and psychological sequelae, affecting their social participation and health-related quality of life.4-6 Systematic follow-up of patients with COVID-19 discharged from the hospital is therefore necessary to identify the trajectory of symptom burden and to understand the long-term health outcomes of this disease. Previous studies7-10 have indicated that a substantial proportion of COVID-19 survivors still experience problems in various health domains after hospital discharge. In a large national cohort7 of people with COVID-19 and contemporary and historical controls, an increased risk of incident mental health disorders (eg, anxiety disorders and depressive disorders) was found in people with COVID-19 compared with those with seasonal influenza. In an exploratory prospective cohort study8 of patients who survived 1 year after intensive care unit (ICU) treatment for COVID-19, physical, mental, or cognitive symptoms were frequently reported. We previously reported9 that in a cohort of 2433 hospitalized COVID-19 survivors, 45.0% of patients reported at least 1 symptom 1 year after hospital discharge, and patients with severe disease had an increased risk of having more symptoms. However, whether COVID-19-related symptoms may persist even longer remains an open question. Most recently, a longitudinal follow-up study10 described the evolution of health and functional outcomes among COVID-19 survivors up to 2 years and found that health-related quality of life, exercise capacity, and mental health continued to improve throughout the 2 years regardless of initial disease severity. In the current study, we investigated the dynamic trajectory of symptom burden and symptom persistence of COVID-19 survivors 2 years after discharge from 2 designated hospitals.

Methods
This study was approved by the Ethics Committee of the Daping Hospital, Army Medical University, since its medical staff worked in the COVID-19-designated Huoshenshan Hospital and Taikang Tongji Hospital during the acute phase of the pandemic in early 2020. Verbal informed consent was obtained from COVID-19 survivors or their legal guardians before the study.
The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline was followed.

Study Design and Patients
This is a longitudinal cohort study involving COVID-19 survivors who were discharged from Huoshenshan Hospital and Taikang Tongji Hospital (both in Wuhan, China) between February 12 and April 10, 2020. All adult patients with laboratory-confirmed COVID-19 were screened for eligibility. The exclusion criteria were (1) declining to participate, (2) being unable to be contacted, and (3) death before follow-up. The 2-year follow-up study was conducted from March 1 to April 6, 2022.

Procedures
All patients were contacted in the order of the discharge date documented in their medical records. At each follow-up, patients underwent a standardized telephone interview and completed a self-reported symptom questionnaire and a chronic obstructive pulmonary disease (COPD) assessment test (CAT), which was initially designed to assess symptom burden in patients with COPD based on a modeling study of the association between CAT scores and the impact of COPD on daily life and well-being.11-13 The questionnaire at 1-year follow-up was based on symptoms that had been reported by patients during hospitalization and was described in an earlier study.9 The questionnaire at 2-year follow-up was based on symptoms that had been reported at 1-year follow-up, as shown in eTable 1 in the Supplement. The symptoms included in the study questionnaire were graded on a 4-point Likert scale (no problems, mild problems, moderate problems, or severe problems). A symptom was considered present if it was rated as a moderate or severe problem. COVID-19 survivors with long COVID-19 symptoms were defined as those having at least 1 persistent or new-onset symptom related to COVID-19 that could not be explained by an alternative disease, consistent with the case definition of post-COVID-19 condition.14 For patients who did not respond to the first telephone call, another 2 attempts were made. On the basis of the dynamic changes in symptom number between years 1 and 2, patients were classified into 4 categories: (1) patients with at least 1 symptom at both follow-up time points were defined as having symptoms persist; (2) patients with at least 1 symptom at 1-year follow-up and none at 2-year follow-up were defined as having symptom relief; (3) patients without any symptom at 1-year follow-up but with at least 1 symptom at 2-year follow-up were defined as having new-onset symptoms, a category that also included patients whose symptom was rated a mild problem at year 1 but a moderate or severe problem at year 2; and (4) patients with no symptoms at either follow-up time point were defined as having no symptoms. A minimal sketch of this classification follows.
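To make the grouping above concrete, here is a minimal sketch, assuming per-patient counts of moderate-or-severe symptoms at each follow-up are available; the function and variable names are hypothetical and not from the study's dataset.

```python
# Hedged sketch of the four follow-up categories. `sym_y1` and `sym_y2` are
# counts of symptoms rated moderate or severe (mild problems count as 0,
# which is how a mild-to-moderate worsening lands in "new-onset symptoms").

def classify(sym_y1: int, sym_y2: int) -> str:
    if sym_y1 >= 1 and sym_y2 >= 1:
        return "symptoms persist"
    if sym_y1 >= 1 and sym_y2 == 0:
        return "symptom relief"
    if sym_y1 == 0 and sym_y2 >= 1:
        return "new-onset symptoms"
    return "no symptoms"

print(classify(2, 1))  # -> symptoms persist
print(classify(0, 1))  # -> new-onset symptoms
```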
Data Acquisition
Collection of clinical data during hospitalization of enrolled patients has been described in our previous study of the 1-year follow-up.9 Briefly, demographic characteristics (self-reported age, sex, and cigarette smoking) and clinical characteristics (comorbidities and symptoms) were retrieved from electronic medical records. The severity of disease was defined by the World Health Organization guideline for COVID-19.9 Patients with severe disease were those with fever or suspected respiratory infection plus 1 of the following conditions: respiratory rate greater than 30 breaths per minute, severe respiratory distress, or oxygen saturation as measured by pulse oximetry of 93% or less on room air. We double-entered and validated all data using EpiData software version 3.1 (EpiData Association).

Statistical Analysis
Continuous variables were presented as median (IQR) and compared with the Mann-Whitney U test; categorical variables were presented as absolute values with percentages and compared with the Pearson χ2 test or the Fisher exact test, as appropriate. To assess the risk of bias due to patients lost to follow-up, the clinical characteristics of the enrolled patients and of those lost to follow-up were compared. As an exploratory analysis, 1:1 propensity score matching (PSM) was further applied between these 2 subpopulations, based on age, sex, disease severity, and coexisting disorders. To identify factors associated with the risk of having at least 2 symptoms at 2-year follow-up, of symptoms persisting or new-onset symptoms during follow-up, and of CAT scores of at least 10, univariable logistic regression was used to identify potential risk factors with P < .10, which were then entered into a stepwise (forward likelihood ratio) multivariable logistic regression model; age, sex, and disease severity were forced into the model because of their importance. All tests were 2-sided, and P < .05 was considered significant. A minimal sketch of this two-stage modeling approach is given below.
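As an illustration of the variable-selection scheme just described, here is a minimal sketch assuming a pandas DataFrame `df` with a binary outcome column and candidate predictor columns; all names are hypothetical, and statsmodels has no built-in forward likelihood-ratio stepwise procedure, so only the univariable screen and the final forced-covariate model are shown.

```python
# Hedged sketch of univariable screening (P < .10) followed by a multivariable
# logistic model with age, sex, and severity forced in. Column names are
# placeholders, not fields from the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariable_screen(df: pd.DataFrame, outcome: str,
                       candidates: list, p_keep: float = 0.10) -> list:
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < p_keep:
            kept.append(var)
    return kept

# selected = univariable_screen(df, "persistent_sx", ["icu", "liver_disease"])
# final = sm.Logit(df["persistent_sx"],
#                  sm.add_constant(df[["age", "sex", "severity"] + selected])
#                  ).fit(disp=0)
# print(np.exp(final.params), np.exp(final.conf_int()))  # ORs with 95% CIs
```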
Characteristics of Long-term Symptoms at 1-Year and 2-Year Follow-up
During hospitalization, 1777 patients (95.3%) had at least 1 COVID-19-related symptom. During follow-up, the proportion of patients with long COVID-19 symptoms constantly decreased. At 1 year after discharge, the most common symptoms among COVID-19 survivors were fatigue, sweating, chest tightness, anxiety, and myalgia, whereas at 2-year follow-up the most common symptoms were fatigue, chest tightness, anxiety, dyspnea, and myalgia. The most common symptoms in the new-onset symptoms group were fatigue, anxiety, chest tightness, cough, and expectoration (Figure 2B). Of note, the proportion of dyspnea was much higher in the symptoms persist group than in the new-onset symptoms group. Factors including ICU admission were independently associated with the risk of symptom persistence (eTable 12 in the Supplement).

CAT Scores at 2-Year Follow-up
Previously, CAT has been used to assess the symptom burden of patients with COVID-19.9,15 At 2 years after hospital discharge, the median (IQR) CAT score was 2 (0-4) in the total cohort, with higher scores in the severe disease group (Figure 3A). A total of 116 patients (6.2%) had CAT total scores of 10 or higher, and the proportion was higher in the severe than in the nonsevere disease group (48 patients [9.5%] vs 68 patients [5.0%]) (Figure 3B). Patients with CAT total scores of 10 or more were older, had a higher proportion of severe disease, more coexisting disorders, longer hospital stays, and greater use of oxygen therapy than patients with lower CAT scores (eTable 13 in the Supplement). The CAT item scores, shown in Figure 3C, suggest that more patients tended to have sleep disorder and a poor energy state than other symptoms. After multivariable adjustment, age (OR, 1.04; 95% CI, 1.03-1.06; P < .001), ICU admission (OR, 2.83; 95% CI, 1.21-6.66; P = .02), and chronic liver disease (OR, 2.18; 95% CI, 1.10-4.33; P = .03) were found to be independently associated with the risk of CAT scores of 10 or higher at 2-year follow-up (Figure 3D and eTable 14 in the Supplement). Higher CAT item scores regarding breathlessness, sleep, and energy were found in those with chronic liver disease (eFigure 2 in the Supplement).

Discussion
In this cohort study, at 2 years after hospital discharge, 370 patients (19.8%) still had symptoms, including 224 (12.0%) with persistent symptoms and 146 (7.8%) reporting new-onset or worsening symptoms from a level reported as mild at year 1. The most common symptoms were fatigue, chest tightness, anxiety, dyspnea, and myalgia. Most symptoms resolved over time, except for dyspnea, which was already at a low level at 1 year. ICU admission was associated with higher risks of symptoms persisting, whereas coexisting cerebrovascular diseases were associated with new-onset symptoms. In total, 116 patients (6.2%) had CAT total scores of at least 10, for whom the factors associated with increased risk included ICU admission during the hospital stay and coexisting chronic liver disease. Taken together, these findings add to our current knowledge of the dynamics of health outcomes after COVID-19. For 2-year survivors of COVID-19, the most common symptom was fatigue, which decreased from 26.9% at 1-year follow-up to 10.3% at 2-year follow-up. This general decreasing trend was confirmed in another study.16 Post-COVID-19 fatigue is similar to the postinfectious fatigue syndromes following other well-documented infectious diseases,17 including SARS-CoV-1 (the cause of severe acute respiratory syndrome)18 and Ebola virus.19 In survivors of such infections, persistent impairment of the 6-minute walk distance was observed and contributed to reduced quality of life. The current study identified chronic liver disease as a factor associated with the risk of symptom persistence, as well as with CAT scores of 10 or higher. Previously, 1 large multicenter study23 identified specific subgroups of patients with chronic liver disease who had higher mortality with COVID-19. In the current study, patients with chronic liver disease had higher item scores regarding breathlessness, sleep, and energy. Coexisting cerebrovascular diseases were associated with an increased risk of new-onset symptoms; it has been reported that coexisting cerebrovascular disease during hospitalization was 1 of the top 3 factors associated with COVID-19 severity24 and was associated with postacute sequelae of COVID-19.25 We also found that patients with coexisting cerebrovascular disease had more coexisting disorders of other organ systems, which raises the possibility that diseases other than COVID-19 may have contributed to the new-onset symptoms, making it difficult to determine whether these symptoms were completely attributable to long COVID-19.

Limitations
There are several limitations to our study. The first is common to most studies of COVID-19: the absence of an age-matched and comorbidity-matched control group. It is, therefore, not possible to directly ascribe the patients' long-term symptoms to the acute illness, particularly for patients who are at an age when comorbidities and their associated symptoms are common and will increase over time.
For example, in a population study in Australia,26 9.5% of the people surveyed had a modified Medical Research Council dyspnea score of 2 or higher. However, the longitudinal nature of our study, showing progressive reduction in symptoms over time following the acute episode, suggests an association between the acute event and the persistent symptoms. In terms of limitations specific to this study, the enrolled patients were fewer than half of the eligible population discharged from the hospital. Patients lost to follow-up were older than those who continued in the study, which is important because age is an effect modifier of post-COVID-19 symptoms, and older patients had more pre-existing disorders, thus introducing a risk of survivor bias. Although we performed a PSM process, this method is limited to the factors that were measured; other important unmeasured factors may have been operating, so residual selection bias may have persisted. Second, we used a self-reported symptom questionnaire rather than specific diagnostic tools, introducing a risk of bias due to patient subjectivity. Moreover, the number of symptoms covered by our questionnaire was small considering that more than 100 potential COVID-19-related symptoms have been reported,27,28 which may introduce bias because patients are less likely to volunteer information not included in the survey questionnaires.29,30 Third, constantly emerging coronavirus variants have become endemic31 and may have virulence and long-term sequelae that differ from our findings.

Conclusions
In this longitudinal cohort study that included 1864 hospitalized COVID-19 survivors, the most common symptoms at 2 years after discharge were fatigue, chest tightness, anxiety, dyspnea, and myalgia. Most symptoms resolved, although dyspnea persisted at a very low level over time. Patients with severe disease during hospitalization, especially those requiring ICU admission, had higher risks of symptom persistence and of CAT total scores of at least 10. These findings provide valuable information about the dynamic trajectory of the long-term health outcomes of COVID-19 survivors.
Social Wellbeing in Cancer Survivorship: A Cross-Sectional Analysis of Self-Reported Relationship Closeness and Ambivalence from a Community Sample

Improvements in early screening and treatment have contributed to the growth of the number of cancer survivors. Understanding and mitigating the adverse psychosocial, functional, and economic outcomes they experience is critical. Social wellbeing refers to the quality of the relationship with partners/spouses, children, or significant others. Close relationships contribute to quality of life and self-management; however, limited literature exists about social wellbeing during survivorship. This study examined positive and negative self-reported changes in a community sample of 505 cancer survivors. Fourteen items assessed changes in communication, closeness with partner/children, stability of the relationship, and caregiving burden. An exploratory factor analysis was conducted using a robust weighted least square procedure. Differences by sociodemographic and clinical characteristics were investigated. Respondents were mostly male, non-Hispanic white, and ≥4 years since diagnosis. Two factors, labeled Relationship Closeness and Ambivalence, emerged from the analysis. Women, younger survivors, individuals from minority groups, and those with lower income experienced greater negative changes in social wellbeing. Variations by treatment status, time since diagnosis, and institution were also reported. This contribution identifies groups of cancer survivors experiencing affected social wellbeing. The results emphasize the need to develop interventions sustaining the quality of interpersonal relationships to promote long-term outcomes.

Introduction
The implementation of early cancer screening and detection, combined with advances in curative treatment options, has contributed to the continued growth of the number of cancer survivors living in the United States [1,2]. To date, estimates from the National Cancer Institute and the American Cancer Society indicate that there are 16.9 million cancer survivors in the country, accounting for 5% of the population [3]. Additionally, the number of cancer survivors is expected to increase to 26.1 million by 2040 [4]. Physical, emotional, and financial consequences are experienced well into survivorship [5-10]. About 60% of survivors report persistent distress and fear of recurrence [5,8], approximately 36.5% remain unable to work [11,12], and between 15% and 75% present cancer-related cognitive impairment [2,13,14]. As a result, it becomes imperative to better understand the experience of this heterogeneous group and to identify strategies and approaches to address their unmet needs and long-term issues, whether due to treatment side effects, disparities, or social determinants of health [7,15-18]. To this end, cancer survivorship research has emerged as a subset of efforts aimed at understanding the psychosocial sequelae associated with cancer treatment and at preventing and mitigating multifaceted adverse outcomes [9,19-22]. An extensive body of evidence has demonstrated the pervasive consequences of the illness for close relationships in terms of mental health, communication, and relationship dissolution [7,23-26]. Elevated rates of psychological distress impair the quality of life of survivors and their partners [27-29]. Relationship satisfaction was reported among couples engaging in mutually constructive communication, expression of feelings, and negotiation [30].
On the contrary, avoidance, holding back, or disengagement have been linked to poorer relationship functioning, coping, and psychological wellbeing [30]. Cancer-related distress and caregiving responsibilities may also negatively alter relationship stability, with greater odds of separation/divorce recorded among female survivors, young adults, and those experiencing greater distress and financial problems [31-33]. Although a recent systematic review documented that cancer is linked to a small decrease in divorce rate [34], Nalbant et al. (2021) found that cancer was the main reported cause of relationship dissolution among partners of cancer survivors [35]. While quality of life is a broad multidimensional concept, often defined as the "individual's perception of their position in life in the context of the culture and value system in which they live and in relation to their goals, expectations, standards, and concerns" [36], social wellbeing refers to the satisfaction the individual has with the quality of their relationships with others [37-39]. Authors who investigated patient-reported outcomes in cancer survivorship found that close relationships are contributing factors to quality of life after diagnosis. Support from partners, family members, and the larger social network protects against physical morbidity and mortality, while also promoting psychological wellbeing and self-management [7,37,40-44]. Yet, the adverse physical and psychosocial consequences of cancer have the potential to impair survivors' social wellbeing [7,23,24,45]. Contributions have documented that cancer survivors tend to show decreased social functioning because of late treatment side effects, impaired physical functioning, mental health symptomatology, perceived stigma, financial hardship, and access to and changes in their social networks [31,46-48]. Social wellbeing in cancer survivorship varies by gender, age, ethnic and cultural aspects, type/stage of cancer, and socioeconomic and personality characteristics [9,26,49-51]. Although most survivors are older than 65 [1], a substantial number of patients are diagnosed with cancer during young adulthood or adulthood, with differential effects on their psychosocial outcomes [6,41,52]. Studies have shown that older age is both an aggravating and a protective factor [40,41,52]. While older patients experience more comorbidities and social isolation, they appear to cope better with the impact of the disease on close relationships, as they are more inclined to preserve or improve existing ones [40,52,53]. Younger survivors, on the contrary, face a premature confrontation with mortality, disruption of educational and professional goals, financial difficulties, and reproductive and sexual health concerns, which lead to difficulties in maintaining or establishing romantic partnerships and intimacy [23,25]. While literature concentrating on cancer survivors from minoritized racial/ethnic groups and their social wellbeing remains scarce, studies have shown that worse outcomes were reported, especially for Hispanic patients [54], and that culturally informed and contextual factors guide family interactions and coping [4,55].
Despite growing attention to social wellbeing after cancer and the development of intervention approaches that capitalize on the relationship with significant others to alleviate the burden of the illness [21,53,56], gaps remain in our understanding of the patterns and quality of close relationships beyond active treatment, as well as in the inclusion of community-based samples able to illustrate the experience of survivors from different backgrounds and races/ethnicities who receive care in diverse oncology settings. The present study aims to examine positive and negative self-reported changes in social wellbeing by sociodemographic and clinical characteristics.

Procedure
This contribution is a secondary data analysis of the Survivorship Survey data collected between July and December 2015 by CancerCare, a leading US nonprofit organization providing professional supportive services, including counseling, support groups, educational workshops, and financial assistance, to cancer survivors and caregivers. Survivors were recruited through online panels; respondents were limited to individuals who were 25 years of age or older and who had received a confirmed diagnosis of cancer from a physician/healthcare professional. Fifty percent of the sample included common cancers (lung, breast, colorectal, and prostate), and research vendors utilized specific criteria and filters so that approximately 25% of respondents were recruited from each region of the nation (Northeast, Midwest, Southeast, and Southwest/West) to increase sample representativeness. Approximately 3000 participants were invited by e-mail to reach the target sample of 500 respondents, and 505 answers were collected for the survivorship questionnaire. To minimize response biases, potential participants were not selected from cancer survivors who had used the services of the organization, its online communities, or its client database. Informed consent was obtained from all individual participants included in the original study. All procedures were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments. The dataset, inclusive of the variables and measures of interest for the present work, was shared with the research team after IRB approval (19 June 2018).

Impact of Cancer on Relationships
A total of 14 items were utilized to assess differences in the social wellbeing of the participants since the cancer diagnosis. The items were initially developed by social work counselors and subsequently reviewed by an advisory board of experts in survey development and patient care. Using dichotomous answer options (yes/no), cancer survivors were asked to indicate whether the illness contributed to positive or negative changes in different aspects of their lives: communication (more meaningful conversation with loved ones, more likely/less likely to share their thoughts and feelings with loved ones), closeness with partner and children (time spent with partner, time spent with children, level of intimacy, sense of isolation, children becoming too attached or withdrawn/angry), stability of the relationship (divorce or relationship dissolution with spouse/partner), and caregiver burden (partner/spouse being exhausted because of extra responsibilities, having trouble being dependent on others).

Demographic and Clinical Information
Demographic characteristics such as age, sex, ethnicity/race, education, annual household income, and healthcare insurance were self-reported.
Clinical factors assessed as part of the survey included cancer type, time since diagnosis, current cancer status, treatment type, and information about the institution where participants received care.

Data Analysis
Descriptive statistics were obtained to summarize the sample's characteristics. Means and standard deviations were calculated for continuous variables, while frequencies and percentages were used for categorical variables. An exploratory factor analysis (EFA) was conducted on the 14 items that asked participants to rate positive and negative changes in their relationships with partners/spouses, family members, and children. The resulting pseudo-factors, obtained by summing the items loading on the two-factor solution, were then compared by sociodemographic and clinical characteristics using chi-square tests for nominal variables and ANOVAs for continuous variables. We also calculated post hoc Tukey's tests for comparisons between individual groups, as well as Cohen's d effect sizes when appropriate. Bonferroni correction was applied to all analyses. Mplus version 7.31 was utilized for data cleaning, management, and analysis [57]. The level of significance was set at p < 0.05.

Sample
The characteristics of the sample are presented in Table 1. A total of 505 participants were included. Half of the respondents identified as male (52.9%) and non-Hispanic white (65.9%), and the largest age group comprised young adult cancer survivors below the age of 44 years (39.2%). Most of the participants were college graduates (60.8%), declared an annual income of over 50,000 USD (68.3%), and had health insurance (97.2%). The most reported cancer types were prostate (13.7%), early-stage breast (13.1%), colorectal (8.9%), and gynecological (7.3%). Participants had received multiple forms of treatment (56.2%) and were not undergoing maintenance therapy when the survey was completed (40.4%). Respondents had mostly been diagnosed more than 4 years earlier (long-term survivorship, 34.9%), with one-fourth of the sample diagnosed within the previous 2 years (short-term survivorship, 25.0%). Most cancer survivors received care at academic cancer centers (29.7%) and community hospitals (30.8%). Examination of the demographic variables by age category revealed that the Black/African American and Hispanic categories tended to be mostly present in the younger age group, with older participants being mostly non-Hispanic white (χ²(6) = 118.98; p < 0.0001). Males were under-represented in the middle-age group, while females were over-represented among adults (χ²(2) = 17.49; p < 0.01). Significant differences were identified by treatment status (χ²(4) = 18.68; p = 0.0009), with multiple treatments more frequently reported by younger patients (χ²(1) = 14.39; p = 0.00015). Additionally, insurance status varied by age group (χ²(2) = 7.65; p = 0.02), and younger survivors were more likely to lack coverage. No significant differences were detected for income (p = 0.7) or education (p = 0.06).

Exploratory Factor Analysis
As an initial step, an exploratory factor analysis (EFA) was performed on the 14 items assessing self-reported changes in close relationships. As the items were dichotomous, a robust weighted least square procedure was used, and the initial factor solution was rotated using the GEOMIN oblique method [57]. Up to four factors were extracted, according to the previously hypothesized domains (communication, closeness with partner/children/family members, stability of the relationship, and caregiver burden); a minimal illustration of this extraction step is sketched below.
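As a rough illustration of this step, the sketch below runs a two-factor EFA with an oblique rotation in Python. This is not the study's pipeline: Mplus's robust weighted least square estimator and GEOMIN rotation have no direct equivalent in the factor_analyzer package, so Pearson correlations and an oblimin rotation stand in, and the randomly generated 0/1 items are placeholders for the survey data.

```python
# Hedged sketch: two-factor EFA on 14 dichotomous items. Random placeholder
# data; oblimin is used as an oblique stand-in for Mplus's GEOMIN rotation.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
items = pd.DataFrame((rng.random((505, 14)) < 0.3).astype(int),
                     columns=[f"item{i + 1}" for i in range(14)])

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(items)
print(np.round(fa.loadings_, 2))                   # 14 x 2 pattern matrix
print((np.abs(fa.loadings_) >= 0.45).sum(axis=0))  # items retained per factor
```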
An overview of the items, the different models, and the two-factor structure loadings is available in Supplementary Materials Tables S1 and S2. Model 1, using a single factor, resulted in a poor fit (χ²(77) = 469.22; p < 0.0001); four items (items 4, 11, 12, and 14) were not significantly related to the single factor. Model 2 utilized a two-factor solution and was a significant improvement over the one-factor model (χ²(13) = 213.53; p < 0.0001). Items 2, 4, 5, 7, 12, and 14 had significant loadings on both factors; however, in all cases, one loading was negative and/or substantially smaller than the other. The three-factor solution (Model 3) showed only a minimal improvement (χ²(52) = 120.47; p < 0.0001), and all but four of the items (items 7, 11, 13, and 14) had significant cross-loadings. Lastly, a four-factor solution (Model 4) was only minimally better than the three-factor option (χ²(41) = 89.78; p < 0.0001), with six items (items 4, 5, 6, 7, 8, and 10) showing significant cross-loadings and similar loadings on two factors. Despite the best model fit, the four-factor solution did not match the previously hypothesized domains. Because of the sensitivity of the chi-square test to the large sample size, it was decided to utilize the two-factor solution, considering both the best conceptual model and the empirical model fit. To provide a simple description that could easily be replicated by other studies, the two factors were created by summing the items with the highest loadings on each factor (≥0.45); they were labeled Relationship Closeness and Relationship Ambivalence. The correlation between the two factors was examined (r = 0.23, p < 0.05), and internal consistency was investigated (Cronbach's alpha for relationship closeness, α = 0.65; Cronbach's alpha for relationship ambivalence, α = 0.57); this scoring step is sketched below. Mean scores (ranging from 0 to 2) were then compared by variables of interest, with the results presented in the following sections.
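For concreteness, a minimal sketch of the pseudo-factor scoring and internal-consistency check follows, reusing the placeholder `items` DataFrame from the previous sketch; the item assignments are hypothetical stand-ins for the items that actually loaded ≥0.45 on each factor.

```python
# Hedged sketch: sum-score pseudo-factors and Cronbach's alpha. The item
# lists below are placeholders, not the study's actual loading pattern.
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]                          # number of items in the scale
    item_var = df.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = df.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_var / total_var)

closeness = items[["item1", "item3", "item6"]]      # hypothetical item lists
ambivalence = items[["item2", "item8", "item9"]]
scores = pd.DataFrame({"closeness": closeness.sum(axis=1),
                       "ambivalence": ambivalence.sum(axis=1)})
print(cronbach_alpha(closeness), cronbach_alpha(ambivalence))
print(scores.corr())                         # inter-factor correlation
```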
Differences in Relationship Closeness and Ambivalence by Sociodemographic Characteristics
Figure 1 illustrates the mean scores for relationship closeness and ambivalence compared by sociodemographic variables. Significant differences were identified between male and female respondents for both closeness (F(1, 503) = 10.04; p = 0.0016; R² = 0.02) and ambivalence (F(1, 503) = 4.04; p = 0.0451; R² = 0.008), with female survivors reporting higher levels of both positive (Cohen's d = 0.28) and negative changes in social wellbeing (Cohen's d = 0.18) than males. Race was also significantly associated with both positive (F(4, 500) = 3.32; p = 0.0106; R² = 0.026) and negative effects (F(4, 500) = 3.54; p = 0.007; R² = 0.027). Hispanic cancer survivors and non-Hispanic whites reported significantly greater closeness than Black/African American participants (Cohen's d = 0.53 and Cohen's d = 0.32, respectively). Relationship ambivalence was more elevated among survivors who identified as Hispanic than among Black/African American (Cohen's d = 0.59) and non-Hispanic white respondents (Cohen's d = 0.58). Although there were no differences in positive changes by age group (F(2, 502) = 0.04; p = 0.9597; R² = 0.0001), variations existed in terms of relationship ambivalence in the illness aftermath (F(2, 502) = 11.01; p < 0.0001; R² = 0.042), suggesting greater vulnerability for survivors diagnosed at a younger age. The oldest age group (≥66 years) had fewer negative changes than either young adult survivors (Cohen's d = 0.53) or the middle-age group (Cohen's d = 0.33), but these two groups did not differ significantly (Cohen's d = 0.20). Lastly, while relationship closeness did not differ by income (F(2, 471) = 0.20; p = 0.821; R² = 0.001), cancer survivors from low socioeconomic backgrounds were significantly more likely to report negative consequences (F(2, 471) = 8.65; p = 0.0002; R² = 0.035). Those with the lowest income (≤49,999 USD) had more elevated ambivalence than the middle (50,000-99,999 USD; Cohen's d = 0.26) and higher income (≥100,000 USD; Cohen's d = 0.51) groups. No significant differences were registered for education level, insurance coverage, or geographical location. Figure 2 illustrates the mean scores for relationship closeness and ambivalence by clinical variables. No differences were detected by cancer type, a result that may be due to the large number of cancer types included. When cancer status was examined, significant differences in positive changes in social wellbeing (F(5, 499) = 2.25; p = 0.0481; R² = 0.018) were identified. Post hoc analysis revealed that individuals who had completed treatment and were not on maintenance therapy presented greater closeness than those who had been diagnosed but were not yet receiving treatment (mean difference = 0.75, p < 0.05). Similar results emerged when ambivalence was investigated: significant variations existed between those on active treatment and cancer survivors who had completed treatment, as well as between survivors on maintenance therapy and those who were not (F(5, 499) = 5.312; p < 0.001; R² = 0.023).

Discussion
The present work extends the current literature on social wellbeing in cancer survivorship by investigating self-reported positive and negative changes in the context of close relationships. Two factors, Relationship Closeness and Relationship Ambivalence, were identified via exploratory factor analysis. Then, differences by sociodemographic and clinical characteristics were examined. The results indicate that women, younger survivors, Black and Hispanic survivors, and those with lower income presented more impaired social wellbeing. Additionally, variations were registered by treatment status, time since diagnosis, and institution. The study confirms the existing literature investigating social outcomes in cancer survivors. Female participants reported both greater negative and greater positive changes in social wellbeing. This finding can be linked to reported sex and gender differences in morbidity and adjustment [10], as well as to emerging applications of theoretical frameworks that help describe gender-related differences [58]. For instance, Social Role Theory can characterize this finding as resulting from perceived role and caregiving responsibilities [58], while transactional approaches may relate it to differential appraisals [59]. Although no differences in closeness were detected by age group, the greater ambivalence among younger survivors confirms the profound psychosocial impact of facing cancer as a young adult. Studies have consistently documented the clinical decrement of social functioning over time, especially for young survivors experiencing greater symptomatology, limited social support [48,60,61], and higher distress in their relationships [23,48,62-65].
Three recent systematic reviews identified that this group continues to experience difficulties establishing and maintaining relationships with peers, family members, and partners [64-66]. In addition to sex and age, members of minoritized groups and socioeconomically vulnerable individuals experienced higher levels of ambivalence. These findings help illustrate the differential impact of the illness on those who experience cancer from a position of vulnerability. Financial hardship [67-72] can affect psychological distress, quality of life, and social relations [73]. Worse outcomes, in the form of lower closeness and higher ambivalence, were reported by Black/African American and Hispanic respondents, respectively. These results reflect the intersection of social determinants of health [17] with culturally informed expectations for family interaction and the provision of support [54,55], which require greater attention in the literature and multilevel interventions [16,56]. The transition to survivorship is confirmed to be a delicate moment for the social wellbeing of the individual, as evidenced by significant variations in relationship ambivalence between those on active treatment and cancer survivors who had completed treatment, as well as between survivors on maintenance therapy and those who were not. Previous evidence has revealed a tendency of cancer survivors and their partners to withdraw from each other in the period immediately following the end of active treatment [24]. Differences in negative consequences by treatment modality, status, and time since diagnosis can help identify moments of potential susceptibility for the social wellbeing of survivors; in this sample, active treatment, maintenance therapy, and the early survivorship phase were characterized by greater ambivalence. This result can assist future efforts to identify and intervene on the psychosocial resources that contribute to the wellbeing of both patients and caregivers [21,22,56]. As greater survival rates have been reported for those treated at NCI-designated comprehensive cancer centers [74,75], it was unexpected to register fewer negative outcomes for those who received care at community hospitals. While this finding may be due to this sample's characteristics, Zebrack et al. [76] found that providers at community cancer programs presented greater institutional capacity for continuity in the delivery of psychosocial care over time. Future contributions investigating the implementation and outcome evaluation of comprehensive psychosocial support services across cancer care settings are needed. The cross-sectional design, the utilization of self-reported dichotomous items, and the lack of a comparison group of healthy peers are important limitations of the present work. Positive and negative variations in social wellbeing were evaluated using a list of dichotomous items created by providers for the purposes of the survey. Hence, it was not possible to discriminate among the different domains affected by the illness, nor to elaborate on the amount of change participants experienced since diagnosis. The inclusion of standardized and validated questionnaires is, therefore, recommended for future studies. Furthermore, the SEM model fit indices were acceptable but lower than ideal, suggesting that future research should consider alternative models when additional measures are available to describe social wellbeing in the cancer aftermath.
The lack of a comparison group of healthy peers prevented the authors from inferring whether the observed changes occurred due to the illness, the aging process, or other sample characteristics. Furthermore, the survey was cross-sectional, and it was not possible to elaborate on causation or on the trajectories of positive and negative changes over time and at critical turning points of the cancer continuum. Similarly, the association with mental health data should be further investigated to clarify whether variations in social wellbeing were linked to affected mood or distress. While the inclusion of a large, national, and diverse sample is a strength of the present analysis, recruitment via online panels led to the overrepresentation and underrepresentation of certain groups of survivors, which may have influenced some of the current results.

Conclusions
The present study revealed that there are groups of cancer survivors experiencing more affected social wellbeing: women, young adults, individuals from minoritized groups, and those with lower financial resources. Furthermore, variations by treatment status, time since diagnosis, and institution suggest that social wellbeing may be influenced by the interaction with the healthcare system. Specifically, our findings indicate that there may be settings not fully equipped to provide models of care encompassing the psychosocial needs of patients, which can ultimately affect their social relationships. This work also has implications for oncology social workers and healthcare teams involved in direct care delivery. The results emphasize the need to enhance providers' capacity for addressing psychosocial issues related to the relationship with partners, family members, and the larger social network. At the same time, this contribution unveiled the necessity of developing interventions able to sustain the quality of survivors' interpersonal relationships and overall social wellbeing, with a particular emphasis on the experience of certain groups and on the differential burden that accompanies active treatment and early versus long-term survivorship. Future research should expand, both qualitatively and quantitatively, the current understanding of the experience of survivors reporting more affected social wellbeing and investigate the development and implementation of supportive care services alleviating the stressors that impair the quality of close relationships.

Informed Consent Statement: This was a secondary data analysis. Informed consent was obtained from all subjects involved in the original study.
Data Availability Statement: Primary data for this secondary analysis article were collected by CancerCare as part of the Patient Access and Engagement Report. The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
Climate variation during the Holocene influenced the skeletal properties of Chamelea gallina shells in the North Adriatic Sea (Italy)

Understanding how marine taxa will respond to near-future climate changes is one of the main challenges for the management of coastal ecosystem services. Ecological studies that investigate relationships between the environment and the shell properties of commercially important marine species are commonly restricted to latitudinal gradients or small-scale laboratory experiments. This paper aimed to explore the variations in shell features and growth of the edible bivalve Chamelea gallina from the Holocene sedimentary succession to the present-day thanatocoenosis of the Po Plain-Adriatic Sea system (Italy). Comparing the Holocene sub-fossil record to modern thanatocoenoses provided insight into the dynamics of shell variation on a millennial time scale. Five shoreface-related assemblages rich in C. gallina were considered: two from the Middle Holocene, when regional sea surface temperatures were higher than today, representing a possible analogue for near-future global warming, one from the Late Holocene, and two from the present day. We investigated shell biometry and skeletal properties in relation to the valve length of C. gallina. Juveniles were found to be more porous than adults in all horizons. This suggests that C. gallina promotes accelerated shell accretion, with higher porosity and lower density, at the expense of mechanically fragile shells. A positive correlation of sea surface temperature with both micro-density and bulk density was found, with modern specimens being less dense, likely due to the lower aragonite saturation state at lower temperature, which could ultimately increase the energetic costs of shell formation. Since no variation was observed in shell CaCO3 polymorphism (100% aragonite) or in compositional parameters among the analyzed horizons, the observed dynamics in skeletal parameters are likely not driven by diagenetic recrystallization of the shell mineral phase. This study contributes to understanding the response of C. gallina to climate-driven environmental shifts and offers insights for assessing anthropogenic impacts on this economically relevant species.

Introduction
Evaluating how marine ecosystems could respond to near-future global warming is critical to designing proper conservation and management strategies, especially in coastal areas with increasing urbanization and resource overexploitation. In the marine realm, calcifying macroinvertebrates such as corals, brachiopods, and mollusks produce hard structures for support and protection that constitute high-resolution archives recording the environmental conditions that prevailed during their life [1,2]. Through the control exerted by intraskeletal macromolecules, mollusks can exert imprints on calcium carbonate biomineralization [3], influencing the polymorphism, morphology, and chemistry of the shell in response to environmental changes [4-6]. These biogenic structures can be useful tools for reconstructing the historical effects of climate change on marine organisms, thus allowing a better understanding of near-future dynamics. Quantifying the effect of near-future climate change on marine calcifying organisms requires long-term, multi-generational studies to assess their adaptability to changing environmental conditions [7]. Nevertheless, such studies are difficult to carry out under laboratory conditions.
Natural latitudinal gradients can represent an alternative to laboratory experiments, as they allow the effects of different environmental conditions, such as temperature variations, to be evaluated along large-scale spatial gradients [7,8]. A complementary approach is to investigate the recent fossil record. This line of research gives access to an archive of ecological responses to past climate transitions that could elucidate near-future scenarios of marine ecosystems under global warming [9,10]. During the Holocene, some time intervals were warmer than the present. Of these warmer periods, the longest lasted from about 9,000 to about 5,000 years before present (BP) (i.e., the Holocene climate optimum, HCO), with significantly higher temperatures than today at high latitudes (up to 4 °C [11]). Holocene sedimentary successions are characterized by well-preserved remains of mollusk taxa with well-known ecological requirements; they therefore preserve a centennial record of the environmental and biological dynamics that led to present-day ecosystems. In this context, the recent sedimentary succession of the Po Plain-Adriatic Sea system (Italy) has been extensively investigated in recent decades and offers a high-resolution stratigraphic framework (for details see S1 Text "Geological setting" in S1 File; [12-21]). Hence, biomineralization dynamics in relation to millennial-scale climate change can be investigated here within a well-resolved climatic and stratigraphic framework. Among the economically relevant mollusks of the Adriatic Sea, the infaunal bivalve Chamelea gallina appears particularly sensitive to environmental changes, showing variations in shell morphology in response to environmental change [22-25]. Previous studies have mainly focused on the population dynamics, shell growth, and composition of this species in the present-day Mediterranean and along latitudinal gradients [24,25]. In contrast, there is no information about shell variations in relation to climate-driven environmental change along temporal gradients. This study aimed to investigate the variations in the skeletal features of C. gallina assemblages during the last 8000 years from shoreface deposits and active shoreface settings of the Po-Adriatic system (Italy). This allowed us to assess the phenotypic variation that occurred over time under different environmental conditions and to determine how anthropogenic warming could affect this economically important bivalve species in the future. The biometry, composition, and crystal structure of C. gallina shells were investigated in five shoreface-related horizons: two from the Middle Holocene, one from the Late Holocene, and two from modern thanatocoenoses. Since diagenetic processes can occur over time, analyses of the taphonomic degradation status of the sub-fossil shells were carried out before comparing the results with the modern thanatocoenoses.

Specimen collection
This study was conducted on remains of C. gallina from areas that are neither privately owned nor protected. The species collected in this study is not protected or endangered. No specific permits were required to collect shell material for scientific research from sediment cores or targeted areas. Sub-fossil specimens (Holocene in age) of C. gallina were sampled from sediment cores of the Po Coastal Plain (Italy) drilled as part of a multidisciplinary project [26,27]. Two horizons were collected from core 205-S6 (Comacchio, 44.68°N, 12.15°E), coded "CO1" and "CO2".
The third sub-fossil horizon (code "CE") was collected from core 240-S8 (Cervia, 44.16°N, 12.20°E) (Fig 1). All investigated horizons came from shoreface depositional environments characterized by sandy substrates and an estimated water depth between ~5 and 10 m. Paleoenvironmental, paleobathymetric, and paleogeographic reconstructions of the Po-Adriatic system during the Holocene are detailed in previous studies [19,27-29] (for details see S1 Text "Geological setting" in S1 File). Modern samples of C. gallina were collected in the Northern Adriatic Sea off the coasts of Goro (MGO; 44.75°N, 12.43°E) and Cervia (MCE; 44.30°N, 12.40°E) (Fig 1). Sampling was performed by means of a Van Veen grab and scuba diving on the sandy bottom at 5.2 m and 5.3 m water depth. The sampling areas are about fifty kilometers apart and correspond roughly to the extraction areas of the cores used in this study. Sampling operations were restricted to the topmost 10 cm of the taphonomically active zone (TAZ) of the sea bottom. This allowed the collection of a time-averaged record of shells spanning an estimated tens of years, following the deposition rates reported in Trincardi et al. [30], and enabled a better comparison with the cored sub-fossil horizons, in which the sampled shells also came from a time span of tens of years. No living organism was collected for this study. The specimens are housed at the Department of Biological, Geological and Environmental Sciences (Bologna, Italy), repository numbers MGGC 26131 and MGGC 26138 (for samples from cores CE and CO, respectively) and MGGC 26350 (for samples from the modern thanatocoenoses). Shell data used in this study have been archived as a PLoS One online-access appendix (S1 Appendix). Only valves of 5-30 mm length (maximum distance along the anterior-posterior axis) were considered for the analyses. The lower limit was set by the technical difficulty of obtaining reliable measurements on very small specimens. The upper limit was due to the difficulty of collecting whole shells over 30 mm both in the sub-fossil horizons, given the 90-mm core diameter, and in thanatocoenoses located in C. gallina harvesting areas. Prior to any measurements, each valve was cleaned with a toothbrush and soaked in distilled water for two hours to remove any external residue from the shell surfaces. In addition, valves from modern Adriatic settings were immersed in a solution of distilled water and hydrogen peroxide (5 vol.%) for 24 h to eliminate any traces of organic material on the surface (e.g., epibionts). The valves were then dried in an oven at 37 °C overnight to remove any moisture that might influence subsequent measurements.

Radiocarbon measurements
Dating was performed by exploiting the high-resolution stratigraphic framework developed for the Holocene succession of the Po coastal plain, which allowed this ca. 30-m-thick sedimentary package to be subdivided into millennial-scale stratigraphic units (parasequences in [13]). Subsequently, radiocarbon dating was performed on five randomly selected valves to constrain the time span of the examined sample. Radiocarbon data were calibrated with OxCal 4.2 [32], using the IntCal13 calibration curve [33] and Delta R (ΔR) = 139.0 ± 28.0 obtained from the CHRONO Marine Reservoir Database, Map No 235 (North Adriatic, Rimini, Italy).
Environmental parameters
Sea surface temperatures (SST) for the Adriatic Sea in proximity to the targeted shoreface settings were obtained from the global ocean OSTIA sea surface temperature and sea ice analysis databank [34]. Mean annual SST was calculated from daily values measured from January 2010 to December 2019 (number of daily values = 3651 for each site). For the sub-fossil C. gallina horizons, SST estimates were based on the alkenone unsaturation index, a widely applied proxy for past SST. Alkenones are long-chain methyl ketones synthesized by some single-celled algae found in marine sediments, whose carbon-bond unsaturation index varies according to mean annual SST [35]. Jalali et al. [36] produced a high-resolution SST record of the past 10,000 years based on alkenone paleothermometry for the central-northern Mediterranean Sea (Gulf of Lion). This site is at the same latitude as the North Adriatic Sea and shows a comparable physiographic setting; the estimated paleo-SST for the Gulf of Lion can therefore be considered a reliable proxy for the study area as well. The mean SST estimated for the Gulf of Lion (off Leucate) from January 2010 to December 2019 is 16.7 ± 0.1 °C.

Shell parameters
Shell length (maximum distance along the anterior-posterior axis) and height (maximum distance along the dorsal-ventral axis) were measured using ImageJ software after capturing each shell's shape with a scanner (Acer Acerscan Prisa 620 ST, 600 dpi). Shell width (maximum distance along the lateral axis of the valve) was measured with a caliper (± 0.05 mm). Skeletal parameters were measured by buoyant weight (BW) analysis, using a density determination kit on an Ohaus Explorer Pro balance (± 0.1 mg; Ohaus Corp., Pine Brook, NJ, USA; see Gizzi et al. [24] for details). Each BW measurement was repeated three times and the average was used for statistical analysis. The BW technique allowed estimation of the variables of interest: (i) micro-density or matrix density (mass per unit volume of the material composing the shell, excluding the volume of pores; g·cm⁻³); (ii) apparent porosity (the volume of pores connected to the external surface; %); and (iii) bulk density (the density of the valve, including the volume of pores). The generic relations behind these three quantities are sketched below. Correlation analyses between SST and skeletal parameters were performed to investigate any significant pattern developed over geological time as a function of temperature. Differences in the skeletal properties of C. gallina shells were also investigated in relation to sexual maturity (reached in modern specimens after 1 year of life [23], at lengths >18 mm) in order to consider possible differences in the biomineralization process during different stages of the bivalve's life cycle.
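For orientation, here is a minimal sketch of the standard Archimedes (buoyant-weight) relations that yield these three parameters; the study cites Gizzi et al. [24] for its exact protocol, so the formulation and the input masses below are generic, hypothetical values rather than the authors' measurements.

```python
# Hedged sketch: skeletal parameters from dry, buoyant (submerged), and
# water-saturated masses (g), assuming a water density of 1.0 g/cm3.
# Illustrative numbers only; the study's exact protocol is in Gizzi et al. [24].

def skeletal_parameters(dry_g, buoyant_g, saturated_g, rho_water=1.0):
    matrix_vol = (dry_g - buoyant_g) / rho_water      # solid phase only
    bulk_vol = (saturated_g - buoyant_g) / rho_water  # solid + open pores
    pore_vol = (saturated_g - dry_g) / rho_water      # open pores only
    return {
        "micro_density_g_cm3": dry_g / matrix_vol,    # pores excluded
        "bulk_density_g_cm3": dry_g / bulk_vol,       # pores included
        "apparent_porosity_pct": 100.0 * pore_vol / bulk_vol,
    }

print(skeletal_parameters(1.20, 0.75, 1.25))
# -> micro-density 2.67, bulk density 2.40, apparent porosity 10.0
```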
Shell phase composition and microstructure

Nano-scale and micro-scale analyses of skeletal features were used to determine the mineral phase and any recrystallization or alteration of the samples. Prior to the analyses, samples were soaked in an ethanol solution (10 vol.%) and immersed in a bath sonicator (Falc Instruments S.r.l., UTA 18) for one minute. Subsequently, the valves were treated with a sodium hypochlorite solution (5 wt.%) for one hour, rinsed with distilled water and dried in a desiccator. About one-half of each shell was finely ground in a mortar to obtain a homogeneous powder. X-ray powder diffraction (XRD) analyses were performed on six specimens for each horizon, by preparing a thin compact layer of the sample in a zero-background silica holder. Diffractograms for each sample were collected using an X'celerator detector fitted on a PANalytical X'Pert Pro diffractometer, using Cu-Kα radiation generated at 40 kV and 40 mA. The data were collected within the 2θ range from 20˚ to 60˚ with a step size (Δ2θ) of 0.016˚ and a counting time of 60 s. Fixed anti-scatter and divergence slits of 1/2˚ were used with a 10 mm beam mask. All measurements were carried out in continuous mode. The XRD patterns were analyzed using the X'Pert HighScore Plus software (PANalytical). High-resolution synchrotron X-ray powder diffraction (HR-XRPD) measurements were performed on three valves of the oldest horizon (CO1) and three of the modern thanatocoenosis (MCE). The analysis was carried out on the ID22 beamline at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, using a monochromatic radiation of 0.49599 Å. Each sample was transferred to a 0.9 mm glass capillary and measured three times at a fast rate (10 deg·min−1) at three different locations, while being rotated. This setup makes it possible to avoid beam damage and texture influences. Measurements were performed at room temperature and after ex-situ heating at 300˚C for 2 h, in order to examine the possible influence of the intracrystalline organics on the shell's unit cell. The unit cell parameters were extracted using the Rietveld refinement method applied to the full diffraction pattern profile. Coherence length (nm) along specific crystallographic directions was derived by applying line profile analysis to a specific diffraction peak. This was performed by fitting the diffraction peak profile to a Voigt function and deconvolving the diffraction peak broadening into its Lorentzian and Gaussian widths. Fourier-transform infrared spectroscopy (FTIR) analyses were performed on twelve valves per site using a Nicolet IS10 Spectrometer (Thermo Electron Corporation) working in the 4,000-400 cm−1 wavenumber range at a resolution of 2 cm−1. The samples were analyzed as KBr pellets using a sample concentration of about 1 wt.%. Thermogravimetric analysis (TGA) was used to estimate the organic matrix (OM) and the structurally associated intra-skeletal water content of each shell. The measurements were performed using an SDT Q900 instrument (TA Instruments). Five different valves were analyzed for each horizon, by measuring 10-15 mg of sample in a ceramic crucible. The analysis was carried out under nitrogen flow with a pre-equilibration at 30˚C, followed by a heating ramp from 30˚C to 850˚C at a 10˚C·min−1 heating rate. Inductively coupled plasma optical emission spectroscopy (ICP-OES) measurements to evaluate the metal content of the shells were performed on valves treated with sodium hypochlorite (5 wt.%) for 24 h, then rinsed with distilled water and dried in a desiccator. About 1 g of shell was dissolved in 3 mL of HCl and HNO3 in a 1:3 volume ratio, and the volume was adjusted to 5 mL with milliQ water. Trace-analysis-grade solvents and reagents were used. Three samples were measured for each level. Each sample was measured three times, 12 s each with 50 s of pre-running, using a Spectro Arcos (Ametek) ICP-OES with an axial torch and a high-salinity kit.

Statistical analyses

Levene's test was used to assess homogeneity of variance, and the Kolmogorov-Smirnov test to assess normality of the environmental and shell parameters. Since the assumptions for parametric statistics were not fulfilled, the non-parametric Kruskal-Wallis equality-of-populations rank test was used. Spearman's rank correlation coefficient was used to evaluate trends between shell parameters and sea surface temperature. In each horizon, rank correlations were computed on all valves and also on two subgroups consisting of immature specimens (valve length <18 mm) and mature ones (>18 mm [23]). All statistical analyses were computed using the RStudio software [37].
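The analyses were run in R via RStudio [37]; for readers working in Python, an equivalent minimal sketch with SciPy is given below. The per-horizon measurements and the intermediate SST values are fabricated placeholders, not the study's data.

```python
# Minimal sketch of the non-parametric workflow described above, using SciPy
# instead of R. `horizons` maps each stratigraphic level to hypothetical
# per-valve measurements (e.g., micro-density in g/cm^3).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
horizons = {name: rng.normal(loc, 0.02, 30)
            for name, loc in [("CO1", 2.85), ("CO2", 2.84),
                              ("CE", 2.83), ("MCE", 2.81), ("MGO", 2.81)]}

# Assumption checks: homogeneity of variance and normality.
print(stats.levene(*horizons.values()))
for name, x in horizons.items():
    z = (x - x.mean()) / x.std(ddof=1)   # standardize before the KS test
    print(name, stats.kstest(z, "norm"))

# If the assumptions fail, compare horizons with Kruskal-Wallis...
print(stats.kruskal(*horizons.values()))

# ...and test the monotonic trend against SST with Spearman's rank correlation.
sst = np.array([18.6, 18.2, 17.8, 17.3, 17.2])   # illustrative values only
density = np.array([x.mean() for x in horizons.values()])
print(stats.spearmanr(sst, density))
```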
Results

Dating and environmental parameters

Radiocarbon measurements ascribed two of the sub-fossil horizons to the Middle Holocene (CO1 and CO2) and one to the Late Holocene (CE), as reported in Table 1. Consistent with the data reported for the Gulf of Lion, estimated and measured SST cooled gradually from the oldest horizon (CO1, 18.6˚C) to the present-day settings (MCE, 17.3˚C and MGO, 17.2˚C) (Kruskal-Wallis test, df = 4, p < 0.001; Table 1). The reconstructed Holocene SST trend showed a difference of ~1.5˚C between the Middle Holocene and the present day, a difference comparable with the current SST variation along the latitudinal gradient of the Adriatic Sea (Gizzi et al. [24]). All the measured shell parameters (i.e., length, height, width and mass; Table 1) were homogeneous among horizons (Kruskal-Wallis test, p > 0.05, Table 1). In all investigated C. gallina assemblages, length correlated positively with height, width, and mass (S1 Fig in S1 File). Shell length correlated with the skeletal parameters (i.e., bulk density, micro-density and apparent porosity), except for apparent porosity versus length in levels CO2 and MCE (S1 Fig in S1 File). Skeletal parameters differed significantly among stratigraphic horizons, both in the whole dataset and in the subgroups (i.e., mature and immature shells) (Tables 1 and 2). In both cases, micro- and bulk density were positively correlated with SST, while apparent porosity correlated negatively with SST (Fig 2). The only exception was the subgroup of mature shells, which showed no significant correlation between apparent porosity and SST (Fig 2C).

Shell phase composition and microstructure

The conventional XRD and FTIR analyses (Fig 3) of the shells from all levels showed only aragonite signals; no other mineral phase was detected. The HR-XRPD data (S2 Fig in S1 File), moreover, allowed the unit cell parameters, microstrain fluctuations and crystallite size to be deduced precisely. The heat treatment removed possible effects of the OM on the unit cell of the shells. The obtained data revealed that the intracrystalline OM induced an elongation of both the a- and c-axes and a contraction of the b-axis (Fig 4A). Values of the calculated lattice distortions varied from 0.15% to 0.20%, with the highest strain observed in the modern sample (MCE). The line profile analysis yielded crystallite sizes along the <111> and <021> aragonite directions of 0.221 and 0.183 μm for the MCE sample, and 0.275 and 0.231 μm for the CO1 samples, respectively. After the thermal treatment, the crystallite sizes were 0.158 and 0.139 μm for MCE, and 0.179 and 0.171 μm for CO1, respectively (Fig 4B and 4C).
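As a concrete illustration of the line-profile analysis described in the Methods, the sketch below fits a synthetic diffraction peak with a Voigt function and converts the Lorentzian component of the broadening into a coherence length via the Scherrer relation. The synthetic peak, the Scherrer constant (K = 0.9), and all numbers are illustrative assumptions rather than the authors' pipeline.

```python
# Hedged sketch: Voigt fit of one diffraction peak, then a Scherrer-type
# coherence length from the Lorentzian (size-related) part of the broadening.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(two_theta, area, center, sigma, gamma, baseline):
    return area * voigt_profile(two_theta - center, sigma, gamma) + baseline

# Synthetic peak near 2-theta = 15.9 deg for a 0.49599 A monochromatic beam.
two_theta = np.linspace(15.5, 16.3, 400)
counts = voigt(two_theta, 12.0, 15.9, 0.010, 0.015, 50.0)
counts += np.random.default_rng(1).normal(0, 0.3, two_theta.size)

popt, _ = curve_fit(voigt, two_theta, counts,
                    p0=(10.0, 15.9, 0.01, 0.01, 50.0))
gamma = popt[3]                     # Lorentzian half width at half maximum (deg)
beta = np.deg2rad(2.0 * gamma)      # Lorentzian FWHM in radians
wavelength_nm = 0.049599            # 0.49599 A, as used at ID22
theta = np.deg2rad(popt[1] / 2.0)
coherence_length_nm = 0.9 * wavelength_nm / (beta * np.cos(theta))
print(f"coherence length ~ {coherence_length_nm:.0f} nm")
```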
The skeletal weight loss measured by TGA before aragonite decomposition, in the temperature range between ~150 and 450˚C, differed among horizons (p < 0.05), but no correlation with SST was found (S1 Table in S1 File). The weight percentage of the OM and the associated intra-skeletal water was slightly lower in the sub-fossil horizons, where the values varied from 1.37 to 1.72%, than in the two modern horizons (1.79 and 1.83%) (S1 Table in S1 File). The metal content analysis (ICP-OES) showed no differences between the sub-fossil and modern horizons in the content of magnesium and strontium, two elements that may vary in response to diagenetic alteration over time [38] (S1 Table in S1 File).

Discussion

This study investigated the phenotypic variation of C. gallina in relation to the SST trend in the Po Plain-Adriatic Sea system during the last ~8,000 years. By comparing the Holocene sub-fossil record to modern-day thanatocoenoses, it was possible to gain insight into skeletal dynamics on a millennial temporal scale. This allowed us to overcome the time limits imposed by laboratory studies and to assess how rising SST and environmentally driven changes could affect this economically relevant bivalve in the future. The high paleontological and stratigraphic resolution of the investigated succession offered an ideal venue to explore this scenario. On a geologic time scale, the taphonomic status of a skeletal remain can be an indicator of its relative age [39], a concept that goes under the name of the taphonomic clock [40]. Although intriguing, the taphonomic clock shows variable reliability, as it is not only a function of time since death but mainly depends on the time spent by the skeletal remain in the taphonomically active zone (TAZ) of the sediment layer, where it is exposed to physical and biological degradation processes [41]. In Holocene sub-fossils, the most distinctive external features related to taphonomic alteration as a function of time would be expressed as lack of color, a chalky surface or loss of glossiness. Beyond aesthetic damage, the deterioration of fossil remains also affects the preservation of the mineral phase that constitutes the shell. Indeed, in fossil records of biogenic calcium carbonate biominerals, partial or complete recrystallization might occur over time. This process might lead to recrystallization of aragonite into a more stable polymorph, such as calcite, or into different minerals, such as calcium phosphate [42]. Overall, a sustained variation in mineral composition could deeply alter the original organization of the biomineral phase and ultimately be responsible for spurious trends. In this study, if external and internal taphonomic degradation had occurred, older shells would have a chalky surface and would show a reduction in shell micro-density, since calcite has a lower density than aragonite (2.71 mg·mm−3 vs 2.94 mg·mm−3 [43]). On the contrary, our data report higher shell micro-density values in the most ancient horizons than in the modern ones. Moreover, the application of both HR-XRPD and FTIR detected no mineral phase other than aragonite (Fig 3), the original mineralogy of the C. gallina shell. Thus, we can assume that no recrystallization process occurred in the geological time period examined. The analysis of the HR-XRPD data also allowed the quantification of the strain within the crystals due to the presence of the intracrystalline OM. The obtained data on the lattice distortions and micro-structural parameters (microstrain fluctuations and crystallite size) were in line with those reported in the literature for biogenic aragonite from other organisms [44,45].
The lattice strain was lower in the sub-fossil samples (CO1) than in the modern ones (MCE; Fig 4), indicating a partial degradation of the organic matrix in the sub-fossil samples. The crystallite sizes after the thermal treatment, which is known to remove the OM [44,45], were lower than those of the non-treated samples and were quite similar for the MCE and CO1 samples. The latter finding further confirms the presence of OM and supports its role in determining the lattice strain (Fig 4).

[Fig 3. Fourier-transform infrared (FTIR) spectra (left) and X-ray powder diffraction (XRD) patterns (right) from ground valves of C. gallina. A representative diffraction pattern and FTIR spectrum is shown for each level, from the oldest samples (top) to the most recent ones (bottom). https://doi.org/10.1371/journal.pone.0247590.g003]

Thus, we can safely state that the OM, even if partially degraded, was still present in sample CO1, excluding a relevant recrystallization of the aragonite crystallites, which would imply a loss of the strain. Moreover, we may speculate that degradation processes occurred mainly in the inter-crystallite fraction of the organic matrix rather than in the intra-crystallite one. That no recrystallization occurred in the sub-fossil shells was also confirmed by the measured metal contents, which were constant among all samples, excluding substantial diffusion of environmental fluids into the biomineral. This result agrees with previous studies reporting that, in shallow marine settings, certain parameters, such as high sedimentation rates, can rapidly sequester skeletal remains from the TAZ, enhancing their preservation [20]. In conclusion, for the purpose of our study, this evidence allowed us to rule out a possible influence of taphonomic alteration of the mineral phase on the observed trends of the skeletal parameters. C. gallina skeletal parameters differed between mature and immature clams (Fig 2 and S1 Fig in S1 File) in their biomineralization patterns. Higher apparent porosity was observed in both sub-fossil and modern horizons for shells of small size, decreasing from more than 20% to less than 15% approaching the length at sexual maturity (about 18 mm [23]). High porosity influenced bulk density, which was correspondingly lower in small shells. Micro-density followed the same pattern as bulk density. This trend agrees with a previous study carried out on living populations of C. gallina from the Adriatic Sea [25], suggesting that Middle Holocene specimens of C. gallina in different climate-environmental contexts (Fig 5 and S1 Fig in S1 File) exerted a similar physiological control on the biomineralization of calcium carbonate during their lifespan. In agreement with Mancuso et al. [25], mature specimens of C. gallina seemed to change their biomineralization behavior, showing small variations in apparent porosity and bulk density compared to immature ones. This suggests that C. gallina promotes accelerated shell accretion, in order to quickly reach the size required for sexual maturity, at the expense of a less dense, more porous and mechanically weaker shell. Apparent porosity showed no correlation with SST in mature shells and a significant negative correlation in immature shells (Fig 2). Bulk and micro-density increased with increasing SST for both mature and immature shells.
The significant correlation of shell density with SST can be attributed to different mineralization rates driven by temperature and related to the aragonite saturation state. Warmer water masses reduce the thermodynamic work required by organisms to deposit calcium carbonate [46,47], making calcification less expensive in terms of metabolic cost [48] and thereby enabling an increase in calcification rates [49]. Comparable patterns have also been detected in brachiopods, where some species living in cold water showed reduced calcium carbonate deposition and an increased organic matrix content compared to those in more temperate settings, which are characterized by larger crystals and reduced organic matrix (hence denser shells [50]). Previous studies on C. gallina shells were conducted along a latitudinal gradient in the Adriatic Sea, including the area considered in this study [24,25]. Mature shells of C. gallina of commercial size (over 25 mm long) were thinner, more porous and less resistant to fracture in warmer and more irradiated populations [24,25]. Immature shells (less than 18 mm long), on the other hand, showed the opposite trend, being more porous and less dense at lower SST [25] (S3-S5 Figs in S1 File). According to these results, local environmental parameters seem to influence the biomineralization rate of mature shells differently from that of immature ones, likely due to different growth and metabolic rates [25,51].

[Fig 5 (caption, partial): ... The dotted line represents the modern coastline. n = number of valves; ρ = Spearman's correlation coefficient; * p < 0.05; ** p < 0.01; *** p < 0.001. Geomorphologic map after Amorosi et al. [13,14], slightly modified and for illustrative purposes only. https://doi.org/10.1371/journal.pone.0247590.g005]

This might suggest that while immature clams have an energy surplus that allows them to better withstand environmental stress, mature clams are more dependent on their reserves [52]. Studies on mollusks and other macroinvertebrates highlight that calcification increases with the aragonite saturation state [48,49]. The trends depicted here conform to those patterns of biomineralization in warmer settings. However, no aragonite saturation data were available along the considered temporal gradient to investigate relationships between shell calcification and seawater chemistry in the studied area. Although all samples were collected in the same area, a strong geomorphological change took place during the Holocene in response to glacio-eustatic sea-level variations [14]. The highest values of micro-density and bulk density of C. gallina shells were recorded during the Holocene climate optimum (HCO, 9-5 ky BP), when SSTs in the study area were higher than today. The North Western Adriatic coastal area was then characterized by estuary systems, bounded seaward by a series of sandbars that isolated coastal lagoons and limited riverine plumes into the Adriatic (Fig 5) [14]. Mancuso et al. [25] reported that C. gallina populations can be negatively impacted by riverine influence (i.e., reduction of net calcification and linear extension rates). The positive correlation between temperature and shell density of the C. gallina specimens found in the current study could thus have been facilitated not only by a higher aragonite saturation state under past warmer conditions, but also by a more stable shoreface depositional setting due to the reduced influence of riverine plumes.
Indeed, in estuarine systems the mixing between freshwater and marine water occurs in the back-barrier settings and not in the shallow marine zone where C. gallina thrives. By contrast, during the last part of the HCO the weight of eustasy on the coastal dynamics of the study area largely vanished, and the area transitioned (between 7.0 and 2.0 ky BP) to a wave-dominated and, after 2.0 ky BP, to a river-dominated deltaic system [14]. These last geomorphologic configurations led to a progressively increasing influence of riverine processes on the control of coastal dynamics and the storage and release of sediments [53]. The enhanced freshwater discharge into the nearshore area, especially during the last 2.0 ky BP, resulted in strong progradation and the upbuilding of the modern Po Delta [14,54], in a climatic context characterized by an overall decreasing SST trend. The upbuilding of the modern Po Delta likely favored the installation of a low-temperature, low-salinity wedge in the coastal area around it [55] and southward of it, owing to the action of anti-clockwise long-shore currents. Indeed, during flood events the modern Po river plume can influence the sea-facing area within a radius of ~60 km [56]. Freshwater plumes can reduce the SST by 2˚C to 6˚C, with an appreciable effect down to 10 m depth [57]. The drop in SST could have reduced the aragonite saturation state of the seawater, increasing the metabolic cost of calcification for C. gallina. Moreover, although C. gallina is euryhaline, the establishment of suboptimal salinity levels due to riverine inflows could lead to reduced feeding activity and slower net calcification rates, as documented for other macrobenthic species as well [24,25,58]. Additionally, the recorded decline in shell density could also be associated with increasing water turbidity and oligotrophic conditions as the Po Delta advanced into the Adriatic Sea. During the early HCO, the estuary-lagoon system acted as a material sink, accommodating most of the sediments and nutrients debouched by the Po River [14,59]. This setting likely reduced water turbidity, with positive repercussions on feeding activity [60], providing C. gallina with spare metabolic energy to sustain higher net calcification and linear extension rates [61]. By contrast, with the onset of a wave-dominated and then a fluvially dominated deltaic system, the runoff of sediments and nutrients directly into the shallow Adriatic Sea progressively increased [14]. The resuspension of fine bottom sediments could have increased turbidity, with serious consequences for the feeding activity of bivalves: a reduced rate of water pumping, longer periods of valve closure [25,62] and damage to the gills [60], overall cutting the energy available for skeletal construction. The discrepancy between this and previous works in the shell density of mature clams (a positive correlation with SST found in this study versus the negative correlation with SST found by Gizzi et al. [24] and Mancuso et al. [25]; S4 Fig in S1 File) suggests that this parameter does not depend on physical environmental factors alone (SST, salinity, aragonite saturation, sediment and nutrient supply) but is affected by a complex interplay of physical, biological and physiological factors, making the clams' response to changing environmental parameters less predictable. When making this comparison, we assume that past and modern clams belong to the same species, without large spatial and temporal variability. As reported by Papetti et al.
[63], modern populations of C. gallina in the northern-central Adriatic Sea are homogeneous on a large geographic scale, displaying low genetic differentiation at local and temporal scales. Variability of local circulation, reproductive success, and high larval mortality rates are recognized as the main factors behind the negligible genetic differences observed today [63]. Nevertheless, although the present-day situation suggests a rather homogeneous genetic structure, we cannot exclude that this variability was larger in the past, and that the trends observed in this study may reflect environmentally driven migration of different C. gallina morphotypes. Moreover, since 2.0 ky BP the anthropogenic influence on the evolution of the Po Delta grew constantly, becoming dominant around the 17th century, when river diversion and channel stabilization led to the growth of the modern Delta. These human interventions dictated an increase in sediment runoff, eutrophication events and anoxic events, contributing overall to increasing the instability and stress of the nearshore environments, whose effects on C. gallina skeletal construction cannot be excluded. On a millennial time scale, temperature can thus be considered a complex gradient that affects skeletal biomineralization not only directly, by eliciting a physiological response, but also indirectly, by influencing the geomorphologic configuration and environmental parameters of the C. gallina biotope.

Conclusion

Chamelea gallina shells appeared to be sensitive to changes in seawater temperature. At the macroscale level, specimens from the past sub-fossil horizons, which lived in warmer water, presented denser, less porous shells than modern specimens. The significant correlation between temperature and skeletal density remained consistent even when the total dataset was divided into two subgroups and immature and sexually mature individuals were analysed separately. At the microscale level, the shells were all composed of pure aragonite, presenting a perfectly preserved mineral phase with no relevant diagenetic alteration and only a slight degradation of the inter-crystalline organic phase. Hence, the observed difference in micro-density is not ascribable to any of the parameters measured here; other factors not investigated in this study, such as occluded porosity and intra-crystalline water content, may be at the origin of the observed differences. This study along a temporal gradient represents a complementary approach to previous studies conducted along a latitudinal gradient in the Adriatic Sea; together, they improve our understanding of the response of this economically relevant species to a changing environment in the face of seawater warming.
Accuracy and precision of visual and auditory stimulus presentation in virtual reality in Python 2 and 3 environments for human behavior research

Abstract

Virtual reality (VR) is a new methodology for behavioral studies. In such studies, the millisecond accuracy and precision of stimulus presentation are critical for data replicability. Recently, Python, which is a widely used programming language for scientific research, has contributed to reliable accuracy and precision in experimental control. However, little is known about whether modern VR environments have millisecond accuracy and precision for stimulus presentation, since most standard methods in laboratory studies are not optimized for VR environments. The purpose of this study was to systematically evaluate the accuracy and precision of visual and auditory stimuli generated in modern VR head-mounted displays (HMDs) from HTC and Oculus using Python 2 and 3. We used the newest Python tools for VR and the Black Box Toolkit to measure the actual time lag and jitter. The results showed that there was an 18-ms time lag for visual stimuli in both HMDs. For the auditory stimulus, the time lag varied between 40 and 60 ms, depending on the HMD. The jitters of those time lags were 1 ms for visual stimuli and 4 ms for auditory stimuli, which are sufficiently low for general experiments. These time lags were robustly equal even when auditory and visual stimuli were presented simultaneously. Interestingly, all results were perfectly consistent in both Python 2 and 3 environments. Thus, the present study will help establish more reliable stimulus control for psychological and neuroscientific research conducted in Python environments.

Introduction

Virtual reality (VR) has attracted much attention as a new methodology for scientific research. As described in Cipresso et al. (2018), VR technologies immerse us in a virtual environment and enable us to interact with it. These features help establish more natural, three-dimensional (3D) environments for experiments, in which participants can see, hear, and behave as in the real world, enhancing the ecological validity of research (Parsons, 2015). Such environments have also been referred to as "ultimate Skinner box" environments (Rizzo et al., 2004; Wilson & Soranzo, 2015). Since researchers can control stimuli and procedures that are not easily controllable in the real world, VR has been applied in studies on rehabilitation, therapy, and social interaction (Pan & de Hamilton, 2018; Parsons, 2015). In particular, modern VR head-mounted displays (HMDs), such as the HTC Vive and Oculus Rift, allow the presentation of complex and dynamic stimuli that can achieve higher ecological validity (closer to daily life) while remaining under tight experimental control (Loomis et al., 1999; Parsons, 2015). Indeed, a recent study has indicated that the HTC Vive HMD enables the measurement of visual cognition performance, such as visual attention and working memory capacity, as reliably as a cathode-ray tube (CRT) display (Foerster et al., 2019).

Accuracy and precision of stimulus presentation

As the accuracy and precision of stimulus presentation have been critical for psychological and neuroscience research, millisecond stimulus control should be considered in VR studies as well. If experiments are performed with low accuracy and precision or with untested apparatus, it is difficult to collect replicable data.
In particular, in psychological and neuroscientific experiments, stimuli such as visual (e.g., geometric figures, pictures, and animations) and auditory (e.g., tone sounds, voices, and music) information should be presented to participants at a set duration and timing with millisecond accuracy and precision. Accuracy in stimulus presentation is measured in reference to the constant error, which is the lag or bias from the true value (the designed duration of the stimulus) in the experimental procedure, whereas precision is measured in reference to the trial-to-trial variability, that is, the jitter or variable error (standard deviation) of stimulus presentation (Bridges et al., 2020). For example, if a stimulus triggered at time zero actually appears at 17, 18, and 19 ms across three trials, the ~18-ms mean delay reflects its (in)accuracy, while the 1-ms spread reflects its precision. If a picture and a sound with a transistor-transistor logic (TTL) trigger used for event marking in brain activity recordings (e.g., EEG, MEG) are presented simultaneously for 100 ms, the stimuli and the TTL trigger should ideally be synchronized: each duration should be 100 ms, with no time gap between stimulus onsets (i.e., no time lag). However, the actual stimulus presentation may not be synchronized correctly. There may be a large lag of the visual and auditory stimuli behind the TTL trigger (low accuracy), and the duration of those stimuli may vary unstably, becoming either shorter or longer than the TTL (low precision). Low accuracy and precision not only break the experimental procedure but also disturb participants' task performance through unsuitable stimulus onset asynchronies (SOAs). This issue can occur in any experiment, owing to various hardware- and software-related factors. Thus, even in VR studies, the accuracy and precision of experimental environments should be tested and validated to obtain well-controlled, replicable methods (Plant, 2016).

Hardware devices for standard laboratory experiments

Traditionally, the risk of low accuracy and precision has been substantially reduced by proper hardware devices in standard laboratories for two-dimensional (2D) environments, but not in VR. For visual stimulus presentation, CRT or low-latency liquid crystal display (LCD) monitors have been used in traditional experiments. CRT displays are still the best choice because of their quick response. Every pixel on the phosphor screen of a CRT is illuminated from top left to bottom right by an electron beam, and its illuminance reaches its maximum level quite rapidly (with almost no persistence) (Elze, 2010). This enables the presentation of a visual stimulus without a millisecond time lag (virtually zero). High-performance LCD monitors have also come to be used instead of CRT monitors, because CRTs are no longer produced. In the past, LCDs did not have a reasonable response time for stimulus presentation: the time to peak illuminance was too slow, causing a delay in stimulus onset. While the electron beam directly illuminates the phosphor screen of a CRT, an LCD uses a backlight behind a layer of liquid crystal, and the light from the backlight needs to pass through the liquid crystal layer placed between polarizing filters. Currently, LCDs built for experiments or high-performance gaming LCDs provide stable and low-latency environments for visual stimuli (Elze, 2010; Ghodrati et al., 2015), with latencies of a few milliseconds. The presentation of auditory stimuli is more complicated and difficult than that of visual stimuli (Reimers & Stewart, 2016).
Compared with visual stimuli, the lag of auditory stimulus presentation can be unstable and much longer, even though human temporal resolution for auditory information is more precise than for vision (Ghirardelli & Scharine, 2009). To improve poor auditory timing, researchers need to consider the various devices involved in presenting auditory stimuli: sound cards, audio interfaces, speakers, and headphones. An audio interface or a qualified sound card, including analog-to-digital (A/D) and digital-to-analog (D/A) converters, is generally used to generate auditory stimuli without sound distortion or noise. Such devices always have input or output latency, which causes a much longer time lag than for visual stimuli (Kim et al., 2020). Speakers and headphones, which are used to deliver auditory stimuli to participants, add time lags of their own. Recent audio devices with low or virtually zero latency are therefore useful for validating the timing lag of auditory stimuli.

Python software tools for standard laboratory experiments

In addition to the hardware apparatus, specialized software tools are necessary to generate stimuli with millisecond control. Recently, Python has been widely used in scientific research (Muller et al., 2015). Python is an interpreted programming language that has various libraries and high code readability, and programs are easy to write and debug. Over the last decade, many useful Python software tools have been developed for building psychology and neuroscience experiments (Dalmaijer et al., 2014; Garaizar & Vadillo, 2014; Krause & Lindemann, 2013; Mathôt et al., 2012). Recently, the use of Python tools for experiments has been validated via benchmark tests, ensuring that they have robust accuracy and precision in both laboratory and online studies (Wiesing et al., 2020). Bridges et al. (2020) showed that PsychoPy, a popular Python package for cognitive experiments, has robust millisecond accuracy and precision even across different operating systems (Windows, macOS, and Ubuntu) and environments (laboratory and online experiments). In laboratory-based studies using PsychoPy, the mean precision of stimulus duration and its lag were less than 1 ms. In online studies, although the results did not reach the level of lab-based environments, PsychoPy performed best, with under 5-ms precision for auditory and visual stimulus presentation. These studies indicate the clear advantages of Python for achieving millisecond accuracy and precision.

VR hardware and software

However, although well-established hardware and software with millisecond reliability, as described above, are commonly used in psychological and neuroscience research, little is known about the general time/timing accuracy and precision of stimulus presentation in modern VR experiments. Previous studies on VR HMDs with eye tracking have shown that the spatial accuracy of positions and orientations is sufficient for general experiments as well as rehabilitation studies (Borrego et al., 2018; Niehorster et al., 2017). Although modern VR HMDs have organic light-emitting diode (OLED) displays that provide fast and precise temporal responses for visual stimuli (Cooper et al., 2013; Wiesing et al., 2020), the time accuracy and precision of VR experiments remain unclear. Wiesing et al. (2020) showed that the duration of a visual stimulus controlled by Unreal Engine (a 3D game engine) is stable even under high rendering workload or head movements in VR.
Moreover, recent work using a Python API and Unity (a 3D game engine widely used for VR studies) on the HTC Vive Pro has suggested that both environments have over 15 ms of latency for visual stimuli and over 30 ms for auditory stimuli (Le Chénéchal & Chatel-Goldman, 2018). Importantly, Le Chénéchal and Chatel-Goldman (2018) also suggested that Python environments have better timing accuracy (a lower time lag) than Unity, and that auditory latency is much longer than visual latency in both Python and Unity.

The present study

Previous studies were conducted in specific environments (different VR HMDs and software tools such as Unity or Unreal Engine, with specific visual and auditory stimuli and procedures), and general time/timing accuracy and frame-by-frame precision in VR experiments have not been established across modern VR tools under the same procedure. Furthermore, it is still unclear whether psychological and neuroscientific VR experiments controlled by Python have sufficient timing accuracy and precision for stimulus presentation, and whether there are differences between Python versions 2 and 3, even though the use of Python in non-VR studies has been increasing, as described above. Clarifying these issues would enable researchers to establish more suitable VR environments that can be validated to the millisecond (adjusted within a millisecond) for their own experimental procedures. The purpose of this study was to empirically evaluate the accuracy and precision of visual, auditory, and audio-visual stimulus presentation with TTL triggers in VR, using modern VR HMDs across Python 2 and 3 environments. Although various software packages such as Unity or Unreal Engine can be used for VR experiments, most of these programs are not designed for psychological and neuroscientific experiments (Wiesing et al., 2020), with the exception of Vizard. Vizard is a Python-based application from WorldViz for VR development and experimentation (https://www.worldviz.com/vizard-virtual-reality-software). It supports various VR devices and experiment-oriented functions (e.g., stimulus presentation, data collection, synchronization with external devices) in both Python 2 and 3 environments. Python 2 is relatively old, but it is still useful because some third-party packages are only available in Python 2 environments (Rhoads, 2019); indeed, PsychoPy for behavioral studies supports both versions (Peirce et al., 2019). Moreover, we used TTL triggers to strictly evaluate the synchronization between the time/timing triggered from Python and the stimuli presented in the VR HMDs. Our method can be used for various experiments with external devices controlled by TTL signals. Especially in experiments with eye tracking or brain recording that require high timing accuracy and precision, unstable presentation times and timings of stimuli cause incorrect time stamps (unstable onsets and offsets of stimulus presentation with large jitter) relative to the TTL signals, and incorrect activity timelines in the real-time recording of biological data (low accuracy and precision). Therefore, a separate evaluation in Python 2 and 3 is valuable for revealing the critical differences that can arise in VR stimulus presentation in Python environments (whether there are millisecond differences).
The evaluation also contributes to the understanding of the levels of accuracy and precision of the stimulus control provided by Python in VR experiments, compared with previous evaluation studies of different environments such as Unreal Engine (C++ language) (Wiesing et al., 2020).

Experiment 1: Visual stimulus presentation

In Experiment 1, the accuracy and precision of visual stimuli in VR developed in Python environments were evaluated using major VR HMDs from HTC and Oculus. To examine actual stimulus presentation synchronized with the refresh rate of the VR devices (i.e., v-sync), stimulus duration was controlled frame by frame (i.e., 11.11 ms per frame in 90-Hz HMDs) (cf. Wiesing et al., 2020). In addition, TTL triggers were sent through serial ports from the same Python program during the visual stimulus presentation (Bridges et al., 2020). Sending the TTL trigger as a time stamp allowed the measurement of the time lag between the actual stimulus presentation time and timing and the triggered ones; this is the same methodology used in psychological and neuroscience research with external equipment such as brain activity recording.

Method

Experiment software settings

The stimulus presentation in VR was controlled using the Vizard 6 (64 bit) software (Vizard 6.3, WorldViz, USA) and the Vizard 7 (64 bit) software (Vizard 7.0, WorldViz, USA) on a laptop PC (Experiment PC) equipped with an Intel Core i7-10750H (2.6 GHz), a Windows 10 operating system (64 bit), 16 GB RAM, and an NVIDIA GeForce RTX 2070 video card (Alienware m15 R3, DELL, USA). Vizard was used because it is currently the only Python software that supports psychological and neuroscientific VR studies with useful functions for stimulus control, and it allows researchers to run and compare VR experiments in both Python 2 and 3 environments directly, via Vizard 6 and 7. In experiments using the Python 2 environment, Vizard 6 was used to generate and present the visual stimulus, as it is based on Python 2.7.12, whereas in experiments using Python 3, Vizard 7 was used, as it is based on Python 3.8.0. These two environments enabled us to examine whether the different major versions of the Python language affect stimulus control in VR. The code was converted using the Python 2 to 3 conversion tool (Python 2 to 3 tool, WorldViz, USA: https://docs.worldviz.com/vizard/latest/Python2to3.htm#2To3Tool) to maintain the same coding structure between the two versions. The vertical synchronization (v-sync) setting of the display was always turned on in both Vizard 6 and 7 to lock stimulus presentation to the display refresh rate, using the "viz.vsync()" function. The USB power-saving settings of the Experiment PC were disabled to maintain high-performance connections between the PC and the VR HMDs.
VR head-mounted display settings

We used two different HMDs for stimulus presentation in VR in each experiment: an HTC Vive Pro HMD (HTC Vive Pro, HTC, Taiwan; 2880 × 1600 pixel resolution (1440 × 1600 per eye), 90-Hz refresh rate) and an Oculus Rift HMD (Oculus Rift, Facebook Technologies, USA; 2160 × 1200 pixel resolution (1080 × 1200 per eye), 90-Hz refresh rate). The "Motion Smoothing" system in SteamVR for the HTC Vive Pro (SteamVR 1.15.12, Valve, USA) was disabled, because the frame smoothing systems in modern VR HMDs can change the frame rate automatically and disturb stimulus presentation based on the programmed frame rate (90 Hz) during VR experiments. Because the "Asynchronous Space Warp" system on the Oculus Rift, a frame smoothing system in Oculus devices, works automatically even when turned off in the Oculus Rift software (Oculus Debug Tool, Facebook Technologies, USA), we measured the luminance changes of the Oculus HMD with "CRT Refresh Correction" turned on in the Black Box Toolkit.

Evaluation device settings

To evaluate the accuracy and precision of visual stimulus presentation in milliseconds, the Black Box Toolkit (BBTK) (Black Box Toolkit v2 Elite, The Black Box Toolkit, United Kingdom; 36 channels with a 6-kHz sampling rate), a special measuring device for stimulus timing accuracy and precision (Bridges et al., 2020; Plant et al., 2004; Wiesing et al., 2020), was used on an independent laptop PC (Host PC) equipped with an Intel Core i7-7Y75 (1.6 GHz), a Windows 10 operating system (64 bit), and 8 GB RAM (Lavie Direct NM, NEC, Japan) (Figure 1). An opto-sensor connected to the Black Box Toolkit was attached to the left lens of the HMDs to measure the luminance changes (BBTK opto-detector sensor, The Black Box Toolkit, United Kingdom). TTL triggers were sent to the BBTK through the I/O port (USB TTL Event Marking Module, The Black Box Toolkit, United Kingdom) connected to the Experiment PC. The PySerial library was used in both the Vizard 6 (Python 2) and Vizard 7 (Python 3) environments to establish this serial port connection for TTL triggers (https://pythonhosted.org/pyserial/) (Bridges et al., 2020; Tachibana & Niikuni, 2017). All evaluation tests were conducted, and data collected, using the Digital Stimulus Capture mode of the BBTK, which allowed the measurement of both auditory and visual stimulus onsets and offsets together with the TTL input triggers. In the experiments using the HTC Vive Pro HMD, the "CRT Refresh Correction" tool in the BBTK was turned off, because stimulus presentation by HTC's HMD was measured correctly by frames, as on CRT displays (11.11 ms per frame). The "CRT Refresh Correction" tool was turned on during the experiments with Oculus's HMD, owing to Asynchronous Space Warp; this setting made it possible to measure and define stimulus durations relative to the TTL triggers. While Asynchronous Space Warp was active, a black blank was inserted automatically after every short refresh of 2-2.5 ms, preventing the measurement of visual stimulus presentation by frames. The number of short flashes in the HMD depended on the number of frames presented; for instance, the Oculus HMD flashed four times, for 2-2.5 ms each, when the stimulus duration was 4 frames (44.44 ms), with black blanks inserted among these short flashes. Hence, the duration of the visual stimulus in the Oculus was measured as the time from the start of the first frame flash to the end of the last frame flash (cf. Wiesing et al., 2020).
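Because the exact event-marking code is not reproduced in the paper, the following is a minimal sketch of how a TTL trigger can be sent to the BBTK USB TTL Event Marking Module with PySerial, the library named above. The port name, baud rate, and trigger byte are placeholders that depend on the local setup and on how the module maps incoming bytes onto its output lines.

```python
# Hedged sketch: TTL event marking over a serial port with PySerial.
import serial

ser = serial.Serial("COM3", baudrate=115200, timeout=0)  # port/baud assumed

def send_trigger(code=b"\x01"):
    """Write one byte to the serial port so the module asserts its TTL line(s)."""
    ser.write(code)
    ser.flush()  # push the byte out immediately rather than leaving it buffered
```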
Stimuli

Black and white full-background blanks in VR were used as visual stimuli. The black background environment (RGB: 0, 0, 0) was generated using the "viz.clearcolor(0, 0, 0)" function (the arguments "0" correspond to "0" for each RGB channel), changing all colors in the VR environment to black, on the HTC Vive Pro (6.65 cd/m2) and Oculus Rift (0.42 cd/m2). Similarly, the white blank (RGB: 255, 255, 255) in the VR environment was generated using "viz.clearcolor(1, 1, 1)" (the arguments "1" correspond to "255" for each RGB channel) on the HTC Vive Pro (116.80 cd/m2) and Oculus Rift (78.24 cd/m2). The luminance of the blanks presented on each HMD was measured with a luminance and color meter (Konica Minolta, Japan). In the experiments with the Python 2 environment, all visual stimuli were generated and controlled by Vizard 6 (Python 2 code); otherwise, this was done using Vizard 7 (Python 3 code).

Procedure

The black-to-white screen transition test, a well-established evaluation of stimulus timing accuracy and precision (Garaizar & Vadillo, 2014; Krause & Lindemann, 2013; Tachibana & Niikuni, 2017; Wiesing et al., 2020), was performed in VR. In the experiments, black and white blanks were shown alternately 1000 times in the HMDs. The duration of each blank was 11.11, 22.22, 33.33, 44.44, or 99.99 ms, corresponding to 1, 2, 3, 4, and 9 frames of the HMD display, respectively. These stimulus durations were controlled with the "viztask.waitFrame()" function to obtain the precise frame numbers of the white and black blanks. The TTL triggers were sent at the onset of each blank. During the tests, the visual stimulus presentation and TTL triggers were measured using the BBTK. This measurement enabled the analysis of differences between the actual time and timing of stimulus presentation in the HMDs and the programmed time and timing marked by TTL in VR. The test was performed in both Python 2 and 3 environments with the two HMDs (HTC Vive Pro and Oculus Rift) separately. Thus, 20 tests (two Python environments × two VR HMDs × five stimulus durations) were conducted. The opto-sensor was calibrated using the BBTK sensor threshold manager before the experiments.
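A hedged reconstruction of this transition test is sketched below, combining the Vizard calls named in the text (viz.vsync(), viz.clearcolor(), viztask.waitFrame()) with the serial trigger from the previous sketch. The scheduling details and trigger bytes are assumptions; the authors' actual script may differ.

```python
# Hedged sketch of the black-to-white transition test with TTL marking.
import viz
import viztask
import serial

viz.vsync(1)          # lock stimulus updates to the 90-Hz display refresh
viz.go()              # start rendering in the attached HMD
ser = serial.Serial("COM3", baudrate=115200, timeout=0)  # port assumed

def transition_test(n_frames=4, repetitions=1000):
    for _ in range(repetitions):
        viz.clearcolor(1, 1, 1)     # white blank
        ser.write(b"\x01")          # TTL at the (intended) white onset
        yield viztask.waitFrame(n_frames)
        viz.clearcolor(0, 0, 0)     # black blank
        ser.write(b"\x02")          # TTL at the (intended) black onset
        yield viztask.waitFrame(n_frames)

viztask.schedule(transition_test(n_frames=4))  # 4 frames = 44.44 ms at 90 Hz
```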
Results and discussion

For the descriptive statistical analysis, we analyzed the number of presented stimuli (white blanks), the average duration of stimulus presentation, and the average time lag and its standard deviation between the onsets of the TTL triggers and stimulus presentation (Bridges et al., 2020; Le Chénéchal & Chatel-Goldman, 2018; Reimers & Stewart, 2016) (Table 1). If the number of presented stimuli did not reach the expected count of 1000 (1000 being the correct number of presented stimuli synchronized with the frame rate), the condition was excluded from the analyses of duration and time lag. This data screening enabled us to clarify whether the vertical synchronization worked correctly and what frame number was sufficient for accurate stimulus presentation in each HMD. Additionally, as shown in Table 1, this screening was performed because data on stimulus duration and time lag from an incorrectly presented number of stimuli (e.g., 53/1000 times at 11.11 ms with the Oculus HMD) should not be analyzed together with data from a correctly presented number of stimuli (1000/1000 times), owing to the difference in sample size.

Number of stimulus presentations

The white blank was perfectly presented 1000/1000 times at the expected durations, except for the 11.11-ms duration in the Python 2 with Oculus Rift (53/1000 times) and Python 3 with Oculus Rift (87/1000 times) environments, indicating that it was difficult to present visual stimuli accurately for a single frame.

Duration of stimulus presentation

Overall, there were no differences between the Python 2 and Python 3 environments. The average stimulus duration on the Oculus Rift was 8-9 ms shorter than the expected duration, whereas the HTC Vive Pro was accurate for all expected durations. In both HMDs, the standard deviations were less than 1 ms, indicating high precision.

Time lag of stimulus presentation

As with duration, there were no differences between Python 2 and Python 3, and the standard deviations were under 1 ms overall. Importantly, there was a 17-18-ms time lag from the TTL trigger to the visual stimulus presentation in every condition, suggesting that this is a constant delay for visual stimuli in VR under Python environments.

Experiment 2: Auditory stimulus presentation

In Experiment 2, the accuracy and precision of auditory stimuli in VR were evaluated using the same procedure as in Experiment 1.

Method

Apparatus

A microphone (BBTK digital microphone, The Black Box Toolkit, United Kingdom) was used instead of the opto-sensor to measure the auditory stimulus presentation. The microphone was attached to the left speaker of the HMD. The sound settings of the HMD's active speaker were configured through SteamVR for the HTC Vive Pro and through the Oculus software for the Oculus Rift. The other apparatus was identical to that used in Experiment 1.

Procedure

In the experiment, silence (no sound) and a sine-wave sound were presented alternately 1000 times. There was no visual stimulus presentation in these experiments (the background color remained black). The duration of each interval was 11.11, 22.22, 33.33, 44.44, or 99.99 ms, as in Experiment 1. These stimulus durations were controlled with the "viztask.waitFrame()" function to obtain the precise frame numbers of the sine waves as well as the silences. The TTL triggers were sent at each sound onset. During the experiments, the auditory stimulus presentation and TTL triggers were measured using the BBTK. The test was performed in both Python 2 and 3 environments using the two HMDs (HTC Vive Pro and Oculus Rift), and 20 tests (two Python environments × two VR HMDs × five stimulus durations) were conducted. The microphone was calibrated using the BBTK sensor threshold manager before the experiments.
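The paper does not reproduce the tone-presentation code; the sketch below shows one plausible frame-locked implementation using viz.playSound(), the function the authors name for Experiment 5 ("in the same way as in Experiment 2"). The tone file name and the preload/stop flags are assumptions based on Vizard's documented conventions.

```python
# Hypothetical frame-locked tone presentation in Vizard (file name assumed).
import viz
import viztask

viz.playSound("sine_440.wav", viz.SOUND_PRELOAD)  # preload to avoid disk I/O at onset

def tone_test(n_frames=9, repetitions=1000):
    for _ in range(repetitions):
        viz.playSound("sine_440.wav")            # tone onset (TTL sent here)
        yield viztask.waitFrame(n_frames)        # 9 frames = 99.99 ms at 90 Hz
        viz.playSound("sine_440.wav", viz.STOP)  # silence for the same interval
        yield viztask.waitFrame(n_frames)

viztask.schedule(tone_test())
```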
Results and discussion

The data were analyzed in the same manner as in Experiment 1 (Table 2).

Number of stimulus presentations

The pure tone sound was perfectly presented 1000 times, except at the 11.11-ms and 22.22-ms durations. At 11.11 ms, the tone sound was essentially uncontrolled (1 or 2/1000 times). In these conditions, stimulus presentation by frame rate did not work correctly and the sound was poor, with frequent breaks, indicating that the auditory stimulus needed a duration of at least 30 ms.

Duration of stimulus presentation

Overall, there were no differences between Python 2 and Python 3. In addition, the mean durations in both the HTC and Oculus HMDs were accurate and almost identical. In contrast to the visual stimulus, the standard deviations were larger (over 4 ms) at the 33.33-ms and 44.44-ms durations; at 99.99 ms, the standard deviations improved to under 4 ms.

Time lag of stimulus presentation

Although there were no differences between Python 2 and Python 3, the time lag was larger than for visual stimulus presentation. In the HTC HMD, there was a constant 38-ms delay from the TTL trigger to the auditory stimulus presentation; in the Oculus HMD, the delay was 57 ms. The standard deviations were approximately 3 ms for both HTC and Oculus HMDs, suggesting that the timing accuracy for auditory stimuli depends on the hardware. These results indicate that the presentation of auditory stimuli has lower timing accuracy and precision than that of visual stimuli.

Experiment 3: Audio-visual stimulus presentation

In Experiment 3, the accuracy and precision of audio-visual stimulus presentation in VR were evaluated. This experiment allowed the measurement of the SOA between auditory and visual stimuli that occurs constantly in VR controlled by Python environments.

Method

Apparatus

The apparatus was identical to that of Experiments 1 and 2. Both the microphone and the opto-sensor were used to measure the audio-visual stimulus presentation simultaneously.

Stimuli

The same stimuli as in Experiments 1 and 2 were used.

Procedure

The procedures of Experiments 1 and 2 were combined: black-to-white screen transition tests with sound were performed in VR. In each test, black and white blanks were shown alternately 1000 times in the HMDs. Simultaneously with the white blank, a sine-wave sound was presented for the same duration, while there was no sound during the black background presentation. The durations of each blank and sound were 11.11, 22.22, 33.33, 44.44, or 99.99 ms, respectively. As in Experiments 1 and 2, the stimulus durations were controlled with the "viztask.waitFrame()" function to obtain the precise frame numbers of both the auditory and visual stimuli. TTL triggers were sent at the onset of each blank. During the tests, the visual and auditory stimulus presentations and TTL triggers were measured using the BBTK. As in Experiments 1 and 2, 20 tests (two Python environments × two VR HMDs × five stimulus durations) were conducted. The microphone and opto-sensor were calibrated with the BBTK sensor threshold manager before the experiments.

Results and discussion

In addition to the analyses used in Experiments 1 and 2 (Appendix Table 9), we analyzed the time lag between the visual and auditory stimuli (Table 3).

Number of stimulus presentations

The presented numbers of auditory and visual stimuli were the same as in Experiments 1 and 2. The visual stimulus was presented 1000 times, except for the 11.11-ms duration in the Python 2 with Oculus Rift (40/1000 times) and Python 3 with Oculus Rift (112/1000 times) environments. For the auditory stimulus, presentation at the 11.11-ms and 22.22-ms durations did not work correctly, producing disturbed sound in both the HTC and Oculus HMDs (1/1000 times).

Duration of stimulus presentation

Overall, the durations of the auditory and visual stimuli were the same as in Experiments 1 and 2, with no differences between Python 2 and 3 in any condition. The mean visual stimulus duration on the Oculus Rift was 8-9 ms (approximately 1 frame) shorter than the expected duration, whereas the HTC Vive Pro was accurate for all expected durations. In both the HTC Vive Pro and Oculus Rift, the standard deviations were less than 1 ms, indicating high precision. The mean duration of the auditory stimulus in the HTC and Oculus HMDs was accurate and almost identical. As in Experiment 2, the standard deviations were slightly larger (over 4 ms) at the 33.33-ms and 44.44-ms durations than at the 99.99-ms duration.

Time lag of stimulus presentation

Consistent with the results of Experiments 1 and 2, there were stable time lags for both stimuli. For the visual stimulus, a 17-18-ms time lag (SD < 1 ms) occurred in each condition. For the auditory stimulus, the time lag was larger than for the visual stimulus presentation: a 37-ms time lag on the HTC Vive Pro and a 58-ms time lag on the Oculus Rift. The standard deviations were approximately 3 ms for both HMDs. There were no differences between the Python 2 and 3 environments. These results indicate that there was no negative interaction that would cause a more unstable time lag even when the auditory and visual stimuli were presented simultaneously.
Time lag between auditory and visual stimuli

There were consistent time lags between the auditory and visual stimuli, depending on the HMD: approximately 19 ms on the HTC Vive Pro and 39 ms on the Oculus Rift, in every condition. The standard deviations of these lags were almost identical, at 3 ms. As with the results above, there were no differences between the Python 2 and 3 environments overall.
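The lag and jitter statistics reported in these tables can be reproduced from the BBTK logs with a few lines of NumPy. The sketch below uses fabricated onset arrays with roughly the magnitudes reported for the HTC Vive Pro; it illustrates the computation only and is not the study's data.

```python
# Hypothetical post-hoc analysis of BBTK logs: given arrays of TTL, photodiode,
# and microphone onset times (ms), compute mean lag (accuracy), its SD
# (precision), and the audio-visual SOA.
import numpy as np

rng = np.random.default_rng(2)
ttl = np.arange(1000) * 200.0                    # trigger onsets, one per trial
visual = ttl + rng.normal(17.6, 0.5, ttl.size)   # ~18-ms lag, <1-ms jitter
audio = ttl + rng.normal(37.5, 3.0, ttl.size)    # ~38-ms lag, ~3-ms jitter

def lag_stats(onsets, reference):
    lags = onsets - reference
    return lags.mean(), lags.std(ddof=1)         # accuracy, precision

print("visual lag (M, SD):", lag_stats(visual, ttl))
print("audio lag (M, SD):", lag_stats(audio, ttl))
print("audio-visual SOA (M, SD):", lag_stats(audio, visual))
```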
Results and discussion

We analyzed the number of presented stimuli (90% gray-level blanks), the average duration of stimulus presentation, and the average time lag and its standard deviation between the onsets of the TTL triggers and the stimulus presentation (Table 4). If the number of presented stimuli did not reach the expected count of 1000, the condition was excluded from the analyses of duration and time lag.

Number of stimulus presentations

The 90% gray-level blank was presented 1000/1000 times with the expected durations, except for the 11.11-ms duration in the Python 2 with Oculus Rift (1/1000 times) and Python 3 with Oculus Rift (1/1000 times) environments, indicating that it was difficult to present visual stimuli for one frame accurately. These results were consistent with those of Experiment 1.

Duration of stimulus presentation

The results were consistent with those of Experiment 1 (black-to-white screen transition). There were no differences between the Python 2 and Python 3 environments. The average stimulus duration of the Oculus Rift was 8-9 ms shorter than the expected duration. The HTC Vive Pro was accurate for all expected durations. For both HMDs, the standard deviations were less than 1 ms, indicating high precision.

Time lag of stimulus presentation

Similar to the duration, there were no differences between Python 2 and Python 3, and the standard deviations were under 1 ms overall. Consistent with the results of Experiment 1, there was a 17-18-ms time lag from the TTL trigger to the presented visual stimulus in every condition, suggesting that the same time lag occurs in gray-to-gray transitions.

Experiment 4B: Visual stimulus presentation using a complex virtual scene

In Experiment 4B, the accuracy and precision of a complex virtual scene as a visual stimulus in VR were evaluated. As described in Wiesing et al. (2020), complex visual stimuli such as 3D virtual scenes are quite typical for VR experiments and carry a high rendering workload (various 3D objects and textures in a scene). In contrast, simple stimuli (i.e., black, white, and gray-level blanks) have a minimal rendering workload in VR environments. Evaluating stimulus presentation with complex virtual scenes provides evidence on whether there is a longer time lag than with stimuli with a low rendering workload.

Apparatus

The apparatus was identical to that of Experiment 4A.

Stimuli

To use a highly realistic VR environment with a high rendering workload (Wiesing et al., 2020), a VR scene "piazza", which is implemented in both Vizard 6 and 7 as a standard model, was used as the visual stimulus on the HTC Vive Pro (31.67 cd/m²) and Oculus Rift (23.74 cd/m²) ("piazza.osgb": https://docs.worldviz.com/vizard/latest/#Old_Book/Adding_3D_Models.htm) (Figure 2). The black full-background blank used in Experiment 1 was also used as a visual stimulus. All visual stimuli were generated and controlled by Vizard 6 (Python 2 code) in experiments with the Python 2 environment; otherwise, this was done using Vizard 7 (Python 3 code).

Fig. 2 A screenshot of the VR scene in the HMDs. The screenshot was taken from the Vizard software on the HTC Vive Pro with SteamVR.

Procedure

The procedure was the same as in Experiment 1 except for the visual stimulus. The VR scene was used instead of the white blank. In the tests, the VR scene and the black blank were presented alternately 1000 times in the HMDs. The opto-sensor was calibrated using the BBTK sensor threshold manager before the experiments.
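The scene-versus-blank alternation might be implemented as in the following sketch. Toggling node visibility is an assumption; the paper does not state how the alternation was realized.

```python
# Minimal sketch of alternating the high-workload 'piazza' scene with a
# black blank (Experiment 4B). The visibility toggle is one plausible
# implementation, not necessarily the authors' exact method.
import viz
import viztask

viz.go()                              # HMD setup omitted
viz.clearcolor(viz.BLACK)             # black background for the "off" phase
piazza = viz.addChild('piazza.osgb')  # standard Vizard demo scene

def scene_loop():
    for _ in range(1000):
        piazza.visible(viz.ON)    # complex scene visible
        yield viztask.waitFrame(2)
        piazza.visible(viz.OFF)   # black blank
        yield viztask.waitFrame(2)

viztask.schedule(scene_loop())
```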
Results and discussion

We analyzed the number of presented stimuli (the VR scene), the average duration of stimulus presentation, and the average time lag and its standard deviation between the onsets of the TTL triggers and the stimulus presentation (Table 5). If the number of presented stimuli did not reach the expected count of 1000, the condition was excluded from the analyses of duration and time lag.

Number of stimulus presentations

The VR scene was presented 1000/1000 times with the expected durations, except for the 11.11-ms duration in the Python 2 with Oculus Rift (1/1000 times) and Python 3 with Oculus Rift (1/1000 times) environments, indicating that it was difficult to present visual stimuli for one frame accurately. These results were consistent with those of Experiments 1 and 4A.

Duration of stimulus presentation

The results were consistent with those of Experiments 1 (black-to-white screen transition) and 4A (gray-to-gray screen transition). There were no differences between the Python 2 and Python 3 environments. The average stimulus duration of the Oculus Rift was 8-9 ms shorter than the expected duration. The HTC Vive Pro was accurate for all expected durations. In both HMDs, the standard deviations at the 99.99-ms duration were relatively long: 1-3 ms.

Time lag of stimulus presentation

There were no differences in the time lags between Python 2 and Python 3, and the standard deviations were under 1 ms overall. Consistent with the results of Experiment 1, there was a 17-18-ms time lag from the TTL trigger to the visual stimulus presentation in every condition, suggesting that the same time lag occurs in VR scene presentation.

Experiment 5: Complex auditory stimulus presentation

Complex auditory stimuli, such as realistic background music (BGM) or the sound effects of a VR scene, are used more typically in VR experiments than a simple tone. As with the complex visual stimuli tested in Experiment 4B, we evaluated the accuracy and precision of complex auditory stimuli in VR environments using the procedure of Experiment 2.

Apparatus

The apparatus was identical to that of Experiment 2.

Stimuli

We used the daily life sounds of a piazza ("07035152.wav"; BBC Sound Effects: https://sound-effects.bbcrewind.co.uk/search?q=piazza; 44.1-kHz sampling rate; 16-bit depth; 84 dB(A) in the HTC HMD, 78 dB(A) in the Oculus HMD) as the auditory stimulus. These are realistic daily life sounds of the Piazza Navona, congruent with the VR scene stimulus of Experiment 4B. The stimulus was edited with the Audacity software (Audacity Ver 3.0.2, The Audacity Team: https://www.audacityteam.org/) to cut out the silent part of the sound file, because there was no sound for the first few seconds of the original file. The stimulus was imported into the Vizard programs by the "viz.playSound()" function to preload the auditory stimulus in the same way as in Experiment 2.

Procedure

The procedure was the same as in Experiment 2 except for the auditory stimulus. The complex sound was used instead of the pure tone. In the tests, the complex sound and silence (no sound) were presented alternately 1000 times in the HMDs. The microphone was calibrated using the BBTK sensor threshold manager before the experiments.
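Preloading and triggering the complex sound might look as follows; as in Experiment 2, preloading avoids a first-play decoding delay. The surrounding frame-controlled loop is as in the earlier sketches, and the offline trimming of the leading silence is assumed already done.

```python
# Minimal sketch of preloading and playing the complex piazza sound
# (Experiment 5). The file name is the BBC Sound Effects asset cited
# in the text.
import viz

SOUND = '07035152.wav'
viz.playSound(SOUND, viz.SOUND_PRELOAD)  # preload before the test loop

# later, inside the frame-controlled test loop:
viz.playSound(SOUND)  # onset intended to coincide with the frame flip
```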
Results and discussion

The data were analyzed in the same manner as in Experiment 2 (Table 6).

Number of stimulus presentations

The results were consistent with those of Experiment 2. The complex sound was presented 1000 times, except at the 11.11-ms and 22.22-ms durations, where the complex sound did not work as expected. In these conditions, the stimulus presentation by frame rate did not work correctly and the sound was poor, with frequent breaks, indicating that the complex sound stimuli needed a duration of at least 30 ms for accurate presentation, similar to the pure tone.

Duration of stimulus presentation

Overall, there were no differences between Python 2 and Python 3 in the duration of stimulus presentation. In addition, the mean durations in both the HTC and Oculus HMDs were accurate and almost the same. Consistent with the results of Experiment 2, the standard deviations were larger (over 4 ms) at the 33.33-ms and 44.44-ms durations. At 99.99 ms, the standard deviations improved to under 4 ms, indicating that the precision of sound stimuli in VR HMDs becomes stable if the duration is over 100 ms.

Time lag of stimulus presentation

There were no differences in time lag between Python 2 and Python 3. The time lag was larger than in the visual stimulus presentations of Experiments 1, 4A, and 4B. In the HTC HMD, there was a constant 38-ms delay from the TTL trigger to the auditory stimulus presentation. In the Oculus HMD, there was a 57-ms delay at the 33.33-ms and 44.44-ms durations. At the 99.99-ms duration in the Oculus HMD, the time lag was slightly longer at 60 ms, with under 1 ms of jitter. The other standard deviations (at 99.99 ms in the HTC HMD, and at 33.33 and 44.44 ms in both HMDs) were approximately 3 ms, suggesting that the timing accuracy for the auditory stimulus depends on the hardware. Consistent with the results of Experiment 2, these results indicate that the presentation of an auditory stimulus had lower timing accuracy and precision than the presentation of a visual stimulus.

Experiment 6A: Audio-visual stimulus presentation with gray-to-gray screen transitions

In Experiment 6A, the accuracy and precision of audio-visual stimulus presentation using gray-level screens with a complex sound were evaluated. This experiment was conducted to test whether the SOA between auditory and visual stimuli in gray-to-gray screen transitions changed in VR environments.

Apparatus

The apparatus was identical to that of Experiments 4A and 5. Both the microphone and the opto-sensor were used to measure the audio-visual stimulus presentation simultaneously.

Stimuli

The same stimuli as in Experiments 4A and 5 were used.

Procedure

All procedures were identical to those of Experiment 3 except for the auditory and visual stimuli. Tests of gray-to-gray screens with the complex sound were performed in VR. During the 90% gray-level screen presentation, the complex sound was presented simultaneously, whereas there was no sound during the 10% gray-level screen. The microphone and opto-sensor were calibrated by the BBTK sensor threshold manager before the experiments.

Results and discussion

Analyses were performed in the same way as in Experiment 3 (Appendix Table 10) (Table 7).

Number of stimulus presentations

The numbers of presented auditory and visual stimuli were consistent with those in Experiments 4A and 5.

Duration of stimulus presentation

Overall, the durations of the auditory and visual stimuli were the same as in Experiments 4A and 5. There were no differences between Python 2 and 3 in any condition. The mean visual stimulus duration of the Oculus Rift was 8-9 ms (approximately one frame) shorter than the expected duration, whereas the HTC Vive Pro had an accurate duration for all expected durations. For both the HTC Vive Pro and Oculus Rift, the standard deviations were less than 1 ms, indicating high precision. The mean duration of the auditory stimulus in the HTC and Oculus HMDs was accurate and almost the same. Similarly to Experiments 2 and 5, the standard deviations were slightly larger (over 4 ms) at the 33.33-ms and 44.44-ms durations than at the 99.99-ms duration (under 4 ms), indicating that the precision of sound stimuli in VR HMDs becomes stable when the duration is over 100 ms, even in audio-visual stimulus presentation.

Time lag of stimulus presentation

Consistent with the results of Experiments 4A and 5, there were stable time lags for both stimuli. For the visual stimulus, a 17-18-ms time lag with 1-ms jitter occurred in each condition. For the auditory stimulus, the time lag was larger than for the visual stimulus presentation: a 37-ms time lag on the HTC Vive Pro and a 58-ms time lag on the Oculus Rift. The standard deviations were approximately 3 ms for both HMDs.
There were no differences between the Python 2 and 3 environments. These results indicate that there was no negative interaction that could cause a more unstable time lag even when the complex sound and the gray-level visual stimuli were presented simultaneously.

Time lag between auditory and visual stimuli

Similarly to Experiment 3, there were constant time lags, depending on the type of HMD. The HTC Vive Pro and Oculus Rift had time lags of approximately 19 and 39 ms, respectively, between the auditory and visual stimuli in every condition. The standard deviations of these lags were approximately 3 ms, except for the Oculus HMD at the 99.99-ms duration (less than 1 ms), suggesting that the Oculus HMD may become more stable if the stimulus duration is over 100 ms. There were no differences between the Python 2 and 3 environments overall.

Experiment 6B: Audio-visual stimulus presentation using a VR scene with complex sound

In Experiment 6B, the accuracy and precision of audio-visual stimulus presentation using a VR scene with a complex sound were evaluated. This experiment facilitated the measurement of a more realistic SOA in VR experiments with a high rendering workload controlled by Python environments.

Apparatus

The apparatus was identical to that of Experiments 4B and 5. Both the microphone and the opto-sensor were used to measure the audio-visual stimulus presentation simultaneously.

Stimuli

The same stimuli as in Experiments 4B and 5 were used.

Procedure

The procedure was identical to that of Experiment 6A, except for the visual stimulus. The realistic VR scene used in Experiment 4B was used instead of the 90% gray screen, and the black blank was used instead of the 10% gray screen. When the VR scene was presented, the complex sound was presented simultaneously, whereas there was no sound during the black blank presentation. The microphone and opto-sensor were calibrated by the BBTK sensor threshold manager before the experiments.

Results and discussion

Analyses were performed in the same way as in Experiments 3 and 6A (Appendix Table 11) (Table 8).

Number of stimulus presentations

The presented numbers of auditory and visual stimuli were consistent with those in Experiment 6A. The visual stimulus was presented 1000 times, except for the 11.11-ms duration in the Python 2 with Oculus Rift (574/1000 times) and Python 3 with Oculus Rift (526/1000 times) environments. For the auditory stimulus, stimulus presentation at the 11.11-ms and 22.22-ms durations did not work correctly, exhibiting disturbed sound in both the HTC (1/1000 times at 11.11 ms with both Python 2 and Python 3) and Oculus HMDs.

Duration of stimulus presentation

Overall, the durations of the auditory and visual stimuli were the same as in Experiments 4B and 5. There were no differences between Python 2 and 3 in any condition. As in Experiments 1, 3, 4A, 4B, and 6A, the mean visual stimulus duration of the Oculus Rift was 8-9 ms (approximately one frame) shorter than the expected duration, whereas the HTC Vive Pro had an accurate duration for all expected durations. For both the HTC Vive Pro and Oculus Rift, the standard deviations were less than 1 ms, indicating high precision. The mean duration of the auditory stimulus in the HTC and Oculus HMDs was accurate and almost the same as in Experiments 5 and 6A. The standard deviations were also slightly larger (over 4 ms) at the 33.33-ms and 44.44-ms durations than at the 99.99-ms duration (under 4 ms).

Time lag of stimulus presentation

Consistent with the results of Experiments 4B and 5, there were stable time lags for both stimuli.
For the visual stimulus, there was a constant 18-ms time lag (less than 1 ms of jitter). For the auditory stimulus, the time lag was larger than that of the visual stimulus presentation: a 37-ms time lag on the HTC Vive Pro and a 58-ms time lag on the Oculus Rift. Similarly to Experiment 5, the standard deviations improved slightly (under 2 ms) at the 99.99-ms duration with the Oculus HMD, whereas the other standard deviations were approximately 3 ms for both HMDs, suggesting that the timing precision for the auditory stimulus depends on the hardware. There were no differences between the Python 2 and 3 environments. Overall, these results indicate that there was no negative interaction that could cause a more unstable time lag even when the complex VR scene and the auditory stimulus were presented simultaneously.

Time lag between auditory and visual stimuli

Similarly to Experiment 6A, there were constant time lags, depending on the type of HMD. The HTC Vive Pro and Oculus Rift had time lags of approximately 19 and 39 ms, respectively, between the auditory and visual stimuli in every condition. The standard deviations of these lags were approximately 3 ms, except for the Oculus HMD at 99.99 ms (around 1 ms), suggesting that the Oculus HMD became more stable when the stimulus duration was over 100 ms. There were no differences between the Python 2 and 3 environments overall.

General discussion

This study systematically evaluated the accuracy and precision of visual, auditory, and audio-visual stimulus presentation in VR controlled by Python environments. The results clearly showed that there is a stable time lag and jitter for each stimulus. The time lag for the visual stimulus was approximately 18 ms, whereas for the auditory stimulus it was 37 ms and 58 ms on the HTC Vive Pro and Oculus Rift, respectively. These time lags indicate that there is a considerable 20-ms SOA for the HTC HMD and a 40-ms SOA for the Oculus HMD when the auditory and visual stimuli are presented simultaneously, even when complex stimuli with high rendering workloads are used. For the auditory stimulus, a duration of at least 30 ms (three frames) was required to present it without sound distortion. Importantly, these results were consistent in both Python 2 and 3, indicating no differences between those environments for stimulus presentation.

In Experiment 1, the visual stimulus was presented 18 ms after the TTL trigger in every condition with high precision (onset jitter was approximately under 1 ms). This result is consistent with a previous study that evaluated the HTC Vive Pro HMD using a Python API (mean time lag = 18.35 ms, SD = 0.96 ms) (Le Chénéchal & Chatel-Goldman, 2018). In that study, the time lag was tested directly via native OpenVR wrapped in a Python library to establish a low-level (minimal rendering overhead) Python API, whereas our study used Vizard, a large Python application for VR experiments, on both Python 2 and 3. Thus, the identical 18-ms time lags are caused by the hardware (the VR HMD) and correspond to the application (or motion)-to-photon latency of the VR HMD (Choi et al., 2018; Le Chénéchal & Chatel-Goldman, 2018). Although this time lag does not match the extremely high accuracy of a CRT, the jitter (standard deviation) of the stimulus onsets is stable and low (under 1 ms). Given the high precision of the time lag, researchers can adjust for it by presenting visual stimuli a few frames faster, as sketched below.
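A compensation for the display lag could look roughly as follows. This is a sketch under the assumption of a 90-Hz refresh rate and the ~18-ms (about two-frame) lag measured here; WAIT_FRAMES is a hypothetical trial parameter, and the lag constant should be re-measured on one's own hardware.

```python
# Sketch: compensating the constant ~18-ms display lag by issuing the
# visual change two frames early.
import viz
import viztask

LAG_FRAMES = 2    # ~18 ms at 90 Hz, measured in Experiments 1, 4A, and 4B
WAIT_FRAMES = 45  # hypothetical nominal wait before stimulus onset (500 ms)

def trial():
    yield viztask.waitFrame(WAIT_FRAMES - LAG_FRAMES)  # shortened wait
    viz.clearcolor(viz.WHITE)  # photons arrive ~2 frames later, i.e.,
                               # near the originally intended onset time

viztask.schedule(trial())
```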
In VR experiments controlled by Python at a 90-Hz refresh rate, the 18-ms time lag is almost equivalent to two frames (22.22 ms). An adjustment of two frames thus improves the accuracy of visual stimulus presentation (to under 5 ms).

While the accuracy and precision of the visual stimuli were consistently stable across every environment, the duration of the visual stimulus depends on the HMD system. In the Oculus HMD, the stimulus duration was one frame shorter than the expected duration, and there was no presentation accuracy at all when the duration was a single frame (11.11 ms). Because of the Asynchronous Space Warp system on the Oculus, the frame rate is forced to change and a black blank is inserted automatically into every flash to reduce artifacts while VR is running, even if the corresponding settings are turned off. This causes the frame rate to drop from 90 to 45 Hz, and the actual flash of the display (the white blank) is shortened. In VR experiments that do not require millisecond control of the stimulus, this may help the task run on a low-spec graphics card; however, it must be noted that motion smoothing on VR HMDs interferes with frame-based millisecond control in psychological and neuroscientific experiments. To establish accurate frame control, the Oculus Rift Development Kit 2, an official SDK for Oculus devices, can be used. For HTC HMDs, motion smoothing is controlled by SteamVR. Because the duration of the visual stimulus on the HTC Vive Pro is quite accurate and precise frame by frame, it may be more suitable for scientific experiments.

In Experiment 2, the accuracy and precision of the auditory stimulus were lower than those of the visual stimulus. At the 11.11- and 22.22-ms durations, the sound was distorted and not presented for the expected duration on both HMDs across the Python 2 and 3 environments, suggesting that the auditory stimulus is uncontrollable below a 20-ms duration on modern VR HMDs. As the auditory stimulus in VR environments is presented to participants via the headphones integrated into the HMD, connected by a DisplayPort or HDMI cable, audio interfaces cannot be used to improve the sound quality, as is usually done in traditional audio hardware setups. Additionally, unlike the frame flips of a graphics card, which researchers can detect for visual stimuli, the libraries for auditory stimuli do not report the progress of sound playback (Bridges et al., 2020). For auditory stimulus presentation in 2D environments controlled by Python experiment tools, there is an 18-ms time lag with 1-3 ms of jitter (Krause & Lindemann, 2013; Tachibana & Niikuni, 2017). Although previous studies measured auditory timing performance directly from the output of the sound card to test the native lag without physical headphones or speakers, the time lag on VR HMDs may still be worse, because the latency of recent audio interfaces and headphones is low and the sound buffer size can be controlled. Above 33.33 ms, the auditory stimulus was presented with the expected number and duration in all environments. This clearly indicates that 30 ms is the minimum duration required for auditory stimuli without sound distortion on modern VR HMDs. However, there is an approximately 30-60-ms time lag even when the stimulus duration is longer than 30 ms, and the time lag varies across HMDs: 37 ms for the HTC Vive Pro and 57 ms for the Oculus Rift. These timing delays should be considered for each device, and the auditory stimulus should be presented a few frames faster than the visual stimulus to reduce the time lag to under 4 ms, as in the following sketch.
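The sketch below illustrates this adjustment for the HTC Vive Pro, where audio lags vision by roughly 19 ms; the two-frame lead is derived from the values reported in this study and would need re-tuning for other HMDs such as the Oculus Rift (~39-ms lag). The tone file is assumed preloaded as in the earlier sketches.

```python
# Sketch: shrinking the audio-visual SOA on the HTC Vive Pro by starting
# the sound two frames (22.22 ms at 90 Hz) before the visual change,
# leaving an SOA of roughly 3 ms. Re-tune the lead per HMD.
import viz
import viztask

AUDIO_LEAD_FRAMES = 2  # ~22 ms at 90 Hz, tuned for the HTC Vive Pro

def av_event():
    viz.playSound('sine.wav')  # start the audio first (preloaded earlier)
    yield viztask.waitFrame(AUDIO_LEAD_FRAMES)
    viz.clearcolor(viz.WHITE)  # then the visual change

viztask.schedule(av_event())
```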
Moreover, the jitter of the duration at 33.33 and 44.44 ms was slightly larger (over 4 ms) than at 99.99 ms (under 4 ms). This suggests that the accuracy and precision of auditory stimuli on VR HMDs would be more stable above a duration of 100 ms, although a jitter of less than 5 ms may be sufficient for general experiments. If stricter millisecond control of the auditory stimulus by Python is required, the PTB library in PsychoPy 3 can be helpful. This sound library appears to have the lowest time lag and jitter (a 5-ms time lag with 1-ms jitter) for presenting auditory stimuli under Python experimental control (https://www.psychopy.org/api/sound.html). Since it works only in Python 3 environments, Vizard 7 (Python 3) may be required and suitable for presenting more accurate audio stimuli in VR environments.

It is obvious from Experiment 3 that the accuracy and precision of the auditory and visual stimuli do not vary even when the stimuli are presented simultaneously, on both Python 2 and 3, corresponding to Experiments 1 and 2, where the visual and auditory stimuli were tested independently. The time lag between the auditory and visual stimuli also depends on the HMD: 19 ms on the HTC HMD and 39 ms on the Oculus HMD. Interestingly, the jitter was 3 ms. Previous studies on timing accuracy and precision showed that the time lag between auditory and visual stimuli was less than 10 ms with 1-ms jitter in standard laboratory 2D environments with a low-latency LCD monitor, whereas it was over 50 ms with 3-5-ms jitter in web-based environments controlled by PsychoPy on a Windows 10 operating system (Bridges et al., 2020). Compared with laboratory and web-based studies in Python environments, modern VR HMDs thus offer intermediate accuracy and precision for audio-visual stimulus presentation.

In addition to Experiments 1-3, where the timing accuracy and precision were measured by well-established methods, we expanded the evaluation to complex stimuli in VR environments in Experiments 4-6. As a whole, the results were consistent with Experiments 1-3, clearly showing that the time lags in stimulus presentation are stable and constant even when the stimuli carry a high rendering workload in VR environments.

In Experiments 4A and 4B, the accuracy and precision were not affected by the complex visual stimuli. Generally, gray-to-gray transitions on LCDs incur additional time lags because the rise time to the peak luminance (i.e., the 90% gray level) and the fall time to the low luminance (i.e., the 10% gray level) take longer than in black-to-white transitions (Boher et al., 2007). In the present study, the time lags of the gray-to-gray and black-to-white transitions did not differ (i.e., 18 ms), and the stimulus duration and its jitter (less than 1 ms) were also the same as in the black-to-white transitions. As the OLED displays used in modern VR HMDs have a faster response time than LCDs, the time lags for presenting visual stimuli in gray-to-gray transitions may vary little, suggesting that millisecond accuracy and precision are achieved. This is supported by the results of Experiment 4B.
While the VR scene in Experiment 4B was a complex visual stimulus with a high rendering workload, in which the luminance levels of the pixels on the HMDs were not uniform like the gray screens due to the various RGB values of the textures in the scene, the accuracy and precision did not vary (the same as in the black-to-white and gray-to-gray transitions). At the 99.99-ms duration, the jitter on both HMDs (approximately 1 ms on the HTC, 3 ms on the Oculus) was slightly larger than at the shorter durations (less than 1 ms). If the stimulus duration is longer (over 100 ms), the fall time from the various luminance levels to the uniform black blank (the lowest luminance level) on each pixel may vary slightly, depending on the VR HMD, although the average duration is accurate.

The results of Experiment 5 also show that the accuracy and precision do not vary even when a complex auditory stimulus, which is more typical of the stimuli used in VR experiments, is presented on VR HMDs. Constant time lags of 38 or 58 ms with 3-ms jitter occur, depending on the VR HMD. Consistent with the results of Experiment 2 (pure tone), the complex sound was also distorted at the short durations of 11.11 and 22.22 ms, indicating that a duration of at least 30 ms is required for accurate stimulus presentation in general VR experiments. Besides the time lags, at the 99.99-ms duration the jitters of the HTC and Oculus HMDs were remarkably improved (under 4 ms), consistent with the results of Experiment 2, confirming that a 100-ms duration is sufficient for general VR experiments using realistic BGM or sound effects.

Importantly, as shown in Experiments 6A and 6B, the accuracy and precision of audio-visual presentation of complex stimuli are consistent with those of the simple stimuli in Experiment 3. These results confirm that the SOA between auditory and visual stimuli in VR experiments controlled by Python environments is constant but depends strongly on the VR device (19 ms on the HTC HMD and 39 ms on the Oculus HMD, with 3-ms jitter). As described above, researchers can adjust this SOA by presenting the stimuli a few frames faster in the program. For instance, in VR experiments with the HTC HMD, the auditory stimulus should be presented two frames (22 ms) faster than the visual stimulus, leaving an SOA of only 3 ms between the auditory and visual stimuli. This small SOA is quite sufficient for general VR experiments as well as standard 2D experiments.

Without measurement and validation before the experiments, an unsuitable SOA among visual, auditory, and trigger stimuli for external devices can be critical for eye tracking and multisensory research in VR. Eye tracking technology is commonly used in VR research to examine how visual cognition works in 3D environments. For instance, when the experimental task involves the onset or offset of eye movements synchronized with stimuli in a trial sequence, the time lag between the actual onset of the eye movements and the triggered time (time stamp) of the eye tracker must be confirmed, as a previous study on the temporal and spatial quality of HMD eye tracking suggests that the accuracy and precision also depend on the eye tracker device in the HMD (Lohr et al., 2019). In multisensory research in VR, multiple types of information, such as auditory, visual, and haptic stimuli, can be used simultaneously (Burdea et al., 1996; Wilson & Soranzo, 2015).
Especially in neuro-rehabilitation studies, haptic devices such as VR gloves are potential keys to providing kinesthetic or tactile stimulation to participants in VR (Demain et al., 2013). When haptic feedback synchronized with visual and auditory information is used for rehabilitation, the time lags among the stimuli should be validated as much as possible. The time lag of the auditory stimulus relative to the visual stimulus can be used to adjust the accuracy of haptic devices, because haptic feedback is generated by frequency and amplitude in the same manner as an auditory stimulus if the haptic stimulus is controlled by vibration interfaces. However, when researchers perform experiments requiring strict millisecond control of the stimuli, the time lags must also be adjusted before the experiments by testing their own VR devices, to remove unsuitable time and timing gaps among the multiple stimuli.

Limitations

The limitations of this study concern the operating systems, the gray-to-gray transition levels, and participants' response times. We used the Vizard software for stimulus presentation in VR to systematically investigate the accuracy and precision of stimulus generation on modern VR HMDs in Python 2 and 3 environments. However, Vizard only supports Windows operating systems. Previous studies in non-VR environments have indicated that accuracy and precision vary depending on the operating system on which the software runs (Bridges et al., 2020; Krause & Lindemann, 2013; Tachibana & Niikuni, 2017). Moreover, the version of the Apple operating system (current macOS vs. older OS X) seems to cause critical differences in stimulus presentation (Bridges et al., 2020). Stimulus presentation should be tested in VR experiments across all common operating systems and versions using other VR software with cross-platform support, although it is difficult to evaluate this directly, as there are limited options for graphics cards that can be used in Apple computers.

In Experiments 4A and 6A, we measured the accuracy and precision using only 10-90% gray-to-gray transitions, which are the standard gray levels for such tests (Boher et al., 2007; Liang & Badano, 2006). A previous study has shown that other gray-level transitions, such as 30-70% levels, have a different temporal resolution for the presentation of visual stimuli on gaming LCDs (Poth et al., 2018). Due to the specific limitation of the sensor threshold on the BBTK, the luminance changes between 30% and 70% on the VR HMDs could not be measured as precisely. The BBTK, which we used as the evaluation tool in the present study, is a special device with 36 channels (6-kHz sampling rate) used to test the onset and offset timing accuracy and precision of stimulus synchronization with TTL triggers, enabling researchers to test various stimulus presentations simultaneously over a long period (1000 or more trials is a common setting in human behavior studies). In contrast, a typical oscilloscope has 2-4 channels (a 100-MHz high sampling rate) and can be used only for a very short measurement period due to its low internal memory capacity (https://www.blackboxtoolkit.com/faq.html).
Although the BBTK is sufficient to measure the general timing accuracy and precision of stimulus presentation, as in previous studies (Bridges et al., 2020), more specific measurements of gray-to-gray luminance changes on modern VR HMDs (i.e., OLED displays) with an oscilloscope are required to examine how specific gray-to-gray transitions affect timing accuracy on OLED compared with LCD for psychological and neuroscientific research.

The accuracy and precision of response times should also be tested using VR response devices. Participants in VR experiments may respond in several ways, such as with keyboards, response pads, joysticks, VR controllers, and VR gloves. While the response pads and gaming keyboards commonly used in behavioral experiments have high accuracy and precision for collecting participants' data, the accuracy and precision of VR response devices are not well known. In particular, recent VR gloves such as Manus, which provide haptic feedback, can be key to dynamic response methods in VR studies. In future studies, a comparison between traditional and VR response devices should be conducted to reveal how reliable VR response devices are for collecting participants' real-time data.

Conclusions

In summary, although there are time lags in visual, auditory, and audio-visual stimulus presentation on modern VR HMDs, the accuracy and precision are stable across various stimulus types, and millisecond time and timing control can be achieved with adjustments of a few frames. While the visual stimulus has a constant 18-ms time lag with less than 1 ms of jitter, the auditory stimulus has a time lag of 37 ms on the HTC Vive Pro and 58 ms on the Oculus Rift with 4-ms jitter, depending on the VR HMD. These accuracies and precisions are robustly equal in the Python 2 and 3 environments for VR experiments, enabling a more reliable experimental setup for psychological and neuroscientific research using various Python software. This study is also beneficial for researchers and developers who apply VR technologies to real-time communication in which a number of people (and VR avatars) interact through VR (e.g., VR chat or meetings), as well as to studies on rehabilitation tools that require high timing accuracy for recording biological data, by improving unsuitable latency.