Assessing assumptions about boundaries with critical systems heuristics

By Werner Ulrich


How can those participating in research effectively reflect on their own assumptions about where they set boundaries around problems, solutions, measures of success, knowledge claims, and other aspects of research? These aspects are inevitably partial in the dual sense of representing a part rather than the whole of the total universe of conceivable considerations, and of serving some parties better than others.

How can examination of assumptions about boundaries be employed as an emancipatory practice to assess the assumptions of others and to point to better ways of serving the disenfranchised and marginalised?

I developed critical systems heuristics in the 1980s to support such boundary critique. It aims to enhance the ‘critical’ (reflective) competence of researchers, decision makers and other stakeholders, as well as of ordinary people. It provides ‘heuristic’ support in the form of questions and argumentation tools that make a difference in practice.

Critical systems heuristics poses questions about four basic boundary issues:

  • Basis of motivation – Where does a sense of purposefulness and value come from?
  • Basis of power – Who is in control of what is going on and is needed for success?
  • Basis of knowledge – What experience and expertise support the claim?
  • Basis of legitimacy – Where does legitimacy lie?

These four issues are essential for reflective practice in most (if not all) situations of problem solving, decision-making, or professional intervention, including in research on complex societal and environmental problems. The questions which form the heuristics are asked in the ‘is’ and ‘ought’ modes, to address facts and values, respectively. They are an aid to both self-reflection and reflection on the assumptions and practices of others.

SOURCES OF MOTIVATION

  1. Who is (ought to be) the client or beneficiary? That is, whose interests are (should be) served?
  2. What is (ought to be) the purpose? That is, what are (should be) the consequences?
  3. What is (ought to be) the measure of improvement or measure of success? That is, how can (should) we determine that the consequences, taken together, constitute an improvement?

SOURCES OF POWER

  1. Who is (ought to be) the decision-maker? That is, who is (should be) in a position to change the measure of improvement?
  2. What resources and other conditions of success are (ought to be) controlled by the decision-maker? That is, what conditions of success can (should) those involved control?
  3. What conditions of success are (ought to be) part of the decision environment? That is, what conditions can (should) the decision-maker not control (e.g., from the viewpoint of those not involved)?

SOURCES OF KNOWLEDGE

  1. Who is (ought to be) considered a professional or further expert? That is, who is (should be) involved as competent provider of experience and expertise?
  2. What kind of expertise is (ought to be) consulted? That is, what counts (should count) as relevant knowledge?
  3. What or who is (ought to be) assumed to be the guarantor of success? That is, where do (should) those involved seek some guarantee that improvement will be achieved – for example, consensus among experts, the involvement of stakeholders, the experience and intuition of those involved, political support?

SOURCES OF LEGITIMATION

  1. Who is (ought to be) witness to the interests of those affected but not involved? That is, who is (should be) treated as a legitimate stakeholder, and who argues (should argue) the case of those stakeholders who cannot speak for themselves, including future generations and non-human nature?
  2. What secures (ought to secure) the emancipation of those affected from the premises and promises of those involved? That is, where does (should) legitimacy lie?
  3. What worldview is (ought to be) determining? That is, what different visions of ‘improvement’ are (should be) considered, and how are they (should they be) reconciled?
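The twelve questions reduce to a table of boundary categories (four sources, three categories each, two modes) that can serve as the shortest possible reminder. For readers who like to keep such a checklist at hand, here is a minimal sketch of that table as a data structure; the grouping follows the four sources above, but the identifiers and short category labels are my own paraphrases, not part of CSH:

```python
# A sketch (my own, not part of CSH) of the twelve boundary categories as a
# checklist: 4 sources x 3 categories, each asked in the 'is' and 'ought' mode.

CSH_CATEGORIES = {
    "motivation": ["client or beneficiary", "purpose",
                   "measure of improvement"],
    "power": ["decision-maker", "resources and conditions of success",
              "decision environment"],
    "knowledge": ["professional or expert", "relevant expertise",
                  "guarantor of success"],
    "legitimacy": ["witness for the affected", "emancipation of the affected",
                   "determining worldview"],
}

def checklist():
    """Yield one prompt per (source, category, mode) combination."""
    for source, categories in CSH_CATEGORIES.items():
        for category in categories:
            for mode in ("is", "ought"):
                yield source, category, mode

prompts = list(checklist())  # 4 sources x 3 categories x 2 modes = 24 prompts
```

Iterating over `checklist()` walks the full table, pairing each category with both modes, which mirrors the advice below to ask every question in the ‘is’ and the ‘ought’ mode.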

As a rule, it makes sense to ask each question both in the ‘is’ and in the ‘ought’ mode. The ‘ought’ answers always help to clarify the standpoint from which a person is assessing a situation or related claim in the ‘is’ mode.

Furthermore, differences between ‘is’ and ‘ought’ answers are frequent, not to say the rule. As they point to unresolved boundary issues, they can drive the process of unfolding the selectivity of a claim.

However, the specific way in which the ‘is’ and the ‘ought’ mode are combined – say, in what order they are employed – depends on the particular application of boundary critique which is of interest.

It would be a mistake to conceive of boundary critique as a kind of step-by-step technique for ‘boundary setting’, that is, as a method to determine ‘right’ and ‘wrong’ boundary judgments and to settle conflicts. No kind of methodology could claim to know the ‘right’ answers to boundary issues.

What boundary critique can achieve is to help the parties in appreciating their own boundary assumptions and those of others, so that they can then articulate any concerns in a cogent way. The decision on what boundary judgments should underpin practical action is then a question of legitimacy rather than of validity. Once the selectivity of claims has become transparent, democratically institutionalised processes of decision-making can work in a meaningful way.

What has your experience been in assessing assumptions about boundaries in your own research? Do you have additional tips? Are there other ways of assessing boundaries that you have found to be helpful?

To find out more:

Ulrich, W. (2005). A brief introduction to critical systems heuristics (CSH). ECOSENSUS project, The Open University, Milton Keynes, UK. (Online): https://wulrich.com/downloads/ulrich_2005f.pdf (PDF 144KB).
Much of the text in this blog post is taken verbatim from this article. This article and my home page (https://wulrich.com/) provide references to additional work on critical systems heuristics.

Biography: Werner Ulrich PhD is retired Ancien professeur titulaire of the University of Fribourg (Faculty of Arts and Humanities) in Switzerland. He is a social scientist and practical philosopher with a particular interest in the philosophy and methodology of reflective professional practice and research. He is one of the originators of critical systems thinking.

22 thoughts on “Assessing assumptions about boundaries with critical systems heuristics”

  1. I studied the CSH framework and boundary judgement questions for my MSc research project in Systems Thinking at the OU. What I found was that the individuals I interviewed appreciated the essence of the questions, but the questions themselves are wordy and inaccessible to those who need to question experts.

    It was a common thread of agreement that these types of questions and critical thinking should be interwoven throughout educational levels, as they would lead us to better discourse and citizenship, but the language of the questions renders them difficult to apply.

    CSH is extremely beneficial to my thinking and I’ve loved working and studying the concepts.

    Reply
    • Hi Michelle, thanks for your comments. I am not quite sure what you mean by your characterization of the boundary questions as ‘wordy’. They are in my view exactly as long and ‘wordy’ as it takes to define the intent of the specific boundary category addressed by each question; this is the purpose of the added “That is…” part of each question. Once this intent is understood, it is in my experience sufficient for most people to rely on the short version of the questions (leaving out the “That is…” part) or even just to use the table of the boundary categories as the shortest possible reminder of the 12 questions (compare Ulrich, 2000, p. 256 / Fig. 2 and p. 258 / Table II).

      The vision that I pursue with CSH consists in a ‘critically-heuristic training for citizens’: no-one should leave school without having acquired a basic understanding and practice of boundary critique. The same holds for professional (including university) education: no professionals should complete their training (or studies) without an advanced understanding of the idea of boundary critique.

      I have introduced CSH to students from many different fields, levels, and origins, as well as to mature professionals and adults, and have time and again been encouraged by the enthusiasm and speed with which they grasped the spirit and power of the boundary questions. If you try and practice boundary critique for yourself, you will soon experience its power and thus also be able to convey to others what it means to you and how you employ it. Interviewing students is perhaps not the best way to achieve that aim.

      Reply
    • Hello again systemswiki (Gene), thanks for your challenging request to condense the essence of CSH into a single sentence. Despite some doubts, I am willing to try, although not without articulating a few reservations before and some minimal explanations after that single sentence.

      Before: I see a danger that such a single sentence may and probably will be misunderstood as my „definition“ of CSH. Definitions are certainly useful for clear communication, but perhaps less so for expressing the essence of an intellectual journey that spans several decades and involves multiple philosophical traditions of thought. Definitions are dead rather than alive, they tend to „nail down“ our thinking; they say what something „is“ rather than what it might become; they thus limit what we make of an idea rather than encouraging us to unfold it. I am more interested in thoughts that are alive and allow growth – the life of ideas rather than their definition.

      It should then be clear that the following answer is NOT meant to be a definition or in any way a final summary of „what CSH is.“ Rather, it is a provisional attempt to capture the essence of CSH as I seek to understand and develop it – the core idea that at this stage informs its development:

      The essence of CSH is critical contextualization –
      a permanent effort to face the contextual nature of
      one’s views and valuations and to limit one’s claims accordingly.

      After: Reflecting about my intellectual journey during the past 45 years or so, I think this one sentence, if any, captures its main thrust. But then, you will ask, what „is“ critical contextualization? It is, I am tempted to say, the quintessence of studying a number of intellectual traditions that are concerned with the question of what it means to live up to claims of knowledge, rationality, improvement, good practice, competence, fairness, etc. How can we know (substantiate) what we believe or claim to know? How can we argue that our actions or proposals for action are rational and conducive to improvement, improvement of what kind and for whom? And so on. I turned to the ideas and theories of science, systems thinking, practical philosophy and moral reasoning, discourse theory, pragmatic thought, and even ancient Indian philosophy as found in the Upanishadic tradition, all of which are very rich traditions of thought offering many partial answers; but all these answers remain incomplete and partial in that there will always remain deficits of justification.

      The key to handling this situation of inevitable deficits of justification became for me the idea of a „critical turn“ (Ulrich, 2001, pp. 26-28). It says that the quality or value of our propositions and claims depends not on complete justifications but rather on the way we deal with the fact that complete justification is unavailable. „The rationality of applied inquiry and design is to be measured not by the (impossible) avoidance of justification deficits but by the degree to which it deals with such deficits in a transparent, self-critical, and self-limiting way.“ (Ulrich, 1993, p. 5, and 2001, p. 27)

      This is where the idea of „boundary critique“ moves into focus. If we want to develop a methodology for dealing systematically with justification deficits, we need to find some kind of leverage points that will basically remain the same. For me this leverage point is the „context“ we take to be relevant. Methodologically speaking, a „deficit“ of justification means that some relevant aspects of a situation of interest are not adequately included among the circumstances („facts“) and concerns („values“) taken into account. Taken together, the circumstances and concerns we do take into account make up a claim’s considered context. As a matter of principle, any deficit of justification can thus be understood to amount to an exclusion of relevant aspects. Even if we include an aspect in question but then fail to consider it adequately, we have in effect excluded it from the context taken to be relevant.

      The difficulty is, there is no overall or grandstand perspective from which we could claim to know what that relevant context ought to include and what not. For two reasons: first, because we can never claim to have a complete overall picture, and second, because asking for the correct or right context has us switch to an imperative rather than descriptive mode of speech. That is, worldviews and ethics are involved. So people will almost unavoidably talk past one another, because their interests and concerns are different. But what we can do is to change the context considered so as to bring in previously neglected circumstances and concerns. People then may still not agree, but at least may begin to understand why their „facts“ and „values“ differ – namely, because they are talking about different contexts. Or conversely, they begin to see how the contexts they consider differ from those of others, due to different assumptions of fact and value. The methodological leverage point that then emerges is to systematically alter the boundary judgments at work. Although we cannot claim completeness for the contexts we consider, we can always strive to expand them a little, as well as to narrow or refocus them, so as to improve people’s understanding of how boundary judgments condition their views and concerns.

      The methodological aim thus becomes to develop a discipline of observation, thought, and argumentation that helps us to systematically decontextualize (=expand or broaden) and recontextualize (=narrow or refocus) our claims – the essence of „critical contextualism“ as I propose to understand it. CSH is my attempt to ground such a discipline in a philosophical and methodological framework based in systems thinking, practical philosophy, discourse and argumentation theory, and pragmatic thought; and the methodological core principle that is emerging from this framework is what I call „boundary critique.“

      This is not the place to offer any further explanation, as doing so would go beyond the scope of a blog entry such as this. Let me conclude with three brief hints:

      (1) Interested readers may want to see two figures that illustrate the idea of critical contextualism as I just tried to sketch it with words only, given that the current blog format does not allow figures. Together with the surrounding text, the two figures might help readers to get a sense of what critical contextualization means and how it could change their habits and skills of thinking:

      https://wulrich.com/bimonthly_july2014.html
      (search Fig. 4: The critically contextualist cycle) and
      https://wulrich.com/bimonthly_november2016.html
      (search Fig. 12: The moral idea in context).

      (2) My main current interest is in developing what I call „critical pragmatism,“ compare

      https://wulrich.com/bimonthly_march2016.html or
      https://wulrich.com/downloads/bimonthly_march2016.pdf
      (html and pdf versions).

      Think of critical pragmatism as „critical contextualism applied to pragmatic thought“ and you’ll get the basic idea.

      (3) It often helps to capture the main idea of difficult, because still unfamiliar, writings by getting a sense of where the author comes from and what her or his vision is. You can find a rather readable account (I hope) in my essay on „The idea of boundary critique“ (Ulrich, 2018, see the main section titled „An interview with myself“):

      https://wulrich.com/bimonthly_march2018.html or
      https://wulrich.com/downloads/bimonthly_march2018.pdf
      (html and pdf versions).

      References

      Ulrich, W. (1993). Some difficulties of ecological thinking, considered from a critical systems perspective: a plea for critical holism. Systems Practice, 6, No. 6, pp. 583-611. Open-access and paginated postpublication version at https://wulrich.com/downloads/ulrich_1993.pdf.

      Ulrich, W. (2001). The quest for competence in systemic research and practice. Systems Research and Behavioral Science, 18, No. 1, pp. 3-28. Open-access and paginated prepublication version at https://wulrich.com/downloads/ulrich_2001a.pdf.

      Ulrich, W. (2016). Philosophy for professionals: towards critical pragmatism. Ulrich’s Bimonthly, March-April 2016 (earlier version in: Journal of the Operational Research Society, 58, No. 8, 2007, pp. 1109-1113). https://wulrich.com/bimonthly_march2016.html and https://wulrich.com/downloads/bimonthly_march2016.pdf (html and pdf versions).

      Ulrich, W. (2018). The idea of boundary critique. https://wulrich.com/bimonthly_march2018.html and https://wulrich.com/downloads/bimonthly_march2018.pdf (html and pdf versions).

      Reply
      • Werner, explanation not required. I thought the sentence was amazing! “The essence of CSH is critical contextualization – a permanent effort to face the contextual nature of one’s views and valuations and to limit one’s claims accordingly.” I suppose I might have changed the ending to “and act accordingly within the contextualization.” We all seem to have our preferences. Thanks ever so much!!!

        Reply
  2. Many thanks for the refresher Werner. I hadn’t thought about CSH for quite some time and I really like the manner in which you condensed the essence of the longer paper. Getting people to read more than a tweet today is often difficult. I continue to find one can define the boundary anywhere one likes, though what’s addressable depends on the extent of the engaged stakeholders’ authority, power, and influence. Maybe this very short video best describes my thoughts… https://www.youtube.com/watch?v=Mg-fnNHKjdc

    Reply
    • Hi systemswiki / Gene, thank you for your short comment on what you fittingly call a ‘refresher’. As we get older and our memory shorter, we learn to appreciate this kind of support, ha!

      The specific issue you deal with is indeed a core issue of my work as well: how can we get relevant stakeholders to act responsibly when they are simply not interested in participating in responsible action because, for them, everything is apparently fine so long as responsible action costs them more than doing nothing does?

      Since my work on critical systems thinking / critical systems heuristics (CSH) is a discursive (argumentative) approach rather than one of political or managerial agency, or however you understand your approach, I frame the issue of ‘leverage’ in discursive (argumentative) terms such as ‘boundary critique’, ‘emancipatory use of boundary critique’, and achieving ‘symmetry of critical competence’, rather than in non-argumentative terms of influencing, exerting pressure, or making unresponsiveness costly. The background vision for me is a society in which problems of collective concern are identified and resolved on the basis of public discourse and democratic legitimation (a vision of deliberative democracy) – not free of pressures of power, but politically controlled through democratically institutionalized discourses and opportunities for conflict resolution. For this reason, I focus on argumentative rather than non-argumentative means, without implying the latter are not needed as well. I am well aware they are much needed, unfortunately, but that does not alter the fact that there must be some sources of competence and legitimation in using non-argumentative means.

      Our two approaches meet where you use the means of publicizing the irresponsibility in question and demonstrating the resulting damage to other stakeholders or society at large, so that there will be a cost for inaction and irresponsibility. I agree that costs are the only argument which such irresponsible agents tend to hear and take into account, provided the cost is high and painful enough. It’s a kind of public shaming, not nice but effective, which replaces cooperative action based on mutual understanding and participative discourse, as would be preferable in a model of deliberative democracy.

      As you say, there is always leverage, in this case a kind of public shaming. Quite similarly, boundary critique leads to arguments which are apt to disclose and challenge the defensive pseudo-arguments of irresponsible agents (‘we are so far not aware of any damage’, ‘we are not doing anything illegal’, etc.). The leverage in this case is what I call the critical use of boundary judgments and, in a somewhat stronger form for situations where one party switches to non-argumentative means, ’emancipatory boundary critique’. It achieves basically the same effect: it makes it publicly apparent that the delimitation of the relevant context (or ‘reference system’) on which those defensive arguments or non-argumentative stances rely is done opportunistically, so that the ‘facts’ and ‘values’ considered suit the irresponsible agent’s purpose.

      Thus the ‘facts’ and ‘values’ such agents present as relevant will conveniently exclude circumstances and concerns that would not suit the irresponsible agent’s purposes so nicely.

      So long as this is not clear to all the involved or concerned stakeholders, the irresponsible agent will as a rule pay a lot of ‘experts’ and lawyers to argue the problems away and maintain an appearance of superior expertise as compared to the critics. However, experience shows that the mask of expertise of such opportunistic helpers will slip quickly once it becomes obvious that they rely on subjective and value-laden boundary assumptions concerning the relevant context, assumptions by which they reduce the scope of responsibility but which they are quite unable to justify by reference to their supposedly superior expertise. They argue, quite simply, outside the boundaries of their competence and, at the same time, of responsible action. Thus the opportunistic character (‘selectivity’ and ‘partiality’ in CSH) of such references to expertise becomes apparent and people sense that justifying boundary judgments is not primarily a question of expertise but rather one of concern and responsibility. In other words, the expertise of those involved has no in-principle advantage over the concerns and considerations of those effectively or potentially affected, be they ordinary citizens or other stakeholders and professionals, the media, and so on.

      In short, my interest is in the kind of leverage that argumentation can produce through the critical employment of boundary judgments. Argumentation rather than non-argumentative kinds of influencing or pressuring irresponsible agents should come first in an open and enlightened society, I think. That is why it is my priority and main methodological interest. But I agree with you that in all cases in which argumentation does not work and is replaced by power and deception or other non-argumentative means, the second-best option is a ‘leverage’ approach of the kind you describe, assuming it rests on an adequate basis of underpinning political discourse and democratic legitimation. Ultimately, I dare say, there is no alternative to discursive (i.e., cooperative and argumentative) efforts at reaching mutual understanding, but we must nevertheless be prepared to deal with those who do not hear arguments unless they come with a cost or in the form of other pressures.

      Thank you for this interesting addition to my ‘refresher’, and kind regards.

      Reply
  3. The article is a very good elaboration of the essence of Ulrich’s boundary critique as a qualitative alternative to the measurement-oriented approach to systems – something that isn’t readily gleaned from ‘theoretical’ papers, which tend to focus on establishing a counterpoint to ‘sweeping in’, a trap of inexhaustibility behind indiscriminate comprehensiveness and the inevitable dilution of what matters. Having these patterns of examination not only guides core meanings but can be instrumental in easing the adoption of the approach into practice. Besides the key themes of motivation, power, knowledge and legitimacy, one of the core insights for practice is the philosophical principle of examining potential gaps between what ‘is’ and what ‘ought’ to be, e.g. from an ethical or moral perspective. The utility of this line of questioning can’t be overstated, especially when the causes of efficacy or efficiency are opaque or in doubt.

    Similarly, given its complementarity, CATWOE can also provide instructive conceptual formalisms in this mode of enquiry: Customers – beneficiaries or victims of the transformation; Actors – people involved in the project; Transformation – intended or anticipated change; Worldview – fundamental assumptions about the product; Owner – commissioning (and eventually terminating) party; Environment – the setting in which the product will be developed and operated. Aspects like ‘worldview’ or ‘owner’ lend themselves to many reflective functions at different stages; these, along with my pet attribute, ‘credibility’, can all serve to inform various criteria of boundary critique heuristics and appraisal of the ‘system under analysis’ or the broader context, e.g. the people dimension.
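The CATWOE elements listed above lend themselves to a simple record type. Here is a minimal sketch following the glosses given in this comment; the field names and example values are my own, and the optional ‘credibility’ field reflects this comment’s addition rather than Checkland’s formula:

```python
from dataclasses import dataclass

# A sketch of CATWOE as a record type. Field comments follow the glosses in
# the comment above; 'credibility' is the commenter's extra attribute, not
# part of Checkland's CATWOE.

@dataclass
class CATWOE:
    customers: str       # beneficiaries or victims of the transformation
    actors: str          # people involved in the project
    transformation: str  # intended or anticipated change
    worldview: str       # fundamental assumptions behind the transformation
    owner: str           # commissioning (and eventually terminating) party
    environment: str     # setting in which the product is developed and operated
    credibility: str = ""  # optional extra attribute (commenter's addition)

# A hypothetical illustration (values invented for the example):
example = CATWOE(
    customers="patients on the waiting list",
    actors="clinic staff",
    transformation="shorter referral-to-treatment time",
    worldview="timely treatment improves outcomes",
    owner="regional health authority",
    environment="fixed budget and staffing rules",
)
```

Filling in such a record for a ‘system under analysis’ makes each boundary assumption explicit, which is where it can then feed the CSH questions, e.g. comparing the stated `owner` with who ought to be the decision-maker.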

    Reply
    • Dear Piotr, thanks for your comment. If I understand you correctly, the issue you raise with your reference to a ‚trap’ concerns a fundamental quandary of all practical reasoning, that is, reasoning aimed at understanding and improving real-world situations of concern: How should it take into account everything possibly relevant (a basic demand of all cogent reasoning) without ending up in a hopeless quest for comprehensiveness that in the end risks undermining the very aim of practical reasoning, of enabling us to respond „with reason“ to situations of concern and this within given constraints of time, knowledge, and resources? How, to put it more succinctly, can we avoid the trap of arbitrariness in including vs. excluding situational considerations from the context we take to be relevant (e.g., on the sole ground that doing so happens to suit our interests and competences, or eases given pressures of time and cost)?

      In this present discussion stream a methodologically related key consideration has emerged, concerning the judgmental nature of all practical reasoning and hence, the role of ‚evaluative thinking’ in it. What constitutes the specific quality that well-understood and ‚professional’ evaluative thinking can bring to the party? That is, what is the difference we might expect evaluation practiced as research or as a profession to make? If we had the answer to this latter question, perhaps errors in handling the former (how to avoid the trap of arbitrariness in the selection of circumstances to be considered) could be more easily avoided. We must thus ask what contribution high-quality evaluative thinking might make toward high-quality management of boundary questions.

      My personal bias is that if there is any specific evaluative competence that might live up to the challenge, it is a generalist’s skill in handling boundary judgments. Bring in all the specialists available, they will not be able to tell you what is the one ‚right‘ boundary setting that would secure improvement to the problem situation at hand, nor what would constitute such ‚improvement‘. A different kind of competence is required, to which I refer here as a generalist’s kind of competence.

      Generalists are not supposed to bring in any specialist skills and interests. With one exception: we might expect them to be specialists in boundary critique, as it were. This is a very special kind of specialization indeed, one that requires a good level of general education and an open mind but also needs to be embedded in what Thomas Schwandt (2015) has fittingly described as a broad cultivation of the evaluative mind. This is so, I suspect, because boundary judgment is a methodologically deep-seated kind of cognitive competence. It is presupposed in everything we observe, say, and do, whether as researchers and professionals or as ordinary citizens. As I see it, it includes some ‚scientific’ qualities such as deferred judgment, openness to the views of others, and abstraction tolerance, along with ‚ethical‘ qualities such as a deep sense of impartiality, a discipline of relentless questioning of one’s own assumptions, and so on.

      In a slightly wider sense we might also include in our notion of personal cultivation the four themes (as you call them) of motivation, power, knowledge, and legitimacy. Indeed, they are essential for identifying boundary judgments and their implications, so much so that I derived my twelve boundary categories and the corresponding boundary questions from these four deep-seated sources of human intentionality (compare Ulrich, 1983, pp. 240-257 and, perhaps less demanding for beginners in boundary critique, Ulrich, 2014, pp. 19-44).

      Essential as such a ‚cultivated’ horizon of evaluative thinking is for boundary critique, it should not have us avoid the more down-to-earth methodological questions involved, say, of argumentative rigor, methodological discipline, standards of good practice, and requirements of systematic training and professionalization, to name only a few. We will therefore be well-advised to regard boundary critique as a practice that offers itself for some systematic teaching and training, which does not mean we should worry too much about its technicalities. I see them merely as initial aids; later on, all that is needed might be a copy of the checklist of boundary questions (as found, e.g., in Ulrich, 2000, p. 258) on your smartphone. To have this list always available is useful because each of the questions contains in its second part a definition of the respective boundary category.

      For those interested, here is a list of some training routines that I find useful:

      1. cultivating the habit of thinking in terms of ‚is’ and ‚ought’;
      2. paying attention to the ways people talk past one another due to different boundary judgments (‚listening on the bus‘);
      3. developing the skill of systemic triangulation (see the figure in Ulrich, 2000, p. 252 or p. 6 in the prepubl. version; 2017, p. 7);
      4. exercising the ‚polemical‘ (or emancipatory) employment of boundary judgments with a view to creating some ‚symmetry of critical competence’ in situations of conflict and unequal power (e.g., Ulrich, 1987, p. 281f; 1993, pp. 599-605);
      5. using boundary judgments for different aims such as ‚ideal mapping‘, ‚evaluation‘, ‚reframing‘ or ‚challenge‘ (Ulrich, 2005, p. 12);
      6. systematically altering one’s perspective from the viewpoint of ‚those involved‘ to ‚those affected or concerned but not involved‘ (Ulrich, 1983, pp. 247-256; 1987, p. 281f; 2018, pp. 10-12);
      7. systematically changing perspective according to the ‚three-level concept of rational practice’ (Ulrich, 1988, pp. 146-159; 2018, pp. 12-17); and finally,
      8. focusing boundary critique on varying types of reference systems as captured by the SEAU formula of CSH (System of interest, decision Environment, context of responsible Action, and total Universe of discourse, e.g., Ulrich, 2018, pp. 6-10).

      There are thus more than enough opportunities for training (there is also a general list of guidelines for boundary critique in Ulrich, 2000, p. 262). However, lest I create a wrong impression, I should add that not all these methodological aspects need to be mastered for adequate practice. Feel free to select the two or three that suit your needs or interests and help you get started. By all means, start applying boundary critique as soon as possible, for with experience come proficiency and cultivation. You don’t need to speak of ‘boundary judgments’ all the time, though; that would only risk irritating others. Instead, ‘translate’ what they mean to you in a concrete situation into ordinary language. At the latest when friends and colleagues first wonder what scheme makes you come up with those powerful questions, you’ll know you are on your way!

      Advanced practitioners will also want to study CSH with regard to its philosophical and ethical grounding in a concept of ‘reflective practice in a living civil society’ (Ulrich, e.g., 2000, 2009, 2016), as well as to take the personal step from a stance of pragmatism to ‘critical pragmatism’, a step in which methodological proficiency and personal cultivation of the mind merge (Ulrich, 2006, 2007, 2016).

      As a last observation: you refer to Peter Checkland’s (1981) CATWOE formula, which likewise lends itself to bringing some systematic discipline into problem solving. Let me just say that in the twelve years in which Peter and I jointly taught SSM (Soft Systems Methodology) and CSH (Critical Systems Heuristics) at the Lugano Summer School, the participants hardly ever had problems with their combined use. The two approaches were experienced as nicely complementary, in that SSM prepared the ground with basic problem structuring, followed by critical reflection and discourse along CSH lines on the normative implications of problem definitions and solution proposals.

      Thank you and kind regards.

      References

      Checkland, P. (1981). Systems Thinking, Systems Practice. Chichester: Wiley.

      Schwandt, T.A. (2015). Evaluation foundations revisited: Cultivating a life of the mind for practice. Redwood, CA: Stanford University Press.

      Schwandt, T.A. (2018). Evaluative thinking as a collaborative social practice: the case of boundary judgment making. New Directions for Evaluation, Vol. 2018, No. 158, pp. 125-137.

      Ulrich, W. (1983). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy. Bern: Haupt, 1983; reprint edition Chichester and New York: Wiley, 1994.

      Ulrich, W. (1987). Critical heuristics of social systems design. European Journal of Operational Research, 31, No. 3, pp. 276–283.

      Ulrich, W. (1988). Systems thinking, systems practice, and practical philosophy: a program of research. Systems Practice, 1, No. 2, 1988, pp. 137-163. https://wulrich.com/downloads/ulrich_1988a.pdf

      Ulrich, W. (1993). Some difficulties of ecological thinking, considered from a critical systems perspective: a plea for critical holism. Systems Practice, 6, No. 5, pp. 583-611. https://wulrich.com/downloads/ulrich_1993.pdf.

      Ulrich, W. (2000). Reflective practice in the civil society: The contribution of critically systemic thinking. Reflective Practice, 1, No. 2, 247–268. https://wulrich.com/downloads/ulrich_2000a.pdf.

      Ulrich, W. (2005). A brief introduction to critical systems heuristics (CSH). ECOSENSUS project, The Open University, Milton Keynes, UK. https://wulrich.com/downloads/ulrich_2005f.pdf.

      Ulrich, W. (2006). Critical pragmatism: a new approach to professional and business ethics. In L. Zsolnai (ed.), Interdisciplinary Yearbook of Business Ethics, Vol. 1, Oxford, UK, and Bern, Switzerland: Peter Lang Academic Publishers, pp. 53-85.

      Ulrich, W. (2007). Philosophy for professionals: towards critical pragmatism. Journal of the Operational Research Society, 58, No. 8, 2007, pp. 1109-1113. Rev. postpubl. version in Ulrich, W. (2016). Philosophy for professionals: towards critical pragmatism. (Reflections on Critical Pragmatism, Part 7). https://wulrich.com/bimonthly_march2016.html and https://wulrich.com/downloads/bimonthly_march2016.pdf (html and pdf versions)

      Ulrich, W. (2009). Reflections on reflective practice (5/7): Practical reason and rational ethics: Kant. Ulrich’s Bimonthly, March-April 2009.

      Ulrich, W. (2014). A Primer to Critical Systems Heuristics for Action Researchers. Rev. version, 10 August 2014 (orig. Centre for Systems Studies, University of Hull, Hull, UK, 31 March 1996). https://wulrich.com/downloads/ulrich_1996a.pdf.

      Ulrich, W. (2016). Philosophy for professionals: towards critical pragmatism. Ulrich’s Bimonthly, March-April 2016 (earlier version in: Journal of the Operational Research Society, 58, No. 8, 2007, pp. 1109-1113). https://wulrich.com/bimonthly_march2016.html and https://wulrich.com/downloads/bimonthly_march2016.pdf 
(html and pdf versions).

      Ulrich, W. (2017). The concept of systemic triangulation: its intent and imagery. Ulrich’s Bimonthly, March-April 2017. https://wulrich.com/bimonthly_march2017.html and https://wulrich.com/downloads/bimonthly_march2017.pdf (html and pdf versions).

      Ulrich, W. (2018). Reference systems for boundary critique: A postscript to «Systems thinking as if people mattered». Ulrich’s Bimonthly, January-February 2018 (25 March 2018). https://wulrich.com/bimonthly_january2018.html

      Reply
      • Very much so; ‘boundary judgements’ may actually have been my anchor point in boundary critique, and perhaps a defining factor in developing my approach to systems analysis. Rather than seeking explicit and highly fit system boundaries as a prerequisite (and, of course, as an alternative to the futility of the ‘everything is connected’ mindset), taking the need to make ‘judgements’ head on seemed like a thoroughly compelling proposition. Needless to say, from a design-theory point of view this proves a very instructive pattern of investigation, beyond the often inconsequentially idiomatic references to behaviour and/or role stereotypes such as users, stakeholders, experience, empathy, etc.

        Alternatives tend to remain too distant and, in effect, avoid approaching the subject; as you observed, it is common to resort to arbitrary, if not ‘preordained’, rationalisations of power and credibility. For example, notions like human- or user-centred, participatory and emancipatory design, in design methodology and practice theories, fall short of developing robust paths for reasoning about actual situations and hardly go past cliché declarations and superficial analysis routines. In the organisational or management ‘sciences’, e.g. the Human Resources canon, the issues may be couched in a different language, but the problems appear largely the same and similarly unwilling to deal with the supposedly taboo ‘power’ dimension, leaving a lot of room for misguided initiatives as well as ‘collateral damage’.

        I’m glad that seeking synergies with CATWOE and SSM framings resonates in your experience and I couldn’t agree more on the points about the limitations of a generalist approach in absence of competence and expertise. Further still, the remarks relating to reflective practice are also very noteworthy, including adoption of ritualised formalisms like Reflection in Action where the potential for more internalised modes is unrealistic, e.g. extrinsically motivated people in complementary measures and reward environments. Thanks for expanding on the guidelines and great references.

        Reply
  4. Great post! Readers who are interested in program and policy evaluation – key elements of all things integration and implementation – may like to see how preeminent evaluation thought leader Tom Schwandt applies Ulrich’s critical systems heuristics in his 2018 paper, “Evaluative Thinking as a Collaborative Social Practice: The Case of Boundary Judgment Making” (https://onlinelibrary.wiley.com/doi/full/10.1002/ev.20318). On another topic salient to this blog, Schwandt has elsewhere also discussed ‘post-normal evaluation’, building on the notion of post-normal science (https://journals.sagepub.com/doi/abs/10.1177/1356389019855501).

    Reply
    • Hello tgarchibald, thanks very much for your two hints: to that paper by Thomas Schwandt on the one hand, and to the work done by Silvio Funtowicz on post-normal science on the other. I find both relevant and am sure many a reader of this blog will be grateful for them. As to Schwandt’s contribution, I have said a little about my interest in it in my reply to Piotr Kulaga’s comment.
      Best wishes and regards.

      Reply
  5. Dear Werner,

    I am a big fan of your Critical Systems Heuristic and the Emancipatory Boundary Critique.

    Although it is ‘just’ a set of questions, it is not an easy tool, because one can go deeper and deeper into system boundaries and the assumptions made by experts beyond their expertise. Like Catherine, I see a lot of value already in the “is/ought” reformulation of the questions. When we use the method in teaching or at workshops, I think I can see a bit of mischief (‘Schalk’ in German) in the eyes of those who ask the expert, especially when it comes to the ought-question. First the is-questions; no worries, (s)he will have a solid answer. Wait until (s)he finishes and then go for the “ought”: “Is what you just said how it should be?” So simple, so powerful.

    Furthermore, the questions point to very relevant boundaries: Who will benefit, who will lose? What exactly do you mean by improvement? Is that how we should understand improvement? Whose knowledge was included or excluded? One of your examples was key for me in understanding how fundamental your questions are. It was about an engineer who explains to the inhabitants of an area where a dam will be built that all will benefit economically from the dam. And then somebody asks: How come you know how the financial benefits of the dam will be distributed among those living in the area? Obviously, the engineer does not know; (s)he just assumes a distribution.

    For all who have not yet done so, I suggest you familiarise yourself with and explore the questions. Don’t use them as “set in stone” but as questions that can (and should) be adapted and reformulated. The main point is that they help to uncover boundaries set by somebody outside her/his expertise.

    Reply
    • Dear Christian, I was extremely pleased to see your comment on my short text on boundary critique. In it you capture the emancipatory spirit of boundary critique in as succinct a way as I have seen it anywhere, including my own accounts!

      Yes – boundary critique as CSH explains and operationalizes it is only seemingly just a set of questions, much less an easy or even superficial tool; as you explain, “one can go deeper and deeper into system boundaries and assumptions made by experts beyond their expertise.” It has us enter into a reflective and ideally also cooperative process of unfolding more and more the selectivity of people’s ‘facts’ and ‘values’ and the implications these may have for all the stakeholders concerned.

      Yes – boundary critique may be disappointing at first, as beginners learn they should not expect it to justify their boundary judgments or any claims to knowledge, rationality, and improvement based on them. Boundary critique, as the name says, has only critical validity; but as such it is a powerful tool. It enables us to ‘see through’ the selectivity of the facts people consider and to surface the partiality of the values they assume. It is a tool that equips us not only to handle our own assertions of fact and value in self-reflective and transparent ways but also to argue cogently against others who do not handle their assumptions so self-critically, by demonstrating that they rely on boundary judgments they fail to make clear and for which options exist.

      Yes – despite looking simple at first, the boundary questions lead us straight to the normative core of people’s views and claims, that is, the reference systems – assumed contexts as defined in terms of the boundary questions, see, e.g., Ulrich (2018a) – on which people rely to make sense of propositions and argue their concerns. Once we have understood the idea and have learned to use it for reflective as well as argumentative ends, boundary critique indeed becomes a powerful tool for understanding why people so often talk past each other and accuse one another of getting their facts and values wrong, namely, because they do not recognize that they talk about different reference systems. How could they regard the same facts and values as relevant, given that their reference systems differ?

      Yes – the issues to which the boundary questions point are relevant indeed. Some of them may look a bit abstract or theoretical at first, but by actually using them one will soon discover what critical force they have in practice, as you describe it very nicely. The reason, of course, is that they touch on issues that are fundamental to all human intentionality and action – the sources of motivation, power, knowledge, and legitimacy that drive and condition what we think and do.

      Pardon the fragmentary nature of these rather spontaneous observations. So much more would need to be said with a view to encouraging young people, no less than mature citizens and professionals, to move beyond the surface of ‘just a set of questions’ and to make boundary critique their personal path towards a new, deep competence. Without meaning to be pretentious, the boundary questions always remind me of Kant’s admonition, in the Prolegomena (1783, p. A190), that once we have tasted the liberating sense of critique, we will ever after be disgusted with all the dogmatic twaddle (and, I would add, quarrel) around us. We should indeed warn all beginners in boundary critique: mind you, once you have tasted its emancipatory spirit, you will not want to do without it any more. It will become part of your mindset rather than being just another ‘method’.

      Just my way to say ‚yes‘ to your short account of the essence of boundary critique. I find it superb! Simple, but powerful! Thanks a lot!

      References (1): Cited items

      Kant, I. (1783). Prolegomena to Any Future Metaphysics. New York: Liberal Arts Press.

      Ulrich, W. (2018a). Reference systems for boundary critique: A postscript to «Systems thinking as if people mattered». Ulrich’s Bimonthly, January-February 2018 (25 March 2018). https://wulrich.com/bimonthly_january2018.html

      References (2): Passages on boundary critique and its emancipatory spirit [indicated in brackets] in some selected publications

      Ulrich, W. (1983 / 2014). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy. Bern: Haupt, 1983; reprint edition Chichester and New York: Wiley, 1994 [see whole chap. 5, pp. 265-314, esp. pp. 280-310. Note: This book was written over 40 years ago. Although its basic ideas have stood the test of time, some of the terminology has meanwhile changed. Instead of ‘boundary critique’ it used the less handy, though more precise, term ‘critical employment of boundary judgments’; accordingly, ‘emancipatory boundary critique’ was called ‘the emancipatory employment of boundary judgments’. The most important change, though, of language as of perspective, is that I now prefer to speak of the ‘discursive’ or discourse-theoretical rather than the ‘dialectical’ turn of practical philosophy].

      Ulrich, W. (1987). Critical heuristics of social systems design. European Journal of Operational Research, 31, No. 3, pp. 276–283 [see esp. p. 281f].

      Ulrich, W. (1993). Some difficulties of ecological thinking, considered from a critical systems perspective: a plea for critical holism. Systems Practice, 6, No. 5, pp. 583-611. https://wulrich.com/downloads/ulrich_1993.pdf [see esp. pp. 592-605, or pp. 11-25 in the post-publication version, respectively].

      Ulrich, W. (2000). Reflective practice in the civil society: The contribution of critically systemic thinking. Reflective Practice, 1, No. 2, 247–268. https://wulrich.com/downloads/ulrich_2000a.pdf [esp. pp. 251-254 and 257-260].

      Ulrich, W. (2001). The quest for competence in systemic research and practice. Systems Research and Behavioral Science, 18, No. 1, pp. 3-28 [esp. pp. 11-18 and 23-26].

      Ulrich, W. (2006). Critical pragmatism: a new approach to professional and business ethics. In L. Zsolnai (ed.), Interdisciplinary Yearbook of Business Ethics, Vol. 1, Oxford, UK, and Bern, Switzerland: Peter Lang Academic Publishers, 2006, pp. 53-85 [see esp. pp. 69-79].

      Ulrich, W. (2017). The concept of systemic triangulation: its intent and imagery. Ulrich’s Bimonthly, March-April 2017. https://wulrich.com/bimonthly_march2017.html and https://wulrich.com/downloads/bimonthly_march2017.pdf [html and pdf versions; see esp. pp. 1-8 and 11f].

      Ulrich, W. (2018a). Reference systems for boundary critique: A postscript to «Systems thinking as if people mattered». Ulrich’s Bimonthly, January-February 2018 (25 March 2018). https://wulrich.com/bimonthly_january2018.html [see esp. pp. 2-12, 15-18, and 21-31].

      Ulrich, W. (2018b). The idea of boundary critique. Farewell to Ulrich’s Bimonthly. Ulrich’s Bimonthly, March-May 2018. https://wulrich.com/bimonthly_march2018.html and https://wulrich.com/downloads/bimonthly_march2018.pdf [html and pdf versions; see esp. pp. 3-23].

      Reply
  6. Thank you Werner for this succinct summary of CSH, which has stood the test of time and is absolutely relevant to this day.

    I see the ‘fact/value’ set of ‘is’/‘ought’ questions as being both expansive and fundamental to a more satisfactory progression, or adaptation, of society. Ideally, this framework of questioning ‘ought’ to underpin the allocation of research funding, research methodology, public policy and planning – whether within the public, private or third sector – to help address real-world complexity. Questioning boundary assumptions can be difficult, though – even unacceptable – in some settings. It brings to mind the point that decision-making should be as just as possible within time and cost constraints, yet accepting those constraints obstructs the horizon of knowledge (I think this was Derrida?).

    So perhaps we somehow need to try to shift that stuck-in-a-rut perception of ‘time and cost constraints’ so as to advance knowledge about improved research design and action planning for addressing complex problems. Lack of time, ingrained habits and routine procedures, short-termism, solutionism and so on are regularly accepted as excuses without question, and can prevent such a great set of questions from being taken up as a more thorough way of appreciating the assumptions made about boundary judgements, to help underpin the legitimacy of practical action. The more these questions are used, the better; and, as you say, they could serve to strengthen the democratic process more meaningfully.

    Let us hope that one day they could become an accepted basis of heuristic support for research design to help address complexity, notwithstanding that this would probably need an associated overhaul of research funding mechanisms. No doubt existing researchers would have a view about that? Could this process be costed into research projects at the outset as a scoping phase, I wonder?

    Reply
    • Dear Catherine, thank you for your fine comment. It is always encouraging to see that there are some dedicated people ‘out there’ who appreciate the idea of boundary critique. I certainly agree with what you say, particularly regarding the importance of frameworks for questioning ‘ought’ as distinguished from ‘is’ assumptions.

      I find it amazing that this importance still isn’t a matter of course in the applied disciplines but needs advocates such as you. Why might this be so? Let’s look briefly at two examples that both carry the evaluative element in their names and yet have problems dealing openly and systematically with their own value content: I mean policy making in matters of research, including research design, funding, and evaluation (the field to which you refer), and evaluation research, including program evaluation and policy analysis (of which I myself have many years of professional experience in government).

      I’ll begin with the latter, as it remains paradigmatic to me in its attempt to bring evaluation and research together. Working as an evaluation researcher at the interface of science and politics can tell you a lot about the challenges involved in doing applied research; about what matters in it and at the same time constitutes its major difficulty. In my experience it is, quite clearly, the need to manage the tension between ‘is’ and ‘ought’ judgments so as to come to terms with their closely intertwined nature.

      I think that evaluation research since its beginnings after World War II has suffered from a wanting philosophical grounding of its ‘evaluative’ core. What is it that, methodologically speaking, is to distinguish evaluation research from mere valuation? How can the judgmental nature and aim of evaluative research be handled according to clear standards of good practice? It seems to me that when it comes to ‘ought’ assumptions, the field’s judgmental character remains just as diffuse an issue in evaluation research as in most other fields of applied research. Unlike what one might expect, evaluation research does not stand out among the other applied disciplines by a particularly clear notion of excellence in handling values. Rather, and as I fear symptomatically, the judgmental character of evaluative research has remained an aspect that is usually treated rather discreetly, not to say as discreetly as possible. Politicians presenting the results of evaluation studies refer to the studies’ scientific value, not their value content. Value judgments are still handled as something to be avoided or at least kept in the background, as if they constituted an aspect of evaluation research that disturbs the ‘research’ in it, rather than being its object and aim.

      Thus, as you note in your comment, time and cost constraints are frequently advanced as excuses for this deficit, along with other opportunistic considerations. This is obviously a self-defeating excuse. If evaluative questions are to be dealt with by adhering to scientific principles of good practice, and one then does not invest the time and cost required for clarifying normative assumptions and implications systematically, so as to lift them out of the realm of merely subjective acts of belief, all efforts to deal ‘scientifically’ with such questions are wasted from the outset; for the difference between subjective and scientific judgment – belief and science – is not that the one is value-laden and the other is not, but only the degree to which the value judgments in question are disclosed and questioned systematically.

      I say ‘usually’ because there are exceptions. A few of them are Levin-Rozalis (e.g., 2014/2015), Schwandt (e.g., 2018), and Gates (e.g., 2017). Reporting on a discussion she had with a colleague about the judgmental nature of evaluation research and the feelings of unease it gives her, Miri Levin-Rozalis puts it succinctly:

      “We’ve learned that evaluation is judgmental,” I said, “because it gives value to the things it evaluates. But we’ve also learned—with great emphasis—that as evaluators, we’re not allowed to be judgmental. We can’t allow our personal judgment to bias the evaluation.” (Levin-Rozalis, 2015, p. 2)

      Traditional evaluation research insists on the requirement of not being judgmental because it assumes that once we allow judgment, subjectivity is all that is left and values will sooner or later take over, or at least go against (or ‘disturb’, as I said above) the scientific character of the field. Against this view, I would argue that systematic disclosure and assessment of ‘ought’ assumptions through boundary critique – a process of unfolding their implications for those concerned – achieve precisely the contrary: by enhancing the value transparency of evaluative research, boundary critique is apt to prevent any illusion of objectivity (Ulrich, 1987). The ‘ought’ judgments in question can then be challenged so that it becomes apparent that there are options for these judgments, and participatory discourses can occur at the proper levels of decision-making and legitimation. In CSH I also speak of the ‘emancipatory’ employment of boundary critique. Such a concept lays open the judgmental nature of evaluation research and indeed sees in this effort its very aim. Thus it secures for itself the status of evaluative research rather than just evaluative judgment.

      Through its process of ‘systemic triangulation’ of boundary judgments, factual judgments, and value judgments, boundary critique systematically examines how alternative ‘ought’ assumptions may change the appreciation of relevant facts and, conversely, how a varying selection of relevant facts may make the context of concern look different (see Ulrich, 2000 and 2017, which also contain an image illustrating systemic triangulation). In the language I have come to use in my more recent writings, boundary critique is thus apt to explain and operationalize a ‘critical turn’ in evaluation research.

      Turning now to the second example, research policy (including research planning, funding, and prioritization), we take the step from evaluation research to the meta-level of research evaluation. A crucial difficulty here is how research policy might orient research priorities and resources towards societal aims and at the same time apply the highest standards of research excellence. The former ambition requires that the prioritization of research be based on politically legitimated notions of societal needs and standards of improvement, while the latter demands adherence to objective standards of high-quality research, regardless of how useful it may be to society-at-large or, more likely, to different stakeholder groups in the population and among the researchers concerned, with what time horizon, and so on. Quality of life and quality of research do not necessarily coincide and will thus often imply clashing boundary judgments as to who ought to be and who are the stakeholders concerned, their major concerns, and so on, making research policy a very complex idea.

      Boundary critique is not designed for this meta-level of research policy making, any more than for purely theoretical disciplines (although it becomes relevant as soon as theoretical knowledge is applied to practical questions). This double self-limitation explains the emphasis I tend to put on the ‘applied’ disciplines as the proper context for boundary critique: boundary critique is an ‘applied’ rather than meta-level kind of examination. But then, apart from this increased complexity, I see no reason in principle why boundary critique and its methodological core principle of systemic triangulation should be of no relevance to prioritizing research proposals. Indeed, you may wonder whether there must not be some key concerns that link the two issues of evaluation research and research evaluation. As I tend to see it, a major link consists in the ultimate aim of benefiting citizens (yes, an overtly normative or ‘ought to’ concern that is relevant to both issues). Good research practice can hardly be understood and identified without considering what it achieves for ordinary people, rather than just for the researchers themselves. Likewise, we can hardly understand what good research policy means without asking what it is to achieve for society-at-large.

      I would argue, therefore, that the two levels of assessing boundary judgments converge in a shared orientation towards serving the citizenry and hence toward the aim of promoting a living civil society, a society in which research aims and achievements are part of a constant flow of public discourses to which ordinary citizens are admitted and can contribute in relevant and competent ways, no less than can researchers, experts and professionals along with political and corporate decision-makers. In fact, the very concept of boundary critique opens up a chance for ‘competent’ citizenship to many people; for, as I have argued on various occasions, when it comes to identifying and questioning boundary judgments, expertise provides no major advantage over common sense and personal awareness of partiality. A critical employment of boundary judgments is possible without requiring any particular knowledge or skills that would not be accessible to ordinary citizens (see, e.g., Ulrich 1987, 1993, and 2000). In such a concept of competent citizenship, research practice and research policy meet, as the basic source of legitimacy for both is that they have to respond to the needs and concerns of citizens.

      A last reflection that I would like to offer here concerns the methodological turn that I advocate from common-sense pragmatism to ‘critical’ pragmatism (Ulrich, 2006 and 2007). Pragmatism is about clear thinking in contexts of ‘applied’ knowledge and inquiry. Its difficulty is that such thinking requires considering all the ‘practical bearings’ (Peirce, 1878) that such applied knowledge and inquiry may have for those concerned, but a comprehensive consideration of conceivable consequences is beyond what applied research can usually achieve. Thus, while it is a meaningful effort to expand our grasp of situations by including consequences that may arise outside our boundary judgments, such an effort does not provide an arguable claim to actually having achieved comprehensiveness. And yet reason demands that we consider everything possibly relevant to our claims, regardless of what our boundary judgment may be. Pragmatism as Peirce understands it does not alter this requirement; it only makes its holistic implications clearer. Thus the very effort of being ‘pragmatic’ in Peirce’s philosophical sense confronts us with an unachievable ideal. The implication, as mentioned at the outset, is partiality; and the only way out of the impasse is to handle this partiality in transparent, self-reflective and self-limiting ways. Thus the idea at the heart of boundary critique, the inevitable partiality of our claims to knowledge, rationality, and improvement, is as relevant to pragmatic thought and action as it is to its origin in systems thinking: critical systems thinking unfolds into critical pragmatism, with boundary critique being the methodological core principle of both.

      References

      Gates, E. (2017). Toward valuing with critical systems heuristics. American Journal of Evaluation, 39, No. 2: pp. 201-220.

      Levin-Rozalis, M. (2014 / 2015). A purpose-driven action: the ethical aspect and social aspect of evaluation. In Miri Levin-Rozalis, Let’s talk program evaluation in theory and practice. Tel Aviv, Israel: Dekel Publishing House, and Monterey, CA: Samuel Wachtman’s Sons, Chap. 9, pp. 271-295. (Note: A short version was published under the same article title in New Directions for Evaluation, No. 146, Vol. 2015, pp. 19-32.)

      Schwandt, T.A. (2018). Evaluative thinking as a collaborative social practice: The case of boundary judgment making. New Directions for Evaluation, Vol. 2018, No. 158, pp. 125-137.

      Ulrich, W. (1987). Critical heuristics of social systems design. European Journal of Operational Research, 31, No. 3, pp. 276–283.

      Ulrich, W. (1993). Some difficulties of ecological thinking, considered from a critical systems perspective: a plea for critical holism. Systems Practice, 6, No. 5, pp. 583-611.

Ulrich, W. (2000). Reflective practice in the civil society: The contribution of critically systemic thinking. Reflective Practice, 1, No. 2, pp. 247-268.

Ulrich, W. (2006). Critical pragmatism: a new approach to professional and business ethics. In L. Zsolnai (ed.), Interdisciplinary Yearbook of Business Ethics, Vol. 1. Oxford, UK, and Bern, Switzerland: Peter Lang Academic Publishers, pp. 53-85.

Ulrich, W. (2007). Philosophy for professionals: towards critical pragmatism. Viewpoint, Journal of the Operational Research Society, 58, No. 8 (August), pp. 1109-1113.

      Ulrich, W. (2017). The concept of systemic triangulation: its intent and imagery. Ulrich’s Bimonthly, March-April 2017.

      Ulrich, W. (2018). The idea of boundary critique. Farewell to Ulrich’s Bimonthly. Ulrich’s Bimonthly, March-May 2018.

      • Dear Werner, thank you for your comprehensive and excellent response to my comment – that is very much appreciated.

        You raise a number of valuable points that I feel should be mainstream thinking – if only! What matters? What is difficult? Many do not even recognise the tension between ‘is’ and ‘ought’, tending instead to accept the ‘is’ without question, and the handling of assumptions around values is rarely considered (in my experience). Even worse, anything around ‘value or values’ has historically been ‘othered’. Let’s hope that is beginning to change. To my mind, this is where the value of triple loop learning could come in at the science-policy interface – encouraging critical reflection about knowledge and values (although one can see why this is not necessarily popular…). Enlightened individuals, however, who see a mismatch between what they are expected to do, and what they think they ‘ought’ to be doing…and are bothered enough to want to do something about this dysfunctional situation, will hopefully lead this on to the next necessary developmental stage, rather than give up.

What research achieves 'with and for' ordinary people (and for future generations) and for sustainability should really come to the fore: it is peculiar that it has ever been otherwise… This is not about naivety of the few around intransigent power structures; it is about naivety of the many around neglected power structures – the potential human resource of researchers, practitioners, the developers of policy intelligence, and the citizenry (who in many cases have been rendered one-dimensional, or reduced to 'data').

        Although you developed boundary critique as an applied examination, rather than for design at the meta-level of research policy-making (with all the complexities that entails), it’s interesting nevertheless that you feel that boundary critique/systemic triangulation could have some relevance in prioritising research proposals. Who is responsible for the real-world interaction of ‘progress’ made through the applied research of individual disciplines? A rhetorical question!

        Finally, your reflections about advocating a methodological turn towards critical pragmatism and the handling of the inevitable partiality of our claims (whether knowledge, rationality, improvement) ring true to me.

        Thanks again and best regards.

