Delivering from Our R&D Pipeline
In 2020 it was Versions 12.1 and 12.2; in 2021 Versions 12.3 and 13.0. In late June this year it was Version 13.1. And now we’re releasing Version 13.2. We continue to have a huge pipeline of R&D, some short term, some medium term, some long term (like decade-plus). Our goal is to deliver timely snapshots of where we’re at—so people can start using what we’ve built as quickly as possible.
Version 13.2 is—by our standards—a fairly small release, that mostly concentrates on rounding out areas that have been under development for a long time, as well as adding “polish” to a range of existing capabilities. But it’s also got some “surprise” new dramatic efficiency improvements, and it’s got some first hints of major new areas that we have under development—particularly related to astronomy and celestial mechanics.
But even though I’m calling it a “small release”, Version 13.2 still introduces completely new functions into the Wolfram Language, 41 of them—as well as substantially enhancing 64 existing functions. And, as usual, we’ve put a lot of effort into coherently designing those functions, so they fit into the tightly integrated framework we’ve been building for the past 35 years. For the past several years we’ve been following the principle of open code development (does anyone else do this yet?)—opening up our core software design meetings as livestreams. During the Version 13.2 cycle we’ve done about 61 hours of design livestreams—getting all sorts of great real-time feedback from the community (thank you, everyone!). And, yes, we’re holding steady at an overall average of about 1 hour of livestreamed design time per new function, and a little less than half that per enhanced function.
Introducing Astro Computation
Astronomy has been a driving force for computation for more than 2000 years (from the Antikythera device on)… and in Version 13.2 it’s coming to Wolfram Language in a big way. Yes, the Wolfram Language (and Wolfram|Alpha) have had astronomical data for well over a decade. But what’s new now is astronomical computation fully integrated into the system. In many ways, our astro computation capabilities are modeled on our geo computation ones. But astro is considerably more complicated. Mountains don’t move (at least perceptibly), but planets certainly do. Relativity also isn’t important in geography, but it is in astronomy. And on the Earth, latitude and longitude are good standard ways to describe where things are. But in astronomy—especially with everything moving—describing where things are is much more complicated. Oh, and then there’s the question of where things “are”, versus where things appear to be—as a result of effects ranging from light-propagation delays to refraction in the Earth’s atmosphere.
The key function for representing where astronomical things are is AstroPosition. Here’s where Mars is now:

What does that output mean? It’s very “here and now” oriented. By default, it’s telling me the azimuth (angle from north) and altitude (angle above the horizon) for Mars from wherever Here says I am, at the time specified by Now. How can I get a less “personal” representation of “where Mars is”? Because even if I just reevaluate my previous input now, I’ll get a slightly different answer, just because of the rotation of the Earth:

One thing to do is to use equatorial coordinates, which are based on a frame centered at the center of the Earth but not rotating with the Earth. (One direction is defined by the rotation axis of the Earth, the other by where the Sun is at the time of the spring equinox.) The result is the “astronomer-friendly” right ascension/declination position of Mars:

And maybe that’s good enough for a terrestrial astronomer. But what if you want to specify the position of Mars in a way that doesn’t refer to the Earth? Then you can use the now-standard ICRS frame, which is centered at the center of mass of the Solar System:

Often in astronomy the question is basically “which direction should I point my telescope in?”, and that’s something one wants to specify in spherical coordinates. But particularly if one’s “out and about in the Solar System” (say thinking about a spacecraft), it’s more useful to be able to give actual Cartesian coordinates for where one is:

And here are the raw coordinates (by default in astronomical units):

AstroPosition is backed by a lot of computation, and in particular by ephemeris data that covers all planets and their moons, together with other substantial bodies in the Solar System:

By the way, particularly the first time you ask for the position of an obscure object, there may be some delay while the necessary ephemeris gets downloaded. The main ephemerides we use give data for the period 2000–2050. But we also have access to other ephemerides that cover much longer periods. So, for example, we can tell where Ganymede was when Galileo first observed it:

We also have position data for more than 100,000 stars, galaxies, pulsars and other objects—with many more coming soon:

Things get complicated very quickly. Here’s the position of Venus seen from Mars, using a frame centered at the center of Mars:

If we pick a particular point on Mars, then we can get the result in azimuth-altitude coordinates relative to the Martian horizon:

Another complication is that if you’re looking at something from the surface of the Earth, you’re looking through the atmosphere, and the atmosphere refracts light, making the position of the object look different. By default, AstroPosition takes account of this when you use coordinates based on the horizon. But you can switch it off, and then the results will be different—and, for example, for the Sun at sunset, considerably different:


And then there’s the speed of light, and relativity, to take into account. Let’s say we want to know where Neptune “is” now. Well, do we mean where Neptune “actually is”, or do we mean “where we observe Neptune to be” based on light from Neptune coming to us? For frames referring to observations from Earth, we’re typically concerned with the case where we include the “light time” effect—and, yes, it does make a difference:

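For a rough sense of scale of that “light time” effect, here’s a back-of-the-envelope sketch (plain Python, not a Wolfram Language computation; the ~4.3 billion km Earth–Neptune distance is an assumed round figure):

```python
# Back-of-the-envelope estimate of the "light time" effect for Neptune.
# The Earth-Neptune distance of ~4.3e9 km is an assumed round figure.
SPEED_OF_LIGHT_KM_S = 299_792.458

distance_km = 4.3e9
light_time_s = distance_km / SPEED_OF_LIGHT_KM_S
light_time_hours = light_time_s / 3600

print(round(light_time_hours, 1))  # roughly 4 hours of light travel time
```

So where we “observe Neptune to be” reflects where it was about 4 hours ago.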
OK, so AstroPosition—which is the analog of GeoPosition—gives us a way to represent where things are, astronomically. The next important function to discuss is AstroDistance—the analog of GeoDistance.
This gives the current distance between Venus and Mars:

This is the current distance from where we are (according to Here) to the position of the Viking 2 lander on Mars:

This is the distance from Here to the star τ Ceti:

To be more precise, AstroDistance really tells us the distance from a certain object, to an observer, at a certain local time for the observer (and, yes, the fact that it’s local time matters because of light delays):

And, yes, things are quite precise. Here’s the distance to the Apollo 11 landing site on the Moon, computed 5 times with a 1-second pause in between, and shown to 10-digit precision:

This plots the distance to Mars for every day in the next 10 years:

Another function is AstroAngularSeparation, which gives the angular separation between two objects as seen from a given position. Here’s the result for Jupiter and Saturn (seen from the Earth) over a 20-year span:

The Beginnings of Astro Graphics
In addition to being able to compute astronomical things, Version 13.2 includes first steps in visualizing astronomical things. There’ll be more on this in subsequent versions. But Version 13.2 already has some powerful capabilities.
As a first example, here’s a part of the sky around Betelgeuse as seen right now from where I am:

Zooming out, one can see more of the sky:

There are lots of options for how things should be rendered. Here we’re seeing a realistic image of the sky, with grid lines superimposed, aligned with the equator of the Earth:

And here we’re seeing a more whimsical interpretation:

Just like for maps of the Earth, projections matter. Here’s a Lambert azimuthal projection of the whole sky:

The blue line shows the orientation of the Earth’s equator, the yellow line shows the plane of the ecliptic (which is basically the plane of the Solar System), and the red line shows the plane of our galaxy (which is where we see the Milky Way).
If we want to know what we actually “see in the sky” we need a stereographic projection (in this case centered on the south direction):

There’s a lot of detail in the astronomical data and computations we have (and even more will be coming soon). So, for example, if we zoom in on Jupiter we can see the positions of its moons (though their disks are too small to be rendered here):

It’s fun to see how this corresponds to Galileo’s original observation of these moons more than 400 years ago. This is from Galileo:
The old typesetting does cause a little trouble:

But the astronomical computation is more timeless. Here are the computed positions of the moons of Jupiter from when Galileo said he saw them, in Padua:

And, yes, the results agree!
By the way, here’s another computation that could be verified soon. This is the time of maximum eclipse for an upcoming solar eclipse:

And here’s what it will look like from a particular location right at that time:

Dates, Times and Units: There’s Always More to Do
Dates are complicated. Even without any of the issues of relativity that we have to deal with for astronomy, it’s surprisingly difficult to consistently “name” times. What time zone are you talking about? What calendar system will you use? Etc. Oh, and then what granularity of time are you talking about? A day? A week? A month (whatever that means)? A second? An instantaneous moment (or perhaps a single elementary time from our Physics Project)?
These issues come up in what one might imagine would be trivial functions: the new RandomDate and RandomTime in Version 13.2. If you don’t say otherwise, RandomDate will give an instantaneous moment of time, in your current time zone, with your default calendar system, etc.—randomly picked within the current year:

But let’s say you want a random date in June 1988. You can do that by giving the date object that represents that month:

OK, but let’s say you don’t want an instant of time then, but instead a whole day. The new option DateGranularity allows this:

You can ask for a random time in the next 6 hours:

Or 10 random times:

You can also ask for a random date within some interval—or collection of intervals—of dates:

And, needless to say, we correctly sample uniformly over any collection of intervals:

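Under the hood, uniform sampling over a collection of intervals amounts to picking an interval with probability proportional to its length, then sampling uniformly within it. Here’s a minimal sketch of that idea in Python (an illustrative analogue, not the Wolfram Language implementation):

```python
import bisect
import itertools
import random

def random_point_in(intervals, rng=random):
    """Sample uniformly over a union of disjoint (start, end) intervals:
    choose an interval with probability proportional to its length,
    then a uniform point inside it."""
    cumulative = list(itertools.accumulate(b - a for a, b in intervals))
    r = rng.uniform(0, cumulative[-1])
    i = bisect.bisect_left(cumulative, r)
    a, b = intervals[i]
    return rng.uniform(a, b)

random.seed(0)
samples = [random_point_in([(0, 1), (5, 8)]) for _ in range(10000)]
share_in_second = sum(5 <= x <= 8 for x in samples) / len(samples)  # ≈ 3/4
```

The second interval is 3 times as long as the first, so about 3/4 of the samples land in it.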
Another area of almost arbitrary complexity is units. And over the course of many years we’ve systematically solved problem after problem in supporting basically every kind of unit that’s in use (now more than 5000 base types). But one holdout has involved temperature. In physics textbooks, it’s traditional to carefully distinguish absolute temperatures, measured in kelvins, from temperature scales, like degrees Celsius or Fahrenheit. And that’s important, because while absolute temperatures can be added, subtracted, multiplied, etc. just like other units, temperature scales on their own cannot. (Multiplying by 0° C to get 0 for something like an amount of heat would be very wrong.) On the other hand, differences in temperature—even measured in Celsius—can be multiplied. How can all this be untangled?
In earlier versions we had a whole different kind of unit (or, more precisely, different physical quantity dimension) for temperature differences (much as mass and time have different dimensions). But now we’ve got a better solution. We’ve basically introduced new units—but still “temperature-dimensioned” ones—that represent temperature differences. And we’ve introduced a new notation (a little Δ subscript) to indicate them:

If you take a difference between two temperatures, the result will have temperature-difference units:

But if you convert this to an absolute temperature, it’ll just be in ordinary temperature units:

And with this unscrambled, it’s actually possible to do arbitrary arithmetic even on temperatures measured on any temperature scale—though the results also come back as absolute temperatures:

It’s worth understanding that an absolute temperature can be converted either to a temperature scale value, or a temperature scale difference:

All of this means that you can now use temperatures on any scale in formulas, and they’ll just work:

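The bookkeeping involved can be mimicked in any language: keep scale temperatures and temperature differences as distinct types, and allow only the operations that make physical sense. A toy Python sketch of the distinction (illustrative only; the class names are made up, and only the Celsius scale is modeled):

```python
class TempDiff:
    """A temperature difference (e.g. 10 delta-degrees C): freely addable and scalable."""
    def __init__(self, dc):
        self.dc = dc  # size, in Celsius-sized degrees

    def __add__(self, other):
        return TempDiff(self.dc + other.dc)

    def __rmul__(self, k):
        return TempDiff(k * self.dc)


class CelsiusTemp:
    """A temperature on the Celsius scale: no free multiplication allowed."""
    def __init__(self, c):
        self.c = c

    def __sub__(self, other):
        if isinstance(other, CelsiusTemp):      # temp - temp -> difference
            return TempDiff(self.c - other.c)
        return CelsiusTemp(self.c - other.dc)   # temp - difference -> temp

    def __add__(self, diff):                    # temp + difference -> temp
        return CelsiusTemp(self.c + diff.dc)

    def to_kelvin(self):                        # convert to absolute temperature
        return self.c + 273.15
```

So `CelsiusTemp(30) - CelsiusTemp(20)` is a difference of 10 Δ°C, which may be doubled, while `2 * CelsiusTemp(30)` is (correctly) a type error.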
Dramatically Faster Polynomial Operations
Almost any algebraic computation ends up somehow involving polynomials. And polynomials have been a well-optimized part of Mathematica and the Wolfram Language since the beginning. In fact, little has needed to be updated in the fundamental operations we do with them in more than a quarter of a century. But now in Version 13.2—thanks to new algorithms and new data structures, and new ways to use modern computer hardware—we’re updating some core polynomial operations, and making them dramatically faster. And, by the way, we’re getting some new polynomial functionality as well.
Here is a product of two polynomials, expanded out:

Factoring polynomials like this is pretty much instantaneous, and has been ever since Version 1:

But now let’s make this bigger:

There are 999 terms in the expanded polynomial:

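To see where a term count like that comes from: the product of two dense degree-499 polynomials has degree 998, and hence up to 999 terms. A quick check by plain coefficient convolution (illustrative Python, with an assumed choice of polynomials, not the ones in the notebook input above):

```python
def poly_mul(p, q):
    """Multiply dense polynomials given as coefficient lists (index = exponent)."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (1 + x + ... + x^499) squared: degree 998, so 999 coefficients
product = poly_mul([1] * 500, [1] * 500)
print(len(product))  # 999
```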
Factoring this is not an easy computation, and in Version 13.1 takes about 19 seconds:

But now, in Version 13.2, the same computation takes 0.3 seconds—nearly 60 times faster:

It’s quite rare that anything gets 60x faster. But this is one of those cases, and in fact for still larger polynomials, the ratio will steadily increase further. But is this just something that’s only relevant for obscure, big polynomials? Well, no. Not least because it turns out that big polynomials show up “under the hood” in all sorts of important places. For example, the innocuous-seeming object

can be manipulated as an algebraic number, but with minimal polynomial:

In addition to factoring, Version 13.2 also dramatically increases the efficiency of polynomial resultants, GCDs, discriminants, etc. And all of this makes possible a transformative update to polynomial linear algebra, i.e. operations on matrices whose elements are (univariate) polynomials.
Here’s a matrix of polynomials:

And here’s a power of the matrix:

And the determinant of this:

In Version 13.1 this didn’t look nearly as nice; the result comes out unexpanded as:

Both size and speed are dramatically improved in Version 13.2. Here’s a larger case—where in 13.1 the computation takes more than an hour, and the result has a staggering leaf count of 178 billion

but in Version 13.2 it’s 13,000 times faster, and 60 million times smaller:

Polynomial linear algebra is used “under the hood” in a remarkable range of areas, particularly in handling linear differential equations, difference equations, and their symbolic solutions. And in Version 13.2, not only polynomial MatrixPower and Det, but also LinearSolve, Inverse, RowReduce, MatrixRank and NullSpace have all been dramatically sped up.
In addition to the dramatic speed improvements, Version 13.2 also adds a polynomial feature for which I, for one, happen to have been waiting for more than 30 years: multivariate polynomial factoring over finite fields:

Indeed, looking in our archives I find many requests stretching back to at least 1990—from quite a range of people—for this capability, even though, charmingly, a 1991 internal note states:

Yup, that was right. But 31 years later, in Version 13.2, it’s done!
Integrating External Neural Nets
The Wolfram Language has had built-in neural net technology since 2015. Often this is automatically used inside other Wolfram Language functions, like ImageIdentify, SpeechRecognize or Classify. But you can also build your own neural nets using the symbolic specification language with functions like NetChain and NetGraph—and the Wolfram Neural Net Repository provides a continually updated source of neural nets that you can immediately use, and modify, in the Wolfram Language.
But what if there’s a neural net out there that you just want to run from within the Wolfram Language, but don’t need to have represented in modifiable (or trainable) symbolic Wolfram Language form—much as you might run an external program executable? In Version 13.2 there’s a new construct NetExternalObject that allows you to run trained neural nets “from the wild” in the same built-in framework used for actual Wolfram-Language-specified neural nets.
NetExternalObject so far supports neural nets that have been defined in the ONNX neural net exchange format, which can easily be generated from frameworks like PyTorch, TensorFlow, Keras, etc. (as well as from Wolfram Language). One can get a NetExternalObject just by importing an .onnx file. Here’s an example from the web:

If we “open up” the summary for this object we see what basic tensor structure of input and output it deals with:

But to actually use this network we have to set up encoders and decoders suitable for the particular operation of this specific network—with the particular encoding of images that it expects:


Now we just have to run the encoder, the external network and the decoder—to get (in this case) a cartoonized Mount Rushmore:

Sometimes the “wrapper code” for the NetExternalObject will be a bit more complicated than in this case. But the built-in NetEncoder and NetDecoder functions typically provide a good start, and in general the symbolic structure of the Wolfram Language (and its built-in ability to represent images, video, audio, etc.) makes the process of importing typical neural nets “from the wild” surprisingly easy. And once imported, such neural nets can be used directly, or as components of other functions, anywhere in the Wolfram Language.
Displaying Large Trees, and Making More
We first introduced trees as a fundamental structure in Version 12.3, and we’ve been enhancing them ever since. In Version 13.1 we added many options for determining how trees are displayed, and in Version 13.2 we’re adding another, very important one: the ability to elide large subtrees.
Here’s a size-200 random tree with every branch shown:

And here’s the same tree with every node being told to display a maximum of 3 children:

And, actually, tree elision is convenient enough that in Version 13.2 we’re doing it by default for any node that has more than 10 children—and we’ve introduced the global variable $MaxDisplayedChildren to determine what that default limit should be.
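The elision itself is easy to picture: at each node, keep the first few children and fold the rest into a single placeholder. A small Python sketch of that idea (with a hypothetical representation in which a tree is a `(label, children)` pair):

```python
def elide(tree, max_children=3):
    """Keep at most max_children children per node; summarize the rest."""
    label, children = tree
    shown = [elide(child, max_children) for child in children[:max_children]]
    if len(children) > max_children:
        # placeholder recording how many children were hidden
        shown.append((f"<<{len(children) - max_children}>>", []))
    return (label, shown)

wide = ("root", [(i, []) for i in range(10)])
print(elide(wide))  # ('root', [(0, []), (1, []), (2, []), ('<<7>>', [])])
```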
Another new tree feature in Version 13.2 is the ability to create trees from your file system. Here’s a tree that goes down 3 directory levels from my Wolfram Desktop installation directory:

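The same kind of structure can be built for a file system in any language. Here’s a rough Python analogue of a depth-limited directory tree (illustrative only, not the Wolfram Language implementation):

```python
import os

def dir_tree(root, depth=3):
    """Build a (name, children) tree of the file system, down `depth` levels."""
    name = os.path.basename(root) or root
    if depth == 0 or not os.path.isdir(root):
        return (name, [])
    children = [dir_tree(os.path.join(root, entry), depth - 1)
                for entry in sorted(os.listdir(root))]
    return (name, children)
```

For example, `dir_tree(os.path.expanduser("~"), depth=3)` gives a nested view of your home directory three levels deep.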
Calculus & Its Generalizations
Is there still more to do in calculus? Yes! Sometimes the goal is, for example, to solve more differential equations. And sometimes it’s to solve existing ones better. The point is that there may be many different possible forms that can be given for a symbolic solution. And often the forms that are easiest to generate aren’t the ones that are most useful or convenient for subsequent computation, or the easiest for a human to understand.
In Version 13.2 we’ve made dramatic progress in improving the form of solutions that we give for many kinds of differential equations, and systems of differential equations.
Here’s an example. In Version 13.1 this is an equation we could solve symbolically, but the solution we give is long and complicated:

But now, in 13.2, we immediately give a much more compact and useful form of the solution:

The simplification is often even more dramatic for systems of differential equations. And our new algorithms cover the full range of differential equations with constant coefficients—which are what go by the name LTI (linear time-invariant) systems in engineering, and are used quite universally to represent electrical, mechanical, chemical, etc. systems.

In Version 13.1 we introduced symbolic solutions of fractional differential equations with constant coefficients; now in Version 13.2 we’re extending this to asymptotic solutions of fractional differential equations with both constant and polynomial coefficients. Here’s an Airy-like differential equation, but generalized to the fractional case with a Caputo fractional derivative:

Analysis of Cluster Analysis
The Wolfram Language has had basic built-in support for cluster analysis since the mid-2000s. But in more recent times—with increased sophistication from machine learning—we’ve been adding more and more sophisticated forms of cluster analysis. But it’s one thing to do cluster analysis; it’s another to analyze the cluster analysis one’s done, to try to understand better what it means, how to optimize it, etc. In Version 13.2 we’re both adding the function ClusteringMeasurements to do this, as well as adding more options for cluster analysis, and improving the automation we have for method and parameter selection.
Let’s say we do cluster analysis on some data, asking for a sequence of different numbers of clusters:

Which is the “best” number of clusters? One measure of that is to compute the “silhouette score” for each possible clustering, and that’s something that ClusteringMeasurements can now do:

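For intuition, the silhouette score compares, for each point, its mean distance a to its own cluster against its mean distance b to the nearest other cluster, via (b − a)/max(a, b). A from-scratch Python sketch on a toy 1D dataset (illustrative only; it assumes every cluster has at least 2 points, and the real function handles arbitrary data and many other criteria):

```python
def silhouette(points, labels):
    """Mean silhouette coefficient (b - a)/max(a, b) for 1D points."""
    def mean(xs):
        return sum(xs) / len(xs)

    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        # a: mean distance to the other points in p's own cluster
        a = mean([abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
                  if l == lab and j != i])
        # b: mean distance to the nearest other cluster
        b = min(mean([abs(p - q) for q, l in zip(points, labels) if l == other])
                for other in set(labels) if other != lab)
        scores.append((b - a) / max(a, b))
    return mean(scores)

# two well-separated clusters score close to 1
score = silhouette([1, 2, 10, 11, 12], [0, 0, 1, 1, 1])
```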
As is fairly typical in statistics-related areas, there are many different scores and criteria one can use—ClusteringMeasurements supports a wide variety of them.
Chess as Computable Data
Our goal with Wolfram Language is to make as much as possible computable. Version 13.2 adds yet another domain—chess—supporting import of the FEN and PGN chess formats:

PGN files typically contain many games, each represented as a list of FEN strings. This counts the number of games in a particular PGN file:

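Both formats are simple text, which is part of what makes them easy to import. As a rough illustration of what’s being parsed (plain Python, not the Wolfram importer; the PGN snippet is made up):

```python
# FEN for the standard chess starting position
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def fen_ranks(fen):
    """The 8 ranks of the board, from the piece-placement field of a FEN string."""
    return fen.split()[0].split("/")

def count_games(pgn_text):
    """Each game in a PGN file begins with an [Event "..."] tag pair."""
    return sum(line.startswith("[Event ") for line in pgn_text.splitlines())

pgn = '[Event "Game 1"]\n1. e4 e5 *\n\n[Event "Game 2"]\n1. d4 d5 *\n'
print(count_games(pgn), len(fen_ranks(START_FEN)))  # 2 8
```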
Here’s the first game in the file:

Given this, we can now use the Wolfram Language’s video capabilities to make a video of the game:

Controlling Runaway Computations
Back in 1979 when I started building SMP—the forerunner to the Wolfram Language—I did something that to some people seemed very bold, perhaps even reckless: I set up the system to fundamentally do “infinite evaluation”, that is, to continue using whatever definitions have been given until nothing more can be done. In other words, the process of evaluation would always go on until a fixed point was reached. “But what happens if x doesn’t have a value, and you say x = x + 1?” people would ask. “Won’t the system blow up in that case?” Well, in some sense yes. But I took a calculated gamble that the benefits of infinite evaluation for ordinary computations that people actually want to do would vastly outweigh any possible issues with what seemed like “pointless corner cases” such as x = x + 1. Well, 43 years later I think I can say with some confidence that that gamble worked out. The concept of infinite evaluation—combined with the symbolic structure of the Wolfram Language—has been a source of tremendous power, and most users simply never run into, and never have to think about, the x = x + 1 “corner case”.
However, if you type x = x + 1 the system clearly has to do something. And in a sense the purest thing to do would just be to continue computing forever. But 34 years ago that led to a fairly disastrous problem on actual computers—and in fact still does today. Because normally this kind of repeated evaluation is a recursive process, that eventually has to be implemented using the call stack set up for every instance of a program by the operating system. But the way operating systems work (still!) is to allocate only a fixed amount of memory for the stack—and if this is overrun, the operating system will simply make your program crash (or, in earlier times, the operating system itself might crash). And this meant that ever since Version 1, we’ve needed to have a limit in place on infinite evaluation. In early versions we tried to give the “result of the computation so far”, wrapped in Hold. Back in Version 10, we started just returning a held version of the original expression:

But even this is in a sense not safe. Because with other infinite definitions in place, one can end up in a situation where even trying to return the held form triggers additional infinite computational processes.
In recent times, particularly with our exploration of multicomputation, we’ve decided to revisit the question of how to limit infinite computations. At some theoretical level, one might imagine explicitly representing infinite computations using things like transfinite numbers. But that’s fraught with difficulty, and manifest undecidability (“Is this infinite computation output really the same as that one?”, etc.) But in Version 13.2, as the beginning of a new, “purely symbolic” approach to “runaway computation”, we’re introducing the construct TerminatedEvaluation—that just symbolically represents, as its name says, a terminated computation.
So here’s what now happens with x = x + 1:

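The idea can be caricatured in a few lines: evaluate by rewriting to a fixed point, but with a step budget, and return a sentinel value instead of crashing when the budget runs out. A toy Python model (illustrative only; this is not how the Wolfram evaluator works internally):

```python
def rewrite(expr, rules):
    """One bottom-up rewriting pass over a tuple-structured expression."""
    if isinstance(expr, tuple):
        return tuple(rewrite(part, rules) for part in expr)
    return rules.get(expr, expr)

def evaluate(expr, rules, limit=50):
    """Rewrite to a fixed point, or give up with a TerminatedEvaluation-like sentinel."""
    for _ in range(limit):
        new = rewrite(expr, rules)
        if new == expr:                     # fixed point: evaluation is done
            return expr
        expr = new
    return ("TerminatedEvaluation", 1)      # budget exhausted: runaway definition

rules = {"x": ("Plus", "x", 1)}             # the classic x = x + 1
print(evaluate("x", rules))                 # ('TerminatedEvaluation', 1)
print(evaluate("y", rules))                 # y has no definition, so it stays y
```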
A notable feature of this is that it’s “independently encapsulated”: the termination of one part of a computation doesn’t affect others, so that, for example, we get:

There’s a complicated relationship between terminated evaluations and lazy evaluation, and we’re working on some interesting and potentially powerful new capabilities in this area. But for now, TerminatedEvaluation is an important construct for improving the “safety” of the system in the corner case of runaway computations. And introducing it has allowed us to fix what seemed for many years like “theoretically unfixable” issues around complex runaway computations.
TerminatedEvaluation is what you run into if you hit system-wide “guard rails” like $RecursionLimit. But in Version 13.2 we’ve also tightened up the handling of explicitly requested aborts—by adding the new option PropagateAborts to CheckAbort. Once an abort has been generated—either directly by using Abort[ ], or as the result of something like TimeConstrained[ ] or MemoryConstrained[ ]—there’s a question of how far that abort should propagate. By default, it’ll propagate all the way up, so the whole computation will end up being aborted. But ever since Version 2 (in 1991) we’ve had the function CheckAbort, which checks for aborts in the expression it’s given, then stops further propagation of the abort.
But there was always a lot of trickiness around the question of things like TimeConstrained[ ]. Should aborts generated by these be propagated the same way as Abort[ ] aborts or not? In Version 13.2 we’ve now cleaned all of this up, with an explicit option PropagateAborts for CheckAbort. With PropagateAborts→True all aborts are propagated, whether initiated by Abort[ ] or TimeConstrained[ ] or whatever. PropagateAborts→False propagates no aborts. And there’s also PropagateAborts→Automatic, which propagates aborts from TimeConstrained[ ] etc., but not from Abort[ ].
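The propagation policy can be modeled with ordinary exception handling. Here’s a schematic Python analogue of the three PropagateAborts settings (hypothetical class names; this mimics only the control flow, not the actual Wolfram Language semantics):

```python
class AbortSignal(Exception):
    """Like an abort generated directly by Abort[]."""

class ConstraintAbort(AbortSignal):
    """Like an abort from TimeConstrained[]/MemoryConstrained[]."""

def check_abort(thunk, failexpr, propagate="Automatic"):
    """CheckAbort-like wrapper: evaluate thunk, deciding whether aborts escape."""
    try:
        return thunk()
    except AbortSignal as signal:
        keep_going = {"True": True,
                      "False": False,
                      "Automatic": isinstance(signal, ConstraintAbort)}[propagate]
        if keep_going:
            raise                     # let the abort propagate further up
        return failexpr               # absorb the abort here

def aborts():
    raise AbortSignal()

def times_out():
    raise ConstraintAbort()
```

With the Automatic policy, `check_abort(aborts, "failed")` absorbs the abort and returns the fallback, while `check_abort(times_out, "failed")` lets the constraint-generated abort keep propagating.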
Yet Another Little List Function
In our never-ending process of extending and polishing the Wolfram Language we’re constantly on the lookout for “lumps of computational work” that people repeatedly want to do, and for which we can create functions with easy-to-understand names. These days we often prototype such functions in the Wolfram Function Repository, then further streamline their design, and eventually implement them in the permanent core Wolfram Language. In Version 13.2 just two new basic list-manipulation functions came out of this process: PositionLargest and PositionSmallest.
We’ve had the function Position since Version 1, as well as Max. But something I’ve often found myself needing to do over the years is to combine these to answer the question: “Where is the max of that list?” Of course it’s not hard to do this in the Wolfram Language—Position[list, Max[list]] basically does it. But there are some edge cases and extensions to think about, and it’s convenient just to have one function to do this. And, what’s more, now that we have functions like TakeLargest, there’s an obvious, consistent name for the function: PositionLargest. (And by “obvious”, I mean obvious after you hear it; the archive of our livestreamed design review meetings will reveal that—as is so often the case—it actually took us quite a while to settle on the “obvious”.)
Here’s PositionLargest in action:

And, yes, it has to return a list, to deal with “ties”:

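The semantics are easy to state precisely. As a one-function Python analogue (0-based indices here, unlike the 1-based Wolfram Language result):

```python
def position_largest(values):
    """Indices of all maximal elements (0-based; Wolfram Language uses 1-based)."""
    largest = max(values)
    return [i for i, v in enumerate(values) if v == largest]

print(position_largest([7, 1, 7, 3]))  # [0, 2] -- ties give multiple positions
```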
Graphics, Image, Graph, …? Tell It from the Frame Color
Everything in the Wolfram Language is a symbolic expression. But different symbolic expressions are displayed differently, which is, of course, very useful. So, for example, a graph isn’t displayed in the raw symbolic form

but rather as a graph:

But let’s say you’ve got a whole collection of visual objects in a notebook. How can you tell what they “really are”? Well, you can click them, and then see what color their borders are. It’s subtle, but I’ve found one quickly gets used to noticing at least the kinds of objects one commonly uses. And in Version 13.2 we’ve added some additional distinctions, notably between images and graphics.
So, yes, the object above is a Graph—and you can tell that because it has a purple border when you click it:

This is a Graphics object, which you can tell because it’s got an orange border:

And here, now, is an Image object, with a light blue border:

For some things, color hints just don’t work, because people can’t remember which color means what. But for some reason, adding color borders to visual objects seems to work very well; it gives the right level of hinting, and the fact that one often sees the color when it’s obvious what the object is helps cement a memory of the color.
In case you’re wondering, there are some others already in use for borders—and more to come. Trees are green (though, yes, ours by default grow down). Meshes are brown:

Brighter, Better Syntax Coloring
How can we make it as easy as possible to type correct Wolfram Language code? This is a question we’ve been working on for years, gradually inventing more and more mechanisms and features. In Version 13.2 we’ve made some small tweaks to a mechanism that’s actually been in the system for a few years, but the changes we’ve made have a substantial effect on the experience of typing code.
One of the big challenges is that code is typed “linearly”, essentially (apart from 2D constructs) from left to right. But (just as in natural languages like English) the meaning is defined by a more hierarchical tree structure. And one of the issues is to know how something you typed fits into that tree structure.
Sometimes this is visually obvious quite locally in the “linear” code you typed. But often what defines the tree structure is quite far away. For example, you might have a function with several arguments that are each large expressions. And when you’re looking at one of the arguments it may not be obvious what the overall function is. Part of what we’re now emphasizing more strongly in Version 13.2 is dynamic highlighting that shows you “what function you’re in”.
It’s highlighting that appears when you click. So, for example, this is the highlighting you get clicking at several different positions in a simple expression:

Here’s an example “from the wild” showing you that if you type at the position of the cursor, you’ll be adding an argument to the ContourPlot function:

But now let’s click in a different place:

Here’s a smaller example:

User Interface Conveniences
We first introduced the notebook interface in Version 1 back in 1988. And already in that version we had many of the current features of notebooks, like cells and cell groups, cell styles, etc. But over the past 34 years we’ve been continuing to tweak and polish the notebook interface to make it ever smoother to use.
In Version 13.2 we have some minor but convenient additions. We’ve had the Divide Cell menu item (cmd-shift-D) for more than 30 years. And the way it’s always worked is that you click where you want a cell to be divided. Meanwhile, we’ve always had the ability to put several Wolfram Language inputs into a single cell. And while sometimes it’s convenient to type code that way, or import it from elsewhere like that, it makes better use of all our notebook and cell capabilities if each independent input is in its own cell. Now in Version 13.2 Divide Cell can make it like that, analyzing multiline inputs and dividing them between complete inputs that occur on different lines:

Similarly, if you’re dealing with text instead of code, Divide Cell will now divide at explicit line breaks, which can correspond to paragraphs.
In a completely different area, Version 13.1 added a new default toolbar for notebooks, and in Version 13.2 we’re beginning the process of progressively adding features to this toolbar. The main obvious feature that’s been added is a new interactive tool for changing frames in cells. It’s part of the Cell Appearance item in the toolbar:

Just click a side of the frame style widget and you’ll get a tool to edit that frame style, and you’ll immediately see any changes reflected in the notebook:

If you want to edit all the edges together, you can lock the settings with:

Cell frames have always been a useful mechanism for delineating, highlighting or otherwise annotating cells in notebooks. But in the past it’s been comparatively difficult to customize them beyond what’s in the stylesheet you’re using. With the new toolbar feature in Version 13.2 we’ve made it very easy to work with cell frames, making it realistic for custom cell frames to become a routine part of notebook content.
Mixing Compiled and Evaluated Code
We’ve worked hard to have code you write in the Wolfram Language immediately run efficiently. But by taking the extra one-time effort to invoke the Wolfram Language compiler, telling it more details about how you expect to use your code, you can often make your code run more efficiently, and sometimes dramatically so. In Version 13.2 we’ve been continuing the process of streamlining the workflow for using the compiler, and of unifying code that’s set up for compilation with code that’s not.
The primary work you have to do in order to make the best use of the Wolfram Language compiler is in specifying types. One of the important features of the Wolfram Language in general is that a symbol x can equally well be an integer, a list of complex numbers or a symbolic representation of a graph. But the main way the compiler gains efficiency is by being able to assume that x is, say, always going to be an integer that fits into a 64-bit computer word.
The Wolfram Language compiler has a sophisticated symbolic language for specifying types. Thus, for example

is a symbolic specification for the type of a function that takes two 64-bit integers as input, and returns a single one. TypeSpecifier[ ... ] is a symbolic construct that doesn’t evaluate on its own, and can be used and manipulated symbolically. And it’s the same story with Typed[ ... ], which allows you to annotate an expression to say what type it should be assumed to have.
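Concretely, a specification of the kind just described can be written like this:

```wolfram
(* the type of a function taking two 64-bit integers and returning one *)
TypeSpecifier[{"Integer64", "Integer64"} -> "Integer64"]
```

Since TypeSpecifier is inert, this expression simply returns itself, and can be passed around and manipulated like any other symbolic expression.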
But what if you want to write code that can either be evaluated in the ordinary way, or fed to the compiler? Constructs like Typed[ ... ] are for permanent annotation. In Version 13.2 we’ve added TypeHint, which allows you to give a hint that can be used by the compiler, but will be ignored in ordinary evaluation.
This compiles a function assuming that its argument x is an 8-bit integer:

By default, the 100 here is assumed to be represented as a 64-bit integer. But with a type hint, we can say that it too should be represented as an 8-bit integer:

150 doesn’t fit in an 8-bit integer, so the compiled code can’t be used:

But what’s relevant here is that the function we compiled can be used not only for compilation, but also in ordinary evaluation, where the TypeHint effectively just “evaporates”:

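Putting those pieces together, the workflow described above can be sketched as follows (the specific function and values here are illustrative):

```wolfram
(* compile assuming x is an 8-bit integer, hinting that 100 is one too *)
cf = FunctionCompile[
   Function[Typed[x, "Integer8"], x + TypeHint[100, "Integer8"]]];

cf[20]    (* uses the compiled 8-bit code *)

cf[150]   (* fails: 150 doesn't fit in a signed 8-bit integer *)

(* the same pure function also works in ordinary evaluation,
   where the TypeHint just "evaporates" *)
Function[Typed[x, "Integer8"], x + TypeHint[100, "Integer8"]][20]
```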
As the compiler develops, it will be able to do more and more type inferencing on its own. But it will always be able to get further if the user gives it some hints. For example, if x is a 64-bit integer, what type should be assumed for x^x? There are certainly values of x for which x^x won’t fit in a 64-bit integer. But the user might know that those won’t show up. And so they can give a type hint saying that x^x should be assumed to fit in a 64-bit integer, and this will allow the compiler to do much more with it.
It’s worth mentioning that there are always going to be limitations to type inferencing, because, in a sense, inferring types requires proving theorems, and there can be theorems that have arbitrarily long proofs, or no proofs at all in a given axiomatic system. For example, imagine asking whether the type of a zero of the Riemann zeta function has a certain imaginary part. To answer this, the type inferencer would have to resolve the Riemann hypothesis. But if the user just wanted to assume the Riemann hypothesis, they could, at least in principle, use TypeHint.
TypeHint is a wrapper that means something to the compiler, but “evaporates” in ordinary evaluation. Version 13.2 adds IfCompiled, which lets you explicitly delineate code that should be used with the compiler, and code that should be used in ordinary evaluation. This is useful when, for example, ordinary evaluation can use a sophisticated built-in Wolfram Language function, but compiled code will be more efficient if it effectively builds up similar functionality from lower-level primitives.
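A sketch of how IfCompiled might be used, under the assumption that its first argument is taken by the compiler and its second by ordinary evaluation:

```wolfram
(* hypothetical example: compiled code averages a vector with an explicit
   low-level loop, while ordinary evaluation just calls the built-in Mean *)
mean = Function[Typed[v, "PackedArray"::["Real64", 1]],
   IfCompiled[
     Module[{t = 0.}, Do[t += v[[i]], {i, Length[v]}]; t/Length[v]],
     Mean[v]]];
```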
In its simplest form FunctionCompile lets you take an explicit pure function and make a compiled version of it. But what if you have a function to which you’ve already assigned downvalues, like:

Now in Version 13.2 you can use the new DownValuesFunction wrapper to give a function like this to FunctionCompile:

This is important because it lets you set up a whole network of definitions using := etc., and then have them automatically be fed to the compiler. In general, you can use DownValuesFunction as a wrapper to tag any use of a function you’ve defined elsewhere. It’s somewhat analogous to the KernelFunction wrapper that you can use to tag built-in functions, and to specify what types you want to assume for them in code that you’re feeding to the compiler.
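One plausible pattern for this workflow looks like the following (the definition of addOne is invented for illustration):

```wolfram
(* an ordinary definition made with := *)
addOne[x_] := x + 1

(* wrap it in DownValuesFunction to hand it to the compiler *)
cf = FunctionCompile[
   Function[Typed[x, "MachineInteger"], DownValuesFunction[addOne][x]]];

cf[41]
```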
Packaging Large-Scale Compiled Code
Let’s say you’re building a substantial piece of functionality that might include compiled Wolfram Language code, external libraries, etc. In Version 13.2 we’ve added capabilities to make it easy to “package up” such functionality, and for example deploy it as a distributable paclet.
As an example of what can be done, this installs a paclet called GEOSLink that includes the GEOS external library and compiler-based functionality to access it:

Now that the paclet is installed, we can use a file from it to set up a whole collection of functions that are defined in the paclet:

Given the code in the paclet we can now just start calling functions that use the GEOS library:

It’s quite nontrivial that this “just works”. For it to work, the system has to have been told to load and initialize the GEOS library, as well as to convert the Wolfram Language polygon geometry to a form suitable for GEOS. The returned result is also nontrivial: it’s essentially a handle to data that lives inside the GEOS library, but is being memory-managed by the Wolfram Language system. Now we can take this result, and call a GEOS library function on it, using the Wolfram Language binding that’s been defined for that function:

This gets the result “back from GEOS” into pure Wolfram Language form:

How does all this work? This goes to the directory for the installed GEOSLink paclet on my system:

There’s a subdirectory called LibraryResources that contains dynamic libraries suitable for my computer system:

The libgeos libraries are the raw external GEOS libraries “from the wild”. The GEOSLink library was built by the Wolfram Language compiler from Wolfram Language code that defines the “glue” for interfacing between the GEOS library and the Wolfram Language:

What is all this? It’s all based on new functionality in Version 13.2. Ultimately what it’s doing is to create a CompiledComponent construct (which is a new thing in Version 13.2). A CompiledComponent construct represents a bundle of compilable functionality with elements like "Declarations", "InstalledFunctions", "LibraryFunctions", "LoadingEpilogs" and "ExternalLibraries". And in a typical case, like the one shown here, one creates (or adds to) a CompiledComponent using DeclareCompiledComponent.
Here’s an example of part of what’s added by DeclareCompiledComponent:

First there’s a declaration of an external (in this case GEOS) library function, giving its type signature. Then there’s a declaration of a compilable Wolfram Language function GEOSUnion that directly calls the GEOSUnion function in the external library, defining it to take a certain memory-managed data structure as input, and return a similarly memory-managed object as output.
From this source code, all you have to do to build an actual library is use BuildCompiledComponent. And given this library you can start calling external GEOS functions directly from top-level Wolfram Language code, as we did above.
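Schematically, source code of the kind being described might look something like this; all the names, types and signatures here are hypothetical placeholders, not the actual GEOSLink source:

```wolfram
(* hypothetical sketch of declaring a compiled component *)
DeclareCompiledComponent["MyComponent",
  "Declarations" -> {
    (* type signature for a function in an external library;
       $myLibraryPath and the signature are placeholders *)
    LibraryFunctionDeclaration["GEOSUnion", $myLibraryPath,
      {"OpaqueRawPointer", "OpaqueRawPointer"} -> "OpaqueRawPointer"]},
  "InstalledFunctions" -> <|
    (* a compilable wrapper exposed to top-level code *)
    "AddOne" -> Function[Typed[x, "MachineInteger"], x + 1]
  |>];

(* build an actual dynamic library from the component *)
BuildCompiledComponent["MyComponent"]
```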
But the CompiledComponent object does something else as well. It also sets up everything you need to be able to write compilable code that calls the same functions as you can inside the built library.
The bottom line is that with all the new functionality in Version 13.2 it’s become dramatically easier to integrate compiled code, external libraries, etc., and to make them conveniently distributable. It’s a rather remarkable simplification of what was previously a time-consuming and complicated software engineering challenge. And it’s a good example of how powerful it can be to set up symbolic specifications in the Wolfram Language and then use our compiler technology to automatically create and deploy code defined by them.
And More…
In addition to all the things we’ve discussed, there are other updates and enhancements that have arrived in the six months since Version 13.1 was released. A notable example is that there have been no fewer than 241 new functions added to the Wolfram Function Repository during that time, providing specific add-on functionality in a whole range of areas.
But within the core Wolfram Language itself, Version 13.2 also adds a number of little new capabilities that polish and round out existing functionality. Here are some examples:
Parallelize now supports automatic parallelization of a variety of new functions, particularly ones related to associations.
Blurring now joins DropShadowing as a 2D graphics effect.
MeshRegion, etc. can now store vertex coloring and vertex normals to allow enhanced visualization of regions.
RandomInstance does much better at quickly finding nondegenerate examples of geometric scenes that satisfy specified constraints.
ImageStitch now supports stitching images onto spherical and cylindrical canvases.
Functions like Definition and Clear that operate on symbols now consistently handle lists and string patterns.
FindShortestTour has a direct way to return individual features of the result, rather than always packaging them together in a list.
PersistentSymbol and LocalSymbol now allow reassignment of parts using functions like AppendTo.
SystemModelMeasurements now gives diagnostics such as rise time and overshoot for SystemModel control systems.
Import of the OSM (OpenStreetMap) and GXF geo formats is now supported.