> The idea, particularly as realized in the GitHub pull request workflow, is that the real “unit of change” is a pull request, and the individual commits making up a PR are essentially irrelevant.
I loathe GitHub PRs because of this. Working at $dayjob the unit of change is the commit, and every commit is reviewed and signed off by at least 1 peer.
And you know what? I love it. Yes, there's some overhead. But I can understand each commit in its entirety. I and my coworkers have caught numerous issues in code with these single-purpose commits of digestible size.
Compare this to GitHub PRs, which tend to be beastly things that are poorly structured (not to mention GitHub's UI only adding to the review problems...) and multipurpose. Reviewing these big PRs with care is just so much harder. People don't care about the commit message, so looking at the git log it's just a mess that's hard to navigate.
My ideal workflow is commits as small as possible and PRs that "tell a story", meaning that they provide the context for the commits.
I will split up a PR into
- Individual steps of a refactor, especially making any moves their own commits
- Tests added *before* the feature (passing, showing the old behavior)
- The actual fix or feature commit, kept tiny, with the diff of the tests demonstrating how behavior changed
This makes it really fast to review a PR because you can see the motivation while only looking at a small change. No jumping around trying to figure out how the pieces fit together.
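As an illustration, the history for a small fix structured that way might read like this (commit hashes and subjects here are hypothetical, just to show the shape):

```shell
# git log --oneline, oldest first:
#   a11ce00  refactor: extract validate_input() (no behavior change)
#   b22df11  refactor: move validation into its own module (pure move)
#   c33ea22  test: cover current fallback behavior (passing, pre-fix)
#   d44fb33  fix: apply user-supplied fallback; test diffs show the change
```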
The main reason I might split up a PR like that is if one piece is likely to generate a discussion before it's merged. I then make that a separate PR and as small as possible so we can have a focused discussion.
I hate squash merges.
As someone who has often had to dig into the history to figure out what happened, I always want to see at least this. And I wouldn't be opposed to seeing it broken down even more as it was worked on. Not one big squash merge that hides what really happened.
I'll also add one more to your list: Any improvements that came out of the review but stayed in that merge should each be individual commits. I've seen hard-to-trigger bugs get introduced from what should have been just a style improvement.
One of the problems is that GitHub's UI and workflow isn't very good for this in various ways (can't review commits, can't really diff with previous after amending commit).
So as a rule, I tend to stick with "1 PR == 1 commit", except when there's a compelling reason not to.
Worrying about individual commits is also what makes it possible for you to later use git bisect without going crazy.
And to have a useful "git blame". My editor setup shows me a subtle git blame on each line of code, and I find it quite helpful to know who changed what last and why. Both when coding, and when debugging.
This is why, contra the linked article about commit messages, I strive to make minimal and cohesive commits with good messages. Commits are for future archaeology, not just a way to save my work every minute in case my hard drive dies.
How often do you find that command useful?
In ~18 years of git use, I have never needed it, but I see it mentioned often as an important reason to handle commits in some certain way. I wonder what accounts for the difference.
I think it’s not that you couldn’t have used it, but that because you discount it, it wasn’t something you reached for. If you flip the script and treat it as something that’s out there, explicitly looking for opportunities to use it, they’re there. Alternatively, you don’t structure your commits carefully, and thus git bisect for you is a mess that would pull up a giant amount of code anyway.
Heck, I used it yesterday because I had a PR where I was cleaning things up in the C++ build system, and things stopped working in CI in weird ways that I couldn’t figure out but that were fine locally. I used bisect locally to figure out which commits to test. You just have to trust that a blind bisect search is going to be more effective than trying to spot-check the commits that might be a problem (and for tricky bugs this is often the case, because your intuition can mislead you).
I’ve also used it to find weird performance regressions I couldn’t figure out.
I used it a few weeks ago to track down a weird regression/bug in Firefox. A few years ago I used it to track down a regression in Wine.
That's probably the most important case: large complex codebases with lots of changes where "wtf is going on" isn't so obvious from just the code.
I've never used it for any of my personal projects or even at my dayjob, because they're much smaller with far fewer changes (relatively speaking).
It's useful when the codebase is difficult to debug directly. Eg, your users have a bug that maybe appears on specific hardware, which the developers don't have. The users can't be expected to comprehend the code base enough to debug, but bisect is a mechanical process that they are capable of.
Having said that, bisect is also an O(log N) method and it's useful where otherwise you might end up spending O(N) time debugging something. I have myself split a configuration change into many stupidly-small commits (locally, without the intention to push that) purely so I could run bisect instead of manually reviewing the change to figure out which part broke stuff.
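To make that O(log N) search concrete, here is a minimal throwaway demo (all file names, commit messages, and the "bug" are made up): 15 commits, the 7th introduces the word BUG, and `git bisect run` finds it automatically in about 4 test runs instead of 15.

```shell
# Build a disposable repo with 15 commits; commit 7 onward contains the "bug".
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in $(seq 1 15); do
  if [ "$i" -ge 7 ]; then echo "BUG $i" > app.txt; else echo "ok $i" > app.txt; fi
  git add app.txt && git commit -qm "commit $i"
done
# Mark HEAD bad and the root commit good, then let bisect drive the search.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
# Test script: exit 0 = good (no BUG), nonzero = bad. Bisect does the rest.
git bisect run sh -c '! grep -q BUG app.txt'
```

The final line of output names the first bad commit; in a real repo you would swap the `grep` for your build-and-test command.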
Git bisect is one of those tools that - once you learn how to use it effectively - fundamentally changes how you think about your git repos. I have had people tell me that a clean git history isn't worth the effort, but once they really grok what you can do with that solid foundation, they really come around.
One case where git bisect really saved me was when we discovered a really subtle bug in an embedded project I was working on. I was able to write a 20-line shell script that built the code, flashed the device, ran 100 iterations of a check, and did some basic stats. I left the bisect chugging away over the weekend and came back to an innocuous-looking commit from 6 months earlier. We could've spent weeks trying to find the root cause without it.
What's N here?
In the case of git bisect, the number of commits. In the alternate case, it depends what debugging strategy you decided to use.
Occasionally, but when it's useful, it's very useful. But generally only if most commits are buildable and roughly functional, otherwise it becomes a big pain (as does any manual process of finding what change introduced a regression).
Same. I've only done bisect debugging a few times. I'm almost always able to use more traditional debugging especially if I have a good idea about where the bug must be from behavior.
Bisects are good when the bug is reproducible, you have a "this used to work and now it doesn't" situation, and the code base is too big or too unfamiliar for you to have a good intuition about where to look. You can pretty quickly get to the commit where the bug first appears, and then look at what changed in that commit.
How do you trace the origin of breaking changes, especially those arising from integration problems? For fairly busy codebases (>10 commits per day) and a certain subset of regressions, bisect is invaluable in finding the root cause. You can always do it the "hard way", so it's not the only way.
I don't really ever find myself having to do that. I guess it's been a long time since I worked in an environment which did not use an "only merge to main after passing CI" workflow, and back then we weren't using git, anyway.
There was one git-using startup I worked for which had a merge-recklessly development style, and there was one occasion when I could have used `git bisect` to disprove my coworker's accusation that I had thoughtlessly broken the build (it was him, actually, and, yuck - what a toxic work environment!), but the commit in question was like HEAD~3 so it would probably have taken me longer to use git bisect than to just test it by hand.
> I don't really ever find myself having to do that. I guess it's been a long time since I worked in an environment which did not use an "only merge to main after passing CI" workflow, and back then we weren't using git, anyway.
You _never_ have bugs that slip through the test suite? That is extremely impressive / borderline impossible. Even highly tested gold-standard projects like SQLite see bugs in production.
And once you find a regression that wasn't covered by your test suite, the fastest way to find where it originated is often git bisect.
Bugs that slip through the test suite, where it's important to trace the origin, and where finding it requires building more than a couple of best-guess versions. And even then, if it wastes an hour or two once in a blue moon, that's not a big motivator for workflow changes.
You're skeptical of a far stronger claim than the one they actually made.
> I guess it's been a long time since I worked in an environment which did not use an "only merge to main after passing CI"
It's the same for me, but some integration bugs still escape the notice of unit tests. Examples from memory: a specific subset of users being thrown into an endless redirect due to a cookie rename that wasn't propagated across all sub-systems on the backend, multiple instances of run-time errors that resulted from dependency version mismatches (dynamic loading), and a new notification banner element covering critical UI elements on an infrequently used page - effectively conflicting CSS positioning. In all these cases the CI tests were passing, but passing tests don't mean your software is working as expected in all cases.
I also find git bisect to be useful, but very rarely and never for personal projects.
In the cases you mentioned, robust e2e and integration tests would ideally be able to catch the bugs. And for the UI issue in particular, I wouldn't think to track down the commit that caused it, but just fix it and move on.
Honestly, if you haven't ever used git bisect, I'd say you're missing out on a very powerful tool. To be able to isolate, without any knowledge of the code base, the exact commit that introduced a bug is incredibly powerful.
Often enough that I am truly shocked to hear someone say they have never needed it.
A coworker taught me how to use it long ago, else I would never have known it was there to reach for.
And the few times I've reached for it, I was really thankful it was there.
I’ve just used it two times in the last few months. One was to track down a commit which triggered a bug I found in Git. I wouldn’t be able to troubleshoot it myself. And I couldn’t send the whole repository because it’s not OSS. But with a script to reproduce the bug and half an hour I was able to find the problematic change.
I also tried to make a minimal reproduction but wasn’t able to.
I don't use git much. But at my job, where we use Mercurial and where the unit of work is the commit, I use bisect frequently. When one of our automated tests starts failing, I can run bisect and easily find the commit that caused it.
I personally bisect regularly to find when issues were introduced.
I used it multiple times to track down which commit introduced a confusing bug.
Squash merge gives you the same thing.
Squash merges simply guarantee that git bisect will not be able to pinpoint a breaking change, because that history is gone.
If you treat a PR as a unit of work, then there is nothing to bisect. If you don't treat it as a unit of work, then people just edit their git history to merge commits just like a squash.
> If you treat a PR as a unit of work, then there is nothing to bisect.
You're bisecting the history of PR merges.
You ignored the part where I claimed that without squash merging, people will just do it manually with git rebasing or amending.
Here’s the most relevant (to me) difference:
- The real unit of change lives in Git
- The real unit of change lives on some forge
I want it to live in Git.
They are the same if you use squash merge.
modulo the commit message, which github apparently takes a lot of effort to not surface when needed; the most egregious example is that it's a complete afterthought to fill out right before the 'squash and merge' button becomes green.
You can make the PR description the commit message through configuration.
>Working at $dayjob the unit of change is the commit, and every commit is reviewed and signed off by at least 1 peer.
Respectfully, that's the dumbest thing I've ever heard.
Work on your feature until it's done, on a branch. And then when you're ready, have the branch reviewed. Then squash the branch when merging it, so it becomes 1 commit.
Commit and push often, on branches, and squash when merging. Don't review commits, review the PR.
I've had people at various jobs accidentally delete a directory, effectively losing all their progress, sometimes weeks worth of work. I've experienced laptops being stolen.
If I used your system, over the years me and various colleagues would have lost work irretrievably a few times now, potentially sinking a startup due to not hitting a deadline.
I feel your approach shows a very "Nothing bad will ever happen" attitude.
Yes, of course you should have a backup. Most of those don't run every few minutes, though. Or even every few hours.
"Just trust the backup" feels like a really overkill solution for a system that has, as a core feature, pushing to a remote server. And frankly, a way to justify not using the feature.
Is this what Gerrit does?
Pretty much, yes. I've only used Gerrit a few times so my direct experience is limited.
This encourages commit size to grow drastically.
What's the difference between this and squash merging PRs? A commit or a PR can be large. I don't see the difference.
> A commit or a PR can be large. I don't see the difference.
They made it pretty clear they're talking about not-large commits. And they're contrasting that with any-size PRs.
That's a false dichotomy, lurker. A PR or a commit can be large or small.
It's not a false dichotomy. They're just using different terms than you would, based on their experience with how the people around them use those systems.
It's comparing carefully manicured git history hacking to unregulated PRs. It's pure dogma.
I think it's the review process. Do you review 1 commit or multiple?
JJ just passed a milestone for me personally: no hiccups for more than 6 months, and it feels genuinely superior to git - and also to Sapling - for performance, stability, and UX. If you ever considered switching, I think now might be the time. Colocated mode does not feel like a second-class citizen and works really well too, so there is always the option to use a sophisticated git client like Fork for certain tasks. VisualJJ is also a great, albeit not open source, VSCode extension that is slowly catching up to ISL. (If you are used to ISL and think VisualJJ looks empty and lacks features: most things are hidden in the context menu, which takes some getting used to.)
I began using Jujutsu as my VCS about 2 months ago. Considering most of my work is on solo projects, I love the extra flexibility and speed of being able to safely fixup recent commits. I also love not having to wrangle the index, stashes, and merges.
`lazyjj` [1] makes it easier to navigate around the change log (aka commit history) with single keypresses. The only workflow it's currently missing for me is `split`.
For the times when I have had to push to a shared git repo, I used the same technique mentioned in the article to prevent making changes to other developers' commits [2].
It's been a seamless transition for me, and I intend to use Jujutsu for years.
[1] https://github.com/Cretezy/lazyjj [2] https://jj-vcs.github.io/jj/latest/config/#set-of-immutable-...
Check out jjui - it is VASTLY better, and the dev is extremely open and responsive to feature requests.
Hey! I'm the author of lazyjj, let me know if you are missing any features in it!
Huh, reading the penultimate "“Units of change” and collaboration" section reinforces the feeling that Github PRs really are a poor way to do code submission/review, and have been holding back a lot of the industry from better ways of working for a long time.
GitHub-style PRs are the worst way of reviewing changes, except (in practice, if not in theory) for all the others :P.
When my then-employer first started using git, I managed to convince them to set up Gerrit -- I had my own personal Gerrit instance running at home, it was great. I think it lasted about a year before the popularity factor kicked in and we switched to GitLab.
At least part of the difficulty is that approximately everyone who would need to learn a new mechanism is already familiar with PRs. And folks new to the industry learn pretty quickly that it's normal and "easy". The cultural inertia is immense, and that means we're much more likely to evolve our way out of the hole (or switch to a completely different paradigm) than we are to switch en masse to a different way of working that's still Git.
There are ways to make the PR experience more reasonable; GitHub has introduced stacked PRs which queue up. Even without that, I find disabling merge commits puts my team in a sweet spot where even if I'd prefer Gerrit, I'm not tearing my hair out and neither are the rest of my team who (mostly) don't care all that much.
I get to use both Gerrit (rebase-cherrypick workflow) and gitlab (PR/MR workflow) at work.
I think that MR is better for smaller projects i.e. ~10devs - it's lower overhead, just commit while you work then write up a description when you push.
I think rebase-CP is better for larger projects like ~100 devs committing to a repo - linear git history and every commit having a clear description+purpose+review is worth the overhead at that point.
So one-off tools and infra and stuff get chucked into gitlab while "the product" is in Gerrit.
I've never really used GitHub's PR system, always something else, but once in a while I stumble in there from an open-source repo, and PRs seem impossible to read with the individual commits.
Why doesn't Github just flatten/squash the stack of commits visually? Like I don't care what they're doing inside .git, can't they just diff the latest commit against the base commit and display that? So it's visually merged even if it's not merged in git's log?
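For what it's worth, git itself can already produce that flattened view: diff the branch tip against its fork point. A throwaway sketch (repo, file names, and commit subjects all made up for the demo):

```shell
# Disposable repo: main plus a 2-commit feature branch.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > f.txt && git add f.txt && git commit -qm base
git switch -qc feature
echo step1 >> f.txt && git commit -qam "step 1"
echo step2 >> f.txt && git commit -qam "step 2"
# One diff covering everything the branch changed, however many commits it took:
git diff "$(git merge-base main feature)" feature
```

This is essentially what a squashed view would show; a forge could render it without touching the underlying history.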
---
At my work we do single commit. I find it annoying to work that way, I sometimes try to make clones of certain commits so I can restore to that point if I need to, but for reviewing, having everything in one neat little bundle, it's nice.
> GitHub has introduced stacked PRs which queue up
I missed that. How does it work?
It's only available for orgs, not for personal repositories, and when we tried to use it in my team the size of our OIDC claim meant something broke so we had to turn it back off again :P. So I didn't get much experience with it.
But it seems like it gives you a place in a queue, meaning you don't need to keep rebasing as earlier PRs get merged: you can see what the state of the main branch should be by the time your PR reaches the head of the queue and develop accordingly.
https://docs.github.com/en/repositories/configuring-branches...
Thanks!
Graphite app helps with stacked PRs and has a good explanation. Not affiliated just a happy user
It's maddening, considering their size, and that there are proven other examples out there of versioned change sets and divorcing the commit ID from the content ID.
Surprised no one has mentioned https://graphite.dev/ yet. Our team uses it for stacked PRs, and it works really well.
> Code review for the age of AI
Gross.
How do they compare?
Jujutsu is “just” a piece of software that you install, and it is free and open source. Graphite.dev is a service, and it is not free, but as a service it gives you features that Jujutsu cannot like automatically merging stacked PRs.
For something as fundamental as source control in this day and age, I’d go with the former open source option (and have recently been learning Jujutsu)…
This article covers my own experience with JJ very accurately. I'll even go as far as to say that if I had to write my own article about jj, I'd use exactly the same talking points and examples. Great writeup!
I would really love a comparison between JJ and fossil. I use fossil for personal projects instead of git. So I'd like to know if I should consider JJ.
I think the biggest difference is one of philosophy: in Fossil (as I understand it) every change you make to the code is tracked, and there's no concept of rebasing or making the history prettier. If you make a commit and then realise you've left some debug logs in the code, you've got to make a new commit to delete those logs. This has the advantage that you never lose data, but the disadvantage that all of those commits end up in your long-term project history, making things like bisecting or blaming more difficult.
In JJ, however, changes are explicitly mutable, which means if you finish a commit and then realise you want to go back and change something, you've got multiple ways to squash your fix directly into the original commit. In addition to this, you also have a local history of every change you make to your project (including rewriting existing commits), so you still never lose any data locally. However, when you finish adjusting your commits and push them to someone else's machine (e.g. GitHub), they will only get the finished commits, and not the full history of how you developed them.
I know some Fossil people feel very strongly that any sort of commit rewriting should be banned and avoided at all costs, because it creates a false history - you can use Jujutsu like that, but it probably doesn't add much over Fossil at that point. But in practice, I think Jujutsu strikes the right balance between allowing you to craft the perfect patch/PR to get a clean history, and preventing you from losing any data while you do that.
> I think the biggest difference is one of philosophy: in Fossil (as I understand it) every change you make to the code is tracked, and there's no concept of rebasing or making the history prettier. If you make a commit and then realise you've left some debug logs in the code, you've got to make a new commit to delete those logs. This has the advantage that you never lose data, but the disadvantage that all of those commits end up in your long-term project history, making things like bisecting or blaming more difficult.
How do you deal with accidentally committing text that wasn't supposed to be there (say, accidentally pasting a private email)?
Fossil can purge certain checkins, but it isn't intuitive, and will throw away both what you want to remove and anything that you wanted to keep. It's a bit of a manual rebuild.
It looks like it is aimed at larger projects, whereas fossil is nice for personal projects because it was designed by and for a small team.
JJ itself uses GitHub, whereas Fossil is very easy to self-host, including issue tracking, wiki, etc.
jj is just a CLI, analogous to git alone. It doesn’t come with any of the other stuff Fossil does. I don’t think large/small project makes any difference either.
> You don’t need to explicitly tell jj about what you’ve done in your working copy, it’s already tracked. This removes the need for an “index” or staging area
Does this mean that you have to proactively remember and undo any config file changes you made e.g. while fixing an issue in a test environment? Sounds a little risky.
As others have pointed out, gitignore exists and you should try and build your configuration so that most of the time you're changing gitignored files rather than checked in ones. That said, you can do some pretty cool stuff with Jujutsu in this regard. Because changes are always automatically rebased, you can often create the change that will ultimately become your final commit, then create a new change where you edit any config you need to, then a change on top of that that's basically your staging area.
For example, I recently had a set up that looked something like this:
@ w - the checked out change where I'm editing files
|
* x - local config change
|
* y (bookmark 1) - CI config change that sets up a temp test env
|
* z (bookmark 2) - Start of the changes that will eventually get reviewed and merged
With this set up, I could make changes in an environment that included all of the config changes I needed. Then when I was happy with something and wanted to "save my work", I could run `jj squash --to z`, which would squash everything in change w into change z, rebasing everything else automatically on top of that. Then I could run `jj git push`, and this force-pushed the changes at y and z to their separate branches on GitHub. I had a pull request for the branch at z which other people could then review and where all the tests could run. Meanwhile the branch at y had updated the CI config to remove the usual tests and deploy everything to a temporary environment.

So each push automatically updated my test env, and updated the pull request with the "clean" changes (i.e. without my config file fiddling). If I wanted this sort of setup more long term, I'd find some other way to achieve it without relying on Jujutsu, but for ad-hoc cases like this it's absolutely great.
Some of us do `git add .`; that way you can't ship an unbuildable commit because you forgot something.
What you should have is support for local, gitignore-able config files.
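A sketch of that pattern (file names and keys are hypothetical): a checked-in default config, plus an optional, gitignored local file that the app layers on top.

```shell
# Disposable demo of the "local override" config pattern.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'db_host=prod.example.com\n' > app.conf   # checked in, shared defaults
printf 'app.local.conf\n' > .gitignore           # local overrides never committed
printf 'db_host=localhost\n' > app.local.conf    # your private test-env tweaks
# Load defaults first, then the local override if it exists:
. ./app.conf
if [ -f app.local.conf ]; then . ./app.local.conf; fi
echo "db_host=$db_host"
```

Because the override file is gitignored, there is nothing to remember to undo before committing.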
Like the author, I'd appreciate a stacked PRs approach, integrated with GitHub (unfortunately). E.g. `a → b → c → d` where I have PRs open for `b`, `c` and not yet on `d`, that are "linked" to the jj changes. So 1 change per PR or it could even be multiple. I've lately become a huge fan of git-spice, that just works.
Stacked PRs is clearly a nice workflow... except that forges like Github generally have really poor support for them.
I wish Github would just allow you to say "this PR depends on this other PR", and then it wouldn't show commits from the other PR when reviewing the stacked PR.
But as far as I know you can't do that.
If you change the target branch, it will only show commits that are against that. I wish GitHub did that automatically, because it has the parent commit tree, but when you do that you'll only see commits that diverge from the target branch/PR.
Is that what you want?
Not really. I've tried that with Gitlab, but it's kind of awkward. My PR isn't to merge branch B into branch A. For instance if the project owners see the PR for B and say "yeah great! I'll merge this" and press merge, it will just merge it into branch A and then close your PR!
It's a hacky workaround at best. I want proper support. I want Github to disable the merge button until the dependencies are merged. I want it to actually list the dependencies and link to them. I want the target branch to be `main`.
I think if they did those 3 things it would go 90% of the way. Too much to ask though clearly.
That works but depending on how you merge the PR, you end up needing to do a rebase on all of your future PRs.
I really wish 1 PR = 1 commit with revision-based diffs, similar to Gerrit.
Another person has commented that you can do this, but it's a little known thing. git-spice automatically manages it for you too.
It also doesn't work from forks.
This has annoyed me a few times, just recently a week ago. Thanks GitHub.
Graphite.dev does this, and it works great.
When I'm developing I inevitably fix one thing after another as I pass through the code. What I'd like is a tool to take such PRs and automatically split it up into loosely coupled, cohesive chunks.
With jj, I often do this and use `jj split -i`, which opens an interactive editor (similar to git's interactive add/rebase) where I can select parts of the change to be split into a separate change/commit. This enables me to take a large piece of work, split it into individual chunks, and open PRs for each change.
I have a tool called git-split for splitting commits: https://github.com/tomjaguarpaw/git-split/
I'm not sure you'd call it "automatic" though. Were you thinking of using an LLM to split the commits with some semantic awareness? It doesn't do that!
It should be possible to do that with LLM integration. Consider it a feature request :)
> If I had s -> t -> u -> v and wanted to reorder them, it’s as easy as jj rebase --revision u --after s, and I’d end up with s -> u -> t -> v,
How did t end up after u?
I'd expect that to fork into (s -> t) and (s -> u -> v). Either that or maybe (s -> t -> v) and (s -> u).
I'm not sure if you meant "how" or "why". As for "how", it's done by rebasing 'u' onto 's' and then rebasing 't' and 'v' onto the rebased 'u'.
As for "why", I think it behaves differently from what you expected in two ways. The first is that `--revision` rebases only the specified revisions, without their descendants (there's also `--source` if you want to include the descendants). The other thing is that `--after` is a short form of `--insert-after`, so it's designed for inserting commits between other commits. There's also `--before`/`--insert-before`, and `--destination`, which is what you expected (perhaps `--onto` would have been a better name for it). You can pass multiple commits to all these arguments, btw (yes, that includes `--destination`, so you can create new merge commits with `jj rebase`). https://jj-vcs.github.io/jj/latest/cli-reference/#jj-rebase has many examples.
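A side-by-side sketch of those flag variants, reusing the commit names from the example (illustrative only; it assumes a jj repo with the linear history s -> t -> u -> v):

```shell
jj rebase --revision u --insert-after s
#   s -> u -> t -> v       (u moved; t and v automatically rebased after it)

jj rebase --revision u --destination s
#   s -> t -> v  and  s -> u    (only u moves; v is reparented onto t)

jj rebase --source u --destination s
#   s -> t  and  s -> u -> v    (u and all its descendants move together)
```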
> it's done by rebasing 'u' onto 's' and then rebasing 't' and 'v' onto the rebased 'u'.
That sounds like 2 operations, not 1. I think that's why I was confused.
The docs do clarify that there's an extra rebase going on though, so thanks!
Yes, -d sounds like what I expected.
I've tried out jj a little bit personally, but without exaggeration I am using git submodules in every single "real" project I'm actually working on, so lacking support for submodules is a complete non-starter for me :/
What I do is I git clone --recursive and then init a colocated repository. JJ will find the submodules and ignore anything in them, it works decently well.
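For reference, that workflow looks roughly like this (the repo URL is hypothetical, and it assumes a jj version that supports colocated repos):

```shell
git clone --recursive https://example.com/project.git
cd project
jj git init --colocate            # jj and git share the same working copy
# jj leaves submodule contents alone; keep using plain git for them:
git submodule update --init --recursive
```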
As entrenched as git is, I feel like it's only a matter of time until it's dethroned.
The basic workflow is fine. And there are some very powerful features. But then you try to find the parent of a given branch and you're left wondering how the f#!@ that's so hard.
It's definitely nitpicking. It's probably 85-90% of what you want it to be. But there is definitely enough room for improvement to warrant moving beyond it. I think the workflows and integrations (GitHub, GitLab, etc.) make it stickier as well. I just don't think we should assume everyone is going to use git for the next 20+ years.
I think the larger holes in git are its lack of reasonable support for large files and also its inability to keep secrets.
Both relate to the same fundamental problem/feature: a git commit is a hash of its contents, including previous content (via the parent commit, whose hash is part of the input for the child commit hash).
> you try find the parent of a given branch
What do you mean by that?
Skimmed the article so I admittedly can't speak too much to the content of it, but just wanted to give my 2c on working on individual changes after spending a lot of time working with a stack-based VCS in Mercurial/Sapling - jj felt pretty hard to get used to, and after a couple of weeks I gave it up. I think it needs a visualization tool competitive with Interactive Smartlog.
I've settled on using Interactive Git Log in VSCode.
I often have "no changes" in my Codespaces, because I haven't committed them yet.
Would be nice, if Codespaces keeps the JJ change stored somewhere, so it isn't tied to the lifetime of the machine.
Jujutsu - I like the name.
I often see programming as going MMA with data, until you get what you want out of it.
Using brute force algorithms is going berserk, vibe coding is spamming summons in hope they get the job done, etc.
Agreed. I'd just add that vibe coding feels like spamming dim-witted Mr. Meeseeks and hoping they get the job done. It's _probabilistic programming_.
Sounds a lot like what’s being used internally at Google
It is a 20% project that became a full time thing and is used internally at Google, afaik
The workflow is indeed similar to perforce/piper “shelves”.
A thing to learn from JJ's performance is that not everyone is cut out to direct Star Wars movies.
Or Star Trek.
Nice writeup -- had been wondering about how it compares to Git (and any killer features) from the perspective of someone who has used it for a while. Conflict markers seems like the biggest one to me -- rebase workflows between superficially divergent branches has always had sharp edges in Git. It's never impossible, but it's enough of a pain that I have wondered if there's a better way.
For me, it's not so much that jj has any specific killer feature, it's that it has a fewer number of more orthogonal primitives. This means I can do more, with less. Simpler, but also more powerful, somehow.
In some ways, this means jj has fewer features than git. For example, jj doesn't have an index. But that's not because you lose what the index does for you: the index is just a commit, like any other. This means you can bring your full set of commit-editing tools to bear on the index, rather than it being its own special feature. It also means that commands don't need special "and do this to the index" flags, making them simpler overall while retaining the same power.
It’s clear from multiple posts over years that jj has a better UX than plain git.
Why not just merge it into git?
It's too different. At best you could make it an off-by-default mode.
And I'm sure there are some people who actually prefer Sapling or raw git.
Personally, I am not interested in learning a bunch of new interfaces to be, maybe, marginally more efficient, and in dealing with more abstraction layers if anything goes wrong.
> It’s clear from multiple posts over years that jj has a better UX than plain git.
it is clear that multiple people like it
not that the UX is strictly superior
I'm growing tired of these git alternatives.
I feel like people should just learn to use git.
If anything, I think the jujutsu project as it exists today is probably most attractive to people who already know Git quite well. It's hard to articulate why a given VCS design or workflow is pleasant without having some firsthand experience of nontrivial things that went wrong or at least chaotic.
Among the hard work on the jj maintainers' plate is making it just as attractive and intuitive to people without that background.
Why on Earth would these be in any way incompatible?
I’ve been using git for 15+ years, and definitely share the sentiment that basically everyone needs to learn it well, as it is just that entrenched.
…but I love that people are coming up with potential alternatives that have a meaningful chance of moving the whole VCS space forward.
jj seems to be the slowly evolving winner of these alternatives. It just does things on top of git that you cannot do by using git directly no matter how well you know git.
git is dumb. it's in the name. it's good enough, but it is not the best possible tool. neither is jj, but it's a damn fine improvement in some workflows.
I used jj for a bit. It messed up my code, put everything in staged and what not. I might try it again when it's stable.
It's by design. It's quite stable, too. You were probably confused by assuming it works like git, it really doesn't when you're working on a change. It kinda starts looking like git when changes are committed and pushed to master/main.
I've been using jj, but am considering going back to plain git now because all the latest llm tools know git and not jj. I don't recommend spending time on it now.
There are many available paths to learning that don't leave you beholden to what was in training data from ~2021. This attitude will definitely hurt one's opportunities for personal growth and enrichment.