namedtuple are not rendered in the variable browser as expected #1477
This code doesn't run for me. Having said that, the particular output that you quote is produced by the safe-repr logic in debugpy/src/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_safe_repr.py (lines 117 to 175 at ef9a67f).
So I suspect that the actual URL that you have in your data is running afoul of string limits for collection items - this is maxstring_inner in the same file, and it is set to 30.
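To make the effect concrete, here is a minimal sketch of how an inner-string limit like the one described above trims long values inside collections. The function name `truncate_inner` and the head/tail split are illustrative assumptions, not debugpy's actual implementation; only the limit of 30 comes from the discussion.

```python
MAXSTRING_INNER = 30  # the limit mentioned above for strings inside collections

def truncate_inner(s, limit=MAXSTRING_INNER):
    """Shorten a string to `limit` chars, keeping the head and tail
    with '...' in between (a common safe-repr truncation style)."""
    if len(s) <= limit:
        return s
    head = (limit - 3) // 2
    tail = limit - 3 - head
    return s[:head] + "..." + s[-tail:]

url = "https://example.com/some/very/long/path/that/overflows"
print(truncate_inner(url))  # a 30-char string with the middle elided
```

With a limit of 30, any URL of realistic length loses most of its path, which matches the behavior reported in this issue.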
This isn't configurable via launch.json, unfortunately, though you can of course just edit your local copy of debugpy. There's also an undocumented launch.json setting …
And I think these should be made configurable going forward. We're unlikely to do so for the current major version of debugpy, but given that a major rewrite is in the works, I suggest filing a feature request to configure everything that you think would be useful to configure, and then I'll add that under #1448.
Thanks @int19h. I wrote that code on the fly, so excuse me if it doesn't run as-is. I'm not sure a configurable option in the settings file is the best choice here. The debugger, and Python in general, are way too complex already and rich in settings. To me it should either show everything or not. If this fits the debugpy direction, I will file a feature request; I don't want to waste each other's time if it does not.
To clarify, do you mean that for this tuple it shouldn't be trying to exclude anything at all? The problem with showing everything in the most general case is that once you get a local variable containing, say, a multi-megabyte JSON string, or a collection of a few million items, it effectively makes the debug session useless because it gets stuck trying to render it.

As far as direction goes, on this particular subject we're not married to any particular approach. There are technical limitations on what we can do, but beyond that I would like to determine what works best for the users, and make it sufficiently configurable to cover all the disparate use cases. The only catch, aside from dealing with the possibility of extremely large values, is that whatever repr is used, if it is not trimmed, it should be possible to …
I haven't extensively tested PyCharm against a million-row result set or megabytes of data, but indeed I was put off by the fact that during my testing phase with 30 lines I was hardly getting anything.
Absolutely! The current values are definitely overly conservative, which shouldn't be surprising given that they were picked something like 13 years ago, in the context of typical hardware of that time and VS as a client, by an extremely scientific process of "let me make a guess". It's been so long that I can't remember all the details, but IIRC another intent was to produce values that usually fit within the typical width of the Variables pane (again, in VS, and given typical screen size & resolution back then), on the basis that an item can be expanded to look at individual members. If actual use shows that these limits are awkward, and if people prefer to get a long value that can be scrolled, they should be revised.

Keep in mind, though, that the scenarios here involve more than just a tuple. For example, consider something like this:

x = ([1, 2, 3, ..., 100], "foo")

If we use a large limit for that inner list, then in the Variables pane you will not be able to see "foo" at a glance, because it will be all the way to the right, so you'll have to scroll every time to check what it is. IIRC the logic behind this was that non-collection items are generally more interesting than nested collections, so the latter should be collapsed more aggressively, providing a view that gives a good overall idea of what the value is without unduly biasing any particular item. That's why the max length is different on different levels:

maxcollection = (15, 10)

Similarly, if there's a tuple where the first item is a string followed by a bunch of stuff, and we show the full length of the string, it effectively hides the other items from view. Thus the limits are also drastically different for strings:
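The per-level behavior described above can be sketched as follows. This is an illustrative toy, not debugpy's code: the `short_repr` name and recursion details are assumptions, with only the `maxcollection = (15, 10)` values taken from the comment.

```python
MAXCOLLECTION = (15, 10)  # item limits at nesting level 0, then level 1+

def short_repr(value, level=0):
    """Repr a value, showing at most MAXCOLLECTION[level] items per collection,
    so nested collections are collapsed more aggressively than the top level."""
    if isinstance(value, (list, tuple)):
        limit = MAXCOLLECTION[min(level, len(MAXCOLLECTION) - 1)]
        items = [short_repr(v, level + 1) for v in value[:limit]]
        if len(value) > limit:
            items.append("...")
        body = ", ".join(items)
        return "[%s]" % body if isinstance(value, list) else "(%s)" % body
    return repr(value)

x = (list(range(100)), "foo")
print(short_repr(x))  # the inner list is cut at 10 items, so 'foo' stays visible
```

The inner list collapses to its first ten items while the sibling string "foo" remains in view at the end of the line, which is exactly the trade-off being discussed.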
With that in mind, what do you think sensible limits for collection items would look like, such that they'd work well not only for this particular case, but for other cases you can anticipate in real-world data?
@zljubisic I'd also love to hear your opinion on this. |
You've made a case for designing the variable explorer as a plugin, though I think I've seen something like that once; I'd have to check. Taking into consideration the strong cases where extended iteration would make it impossible to see anything, I would at least consider fully displaying the very first item (or perhaps every Xth one, which could indeed be a config option).
Firstly, @WSF-SEO-AM reported non-functional code. I tried to make it work, but with no success. @WSF-SEO-AM, you are trying to pop a key "keys" that doesn't exist. Then you are trying to create … Apart from this, it is really bad practice to create a rows property inside an append method that you call in the Test constructor. Can @WSF-SEO-AM update the example to something that works, so that we can continue the discussion?
@zljubisic I think the specific issue here is clear enough. I'd like to tackle the bigger problem here, which is that, broadly speaking, values displayed in Variables pane for collections often aren't helpful for their intended purpose (which is to provide a useful condensation of information at a glance). This is not quite identical to the debug console issues we've been discussing, but it is very closely related (the main difference here, really, is the typical size of the "output window"). |
pydevd actually has a plugin system for this already, and the vendored version that we ship in debugpy should be able to pick up any installed plugins. Take a look here. However, I'm not aware of any libraries actually using it in practice, even though it has been around for a very long time now (predating debugpy). It's effectively only used to special-case popular libraries like numpy and pandas in pydevd itself. This is not surprising given that, from the perspective of library authors, it is an API specific to one - even if popular - Python debugger implementation. It basically reverses the problem: now, instead of the debugger having to support each library separately, the library has to support each debugger separately.

The way I see it, ideally, debugpy shouldn't have to special-case numpy, but numpy also shouldn't have to special-case debugpy. So I would say that this calls not just for plugins, but for an API that is a well-defined, standard way of writing them - one that strives to be debugger-agnostic while allowing for debugger-specific extensions, similar to DAP itself. Ideally in the form of a PEP, so that builtins and the stdlib can also use it. But that is a long-term goal that would take a while to achieve. In the meantime we're likely to use debugpy as a platform to prototype said API, which would also serve as a customization point.

Still, this leaves open the question of what the actual implementation of any given plugin would look like. Named tuples are a standard library data type, so debugpy is going to provide the default implementation for them for the foreseeable future. I want to make sure that this implementation is "good enough" for most people, so that they wouldn't need to bother customizing it (beyond maybe tweaking a few configuration knobs).
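For a sense of what such a plugin looks like, here is a hedged sketch in the shape of pydevd's string-presentation extension point, which (as best I recall from the vendored sources) discovers classes implementing a `can_provide`/`get_str` pair in packages named `pydevd_plugins.extensions`. The `Row` type and `RowStrProvider` class are hypothetical, and the exact interface details may differ from the current pydevd API; the sketch below runs standalone without pydevd installed.

```python
from collections import namedtuple

# A hypothetical application type whose default trimmed repr hides the URL.
Row = namedtuple("Row", "url clicks")

class RowStrProvider(object):
    """Sketch of a pydevd-style string-presentation provider: render Row
    namedtuples with the full URL instead of a trimmed generic repr."""

    def can_provide(self, type_object, type_name):
        # Only handle our Row type; pydevd calls this per-value.
        return type_object is Row

    def get_str(self, val):
        return "Row(url=%s, clicks=%d)" % (val.url, val.clicks)

provider = RowStrProvider()
row = Row("https://example.com/a/very/long/reporting/url", 42)
if provider.can_provide(type(row), type(row).__name__):
    print(provider.get_str(row))
```

In the real plugin system the class would live in a `pydevd_plugins.extensions` package on `sys.path` so the debugger can discover it, rather than being invoked directly as above.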
Can't really say how good this would be for the masses. Even a plugin could be an overarching but overly complex approach at this stage.
We are trying here to (re)invent the wheel. @int19h What do you say? |
No, I can see only the expected results. |
That screenshot does not contain any examples of values that are too big to fit, which is what the issue is fundamentally about. Sure, by rearranging the UI you can make for more space, but that doesn't really solve the problem - at some point you'll get a tuple with a 200-char URL in it; what then? As I noted earlier, the current limits are obviously too small and will be revised upwards. But, again, that doesn't solve the problem, just makes it less acute. |
…er as expected Significantly increase safe repr limits for strings and collections.
Just FYI, I'm using this API to enhance rendering of some in-house object types, and would hate to lose it. On the other hand, I'd love some standardized way to offer alternative views on the same data, perhaps with an interface similar to the notebook cell output API. |
I've created a collections.namedtuple to store the results of a report, which is used to create 'Rows' appended into an array.
While everything seems to be working fine from a logic perspective, I noticed VS Code no longer shows the repr from the second item onward. It used to, but I can't tell when it stopped.
I can replicate this with VS Code 1.83 and Insiders 1.85. To exclude the possibility of an error in the code, I also ran it in PyCharm, where the code works just fine.
A minimal reproducible code below:
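The original snippet was not preserved in this thread, so here is a hypothetical reconstruction of the scenario described, not the reporter's actual code: the `Row`/`Report` names and field set are guesses, chosen only to reproduce the shape of the data (namedtuples holding long URLs, collected into a list).

```python
from collections import namedtuple

# A namedtuple row of report results, as described in the issue.
Row = namedtuple("Row", ["url", "impressions", "clicks"])

class Report:
    """Collects Row namedtuples into a list that one would then
    inspect in the Variables pane while debugging."""

    def __init__(self, data):
        self.rows = []
        for item in data:
            self.rows.append(Row(item["url"], item["impressions"], item["clicks"]))

data = [
    {"url": "https://example.com/some/long/landing/page/path-%d" % i,
     "impressions": 100 * i, "clicks": 3 * i}
    for i in range(1, 4)
]
report = Report(data)
# Set a breakpoint here and expand report.rows in the Variables pane:
# with the current safe-repr limits, the URL inside each Row is trimmed.
print(report.rows[1])
```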