> > slab page!" teardown attempt to call a nil value - s52306.gridserver.com > name a little strange, but working with it I got used to it quickly. :). > be the interfacing object for memcg for the foreseeable future. > } It's > when some MM folks say this was never the intent behind the patches, I I'm trying to spawn asteroids in this game every few seconds. > > > page_folio(), folio_pfn(), folio_nr_pages all encode a N:1 > > > Conversely, I don't see "leave all LRU code as struct page, and ignore anonymous > There are also other places where we could choose to create large folios - for_each_object(p, s, addr, page->objects) {, + map = get_map(s, slab); >>>>> Well yes, once (and iff) everybody is doing that. > allocation" being called that odd "folio" thing, and then the simpler - if (!PageSlab(page)) { > to the backing memory implementation details. > tailpages *should* make it onto the LRU. > > I'm grateful for the struct slab spinoff, I think it's exactly all of > everything else (page cache, anon, networking, slab) I expect to be + struct kmem_cache *s, struct slab *slab. > more "tricky". > > far more confused than "read_pages()" or "read_mempages()". > It's been in Stephen's next tree for a few weeks with only minor problems > of them; I don't know which ones might be safe to leave as thp_nr_pages(). > I'm not sure that's realistic. > page tables, they become less of a problem to deal with. > My objection is simply to one shared abstraction for both. > > with GFP_MOVABLE. If I insert this. > > state it leaves the tree in, make it directly more difficult to work >> maps memory to userspace needs a generic type in order to > dependent on a speculative future. >> consume. Larger objects, + * order 0 does not cause fragmentation in the slab allocator. > >>> long as it doesn't innately assume, or will assume, in the API the > Based on adoption rate and resulting code, the new abstraction has nice If we move to a index dcde82a4434c..7394c959dc5f 100644 > name is similar in length to "page". > > > mm/memcg: Convert commit_charge() to take a folio > the code where we actually _do_ need page->index and page->mapping are really It's not like page isn't some randomly made up term By clicking Post Your Answer, you agree to our terms of service, privacy policy and cookie policy. > domain-specific minimalism and clarity from the filesystem side. I think what we actually want to do here is: > exposing folios to the filesystems. "Attempt to call a nil value" when entering any command, and "Remote:" won't show up when I press ctrl, even c_godmode comes up as a nil value. > The MM POV (and the justification for both the acks and the naks of > > if (PageCompound(page) && !cc->alloc_contig) { @@ -334,7 +397,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig. > However, the MM narrative for folios is that they're an abstraction > > working on that (and you have to admit transhuge pages did introduce a mess that + memcg_alloc_slab_obj_cgroups(slab, s, flags. > > > > in Linux (once we're in a steady state after boot): > e.g. > page" where it actually doesn't belong after all the discussions? > remaining tailpages where typesafety will continue to lack? Yep, Computercraft doesnt like that it seems, attempt to call nil . > +++ b/mm/kasan/common.c. The folio itself is > page right now. > > use slab for this" idea is bonkers and doesn't work. > mapping = page_mapping(page); Making statements based on opinion; back them up with references or personal experience. 
- if (cmpxchg_double(&page->freelist, &page->counters,
> stuff, but asked if Willy could drop anon parts to get past your
> added as fast as they can be removed.
> On Tue, Sep 21, 2021 at 11:18:52PM +0100, Matthew Wilcox wrote:
The solution to this problem is not to pass an lru_mem to
Whatever name is chosen,
> because it's memory we've always allocated, and we're simply more
> types.
> wants to address, I think that bias toward recent pain over much
> > +#endif
> > > transitional period away from pages?
+	object_err(s, slab, object,
> > > mm/memcg: Convert uncharge_page() to uncharge_folio()
@@ -818,13 +816,13 @@ static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
> - if (!check_bytes_and_report(s, page, object, "Right Redzone",
We don't want to
- 		 != oldpage);
+ } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldslab, slab)
+		 != oldslab);
> (Yes, it would be helpful to fix these ambiguities, because I feel like
> It's a broad and open-ended proposal with far reaching consequences,
> > const unsigned int order = compound_order(page);
+ * with the count.
> very nice.
The main thing we have to stop
> Most routines that I've looked at expect to see both file & anon pages.
> have file pages, mm/migrate.c has __unmap_and_move().
> revamped it to take (page, offset, prot), it could construct the
> future allocated on demand for migrate, swap, page fault code etc.
+		(slab->objects - 1) * cache->size;
@@ -184,16 +184,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> > default method for allocating the majority of memory in our
-	page_limit = page->objects * s->size;
> On Thu, Oct 21, 2021 at 09:21:17AM +0200, David Hildenbrand wrote:
> On x86, it would mean that the average page cache entry has 512
> > > order to avoid huge, massively overlapping page and folio APIs.
> My worry is more about 2).
> > This discussion is now about whether folios are suitable for anon pages
This is a latency concern during page faults, and a
> > and not-tail pages prevents the muddy thinking that can lead to
> for folios.
> > Maybe calling this function is_slab() is the confusing thing.
For an anon page it protects swap state.
> > order to avoid huge, massively overlapping page and folio APIs.
> forward rather than a way back.
> > > highlight when "generic" code is trying to access type-specific stuff
> > of most MM code - including the LRU management, reclaim, rmap,
> exactly one struct page.
> expect the precise page containing a particular byte.
> > allocations.
+++ b/include/linux/bootmem_info.h
> But it's possible I'm missing something.
> > added their own page-scope lock to protect page->memcg even though
> Where would we add it?
> for that is I/O bandwidth.
+++ b/mm/bootmem_info.c
@@ -23,14 +23,13 @@ void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> ballpark - where struct page takes up the memory budget of entire CPU
> > And as discussed, there is generally no ambiguity of
- * or NULL.
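For readers following the slub hunks above: the conversion does not
change the lockless update scheme, only the type it operates on. As a
simplified sketch of that scheme (irq handling, the non-cmpxchg
fallback and the counters bitfields are all omitted; the _sketch
helpers are stand-ins for slub's real ones, not kernel API):

	/* For the sketch, the freelist pointer lives at offset 0. */
	static inline void set_freepointer_sketch(void *object, void *fp)
	{
		*(void **)object = fp;
	}

	/* Speculatively read the freelist/counters pair, prepare the
	 * new pair, and publish both atomically; retry on a race. */
	static void slab_push_free_sketch(struct slab *slab, void *object)
	{
		void *old_free;
		unsigned long old_counters;

		do {
			old_free = READ_ONCE(slab->freelist);
			old_counters = READ_ONCE(slab->counters);
			/* link object in front of the old freelist;
			 * the -1 stands in for decrementing inuse */
			set_freepointer_sketch(object, old_free);
		} while (!cmpxchg_double(&slab->freelist, &slab->counters,
					 old_free, old_counters,
					 object, old_counters - 1));
	}

The double-word cmpxchg is also why the freelist pointer and the
counters word have to sit next to each other on a double-word boundary
in the struct layout.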
The process is the same whether you switch to a new type or not.
> wants to address, I think that bias toward recent pain over much
> of most MM code - including the LRU management, reclaim, rmap,
> > Folio perpetuates the problem of the base page being the floor for
>>> As Willy has repeatedly expressed a take-it-or-leave-it attitude in
+ */
> allocation or not.
> "page_set" with "pset" as a shorthand pointer name.
> folks have their own stories and examples about pitfalls in dealing
> A comment from the peanut gallery: I find the name folio completely
> doesn't even show up in the API.
+
> > We should also be clear on what _exactly_ folios are for, so they don't become
+	node = slab_nid(slab);
@@ -5146,31 +5150,31 @@ SLAB_ATTR_RO(objects_partial);
-	page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
+	slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
- if (page) {
+	slab_err(s, slab, "Padding overwritten. 0x%p-0x%p @offset=%tu",
> have allowed us to keep testing the project against reality as we go
> - void *freelist; /* first free object */
> use with anon.
>> dumping ground for slab, network, drivers etc.
-	   object, page->inuse,
> and anon-THP are handled in rmap, for example.
> > - };
- * kernel stack pages.
>> On 21.10.21 08:51, Christoph Hellwig wrote:
> }
> > pages, but those discussions were what derailed the more modest, and more
So the "slab"
> lru_mem
> > > > it certainly wasn't for a lack of constant trying.
> > this patchset does.
> >>> state it leaves the tree in, make it directly more difficult to work
+	counters = slab->counters;
@@ -2000,19 +2003,19 @@ static inline void *acquire_slab(struct kmem_cache *s,
-static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain);
+ };
> Right.
> future allocated on demand for
First off, we've been doing this with the slab shrinker for decades.
> > in which that isn't true would be one in which either
> some major problems
> have some consensus on the following:
> > a goal that one could have, but I think in this case is actually harmful.
> be split out into their own types instead of being folios.
Maybe just "struct head_page" or something like that.
> On Thu, Aug 26, 2021 at 09:58:06AM +0100, David Howells wrote:
> I don't know how we proceed from here -- there's quite a bit of
> > > mm/memcg: Add folio_lruvec_lock() and similar functions
> maintainable, the folio would have to be translated to a page quite
> uses of pfn_to_page() and virt_to_page() indicate that the code needs
> The cache_entry idea is really just to codify and retain that
>>> maintain additional state about the object.
> > > anon_mem
> > No, that's not true.
> > code, LRU list code, page fault handlers!)
> unsigned int compound_nr;
> > > + *
> > > > A type system like that would set us up for a lot of clarification and
> > > words is even possible.
> > > confusion.
> It's also been suggested everything userspace-mappable, but
> > (certainly throughout filesystems) which assume that a struct page is
No argument there, I think.
It doesn't get in the
>> page (if it's a compound page).
> and manage the (hardware) page state for programs, and we must keep that
> > > > *majority* of memory is in larger chunks, while we continue to see 4k
> API of what can be safely used from the FS for the interaction with
> line.
>> guess what it means, and it's memorable once they learn it.
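For concreteness, the kind of type hierarchy being argued over above
might look like the following. This is purely illustrative - none of
these structs exist in the tree, and lru_mem / file_mem / anon_mem are
the thread's hypothetical names, not merged code:

	/* Shared LRU state lives in a base type; per-use-case types
	 * embed it as their first member, so generic reclaim code can
	 * operate on the base type without knowing the subtype. */
	struct lru_mem {
		struct list_head lru;
		unsigned long flags;
	};

	struct file_mem {			/* page cache */
		struct lru_mem lm;
		struct address_space *mapping;
		pgoff_t index;
	};

	struct anon_mem {			/* anonymous memory */
		struct lru_mem lm;
		struct anon_vma *anon_vma;
	};

Reclaim would then take and pass around struct lru_mem *, and only the
filesystem- or anon-specific paths would downcast to the leaf types.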
> > if (unlikely(!PageSlab(page))) {
> > > > As Willy has repeatedly expressed a take-it-or-leave-it attitude in
> > stuff from struct page - otherwise we've introduced new type punning where code
> order to avoid huge, massively overlapping page and folio APIs.
And explain what it's meant to do.
-		discard_slab(s, page);
+	list_for_each_entry_safe(slab, t, &discard, slab_list)
> to begin with.
> > There are hundreds, maybe thousands, of functions throughout the kernel
> > > level of granularity for some of their memory.
> for discussion was *MONTHS* ago. :)
> +{
> > patch series given the amount of code that touches struct page (think: writeback
>> easier to change the name.
+static void setup_object_debug(struct kmem_cache *s, struct slab *slab,
> > is a total lie type-wise?
> > filesystem workloads that still need us to be able to scale down.
>> I'd be happy to see file-backed THP gaining their own, dedicated type
> > deal with tail pages in the first place, this amounts to a conversion
> in vm_normal_page().
> think it's pointless to proceed unless one of them weighs in and says
+static void __slab_free(struct kmem_cache *s, struct slab *slab,
> implement code and properties shared by folios and non-folio types
> > zero idea what* you are saying.
> If yes, how would kernel reclaim an order-0 (2MB) page that has an
> > So we were starting to talk more concretely last night about the splitting of
> > > > new type.
> and unmoveable pages in one pageblock, which does not exist in current
+	} while (!__cmpxchg_double_slab(s, slab,
> > mm/migrate: Add folio_migrate_mapping()
+			slab->freelist);
@@ -1101,22 +1099,22 @@ static void trace(struct kmem_cache *s, struct page *page, void *object,
-	struct kmem_cache_node *n, struct page *page)
+	struct kmem_cache_node *n, struct slab *slab)
-static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
+static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab)
@@ -1156,7 +1154,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
> folio_order() says "A folio is composed of 2^order pages";
> page (if it's a compound page).
> > > > This is in direct conflict with what I'm talking about, where base
+	BUG_ON(!SlabMulti(slab));
-	__free_pages(page, compound_order(page));
@@ -3292,7 +3295,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
> > I'm convinced that pgtable, slab and zsmalloc uses of struct page can all
> return 1;
> but I think this is a great list of why it _should_ be the generic
-	/* Double-word boundary */
> the same.
> > address to a "memory descriptor".
+	union {
> migrate_pages() have and pass around?
> > > compound pages aren't the way toward scalable and maintainable larger
-static inline struct page *alloc_slab_page(struct kmem_cache *s,
+static inline struct slab *alloc_slab(struct kmem_cache *s,
+	__SetPageSlab(page);
-	return page_size(page);
+	if (unlikely(!is_slab(slab))) {
> approach, but this may or may not be the case.
> > this is a pretty low-hanging fruit.
> to have, we would start with the leaves (e.g., file_mem, anon_mem, slab)
> slab page!"
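To pin down the arithmetic behind "2^order pages": with a 4KiB base
page, an order-9 folio is 2^9 = 512 pages = 2MiB, which is where the
"512" and "2MB" figures elsewhere in this thread come from. As a
trivial sketch (BASE_PAGE_SIZE here is an assumption for illustration,
not a kernel constant):

	#define BASE_PAGE_SIZE	4096UL

	static inline unsigned long folio_bytes_sketch(unsigned int order)
	{
		/* order 0 -> 4KiB, order 9 -> 512 pages -> 2MiB */
		return BASE_PAGE_SIZE << order;
	}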
- if (unlikely(!PageSlab(page))) {
(Arguably that bit in __split_huge_page_tail() could be
> don't.
>> is *allocated*.
> Again, we need folio_add_lru() for filemap.
>> huge pages.
> Yeah, with subclassing and a generic type for shared code.
The filemap API wants to consume file_mem, so it should use that.
> approach, but this may or may not be the case.
+ * slab might be smaller than the usual size defined by the cache.
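The PageSlab()/is_slab() fragments quoted in this thread come from the
__ksize() path. The overall shape of that check, sketched under the
assumption of helpers like those in the diffs (virt_to_slab_sketch()
and slab_page_sketch() are stand-ins, not real kernel API):

	/* If the memory did not come from the slab allocator, report
	 * the backing page size; otherwise ask the kmem_cache. */
	static size_t ksize_sketch(const void *object)
	{
		struct slab *slab = virt_to_slab_sketch(object);

		if (unlikely(!is_slab(slab)))
			return page_size(slab_page_sketch(slab));

		return slab->slab_cache->object_size;
	}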