
Re: [dmidecode] [Patch v2 4/4] dmidecode: add dmi sysfs support


From: Jean Delvare
Subject: Re: [dmidecode] [Patch v2 4/4] dmidecode: add dmi sysfs support
Date: Thu, 26 Feb 2015 15:25:40 +0100

Hi Ivan,

On Thursday, 26 February 2015 at 15:02 +0200, Ivan Khoronzhuk wrote:
> On 02/26/2015 02:09 PM, Jean Delvare wrote:
> > Out of curiosity, I fetched the latest version of libdmifs and tried
> > this patch set. I compared the new output with what dmidecode would
> > output before and I get the following difference:
> 
> I've compared it as well and didn't find any difference.
> I used kernel 3.13.
> So I have to ask how it was checked,
> so that we can track down the issue, if there is one.
> 
> >
> > --- /tmp/dmidecode.old      2015-02-26 12:23:17.624968398 +0100
> > +++ /tmp/dmidecode.new      2015-02-26 12:25:05.643322338 +0100
> > @@ -813,6 +813,7 @@
> >     Base Address: 0x0000000000000CA2 (Memory-mapped)
> >     Register Spacing: Successive Byte Boundaries
> >   
> > -Handle 0x0042, DMI type 127, 4 bytes
> > -End Of Table
> > +Handle 0xBABA, DMI type 186, 186 bytes
> > +   <TRUNCATED>
> >   
> > +Wrong DMI structures length: 2500 bytes announced, structures occupy 2682 bytes.
> 
> 
> What kernel did you use? I'm asking because this kind of issue can happen
> if the format of /sys/firmware/dmi/entries is incorrect.
> FYI, starting with kernel 3.18, dmi sysfs contains a regression.
> Look at:
> https://lkml.org/lkml/2015/2/18/224
> "firmware: dmi_scan: fix dmi scan to handle "End of Table" structure"

I was running kernel 3.19 and you are right, that was the problem and
the patch above fixed it. Thanks.

> It's currently applied on efi/urgent, no need to review it.
> It's obvious that the problem may lie there. But, to be sure,
> could you please show me the output of:
> "ls /sys/firmware/dmi/entries/"

Entry 127-0 was missing, with your patch it is back. So it works fine
now, no difference in the output and valgrind is happy.
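
For the record, here is a quick way to spot the issue (illustrative
only, not part of dmidecode): on an affected kernel the End Of Table
entry simply never shows up under /sys/firmware/dmi/entries.

#include <stdio.h>
#include <unistd.h>

/* Illustrative check only: on an affected kernel the 127-0 entry
 * directory is absent from dmi sysfs. */
int main(void)
{
    if (access("/sys/firmware/dmi/entries/127-0", F_OK) == 0)
        puts("End Of Table entry present");
    else
        puts("End Of Table entry missing (or dmi-sysfs not loaded)");
    return 0;
}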

That being said, this experience shows that the libdmifs code is not
robust enough. It should not assume that everything
under /sys/firmware/dmi/entries is perfect. A missing entry
under /sys/firmware/dmi/entries is NOT a good excuse for accessing and
printing uninitialized memory to the user.
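
To illustrate what I mean by robust, here is a minimal sketch (made-up
helper, not libdmifs code) of reading one entry file defensively, so a
missing or truncated entry turns into a clean error instead of a buffer
that was never filled:

#include <stdio.h>
#include <stdlib.h>

/* Sketch only: return NULL on any error so the caller has to handle
 * a missing or short entry explicitly. */
unsigned char *read_entry_raw(const char *path, size_t *len)
{
    FILE *f = fopen(path, "rb");
    unsigned char *buf = NULL, *tmp;
    size_t n = 0, cap = 0, r;

    if (!f)
        return NULL;            /* entry missing: let the caller decide */

    for (;;) {
        if (n == cap) {
            cap = cap ? cap * 2 : 256;
            tmp = realloc(buf, cap);
            if (!tmp)
                goto fail;
            buf = tmp;
        }
        r = fread(buf + n, 1, cap - n, f);
        n += r;
        if (r == 0) {
            if (ferror(f))
                goto fail;      /* read error: do not pretend it worked */
            break;              /* EOF */
        }
    }

    fclose(f);
    *len = n;
    return buf;

fail:
    free(buf);
    fclose(f);
    return NULL;
}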

But again, all this would never happen if you were using the right
interface in the first place.

> (...)
> I didn't change anything in how dmidecode interprets the DMI table.
> There is no reason to worry.

I never trust people who tell me that ;-) (nothing personal.)

>  So this only confirms that something is
> wrong in the kernel dmi sysfs or in the current libdmifs implementation. It does
> not prove that using an extra layer is a bad idea.

It does to some degree. I understand you don't like it because you spent
time on the current implementation, but if you step back and look at it
a little more objectively, it should be obvious that more interfaces and
data manipulations lead to a higher risk of bugs. That's exactly what
happened here. Sure, known bugs will be fixed, but the conclusion still
stands.

> > I also made some performance measurements. As I expected, the new
> > libdmifs implementation is 3 times slower than the /dev/mem access
> > (from 1.56 ms to 4.71 ms on average on my workstation.) The system CPU
> > time grows even more (0.74 ms to 3.41 ms, 4.6 times slower).
> 
> Sorry, but your comparison is not correct.
> You have to compare it with dmidecode's own implementation
> of dmi sysfs reading. I'm sure the results will not be so impressive.
> The circumstances are not comparable.

You are right that comparing your sysfs access and the sysfs access I
proposed would be better and more fair. Obviously at this time I can't
do it as the code doesn't exist ;-) That being said I am certain that
what I proposed will be a lot faster. There's no way that the unpacking
and repacking cycle you are going through can be faster than reading the
raw table in one go. I can predict that reading raw data from /dev/mem
or reading the same raw data from sysfs will perform about the same.
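
To make the comparison concrete, this is roughly what I have in mind,
assuming a hypothetical sysfs file that exports the raw table verbatim
(the path and helper name below are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Sketch only: read the raw DMI table in a single pass, e.g. from a
 * hypothetical /sys/firmware/dmi/tables/DMI file, and hand it to the
 * existing decoder unchanged, exactly as is done with /dev/mem today.
 * "len" is the table length announced by the SMBIOS entry point. */
unsigned char *read_raw_table(const char *path, size_t len)
{
    FILE *f = fopen(path, "rb");
    unsigned char *buf;

    if (!f)
        return NULL;

    buf = malloc(len);
    if (!buf || fread(buf, 1, len, f) != len) {
        free(buf);
        fclose(f);
        return NULL;
    }

    fclose(f);
    return buf;     /* one read, no per-entry unpack/repack cycle */
}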

> PS: and let's check it again when the "End of Table" issue is gone.

That makes no significant difference to the performance. If anything,
your implementation seems to be even slower now (4.86 ms on average.)

-- 
Jean Delvare
SUSE L3 Support



