Dataset schema (one row per scraped page; field, type, length or value range):

identifier          stringlengths   1 - 43
dataset             stringclasses   3 values
question            stringclasses   4 values
rank                int64           0 - 99
url                 stringlengths   14 - 1.88k
read_more_link      stringclasses   1 value
language            stringclasses   1 value
title               stringlengths   0 - 200
top_image           stringlengths   0 - 125k
meta_img            stringlengths   0 - 125k
images              listlengths     0 - 18.2k
movies              listlengths     0 - 484
keywords            listlengths     0 - 0
meta_keywords       listlengths     1 - 48.5k
tags                null
authors             listlengths     0 - 10
publish_date        stringlengths   19 - 32
summary             stringclasses   1 value
meta_description    stringlengths   0 - 258k
meta_lang           stringclasses   68 values
meta_favicon        stringlengths   0 - 20.2k
meta_site_name      stringlengths   0 - 641
canonical_link      stringlengths   9 - 1.88k
text                stringlengths   0 - 100k
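The column listing above describes one record per scraped page. As a minimal sketch of how such a record might be modeled for downstream processing, assuming the field names and types shown (the class itself and the defaults are illustrative, not part of the dataset):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScrapedPage:
    """One row of the corpus, mirroring the column listing above."""
    identifier: str
    dataset: str                  # e.g. "FactBench"
    question: str
    rank: int                     # 0-99 per the schema
    url: str
    read_more_link: str = ""
    language: str = ""
    title: str = ""
    top_image: str = ""
    meta_img: str = ""
    images: List[str] = field(default_factory=list)
    movies: List[str] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)
    meta_keywords: List[str] = field(default_factory=list)
    tags: Optional[str] = None
    authors: List[str] = field(default_factory=list)
    publish_date: str = ""
    summary: str = ""
    meta_description: str = ""
    meta_lang: str = ""
    meta_favicon: str = ""
    meta_site_name: str = ""
    canonical_link: str = ""
    text: str = ""                # full page text (the transcript below)
```

The example record that follows would populate one such object.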
Example record:

identifier:        correct_subsidiary_00108
dataset:           FactBench
2
39
url:               https://www.ftc.gov/news-events/events/2000/12/mobile-wireless-web-data-services-beyond-emerging-technologies-consumer-issues
language:          en
title:             The Mobile Wireless Web, Data Services and Beyond: Emerging Technologies and Consumer Issues
top_image:         https://www.ftc.gov/site…e_default_en.jpg
meta_img:          https://www.ftc.gov/site…e_default_en.jpg
images:            [ "https://www.ftc.gov/themes/custom/ftc_uswds/uswds/dist/img/us_flag_small.png", "https://www.ftc.gov/themes/custom/ftc_uswds/uswds/dist/img/icon-dot-gov.svg", "https://www.ftc.gov/themes/custom/ftc_uswds/uswds/dist/img/icon-https.svg", "https://www.ftc.gov/themes/custom/ftc_uswds/uswds/dist/img/close.svg", "https://www.ftc.gov/sites/default/files/styles/crop_thumbnail/public/images/images/nov-18-2013-924am/ftc_hq6_400x350.jpg?itok=sO6cBfTu", "https://www.ftc.gov/sites/default/files/styles/crop_thumbnail/public/ftc_gov/images/gaming-controller-hero.jpg?itok=9PjrGCSl", "https://www.ftc.gov/system/files/styles/crop_thumbnail/private/ftc_gov/images/refunds-tableau.jpg?itok=uqgwQ2yN", "https://www.ftc.gov/sites/default/files/styles/crop_thumbnail/public/ftc_gov/images/khan-220.png?itok=BvIjcyJT" ]
movies:            []
keywords:          []
meta_keywords:     [ "" ]
tags:              null
authors:           []
publish_date:      2013-07-24T13:36:04-04:00
meta_description:  The FTC hosted a public workshop to examine emerging wireless Internet and data technologies and the privacy, security, and consumer protection issues they raise.
meta_lang:         en
meta_favicon:      /themes/custom/ftc_uswds/favicon.ico
meta_site_name:    Federal Trade Commission
canonical_link:    https://www.ftc.gov/news-events/events/2000/12/mobile-wireless-web-data-services-beyond-emerging-technologies-consumer-issues
text:              (transcript below)
WIRELESS WEB WORKSHOP
DECEMBER 11, 2000

Opening Remarks: Chairman Pitofsky
Panel Number 1: The Wireless World - Where are we today? Where are we going?
Panel Number 2: The International Experience: Wireless in Europe
Panel Number 3: Business Models, Consumer Relationships and M-Commerce
Panel Number 4: Opportunities and Challenges: Industry and Consumer Perspectives

FEDERAL TRADE COMMISSION
THE MOBILE WIRELESS WEB, DATA SERVICES & BEYOND: Emerging Technologies & Consumer Issues
Monday, December 11, 2000
Federal Trade Commission
600 Pennsylvania Ave., NW
Room 432
Washington, D.C. 20580
CONFERENCE PROCEEDINGS
VOLUME 1

PROCEEDINGS
- - - - -

MR. WINSTON: Well, good afternoon, everyone, and welcome to the FTC's wireless workshop. I'm glad to see so many people here who have survived walking across Pennsylvania Avenue, which is a challenge in and of itself. I'm Joel Winston, I'm the Acting Associate Director for Financial Practices at the FTC, and I'm looking forward to a good day and a half on wireless technology. As we enter the wireless age, I think this workshop is a very timely and important one. It's an opportunity for all of us to learn more about this exciting new technology and about the consumer issues it raises. It follows in the footsteps of a number of other recent FTC workshops we have had on technology and consumer issues, including the issue of online privacy. We're very fortunate today and tomorrow to have an exemplary group of speakers and panelists who will share their knowledge and insight with you over the next day and a half. First I would like to introduce Robert Pitofsky, who has served as the Chairman of the Federal Trade Commission for the last five years. This is Chairman Pitofsky's third stint at the FTC, having previously served as a Commissioner and Bureau Director of the Bureau of Consumer Protection. I'm sure you know that Chairman Pitofsky has had a very distinguished career in antitrust and trade regulation and is widely recognized as one of the nation's foremost scholars in this field. I could stand here for a long time and extol Chairman Pitofsky's virtues, but let me just say one thing that comes from a profile in today's New York Times of Chairman Pitofsky, a very good profile, and I think there's one sentence in here that really sums it up. The Times says that, "Mr. Pitofsky has been a central player in the transformation of the agency from what was known as the little old lady of Pennsylvania Avenue to a formidable institution that is the leading regulatory body of Internet and consumer issues, as well as one of Washington's two antitrust enforcers." I think that's a pretty good summary. Chairman Pitofsky? (Applause.) CHAIRMAN PITOFSKY: Thank you very much, Joel, and welcome to all of you, I add my welcome to this mobile wireless web workshop, the latest in a series of workshops that the FTC has been holding in recent years. In 1995, we held an extensive set of hearings on globalization and the impact of technological innovation on competition and on consumers, and since then we've held workshops on online privacy, advertising disclosures and new media, online dispute resolution, global electronic commerce, business-to-business electronic marketplaces and so forth, and I'm especially pleased about that. 
It is in the tradition of what this place really was designed to be, not just a law enforcement agency but an agency that met with business leaders, consumers, academics, and other government agencies and tried to anticipate important economic trends, and I think that's certainly what we're trying to do. This time we explore the wireless data services sector of the economy, a new technology that has been heralded as allowing people to communicate and gain access to information when they want, where they want and how they want. I understand that some industry analysts predict that over the next five years, mobile commerce and wireless data services, including Internet services, access to wireless devices, will grow at an even more rapid pace than electronic commerce and the Internet did over the last five years, which itself is amazing. A few examples of what this wireless technology could be about. It's the eve of the holiday and you decide to do some last-minute shopping. As you walk into a mall, your all-in-one cell phone, pager and digital assistant rings with a message. Your phone talks to you and says, "Doing some last-minute shopping? Stop by the gift shop next to the food court and give the cashier this number and you'll receive 25 percent off your next purchase." You find a parking space on the street outside your favorite store, and then you realize that you don't have the right change. Not a problem, you take out your cell phone, dial a number, point it at the meter and enter the amount and the time that you want to keep this space, and the cost is automatically deducted from some account you have elsewhere. After a full day of shopping, you decide to grab a cup of coffee. With a buddy list on your smart phone, friends and family within a five-block radius can be alerted to your location in case they want to stop by and chat. I'm not so sure about that one. I mean, it's not an unmitigated virtue, all this technology, I mean, whatever happened to a quiet cup of coffee? But all these examples make clear the benefits of mobile commerce for business and for consumers. They are potentially enormous but like other high-tech developments, they can be good and bad. They can be profoundly pro-consumer, but there are risks involved. If people can be located any time they use wireless technology, is that a good thing? Well, in some respects it is, but I'm reminded of Professor Larry Tribe's comment somewhere, I think it's in his treatise, that part of human dignity is the ability to hide. We also want to educate ourselves and other interested parties about these emerging technologies and the implications for consumers. Toward that end, we have, as has become our tradition recently, brought together industry representatives, privacy advocates, consumer advocates, government officials and researchers to explore three fundamental questions. First, where is wireless Internet and data technology today, and where is it going? What types of relationships will consumers have with this new equipment and with various providers of wireless and data services? Critically, will consumers' wireless data services be supported by advertising, as many Internet sites are, or will consumers pay separately for these services? Second, what privacy and security issues do wireless devices raise? For example, how will location information be used? Is transmission of personal information secure in this wireless media? 
As wireless devices converge so that cell phones, personal digital assistants, electronic wallets become a single device, what are the risks of identity theft -- are they increased and what security measures are possible? Third, what forms will wireless advertising take? How can companies make effective advertising and privacy disclosures on small screens? How do traditional concepts like clear and conspicuous and equal prominence apply with respect to this new medium? Obviously there's a great deal to cover over the next several days and a lot for us to learn and I hope to learn from each other. In providing a forum for discussion of the privacy, security and consumer protection issues raised by these new technologies, we hope the FTC can facilitate the dialogue among the various interested constituencies. In the best traditions of this agency, we look forward to exploring these complex and fascinating issues with all of you. Thank you. (Applause.) MR. WINSTON: Why don't we have the first set of panelists come up to the table. Before we begin, one request, which may seem a little ironic, but if you could turn off your wireless devices so we don't have all this beeping going on, thank you. Our first series of presentations concerns wireless technology. First we will have Walt Mossberg of The Wall Street Journal to tell us where technology is today. Next, Bill Bodin of IBM's Pervasive Computing Lab will show us where wireless technology may be heading in the not-too-distant future. Then last, we'll have Danny Weitzner of the World Wide Web Consortium explain some of the mechanics of the wireless web, to give some of us without a technical background some context and some vocabulary on the discussion that will follow the next day and a half. Now, some ground rules. After the three presentations are completed, we should have at least 15 minutes or so for questions from the audience. Those who are in the overflow rooms, hopefully you can hear me, who would like to ask a question should come downstairs to this room, 432, where we will have some microphones set up in the hallway and you can ask questions, in the doorway there. You might want to come down around 2:15 or so. Now, as many of you probably know, Walt Mossberg is the author and creator of the weekly Personal Technology column in The Wall Street Journal and is a contributing editor to the Journal's monthly magazine, Smart Money. Walt is the source that consumers and industry insiders go to for the straight scoop on technology. Walt? (Applause.) MR. MOSSBERG: Well, thanks, Joel. I'm sorry to start off by disappointing people, and picking up on the Chairman's speech, I'm not an industry official, not a privacy advocate, not a government official, I'm just a newspaper reporter, and you are just going to have to settle for that in the next 20 minutes or so, but I can assure you of a few things. One, wireless communications, wireless data communications, are going to be incredibly important. Two, they will be incredibly important at a much later date than most of the people speaking to you at this conference and, indeed, on Wall Street and in the press say they will. Everything about it, every single thing about it, is being grossly over-hyped, just as was true of the Internet, is still true of the Internet, and even of the PC itself. Three, they will not work nearly as well as their manufacturers and service providers say they will. They will be a source of considerable frustration. 
They are today, and they will continue to be over at least the next four or five years. If you don't believe me, I would point out that even the older technology of wired and portable computing with no effort to do wireless Internet is still so inconvenient, so clumsy and so difficult that very few people in this room are taking notes on any electronic device. Danny, who's I'm sure a brilliant fellow and works in the heart of the world wide web and has a laptop is -- thank you, Danny, you knew just what I was going to say -- using a fountain pen, and when I attend the computer and Internet industry's most sacred inner circle, high-tech conferences, attended by people like Bill Gates and Steve Case and all these people and you give a talk like this and you look out over the room, 80 to 90 percent of the people are using pen and paper to take their notes, and that's -- these are just important cautions I want to leave with you. I would also say, picking up on the Chairman's point, there are many more qualified and in-depth privacy experts in the audience and certainly a lot more people from the industry. Based on the experience of the Internet, I will absolutely guarantee that the wireless Internet will be saturated with the worst sort of marketing pitches, worst both in terms of how annoying they are, how ineffective they are for the shareholders of the companies trying to market things, and how much they will clog the bandwidth, which, as you know, is very limited for the things you really want. This is true in every pixel of your computer today, and it will be true in the far more limited number of pixels on the screen of a wireless device. Well, what I am going to do in a very short time here is to try to talk about where we are and give you some context for thinking about this, and for those of you who read my column, and even those who don't, let me just explain that what I do is try to look at the state of what we have in technology, services, technologies and devices, for consumers and small businesses today and also out into the near future. I think it is folly, although I'm sure some will attempt it, to predict what will happen even as far out as five years, but I can certainly talk about the next year, year and a half. As a result, I get to see -- and have seen, in fact, as I stand here -- lots of the things that will not come on the market until the next six months or nine months or a year. They are brought to my office, I look at them, I try them out and so forth, and some of them I write about and some of them I like and some of them I hate. Everything I do is from the perspective of normal, nontechnical, mainstream consumers. So, from that perspective, where do we stand now on wireless? Well, let me start by giving you a way to think about the Internet that I think is a little different. The Internet is not an activity you perform on a box called a PC. In fact, we are just closing out the first year of the post-PC era. What do I mean by that? What I mean by that is we are just closing out the first year in which we see the introduction of a proliferation of devices that can do digital things, all of which have been performed only on this thing called the PC. We are going to see a massive diffusion of digital tasks, including Internet tasks, both in wired devices and in wireless devices. The best way I think to think about the Internet is to compare it to the electrical grid. 
In this room, God knows when this room was built, but in this room there are electrical outlets, I think, and there are electrical outlets in every room you will be in today, and I think we are gradually evolving into a situation where there will be sort of plugs into the Internet everywhere you go. Some of them will be physical, like ethernet jacks or telephone lines, and some of them will be virtual, think of it as a wireless plug into the Internet grid. Now, the electrical grid provides a certain kind of power to an innumerable array of devices. Some of them make toast; some of them wash dishes; some of them play music; some of them are PCs. I think that that's what the Internet is going to become and is rapidly in the process of becoming. There will be an innumerable array of devices that will take some portion, not everything, but some portion of the services and the information and the entertainment and the commerce opportunities that are afforded by the Internet and present them to you in a form that is convenient for where you are and what you are doing at that moment. You will still have something like a PC that will give you a more full-form experience at the cost of crashing every two or three hours. You will also have something like a phone with a small screen that will give you a very tiny amount of information that's appropriate and formatted for that screen, and you'll have lots of things in between. But when you go and plug an electrical device in today and you use it, when you made toast this morning, when you made coffee, when you used your hair dryer, you did not turn to whoever was with you and say, "Hey, I'm on the electrical grid." And this phrase we have now, "I'm on the Internet," will look as ridiculous and archaic as that in the next five years, I think, because everything you do with a powered device that has any kind of user interface, a screen or an audio interface, anything, will be informed by some extent by some sort of content or service provided over what we now call the Internet. So, when you turn on the television, a television program will have information and richness behind it that comes from the Internet, and you're not going to say, "I'm on the Internet." You're just going to think I'm watching television. And I'm not talking about the early WebTV idea of looking at sort of the web pages on the screen. I'm talking about television shows being enhanced. When you're on the phone, when you're doing many things, the Internet will be behind it. Part of this new spectrum of digital devices and services will be wireless, which is what we're here to talk about. Now, where we are today on wireless is that it is essentially in the United States an extremely bad, extremely limited service on extremely inappropriate and bad devices that cost a lot of money but is accompanied by, as I said a few minutes ago, unrelenting and unrealistic and ridiculous hype that is propagated by people who hope to make a lot of money by convincing you to buy the stuff. That is where we are today. Only a true techie geek or somebody who has invested in this is ever going to try to read an e-mail on this screen, and yet you can pick up any newspaper and any magazine and read articles that make it sound like this is not only cool, not only useful, but ubiquitous. It's not. There are millions, tens of millions probably phones in this country that are so-called web-enabled, but there are only thousands of people that use it. This is a web-enabled phone. 
Most of you probably have a web-enabled phone. I took a look at it once or twice, because it's my job. They pay me to do it. There's no way in the world I would scroll through these ridiculous menus to try to look up some piece of information which would then require me to constantly scroll to get the next bit of data at the hopelessly slow speeds we have in this country. So, I think today what we need is two or three important things. We need new devices. This is never talked about. Everybody talks about the networks and the services and the spectrum and the fees. We need new devices. This will not happen until we have new devices. This is a great voice device; in fact, this was a fairly advanced voice device. It's a very bad data device. And if I need to know what movie is playing somewhere, if I need to know what a stock is trading at, how low the Internet stocks have gone today -- and I have to assure you that one of the bad things a few years ago and one of the great things today is under the ethics policies of The Wall Street Journal, I have no shares of any technology company -- but if I want to know, the best thing for me to do is to hit the speed dial button on this phone which calls a service called Tell Me, which is free, which is a voice recognition and audio service that takes all this material from the Internet, and guess what, transfers it into what this device was made for, which is voice. On this particular device, that's the best way to get Internet information. Now, we have this. This is a Blackberry, the new Blackberry with the bigger screen, and this if I turned it on would be collecting my e-mail. It would be a little rude since I'd attempt to look at it here. It also can get the web. You can actually read a reasonable amount of an e-mail message on the thing. It's still slow, but at least the device is better. But I have got to carry both of these devices. This also has a calendar and appointment book, not as good as a Palm, much clumsier. What I really want is a Palm, about this size, that I can make phone calls on and can get this data. And there's a race on right now, there's a huge race on between the guys that make phones and the guys that make PDAs to try to create sort of hybrid devices. I'm covering this race. If you read my column, I review these things as they come out. There are a few of them that have come out. I have not seen one that's really great yet. I'm not personally convinced that we're going to have everybody carrying just one device, because people will have preferences. Some people -- I mean, you are not going to have a device this small that's a great data device, because the screen can't be very big on a device this small, but if you're primarily a phone call person, you may want to carry this. If you're primarily a data person, you may want to carry something like this with some phone capability that you occasionally use. The Handspring Visor is a device that is essentially a clone of the Palm, made by the people who developed the Palm, but it has an expansion slot in the back, and one of the things you can pop into that slot is a phone. It's just a small little phone. [Holding up Blackberry device.] When it pops in, this thing becomes a phone, and, of course, because it has an antenna, you can also get wireless data. So, now you have a device with the wonderful Palm interface that synchronizes well with your computer, get your calendar, get your appointments, you can make phone calls on it. 
You can also get the web and e-mail on it and so forth, and that will be an -- that's probably the best early effort to combine everything in a PDA format. The phone attachment I think will be out probably right toward the end of this year. I used it for a few weeks and wrote a fairly favorable review. The phone guys are fighting back with somewhat bigger phones that when you open the lid, you see sort of a regular phone window, but then you can do this and get a Palm-type screen, which can get all your web data. Kyocera, which has taken over the phone franchise from QualComm, will have a phone like that out in a few weeks. Sprint brought one out a few weeks ago that actually isn't very good, but they're working on it. So, there's a race, and there's going to be more of these devices coming. But even if we get the perfect device, we have the network problem, and the network problem, again, is the source of great hype. It's not going to be fixed nearly as fast as people hope, and here is the source of the greatest embarrassment technologically for the United States of America in the 23 years since the personal computer came out and maybe for long before that. We are now in a situation where we are behind Europe and behind Japan in a key consumer mainstream technology for the first time really I mean in my lifetime, and I'm 53. It's because of a failure, a massive blunder, there is no other word for it, committed by our government and by our industry 20 or 30 years ago, I don't know the details, when it was decided not to set a technical standard for wireless phones in this country. In Europe, they picked a technology and they went forward, and you know what? We did it for television, we do it for all kinds of other basic technologies. The idea in this country is competition. It's what this building is about and certainly The Wall Street Journal is about, and I'm all for capitalist competition, but there is a role for standard-setting bodies and for government bodies in saying, Okay, here's the basic technological choice, now you boys go off -- or girls -- go off and compete and make money and make this better. We didn't do that. We decided to have an unending competition on the very basic technical choice, unlike Europe, and what's the result? The result is that today we have less digital voice coverage, even poorer digital voice coverage, much more expensive phones, much more power vested in the most backward part of the industry, which is the carriers, and much less technologically advanced phones. This is the only area of digital technology in the history of recent digital technology where the coolest things now come out first in Europe and Japan and arrive here maybe a year, year and a half later. This is the opposite of what drove our economy in the PC era, when people in Europe and Japan were dying to see the new computer or the new software or the new web service, and all of it came out here first. If you don't think that this has phenomenal international economic and even national security implications, I would suggest you're wrong, and I speak with some slight qualifications as the former international economics lead correspondent for the Journal and as the former national security correspondent. This is important, and we've blown it. We have flatly blown it. There are lots of other consequences. Every time somebody wants to extend wireless voice or data services to a part of the United States where it doesn't exist or to improve it, you have to build a tower. 
The same in Europe, the same in Japan. At least in Europe and here, you have a big environmental fight, and I'm not disparaging the people who don't want towers, I'm just making a point, you get an environmental fight. Here you have to have it three times, because you have three ridiculous incompatible standards. You have CDMA, TDMA and GSM. If you want to introduce -- if you're Nokia or Ericsson or even Motorola, which is an American company, and you want to introduce a very nice, interesting handset with new data capabilities, you've got to wait on, beg, plead with and court these phone carriers, because they control it all here. In Europe, it's all one standard. What works with one carrier works with another carrier. So, we made a big blunder, and you all know and you'll hear about later that there's a thing called 3G, and that's going to unify the world and we're all going to move toward it, but I'll tell you two things about that. One is that Europe will get there faster by over a year, and the other is it will disappoint you. Everything I know about it off the record tells me that there's an excellent chance it will be nowhere near as fast and as high bandwidth as it's supposed to be. So, we're faced with, no matter what great devices come out, for a while with a slow, weak, wireless bandwidth proposition that will be worse in the United States than anywhere else. Now, we have a silver lining, and I'm going to go out on a limb here and say this, and that is American innovation and American high-tech. We've been somewhat behind not only in the actual roll-out of good wireless devices and a wireless infrastructure here, I think we've been behind on mind share. I think our best technical minds have been very slow in getting to this, and what has happened is they've been just locked into the PC, while the rest of the world was moving forward into wireless, and so we haven't really paid much attention to it. But now people are beginning to work on some very interesting things, and I would caution you when you think about this, when you read about it, when you listen to the other speakers here, that this progression from GSM to 3G and all this stuff that's approved by the ITU and all of that may not be the whole story. There are at least two technologies I know of in the United States that are unofficial, not approved by the European international bodies that I think could break us out of the box. One is Ricochet. It's by a company called Metricom, and it's here right now. It's unlicensed spectrum, and it operates at 128 kbps, which is stunning. It's so much faster than any wireless technology anywhere in the world that it's breathtaking. It's almost broadband, and it's wireless, and it is here now, and it's rolling out slowly and without the benefit of sanctions from any government body because it's unlicensed spectrum, in six or eight cities, and they have plans to be in something like 70 cities in the United States. A slower version of it three years ago was rolled out in three cities, including D.C., and three years ago its slower version was still faster than the fastest wireless technology we think of today. That's one. There's another one called I-Burst that I heard about just recently which promises a megabit. 
So, I just want to leave you with -- I know that some of what I said was discouraging or gloomy, but I do think the same kind of American ingenuity that has given us a lot of benefits in the PC and the Internet space in a fixed wire is finally being applied to wireless, and there may be a ray of hope. I look forward to listening to everybody else, and thank you very much for your time. (Applause.) MR. WINSTON: Thank you, Walt, for a very interesting presentation, although I wish you had been a little bit more up front about how you really feel about these things. Our next speaker is Bill Bodin. The range of wireless devices and services available today is impressive enough, but much more is on the way as we will hear from our next speaker, Bill Bodin. Bill leads the advanced technology and prototyping efforts for IBM's Pervasive Computing Division. As you'll hear from Bill's presentation, this lab is where the future of technology is actually taking place today. Bill? (Applause.) MR. BODIN: I just have a little bit of reconfiguration here to do. There's a couple of different -- well, I am totally depressed. You mean it's not going to be as easy as they say it's going to be? I don't think it ever is. As I was introduced, I'm an STSM, senior technical staff member, with IBM and charged with the mission of making all these advanced technologies actually real and incorporating them into a lot of the mainstream product ideas that IBM has. What pervasive computing is, just to kind of define it, is basically delivering any data over any network to any device, and as Walter said, it's not going to be an easy proposition, and there are a lot of things that make it tough. In other words, on the any device side here, we have a heterogeneous mix of clients, right? Things like screen phones, things like PDAs, wireless PDAs, transiently connected PDAs, PDAs with color VGA screens, PDAs with monochromatic screens that are 160 by 160 pixels like Palm devices. Walter, I was taking notes on my Palm, by the way -- MR. MOSSBERG: You were the one. MR. BODIN: -- I was cheating on you over there. I was writing down all these new ideas. And devices like Internet access devices in light of wireline devices or wireless devices, in-vehicle information systems that will help us navigate in vehicle scenarios, those kinds of scenarios go on and on, but basically they'll be shored up by technologies like speech recognition, technologies like the ability to replicate e-mail, calendaring data, location-based services directly in the car and communicate with location-based services wherever you are, all the way to set-top boxes and things we call gateways, either residential gateways or enterprise gateways, with these gateways bringing the broadband experience into the home, right, and also providing network address translation, IP filtering, VPN functionalities, virtual private networking functionalities, inside the home or the enterprise seamlessly to these devices, and as you can see here, attributes of the network here are availability, security, and scalability. One thing I did want to mention is that IBM has recently, as recently as last week, appointed a CPO position. Now, CPO is chief privacy officer, and that's Harriet Pearson for the IBM Company. So, you might want to make note of that. 
Any data, data in this case in terms of these broad scenarios are news, weather, sports, banking, travel, stocks and things like that, all sorts of things all the way to the enterprise side of it, which has us tied into the ERP processes, the CRM, customer relationship management processes, all the way to the back end. So, as you can see, the fabric here is really taking this any data paradigm, moving it through networks and delivering it to any device very seamlessly. Now, the challenge is how do we make that seamless and how do we make that a great proposition for the consumer? One thing that we're working on is standardizing a toolset for all of these particular client devices, client devices as varied as Internet appliances, wired, wireless Internet appliances, web pads, automotive, set-top boxes and service gateways. Now, the last thing that we will be working on is interfaces that transcend operating system infrastructure into the mobile phone arena. Just a little bit of infrastructure speak here before we get into the nature of pervasive computing and the scenarios of pervasive computing. The device architecture that we're working with puts a very strong emphasis on the ability to run cross-platform program logic on devices, and what we have here is a JVM, something called a Java virtual machine that actually abstracts the application logic from the hardware beneath it so that we can take an investment that we make in one particular application that might run on a cell phone and move it over to a PDA environment or move it over to a set-top box environment and even an in-vehicle information system and maintain the investment, preserve the investment that we make in that particular device. Now, you might also notice that to the left-hand portion of this, I have things that are relative to things called GUIs, graphical user interfaces, or speech recognition, text to speech. A lot of those particular features are what we call native features, features that run languages that are compiled with particular devices and particular CPUs in mind. In our lab, and a few people in the audience have actually been to our lab in Austin, this is basically the schematic for the lab. The lab really uses service gateway technology, the same technology that I talked about just a while ago, to bring a broadband experience into the home, whether it's a cable modem scenario or a DSL scenario, even dial-up for that matter, but brings that capability in and shares Internet connectivity with a host of devices. Now, you'll notice that the host of devices can range anywhere from WAP-related phones or Internet appliances and conventional home PCs, all the way to wireless web pads, and I do have an example -- I do have an example here of an appliance that has a bit more acceptable user interface for actually giving you a broadband experience. Now, this is a wireless web pad. This wireless web pad is -- it's an 802.11 client, which means that it's running a fairly rich bandwidth in terms of its ability to take information from a gateway and actually populate this device. It's running at 11 megabits. If I wake it up into its awake state, we'll see that we have a web page here, and this web page obviously could be any web page. This particular appliance is basically a research grade appliance, but it gives you an idea of where technology is actually going in a very robust, handheld kind of way. That's one device. 
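Bodin's point about the Java virtual machine is that portable application logic is kept separate from device-specific "native" features such as GUIs or text-to-speech. As a rough, language-agnostic sketch of that separation (the class and method names here are invented for illustration, not IBM's actual toolkit):

```python
from abc import ABC, abstractmethod

class DeviceFeatures(ABC):
    """Device-specific ("native") capabilities: screen, speech output, etc."""
    @abstractmethod
    def render(self, text: str) -> None: ...

class PhoneFeatures(DeviceFeatures):
    def render(self, text: str) -> None:
        print(f"[phone, tiny screen] {text[:80]}")   # truncate for a small display

class CarFeatures(DeviceFeatures):
    def render(self, text: str) -> None:
        print(f"[in-vehicle, text-to-speech] speaking: {text}")

def check_mail(device: DeviceFeatures, inbox: list[str]) -> None:
    """Portable application logic: identical on every device it is moved to."""
    device.render(f"You have {len(inbox)} new messages")

check_mail(PhoneFeatures(), ["msg1", "msg2"])
check_mail(CarFeatures(), ["msg1"])
```

The same `check_mail` logic is reused unchanged; only the device-specific rendering differs, which is the investment-preserving idea described above.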
You'll notice that there are a lot of other devices on here, devices that we deal with every day, things like microwave ovens or refrigerators or washing machines. In our lab, we experiment with all of this infrastructure and all of these devices, and, in fact, all of these devices are actually network-enabled. So, we have rapid-cook technology that actually cooks -- it cooks ten times faster than regular oven cooking technology, but it's aided by the web, because we can actually get recipes, and these aren't recipes that allow us to bring food to the cooking stage, but they are recipes that are device-specific, and they are recipes that we can actually download dynamically simply by clicking on a URL either on the device or on the wireless web pad and delivering them straight away to the device itself. So, that device can then use that granular recipe data to ideally cook the food. So, it's not necessarily implementing Internet-enablement or Internet connectivity for connectivity's sake, but it's actually using it for something beneficial. You will notice that many of these devices here, the microwaves and dishwashers and things like that are all connected simply by the power line. We are using a combination of technologies there. We're researching CeBus technology, researching Ethicon technology, but we're moving bits over the power line using this service gateway, simply by plugging this service gateway in, which has a modem inside, not a modem that really modulates things over telephone wires but a modem that actually modulates data right over the power lines so our networked appliances can actually communicate. Just a few words on standards, actually more than a few words on standards. OSGI, HAVI, WAP, XML, SyncXML, all important standards to us. OSGI, meaning open services gateway initiative, is an approach that we're actually implementing in our stack of software that we offer for pervasive computing that enables us to keep the platform open, to keep it very minimalistic but then to increase the opportunity that many, many more businesses can actually populate a device with that kind of an approach. HAVI, you know, I guess not too many of us thought too many years back that our home audio/video receiver or our VCR or our DVD player would actually be network-enabled, but the time is now coming with the HAVI architecture to actually implement network connectivity in those kinds of devices, as well. Once those devices are actually communicating back and forth, it becomes easier for us to actually deal with any one of those devices or actually take data from those devices and deliver it to things like WAP phones, for instance. Imagine, you know, making a trip to Blockbuster because you think you might need to rent a movie that night but actually being able to communicate back and forth with your 300-disk DVD carousel. You'll be able to take that data, have it transcoded uniquely for that device and delivered to that device. In fact, one of the things I'd like to do as part of the very last part of this presentation is to bring an online tour of this lab up and running here and actually communicate to that lab with a WAP device. More on HAVI, you will notice that HAVI is an architecture, and, you know, even things like havlets, let's say, many of you in this room may have heard of servelets. Servelets are technology that are server side that enable servers to interact with, you know, things like search engines and do all sorts of logic. 
Now we even have havlets, which will be living in our home theater stack. A few words on cellular protocols. I think Walter was right on the money here. He talked about the disappointing data rates that we get in the U.S. here. In fact, CDPD, which purports to be 19-2 in terms of its data rate is, you know, more like 4800 when you really get down to it. So, it's 4800, maybe 9600 tops, and that's what we have had to put up with for quite some time. A couple things on the forefront. GSM in two different flavors, one at 43.2 kilobaud and another at 120 k-baud, the 120 k variety coming in early 2001 and the lower bandwidth -- maybe, okay -- may be coming currently. RF technology, RF technologies like 802.11, like the one that I just had here, the wireless web pad, they run at about 11 megabits tops. They're currently available from a number of vendors, IBM, Cisco, Nokia. Bluetooth is something that's coming more or less on the horizon. Bluetooth is going -- the promise of Bluetooth I'll talk about in a minute, but basically it's close-range device interaction that forms things called personal area networks or PICO nets, and then Home RF, which is more of a consumer-grade variety of wireless. In terms of wireline, Home PNA. Home PNA is a home networking and phone line alliance which basically enables us to communicate via phone lines. That's actually how I have a large majority of the PCs in my house wired up, and my kids now enjoy a music collection -- I'll not tell you where I got that music collection, of course -- but, you know, they have an old Pentium I, 133, upstairs, and, you know, it was either buy them a stereo or just buy a network card. So, the network card prevailed, and they had a phone jack there, so now they have 10 megabits that they can communicate downstairs to the monster PC in the kitchen, all right, and they have all their Britney Spears and all their NSync and all the tunes that make them happy upstairs now, and they share that, you know, in realtime. There is no local copy of it, and like I said, 10 megabits. What was that? Do I have a question already? Just a few other technologies, POTS, conventional dial-up, DSL, many flavors, cable satellite. Bluetooth's vision actually is to create ad hoc personal area networks wherever you are and however you need to collaborate. In other words, if you're in a business scenario and you have a number of individuals that want to collaborate online, right, but they don't have the necessary network connectivity to the firewall there, they might use Bluetooth in a way that they form this ad hoc personal area network and actually collaborate between themselves, either pass notes during a presentation and make that presentation stronger or do something of value. Ease of connectivity is one of the benefits of Bluetooth, yet to be realized, but something that needs quite a bit of work. Freedom to work anywhere, high interoperability and a lot of new applications. Lots of industry adopters worldwide. One thing, like I said, this really brings up the point of personal area networks, and you see here what is called a scatternet. This scatternet is basically an overlapping of networks. In other words, there are two nodes to the right there that are communicating with another node in the middle that is communicating with a further node as a gateway to other devices. Now, these can be any kinds of devices in the future. 
They might be PDAs, might be cell phones, or some other ubiquitous device that emerges, might be laptops, you know, full-feature devices that have broadband capability within airports, with technologies like Walter mentioned like Ricochet, but there are many other trials going on with technologies other than Ricochet, like 802.11, in various airports at high speed, 11 megabits. But some of the scenarios here are, you know, either mobile PCs communicating with -- you know, communicating with phones that are nearby, hopefully your phone; the phone on your hip, the phone in your pocket; digital cameras that use Bluetooth to communicate to Bluetooth-enabled cell phones so that you can actually share that digital photo experience with anybody that you want in realtime. In other words, you're standing on a mountaintop taking a picture of the family. Well, there is no reason that that picture can't instantaneously be a part of your family's web page, be delivered to that web page and be administered to that web page seamlessly. Even digital ink, there are companies that are coming up with technologies that have pens that are Bluetooth-enabled, pens that since they are in the proximity of a more beefier CPU can actually decode what you're writing down, store that, transfer the pen contents to the robust device and store that as documentation. So, just a background on Bluetooth, basically 2.4 gigahertz variety communication, either 10-meter or 100-meter optional, eight devices per PICO net, ten PICO nets, and you can see there are a couple different bandwidths, either 400K or 700K depending on the implementation. Now, just to revisit this pervasive computing residential topology here, one thing you'll notice that I didn't necessarily make evident here is that the car -- and we have a car in our lab. That car is considered to be docked, all right? You don't just park your car in the garage anymore; you dock your car, okay? Exactly. This car, since it's in the proximity of a service gateway and the service gateway is actually capable of delivering wireless content from a broadband Internet connection, can do things like replicate e-mail and calendaring to the car and things like that. In fact, the in-vehicle information system topology looks a bit like this, where we either have satellite-to-ground stations or we have all sorts of connectivities that enable us to do things with either wireless modems or Bluetooth capabilities, interact with PDAs, interact with cell phones in a hands-free way and take advantage of delivering data right when we need it, either for -- either for doing things like navigating around town or navigating through your e-mail. Fairly extensive web infrastructures are going to be critical for this, and one thing I want to get to here is not just the service side of the structure that makes it all possible but something called transcoding actually, and if I refer back to the notes that I took on my PDA here, I heard a term called -- I guess it was pixilation clogging, right, so we have clogged pixels on devices that are very, very small. We have small -- small bits of -- small user interfaces here, which it's very critical to populate them ideally for the -- for a good user experience to actually come about. 
We call that at IBM, we call that transcoding, and what that does is take data of one particular style and brings about a change in that data, a fundamental change in how that data is actually constructed and how that's going to be rendered on a particular device, matching, like I say here, the form factor to the capabilities of the client devices, personalizing the data for environmental requirements. One environmental requirement might be if you're driving, right, and your car knows you're driving because it's making progress and either inertial guidance or GPS navigation is telling it that it's being driven, you may not be able to actually read your e-mail on this in-vehicle information system, but you might be able to hear it. So, environmental factors come into play. Enhanced B-to-B communications, this is supporting a wide range of devices and systems. One just pictorial example of what transcoding looks like or content adaptation is here in this particular web page where we have Yahoo's weather forecast, right? We have a very rich set of graphics that went along with that weather forecast, but it is possible to deliver content fairly seamlessly to a WAP-enabled phone, and as you can see here, that the actual rendering on the WAP-enabled phone tells you the essence of it. It doesn't necessarily deliver the ad banners or all the eye candy that you might be accustomed to on conventional web pages, but it does get down to the nuts and bolts of it. So, things like converting images from one type to another type, things like reducing the size or the bravacity (phonetic) of text and converting languages from one type to another type is what transcoding is all about. And really, what it does for customers, for users, for actual companies that deliver data is make it a little bit more palatable to deliver that to a wide variety of devices, because they actually render content one way, but it's transcoded dynamically for a number of different devices depending on those devices' characteristics. One thing we've done fairly successfully with Safeway in the UK is actually bring the experience of shopping to a fairly small PDA. This is a Palm PDA, actually Palm PDA made by -- rebranded by symbol, made to our specs that has an on-board scanner right inside, okay? So, you can go into your pantry, you can shop in your pantry, you can have all those items actually accumulated to this device and then actually hot-sync this device to have your groceries delivered. It can be used anytime. It can be used while you're at the doctor's office, waiting in line anywhere, and it's actually boosted sales by about 10 percent. You'd think that people would be more judicious on what they bought when they had something like this to organize their thoughts, but the thought is that they're actually spending less time at the convenience stores and more time at the Safeway store. All of our technology's based on open architectures, just another note on that, and I'll be ready to go into this demonstration on the lab in just a minute, but I thought I would give you a little idea of how the thing is laid out, how the floor plan actually is. We have a family room, a kitchen, garage, all with network connectivity. It's really a living lab where developers work every day. The server farm is located right there in the lab with video conferencing facilities. 
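The transcoding idea described above, rendering content once and adapting it per device, can be pictured as a function that filters and truncates a rich page according to a device profile. A minimal sketch under assumed device profiles (nothing here is IBM's actual transcoder; the profiles and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    name: str
    max_chars: int          # roughly how much text the screen can hold
    supports_images: bool

# Assumed profiles; real WAP handsets and web pads varied widely.
WAP_PHONE = DeviceProfile("wap-phone", max_chars=120, supports_images=False)
WEB_PAD = DeviceProfile("web-pad", max_chars=4000, supports_images=True)

def transcode(page: dict, device: DeviceProfile) -> dict:
    """Adapt one canonical page to a device's capabilities."""
    out = {
        "title": page["title"][:40],
        "body": page["body"][: device.max_chars],
    }
    if device.supports_images:
        out["images"] = page.get("images", [])
    # Ad banners and other "eye candy" are simply dropped for small devices.
    return out

forecast = {
    "title": "Yahoo Weather: Washington, DC",
    "body": "Partly cloudy, high 41F, low 29F, winds NW 10 mph...",
    "images": ["radar.gif", "banner_ad.gif"],
}
print(transcode(forecast, WAP_PHONE))   # essence only, no images
print(transcode(forecast, WEB_PAD))     # full text plus images
```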
And just to give you a look at what it looks like, large-screen televisions rendering web pages, but web pages that are being delivered dynamically from things like the service gateway here and web pages that actually allow you to interact with other appliances within the home, things like e-fridges, and I think I probably have a close-up on this. You know, everybody has to have one of these. If you have an advanced technology lab and you don't have an e-fridge, then you're nowhere in this business. And we have to go out on a limb actually. We do a lot of things that we think are very, very leading edge. Some things will get adopted; some things will not. We even have antenna arrays in the fridge that actually detect the RF tags that are affixed to food items, okay, so you can actually tell dynamically what you have in your fridge at any one given time, and since you can tell, right, since your fridge knows that, there's no reason your WAP phone doesn't know that, as well. So, we transcode that to WAP phones and things like that. Rapid code technology with wireless web pads, this is an older version wireless web pad, the one I had up here is a little bit newer version, but with the same CPU cloning and the same wireless infrastructure. Internet access devices, in-vehicle information systems, all with voice capability. So, what I'll do now is -- hopefully I'll wake this PC up and start really quick a browser session, and I will also disconnect this and connect it to the video here, and hopefully we see something, right? Okay, so, we have a browser session started here, I'll maximize that, look for my bookmark that I left earlier -- pays to get here early -- and we'll just bring up the lab here. Since I'm bringing up a new -- now, hopefully this will work, Walter, or else you can claim to be right on every point here. MR. WEITZNER: I think he will claim that anyway. MR. BODIN: Fair enough, though. MR. MOSSBERG: Just to clarify one thing, though, while you're doing this, Bill, I want to make sure people understand that a lot of what you described had to do with wireless networks like 802.11 and Bluetooth inside a building, but the actual Internet is being received to your house or to this lab over a wireline network. MR. BODIN: Correct. MR. MOSSBERG: You are distributing wirelessly inside the building, but it is not as if the broadband Internet connection is coming wirelessly to the building. MR. BODIN: That's correct, absolutely correct. MR. MOSSBERG: That's an important distinction. MR. BODIN: Absolutely. If you notice here, I actually have some presets that I can actually go and visit here. Hopefully everything, like I said, all of the network is working correctly here, but we have a little bit -- it looks like we have the Fox News channel on, and if I -- well, who is that? MR. MOSSBERG: So, you are going all the way to Boston and back to show what's happening two blocks from here? MR. BODIN: That's right, that's right. Hey, you know, I use this elsewhere, you know? I'm going to take my WAP phone, and I think that we have a connection here, so it looks reasonable. Now, I have family and -- oh, I have to use the mike? Okay, so, I have a user interface here that actually is the user interface being used from the service gateway. 
In other words, I have -- number one corresponds to lights on, number two, lights off, mundane things but things that actually enable us to prove some concepts here, and what I just did was I just -- to hit number two, which is lights off, and if we're still receiving data here, and we should be, we should notice a change in the device itself, in the lights itself -- let's refresh that real quick, and I did get an indication that it did actually happen over my -- over my phone. Number three would be, in fact, blinds up. Now, these just -- what these do for us is actually give us a way to pioneer how we can actually get back into the lab. I mean, the lab is behind a firewall. We are going over public Internet services to actually make, in effect, changes on devices like this but then tunnel back into the service gateway so that we can actually effect the change on the device itself. I think I'm all set. What I'm going to do is I'll click over to just a couple of different rooms here, and as you can see here, our network connection is still continuing to be up and running, albeit a little bit slow right now. You notice I just zoomed in on the ceiling fan. If I turn fan on, what that really does is it's going through the service gateway, right, from here, via the public Internet, but it's also effecting a change in the thermostat. It's actually changing the set point of the thermostat to a temperature below the current temperature, and it seems like we have a very, very slow connection here, but if I -- hey, there you go -- there you go. So, we're getting a few frames, and that might be because I'm set up at a bit of a high frame rate right now, I'm just blowing the buffer on the navigator here, but as you can see, a lot of the interaction with devices is actually possible. Whether or not all of it will happen or not, I'm sure that they will not, but we go out on a limb. We study some of the more advanced, evolving technology, and hopefully we win on a few and a lot of business partnerships are the result. That's all I have. Thanks a lot. (Applause.) MR. WINSTON: Thank you, Bill. For those of us who are trained in law rather than technology, I can't tell you how impressive that was. I'm just hoping we're not going to be quizzed on this later, because I'm not sure I'm ready. In our last technology presentation, we have Danny Weitzner. Danny's the director of the World Wide Web Consortium's technology and society activities. In this role, he's responsible for the development of technology standards that enable the web to address social, legal and public policy concerns, including privacy. Some of you may recognize Danny from his presentations at previous FTC workshops. Danny? One reminder before Danny begins, we're going to be having questions starting in about 15 minutes, so for those of you in the overflow rooms who want to ask a question, you might want to come down in a few minutes. Again, we'll have microphones in the doorway. MR. WEITZNER: Thank you, Joel, very much. To prove what Walt Mossberg writes, something about Bill's presentation crashed my laptop, so bear with me while this restarts. MR. MOSSBERG: And it will scan your hard disk and punish you. MR. WEITZNER: That's right, that's right. MR. MOSSBERG: If only someone in Washington would do something about Microsoft, that's... MR. WEITZNER: I don't think I have a snappy comment on that last one. First of all, let me -- while we're getting started up here, let me thank the Commission for holding this workshop. 
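Bodin's keypad demo a moment earlier, digit 1 for lights on, 2 for lights off, 3 for blinds up, relayed through the service gateway behind the lab's firewall, amounts to a small command dispatch table. A hedged sketch of that mapping (the gateway interface and the handler below are invented for illustration, not the lab's actual software):

```python
# Map WAP keypad digits to home-automation commands, as in the demo.
COMMANDS = {
    "1": ("lights", "on"),
    "2": ("lights", "off"),
    "3": ("blinds", "up"),
}

def handle_keypress(digit: str) -> str:
    """Gateway-side handler: translate a keypress into a device command."""
    if digit not in COMMANDS:
        return "unknown command"
    device, action = COMMANDS[digit]
    # In the lab this would tunnel through the service gateway to the actual
    # appliance; here we just report what would be sent.
    return f"send '{action}' to {device}"

print(handle_keypress("2"))   # -> send 'off' to lights
```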
Indeed, I was -- had the privilege to be at the first workshop that the FTC held in 1996 on privacy and the web, which certainly for me was an important opportunity to start to get a handle on some of these issues, and I think that this workshop is actually in many ways analogous. Someone who I won't mention was saying, boy, we're really so far behind on all these issues, and I think that's the way a lot of us probably on just about all sides of the table here felt back in 1996. Whether we feel that we've caught up any further on these issues and the web space, I guess I have a glass half full view and think that we have made some real progress, in some part thanks to the FTC, since 1996, but there is clearly quite a bit to do. And my laptop is still churning a little bit. Just by way of introduction, let me say that -- I hope you can see this. Good, okay. My name is Danny Weitzner, and I'm with the World Wide Web Consortium. For those of you who don't know the W3C, we're the organization that sets the technical standards for the web. So, we're best known for our work in areas like HTML, the basic language that just about all web pages are written in, we're known more on the hype side for a lot of our work on XML, the next generation markup language that is somewhere in Walt Mossberg's time scale of a couple of years from now going to make I think a very substantial impact on the way we all experience the wireless and the wireline web, the web we know today, and we also have the responsibility, I suppose, of trying to keep track of where a lot of these technologies are going. So, one of the implications of that has been that we've spent a lot of time over the last year or two in very close consultations with standards bodies that are oriented specifically towards the wireless world, such as the WAP Forum, to try to make sure that as we develop many of the new technologies that Bill pointed to and that Walt maligned, that they at least all work together to one degree or another, and I'm going to talk a little bit about what that means. What I'm going to do today in the time we have left is engage in what I think is going to be an exercise in dramatic oversimplification. I was asked to talk about a lot of the underlying technologies in the wireless space. Number one, I'm probably not the right person to do that, and number two, it's pretty impossible to do that at this point in time and come out with any kind of meaningful understanding of what we ought to expect relative to the public policy space. So, what I'm going to try to do is to talk a little bit about how many of the new technologies that we've heard about are going to change the experience that people have -- both of their traditional voice wireless telephone experience around the world and how those same technologies are going to change the experience that people have of the web today. I think that for many of us those are hopefully kind of solid starting points, and what I hope to do is to point out some of the changes that are going to be underway and the ways that I think we'll need to respond as a public policy community. 
I am lucky enough to have just returned from a joint workshop that W3C held with the WAP Forum last week in Munich specifically on mobile web privacy issues, trying to understand some of the privacy requirements that many of the new wireless access technologies bring to core web infrastructure, and we had a very fruitful two-day set of initial discussions and I think all came away feeling that there's a huge amount of work to do in this area, both in terms of clarifying the basic public policy requirements and understanding how to help the technology in both of these industries evolve to better address the privacy issues. I know that's going to come up later today and tomorrow, so I won't dwell on that. Oh, and I think I brought up the wrong presentation. MR. MOSSBERG: The analog, Danny. MR. WEITZNER: Well, I did all this work on these slides, so, you know -- there we go, okay. So, this is the web today in, as I promised, diagrammatically oversimplified form. There's a client on the website, that's us when we sit down at a web browser, one of these machines with Internet Explorer or Netscape Navigator or Opera or any of a number of web browsers, in effect, there are a bunch of them out there, and we interact with a web service, whether it's the FTC website where we find the agenda for today's meeting or CNN to find out what in the world is happening with the election saga at this moment. This is a pretty straightforward set of interactions here I think we all understand and that I would dare say even the public policy communities around the world are beginning to get their hands around the kind of relationship here. The world changed, the public policy world, particularly with respect to privacy issues changed somewhat significantly when we added another element to this relationship, and that's what I would generically call third-party embedded content. Specific examples of that include ads, banner ads served by the various networks and serving organizations and streaming media, in fact, so all of a sudden what was really a two-way relationship, a simple two-way relationship between me, the user, and a content provider out there has three components, and I think any of you who have paid even the slightest bit of attention to some of the questions that have come up in the online advertising debate understand that the addition of this extra architectural element, this extra piece of the network into the relationships that we have raise all kinds of questions, including some of the questions that the Chairman alluded to. How does notice work? Who do you go to complain to if you have a problem? Is it the web service or is it that third-party content provider? The mobile web world, what I think is actually sort of appropriately identified by Bill as pervasive computing environments, introduces I think a dramatic extra degree of complexity in the kinds of relationships that we're going to participate in as users that content providers are going to interact with and perhaps particularly a dramatic expansion in the relationships that the public policy community is going to have to look at and understand and respond to in one way or another. 
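To make the three-party relationship concrete, here is a minimal sketch written for this report rather than drawn from the panel: when a publisher's page embeds third-party content such as a banner ad, it is the user's own browser that contacts the ad network, handing over context such as its cookie and the page being read. The host names and cookie value below are hypothetical.

    import urllib.request

    # Hypothetical publisher and ad-network addresses, for illustration only.
    AD_SERVER = "http://ads.example-network.test/banner?slot=top"
    PUBLISHER_PAGE = "http://news.example-site.test/story?id=123"

    # Because the embedded banner is fetched by the browser itself, the ad
    # network sees the user's IP address, any cookie it set on an earlier
    # visit, and (via the Referer header) which page the user is reading.
    ad_request = urllib.request.Request(
        AD_SERVER,
        headers={
            "Referer": PUBLISHER_PAGE,
            "Cookie": "adnet_id=abc123",        # identifier from a prior visit
            "User-Agent": "ExampleBrowser/1.0",
        },
    )
    # urllib.request.urlopen(ad_request)  # not executed: the .test hosts do not resolve

This is why notice and complaint-handling get harder once the extra architectural element is added: part of the data flow never touches the publisher at all.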
So, again, this is a simplification of a lot of the very complex set of technologies that Bill talked about and that I'm sure that we'll hear about during the day, but in what is really I'll just say my own kind of personal reflection on what's different about this new environment from the web environment today, what I think has to be striking to all of us is that all of a sudden, there are a whole new set of intermediaries between us as users on the left side and the traditional web service up there on the top right side. So, what are some of those intermediaries or what in many contexts are identified as gateways, what do some of those do? Well, you heard about some of them from Bill. For example, when I'm accessing some kind of service here on Bill's little Thinkpad -- little Palm Pilot here or whatever IBM calls it -- here, I'll give it back to you -- when I'm accessing -- oh, and there's another one here, they're everywhere -- what I'm accessing -- no, let me keep this -- when I'm accessing a service -- accessing a traditional web-based service on a little screen like this, as Bill pointed out, the service, in order to deliver meaningful content to me, has got to, number one, assume that I don't -- that I can't see color, that all I'm going to see is black and white. Number two, it's going to assume that I don't have anything that looks like a mouse. So, any of those web pages that allow you to move a mouse pointer over certain parts of the screen and have other stuff pop up, that may not work here. Number three, the data path, the bandwidth available between this device -- this device -- well, I don't know about this device, because it looks like it's been modified all kinds of ways, but between a typical device like this and a typical web page is dramatically smaller than the bandwidth available between an average PC that runs a web browser and a website. So, the expectation is either that people are going to sit around for hours and hours while home pages download onto these little devices or those home pages, that content that's over there on the right side are going to have to be, in IBM's words, transcoded. They are going to have to be changed so that people can -- so that people's little devices like this will remain useful. Now, how is that going to happen? The way that the technology seems to be evolving is that the devices that we carry around will be identified to these gateways. The gateway will know I'm using this particular kind of, in fact, customized Palm Pilot, and when it receives a request, it may do one of two things. It may then tell the web service that I'm trying to access, Danny's got this funny device and it's got this ID number, make sure you send content that that device can understand, or instead, it may do something different, and it may do -- it may actually engage itself in this kind of transcoding that Bill was mentioning, so that the -- that full, feature-rich, content-rich, color-rich, mixed-media web page that we're used to seeing when we look at the CNN.com site is going to be shrunk down in all kinds of ways dramatically by this intermediate device. Another function of the gateways is going to be that, for example, if I'm trying to order the proverbial airline ticket over this cell phone because I'm stuck in O'Hare because it's snowing, how am I going to do this with these little four lines of display, and more importantly, with these nine buttons where I have to press them multiple times just to get a single letter? 
Well, probably what I'm going to want to do is I'm going to want this gateway, this entity that sits in between me and the content provider, in between me and in my case probably United Airlines, I'm going to want United Airlines to know that I'm Danny Weitzner, that I'm at this address, that I use this credit card number, that I use this frequent flyer number, that I like window seats, et cetera, et cetera, et cetera. Who is going to store that information about me? Who am I going to trust to store that? Am I going to enter it all in, and I'm going to say window -- no, I'm not going to do that, because I'll miss the last plane possible. What instead I'm going to do is I'm going to rely on these -- on the user profiles that are stored by someone in between me and United Airlines to send that information, to send them my credit card number, to send them other preferences that I may have. Finally, lots of services that I may be interested in, some of the services that the Chairman, in fact, mentioned, the buddy service, the "I'm downtown and I want to have a beer with someone" service, rely on knowing a fair amount about the location of me, assuming that I'm connected to the particular wireless device I'm using, and they rely on some ability to provide that location information hopefully to the people I want to see it only, to services out there. And again, this is not going to be a situation where I'm going to see on this little device that maybe is enabled with a direction-finding service, like GPS, that's going to tell me I'm at 49 degrees, 22 minutes, et cetera, et cetera, and I'm not going to type in all these things, I promise you, into these phones. Even the geekiest of people are not going to want to do that. Instead, we are going to end up in many cases relying on gateway services provided somewhere inside the network to relay that information to the people who we want to have it. Now, finally, the question about how the third-party content, such as ads or streaming media, gets between the services and the users in this network I think is substantially an open question, but no doubt there will be a lot of interest in doing that. What I would also point out here is that my expectation is that when I'm using this kind of network, I'm going to -- I'm going to have access to in some sense what are technically really two different kinds of services. I expect that I'm going to have access through the traditional web services we understand that can be accessed -- where the same service can be accessed either through one of these mobile devices that goes to this relatively complex network architecture, or through a PC web browser, and also a web service that I access in this very simple way. I don't think that just because we add all this complexity we're going to lose what we now understand to be the traditional and perhaps somewhat quaint kind of web interactions that we have today, and I would suggest to you that it's very important for all of our thinking going forward that the user here, the client on the left side, in many cases isn't going to know what kind of service he or she is accessing. Maybe it's a traditional website. Maybe it's a service provided uniquely by that user's network provider, their cellular network provider. The user's not going to know, and in many cases the user's not going to care. So, I want to try to address very quickly how you do all this, how do you make that very complex world possible. 
As I said, I think that whereas the web in 1996 was in a state of significant development, if not confusion, it has now more or less settled down, and the way that we from a technology standpoint interact with web services is relatively clear and stable. All of us essentially use the basic Internet protocols to access websites, TCP/IP and HTTP. Those are the -- TCP/IP is the basic network service that moves information around, whether it's for the web or e-mail or for anything else. For the web we use a protocol called HTTP, which is specially designed to give people access to web pages and to create links among web pages. We access a pretty uniform kind of content. People who want to make content available on the web today know that they have to do it with a particular language called HTML. That will change probably over time. And increasingly even on today's web, the way that people's public policy-oriented services, such as information about privacy policies, information about the signatures or the authentication of documents, are managed according to a developing set of technical standards. On the wireless web today, I think what we see is a pretty substantial diversity. I don't know that I'm quite as pessimistic as Walt about the kind of Tower of Babel problem, but it's pretty clear that at all of these levels, there's quite a lot of diversity. There is not a single standard for the underlying network transport; there's not a single standard for the protocol that moves information around in these networks. Content has not settled down into a single standard, and certainly for security and privacy and other kinds of policy issues, there's not a single standard that everyone can rely on. Let me -- since we're coming close to the end of time for this panel, I want to try to conclude very quickly. These are some issues that I think we have to keep in mind as we explore this public policy space, and they're my effort really to point out what I think are some of the critical differences between policy frameworks and ways of thinking that we've evolved for the web and the ones that are going to be appropriate for this environment. First of all, as I talked about, there are going to be a variety of gateways, a variety of intermediaries that stand in between the user and the service at the other end of the network. In many cases, those gateways are for purely technical purposes. In some cases, those gateways will exist to manage or in some cases alter the business relationship or other kinds of relationships that the user has with the service provider on the other end. What I would suggest -- and finally -- and those gateways are important both from the user perspective and also from the content provider perspective. I think it's sort of axiomatic on the web today that when I put up a website, I don't have to negotiate with anyone to make sure that that content is available to every single user on the web who wants to see it. Whether that's the same in this environment I think is an open question. 
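One way to see how the pieces named above fit together is a small sketch of the transcoding role a gateway can play. This is an illustration written for this report, not IBM's transcoder or any WAP gateway product: it consults a hypothetical device capability profile and reduces a full HTML page to the short, text-only form a small, low-bandwidth handset can display.

    from html.parser import HTMLParser

    # Hypothetical capabilities a gateway might hold for a handset; a real
    # gateway would also consult color support, image support and markup
    # language (HTML vs. WML). This sketch only uses the text budget.
    DEVICE_PROFILE = {"color": False, "images": False, "max_chars": 400}

    class TextOnly(HTMLParser):
        """Collect visible text and drop markup, scripts and styles."""
        def __init__(self):
            super().__init__()
            self.chunks = []
            self._skip_depth = 0
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip_depth += 1
        def handle_endtag(self, tag):
            if tag in ("script", "style") and self._skip_depth:
                self._skip_depth -= 1
        def handle_data(self, data):
            if not self._skip_depth and data.strip():
                self.chunks.append(data.strip())

    def transcode(html_page: str, profile: dict) -> str:
        parser = TextOnly()
        parser.feed(html_page)
        text = " ".join(parser.chunks)
        return text[: profile["max_chars"]]   # honor the small screen and slow link

    page = "<html><body><h1>Headline</h1><img src='big.jpg'><p>Full story text...</p></body></html>"
    print(transcode(page, DEVICE_PROFILE))    # prints: Headline Full story text...

The same kind of intermediary is what makes the policy questions above concrete: whoever runs it sees, and rewrites, everything the user requests.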
I think that what I would suggest, though, is that for all of the importance of these gateway relationships, the boundaries between interactions that go through these gateways and those that don't, interactions that are secured perhaps by some intermediary, interactions where perhaps some intermediary is watching out for my privacy rights, interactions where perhaps an intermediary is monitoring the intellectual property rights of a content provider, that from the user perspective, those interactions will blur together with the interactions that we have today with a typical website. I think that from my perspective, what is most important as we go forward is to build on the common shared information space that we have with the Internet and the web today. This is not to say that every single device has to be able to access every single kind of content. As I think was eloquently pointed out, I may not want to watch a full motion video on this little device, and we shouldn't require that that is the case, but I do think we have to pay very careful attention to making sure that the evolving protocols ensure the possibility of consistent access and make sure that we don't create Balkanization between different islands of content spread around. Finally, I would say -- and this was really the subject of the workshop that we had with the WAP Forum last week, so it's very much a work in progress, because of the fact that users will be navigating across different environments between the traditional web world and the new kinds of services that will be possible through the wireless world, I think there was a fair amount of consensus just in the discussion that we had that users do expect a common experience. They expect that if they have a privacy relationship with a website in the traditional web world, that that relationship will be carried over when they access that same service in the wireless world. I think that they expect that if they have digitally signed a document with a web service, maybe it was a check that they sent, maybe it was securing access to a credit card statement or something like that, that they will have that same kind of security in the wireless world. So, we have to be very careful I think to create a consistent set of expectations, and certainly work to build technology and policy approaches that alert users when they're crossing boundaries and when the expectations are changing. So, since I see several people encouraging me to conclude, I will do that, and thank you very much, and I look forward to questions. (Applause.) MR. WINSTON: Thank you, Danny. I think we have a few minutes for questions. We're going to do this the old-fashioned way. If you have a question, raise your hand, and I will call on you. Wait for the microphone to come from someone, there's a microphone right there, and if you could just identify yourself and your organization when you ask the question. MR. DANIELS: Sure, Seth Daniels. I think everyone did a very good job, and I want to thank you, but one thing that really came across to me is that there are really different technologies that we're talking about. We're talking about cell phone technology, and then we're talking about a new technology that wasn't really clearly defined, that being wireless IP. 
I think we identified that Ricochet and a couple other providers are talking about wireless IP, and if you're talking about wireless IP, WAP does not necessarily apply, because WAP is more device-dependent with the issues involved with the cell phone technology. So, what's kind of interesting is even in this room of people that should be in the know, there's still a lot of confusion or not -- maybe I shouldn't say confusion, but there's not a clear message being sent, and to the consumers, it's even more obscured. The other part or maybe more of a question is what are the regulatory requirements for the wireless IP providers that are not making calls but seem to be somewhat outside of the scope of some of the guidelines as I have read them relative to location reporting? MR. MOSSBERG: Can I just say, I can't -- I don't know anything about regulatory requirements, maybe others do, but I think you're right, that just observing the three of us so far, and the conference has hardly started, this word "wireless," this term "wireless" is way too broad, and it means many things to many people. Bill showed you what I thought was a very interesting demonstration of a whole kind of wireless that I didn't even mention, which is essentially wireless inside buildings. 802.11, which incidentally has been renamed just to confuse you further Wi-Fi, Bluetooth, which has produced more press releases than actual devices, and Home RF and some other things, all of those things are designed to allow devices that are properly equipped to talk to each other over relatively short distances, and one of the things that they can do in talking to each other is to pass along an Internet connection and, of course, hopefully a broadband one, but that Internet connection today is primarily coming across the same wired system, not wireless, but wired that we all know about. So, in other words, I wrote about -- for those of you who read my column, a couple weeks ago I reported on what it was like to set up an 802.11 high-speed wireless network inside my house to distribute a wired DSL line coming into my house, and I hope you're following me. I don't have wireless Internet in my house just because I have a wireless network that operates within the walls of the house. I have a wired Internet connection that's distributed wirelessly. The experience is wonderful, incidentally, in terms of being able to carry laptops and eventually things like this around in rooms, and I actually have one or two of these in my house that I'm testing, but it's not the same as being out on the street or in this building or in a cab and trying to get this thing to give me a broadband wireless connection. So, we have to be very careful in the terminology that we use. I'm sorry, I don't know about the regulation part. Did you guys -- okay. MR. WINSTON: Yes, over here. If you could introduce yourself again. MR. LEMAITRE: Mark LeMaitre, I work for Nextel Communications. This is going to be difficult. I'm not going to talk about technology, but it's quite obvious that amongst the speakers today so far there's been both a desire and a concern about extending the wireline existing Internet experience out to the wireless device. I think it's both desirable in some circumstances and very difficult practically to achieve, but I was interested to -- and I was at the workshop that Danny was at last week. 
One of the things that we found was that -- or discussed was that in order to make the experience a lot more compelling in a wireless environment, certainly with the PDA, the notion of where I am and what I'm doing becomes extremely important, and so whilst I agree that protocols that we have got on the -- being developed on the Internet today for privacy satisfy the notion that I'm in front of a big screen surfing content, when I get into a wireless environment, the stakes go up in that I've now got information about my personal location, my personal -- you know, my state, what am I doing. What am I doing and where am I doing it are very difficult things for people to give away easily, and I'm wondering if you can just touch, Danny, on the notion that as the stakes go up, so do the controls, and the levers that we have to put back in the consumers' hands have to get better. MR. WEITZNER: I think that there is no question that they do. There is -- and one of the points that I found particularly striking about the workshop, and I don't want to give anything more than my personal impressions of it, because we're still working on developing a kind of a common statement coming out of it, but my personal impression was that there is a shared sense across the web industry and the wireless industry, however you define those boundaries, and everyone in between that putting users in control of their personal information, particularly, as you pointed out, when it comes to very sensitive information such as the location of your device or whether you consider yourselves at work or on -- or having fun at any given moment, whether you're receiving calls or not receiving calls, et cetera, whether you're in a restaurant or in a bar or whatever else you're doing, that indeed I think we need much finer-grained user control mechanisms for the wireless world than we currently have for the web world. In the web world we've taken one step in developing a privacy-oriented standard, P3P, which I won't rattle on about, but I think that the wireless world introduces a whole set of requirements on top of that. I think we need the consistency of a common platform like P3P, and this is true really for essentially any protocol we're talking about, I would suggest, that the user is aware of, whether it's security or privacy or any number of other things, but we clearly need more features available, and most importantly, I think we need a higher degree of control so that users are comfortable operating in an environment where they are, in fact, disclosing and relying on the disclosure of quite a bit of personal information. One of the points that I would just bring out quickly from the workshop that, Mark, you had a lot to do with raising was really the question of who is the user going to trust in these sorts of situations? The wireless carrier is the source maybe of that location or maybe it's some other entity in the network that knows your location. Who is the user going to rely on to mediate in some sense the disclosure of that information to make sure that as it's used in various other parts of the network, it's used consistent with the desire of the user, and how are we going to work that, and what kinds of interoperability protocols do we need across all the services that are going to participate both on the web and on the wireless side? How are we going to get that all to work together? 
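As a purely illustrative sketch of the finer-grained control being asked for here, and not P3P or any carrier's deployed system, the idea can be reduced to a small rule table kept on the user's side of whatever entity mediates location: each requesting service is either refused, or given location only at the precision the user has allowed. The service names and rules below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class LocationRule:
        service: str        # who is asking
        allowed: bool       # may this service see location at all?
        precision_m: int    # coarsest precision the user will release, in meters

    # Hypothetical user preferences, for illustration only.
    USER_RULES = [
        LocationRule("movie-listings.example", allowed=True, precision_m=1000),
        LocationRule("ad-network.example", allowed=False, precision_m=0),
    ]

    def release_location(service: str, lat: float, lon: float):
        """Return a possibly coarsened location, or None if disclosure is refused."""
        for rule in USER_RULES:
            if rule.service == service:
                if not rule.allowed:
                    return None
                if rule.precision_m >= 1000:
                    # Rounding to two decimal places is on the order of a kilometer.
                    return round(lat, 2), round(lon, 2)
                return lat, lon
        return None   # default deny: unknown services learn nothing

    print(release_location("movie-listings.example", 38.8951, -77.0364))  # coarse fix
    print(release_location("ad-network.example", 38.8951, -77.0364))      # None

The open question raised in this exchange is who holds and enforces such a table: the handset, the carrier, or some other gateway in the network.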
I think it's a very substantial challenge that the wireless world has brought here and one that I think we've got a lot to do to figure out. MR. MOSSBERG: Can I interject a note of deep skepticism for a moment, as I have been trying to do all morning? I don't even for a nanosecond doubt your sincerity, Danny, or those of the people of this workshop, although I would point out that WAP is a 100 percent utter failure as of the moment on cell phones, but to tell you about the privacy thing, I just would like to note that we have had four or five years now of experience with consumers using the wired web on very powerful devices which could afford you a tremendous amount of privacy protection, and we have done very, very badly. There is no privacy and a very bad level of security for people using the web on computers in a wired way today, and I personally now -- speaking as a journalist who is paid to offer opinions, that's what being a columnist means, I would tell you that I believe you won't hear this from many other people who write in The Wall Street Journal -- I believe we need a federal law that is very tough and very powerful on privacy that would cover wired and wireless, and in the absence of a federal law enforceable by jail terms -- I'm very serious about this -- none of -- as I said, I attribute complete integrity and honorable intentions to you guys, but none of that will matter, because to the extent the wireless web and location-based services and user profiling become economically important and marketable, you will have the same kind of irresistible pressure from people ranging from the worst sort of hucksters to the most honest businesses to try to sell you things based on that. There has to be some basic legal -- I'm not talking about micromanaging every transaction, but a law that would set out at least some general guidelines on who -- saying that the consumer should be in control. In other words, little things like opt-out versus opt-in, and I know there are privacy people here who know much more about it. Sorry, I just needed to try to inject reality, that's all. MR. WEITZNER: Could I just respond very quickly? All I can do is to say that I think that what we have to do is take a global perspective on this and recognize that any of these infrastructures that we're talking about exist in a context that I think will always be marked by a real diversity and a real divergence in real standards. Already Europe has I think the kind of environment you might want to have, without getting into it too far, and what we see is the need for services to be able to exist in a variety of legal environments -- MR. MOSSBERG: Well, ultimately you need a treaty -- ultimately, I'm sorry, there is a role for governments, and you need a treaty ultimately. MR. WEITZNER: And I'm not disagreeing with that in any way. I think there is absolutely a role for governments. I think the question on the table is whether it is more than is currently happening, and I think there are serious arguments on both sides of that. MR. MOSSBERG: Nothing is currently happening. MR. WEITZNER: Let me just say real quickly, you say you're not trying to micromanage. 
I am actually talking about trying to micromanage, because I think whether you're working in an environment where there's a real comprehensive privacy framework such as the European Union or whether you're working here where I think everyone would agree there's a much lower profile legal environment, without making judgments about it, that from the user standpoint clearly what users want is the ability to make very fine-grained choices -- MR. MOSSBERG: I'm sorry, I meant the government should not be micromanaging, but the user needs to be in control. They need to be able to say no, you can't -- you know, I want you to know my location because I want to know what's playing at the nearest movie theater, but that goes no farther, and by the way, I don't want you to serve up an ad based on my location or I do want you to serve up an ad. MR. WEITZNER: That's right, and I was just riffing off of your micromanagement to say that the need for micromanagement is there. I tend to agree with you that it's not at the level of regulation. MR. WINSTON: We are going to be spending a lot of time on this topic over the next day and a half, and it's obviously one people have a lot of opinions about, so why don't we hold off for now. We do need to move on to our next speakers, so I want to thank our panelists. We have enjoyed your presentations. (Applause.) MR. WINSTON: If you could all wait around, we have one more speaker before the break. We are now going to turn from the discussion on technology to look at the international experience in the wireless area. As we've heard, both Europe and Japan are ahead of the U.S. in terms of deployment of wireless services to consumers, and so we may be able to learn some lessons from the international experience that will help us. Our next speaker is Jason Pavona, who is the director of wireless strategy and personalization for Terra Lycos. He's been instrumental in building the infrastructure necessary to take the Lycos network into the next generation of content, including Lycos' extension into wireless, and he's going to be speaking about the development of the wireless space abroad and offer an assessment of how that may translate to the U.S. Again, hopefully we will have time for a few questions afterwards, but we'll see how it goes. Jason? (Applause.) MR. PAVONA: All right, so, I am going to start off with just a couple remarks and then really kind of delve into what's reality now in Europe and finish a little bit on Asia. I, unlike some of the other panelists, do not share the skepticism that's really out there in the market. I think that as you'll see moving forward that as the Internet grew, it was about applications and services, and wireless is much the same way. You know, we have kind of three mantras that we move to in this space that make it pretty appropriate. The first one is the right device for the right person. I think one of the most interesting things that I saw was not the presentation that Walter gave but the fact that he spent the entire time actually using his Blackberry checking e-mail. So, obviously he found a very interesting way to use wireless, and I think that that's really what this is all about. It's finding the right device and the right application that drives adoption and drives usage across the board. 
Obviously there are technology needs that need to be addressed, there are regulatory needs that need to be addressed, but at the end of the day, this is an industry and this is a mechanism for delivering data and information that is incredibly important and going to happen. We just need to make sure that we're addressing it. So, let me kind of walk through how Europe is different. First, GSM technology standard, one standard across the board makes it very easy for operators and content providers to work together to deliver up content and services and to roam across different countries. Next, lack of a traditional land line infrastructure, why is this important? Basically in the U.S. we are very lucky to have the ability to get almost universal access to the Internet, while in Europe, for example, in Italy, it can take up to six months to get a telephone line into your house. Obviously this means that mobile access is something that people are incredibly willing and have the need to get as soon as possible. So, what you'll see is that while in the United States the GPL land line structure is important, it will leverage wireless, where it's opposite in Europe, where the traditional wireless user will leverage land line as it continues to grow out if it grows out. Negotiation, metropolitan environment, moving towards oligopoly. Obviously most of the countries in Europe had a state-owned telephone agency. That has obviously opened up over the last decade in terms of competition, but it is still very much pervasive where the traditional carrier still holds a majority of the penetration of wireless users. Next, operators building portals. They're very much moving towards an AOL versus EarthLink and Terra Lycos model. So, AOL provides access and they also provide content in a walled garden type environment. Now, this is obviously changing across the board where people want open environments to get information, but it's important to realize that in some ways, especially in Europe and Asia, carriers want to control what information people are seeing. Next, pay-by-the-drink culture. In Europe, it's very different, where people actually do not pay for every call. If you receive a call, you do not pay for that call. So, it obviously drives adoption upwards. Next, limited flat rate pricing. The culture in the U.S. is all about what am I going to pay? I want to know what's the most I can pay, and if there's a maximum, that's important to me. So, that's very different than where everything is pay by the drink. Prepaid and low credit card use, which means that people understand what the billing is or they have already set up a calling plan that's important to them. And caller pays, obviously driving usage, as we talked about before. So, I did not put up this slide to make everyone have a problem reading it or test your eyesight, but it really goes to showing kind of where online penetration and mobile penetration are important. So, as you can see, mobile penetration in most European countries is extremely high. This is not true in the United States; however, the lines are completely different that PC at home or actually access to land line infrastructure is hugely -- has a huge penetration versus the wireless penetration, and this goes back to the point where in the U.S. we are very much considered centric on the home and the PC and how that's very different than wireless and that's why most people have had a bad experience. Now, take it to a different level. 
What happens if the only way I can access e-mail is on a WAP phone or on a device that has a limited ability to view that information? I will tend to use that device, I will just use it in a very different way, and that's what we're talking about here. So, how to view the European market, really three ways that we really look at it, Internet focused, Internet aware and mobile focused. Internet focused is much like the U.S. where PC and Internet access is pervasive and the relationships with portals and other content providers are already there. Next, Internet aware, where, you know, there's medium PC penetration, Internet access rates are lower, there's really not a distinct relationship on the Internet side for particular access. And then finally mobile focused, where there's a very low Internet penetration but high mobile penetration, and what you'll see is these cultures taking on devices and services very differently than they would in the U.S. or around the globe. So, what have we kind of learned across Europe and Asia with our joint ventures, and what mobile applications do people really prefer to see? Number one by far, and I think that this -- this line should be across the top, e-mail access. It's really about communicating with one another. Instant messaging, it's about being able to access people on the go wherever they are and be able to get important messages to them. Obviously there is -- there is an incredible need for that not only in the U.S. but around the world, and it's incredibly important that we have the ability to do that, whether it's on a small device, whether it's on a traditional PC, whether it's a voice application, it doesn't matter, it's about communicating with individuals across the board. As you move down into some of the other content areas that we've worked with, driving directions is incredibly -- has been an incredibly sticky product that people want to use, traffic and driving updates obviously, weather information, finance and stock information. So, obviously things that are near and dear to people's hearts, sports, for example, and betting. Where the laws are somewhat different around the world, betting is an incredibly popular application on these devices. And then entertainment. So, as we'll see moving forward, one of the key facets of mobile devices will be entertainment. This is incredibly important if you look at some of the demographics of cell phone use around the world, and it's a very high penetration of teens and people within their -- in their low twenties. Why is this important? Because those people are on the device for two reasons. One, to talk with their friends, and two, to entertain themselves, and that is something that will not only drive the penetration of wireless here, when gaming and chat and all those other things that you think of on your PC move to your device, whether it's, you know, a Palm device, whether it's a phone or whatever, but it will just be in a very different way. So, mobile applications road map, where are we and where are we going? The number one product and service for most mobile carriers in Europe in terms of data is SMS. There's about a billion SMS messages sent in Europe every day. So, most people in the U.S. 
have never received an SMS message, they don't know what an SMS message is, they don't care; however, if any of you have been in a train in Europe or seen teenagers or school kids in a classroom, the number one thing that they're doing is they're sitting on a phone and typing in messages to their friends or they're receiving messages about updates. Now, this may seem insane to a lot of you, and it seems insane to me a lot of times, but really what it's about is communicating, and what we tend to do is find the easiest way to communicate with people, and whether that's an SMS message, whether that's voice, as we've kind of talked about here, it's finding the right application, it's finding the right device and it's finding the right means to get that information to them. How does this kind of change, though, as technology changes? We talked a little bit about next-generation networks. What does that mean? At the end of the day, it means how much faster can I get data to the user? So, whether that's -- you know, obviously a roll-out that is, you know, in the future or today, in Europe right now we're looking at roll-outs in several countries of what's called GPRS, and that's basically a data network on top of GSM, their current standard. Is the bandwidth that that's providing, an incredibly huge jump, does it make it compelling to play peer-to-peer video games or download the video of the Supreme Court hearing? No; however, what it does do is provide a mechanism for us to allow users to have different experiences, and that's what it's all about. As you'll see, you know, some of the things that will continue to come, you know, device location, something that I'm sure will be a heated battle not only today but for years to come in terms of privacy and getting that information to you. Gaming, as we talked about. Video, but not video in terms of, you know, 15-minute clips of ABC News, but more importantly, small clips of information. For example, you know, I'm driving to Logan Airport. For those of you that live in Boston, that could be a 15-minute trip or that could be a three-hour trip. So, I'd like to see, you know, what -- one of the -- you know, what 93 looks like at the current time. So, I want to get a snapshot of that. Now, is that a -- you know, is that a 15-minute process? No, it's probably a three-minute, two-minute, one-minute application that says, show me the best route to Logan Airport and show me what the traffic looks like. So, that's what we're talking about, designing applications to use the best technology. Obviously local advertising, emergency services, things that you've seen already come out, like OnStar. So, there are a bunch of applications that continue to come out and be driven by new technology and consumer needs. So, mobile revenue streams and why is this important? Because in order to understand what people are doing, you need to understand in some ways what people are willing to pay for. Information services, this is really about connecting people with information that they may want. It will be a tiered system. There will be services that are free, there will be services that are premium, just like there are today. Mobile advertising, the same thing, a tiered system. People will either pay for services or they won't. They will be able to opt out of those services if we, you know, create the correct mechanism to allow them to do that in a compelling way. 
Mobile services that connect you to e-mail and PIM and unified messaging and mobile commerce, so the question is how broad is mobile commerce? It's the same way when you take an example of calling a call center. You know, when I am -- instead of calling Tiffany's or going to a Tiffany's store for -- for a, you know, a diamond earring for my girlfriend for Christmas, is that Internet commerce? Is that, you know, brick and mortar commerce? I don't know, but the question is, you know, how do I get -- how do I make it as easy as possible for people to buy in the way that they prefer to buy? And whether it's defined as mobile commerce, whether it's defined as e-commerce or whether it's traditional, you know, brick and mortar commerce, it doesn't really matter, and it's about providing them with the best service. Mobile distribution, providing mobile ISPs, whether it's through products like Ricochet, whether it's through products like a traditional carrier would provide, it's getting them the type of ISP they need. And then mobile enablement, really allowing people to move across different areas of the world and have the same access, and that's going to be an incredibly important piece. One of the relationship pieces that we talked about earlier that started off to be a heated battle was location and billing. So, one of the keys that will drive this, and I'll talk a little bit about this when I talk about DoCoMo in Japan, is the location of the user, obviously that can drive an easy product or it can be a nightmare for privacy, and then finally the billing relationship, how easy is it for me to pay for something, which is an incredibly important piece of mobile commerce and mobile moving forward. So, North American mobile consumers are different. I think that this is a statement that I often hear within the industry, and I don't always buy it, because at the end of the day, cultures are always different, and the applications that are within those cultures rea
correct_subsidiary_00108
FactBench
3
22
https://www.zdnet.com/home-and-office/networking/terra-lycos-long-lonely-road-to-recovery/
en
Terra Lycos: Long, lonely road to recovery
https://www.zdnet.com/a/…t=675&width=1200
https://www.zdnet.com/a/…t=675&width=1200
[ "https://www.zdnet.com/a/img/resize/4d389b757d7fcd52d2657343072a9514052edcad/2014/12/04/2bd9348f-7b63-11e4-9a74-d4ae52e95e57/zd-defaultauthor-bernhard-warner.jpg?auto=webp&fit=crop&frame=1&height=192&width=192" ]
[]
[]
[ "" ]
null
[ "Bernhard Warner" ]
2002-07-29T00:00:00+00:00
LONDON (Reuters) -- U.S.-Spanish Internet media company Terra Lycos faces an uphill struggle to turn the company around, a campaign it may be forced to undertake without help from its biggest ally, Bertelsmann.
en
https://www.zdnet.com/a/…-logo-yellow.png
ZDNET
https://www.zdnet.com/home-and-office/networking/terra-lycos-long-lonely-road-to-recovery/
LONDON (Reuters) -- U.S.-Spanish Internet media company Terra Lycos faces an uphill struggle to turn the company around, a campaign it may be forced to undertake without help from its biggest ally, Bertelsmann. The loss-making company, 37 percent-owned by Spanish telecoms group Telefonica, has been hit by a brutal online advertising slump, while its broadband business is in the doldrums. To make matters worse, it is likely to lose all or part of a $675 million advertising contract with Bertelsmann. The German media giant accounts for 40 percent of Terra's advertising revenue, the legacy of a $1 billion ad pact struck between the two companies at the height of the dot-com boom in May, 2000. The deal stipulates that Bertelsmann pay up to $675 million over a three-year period beginning in October. In other words, Bertelsmann could pay $1 or the whole tranche. Analysts say that unless Terra is able to jumpstart its moribund access business, any loss of Bertelsmann's advertising business could force management to consider selling off properties. "If they lose Bertelsmann it could be the end for Lycos," one Internet analyst speculated. On Wednesday, Terra Lycos reported second-quarter revenues of $162.2 million, 5 percent below the analysts' consensus forecast, and a loss before interest, tax, amortization and depreciation of $31.8 million. It also trimmed $108.5 million from the top end of its full-year revenue guidance, now saying the business will bring in $661 million to $720 million in 2002. And that's with Bertelsmann still on board. In April, Bertelsmann said it would like to renegotiate the deal, seeking to pay an amount that more reflects today's depressed online advertising prices. A Bertelsmann spokesman said on Thursday that no decision has been reached on the contract. Stephen Killeen, Terra Lycos's U.S. head, told Reuters on Wednesday, "there hasn't been any material change in the relationship." Telefonica is contractually obliged to stump up the difference should Bertelsmann pull out, but analysts are growing exceedingly pessimistic that the indebted telecoms firm will honor the deal. As part of a turn-around plan, Terra has begun charging consumers for such features as extra e-mail storage and music downloads. Analysts though are not convinced it will improve revenues, which have remained static for the past five quarters. Terra shares closed down 0.7 percent at $5.63, outperforming the Dow Jones technology stock index, which dropped 2.2 percent. Broadband, dollar concerns On Thursday, analysts weighed in with pessimistic notes about Terra's future. Goldman Sachs cut Terra to "underperform" from "market perform," citing a failure to attract new customers and disappointing revenue outlook. In a research note, JP Morgan said the company "now has a valuation floor of 2.80 euros per share." Plus, Terra's inability to build up its Spanish broadband business has confounded analysts. "In our opinion, the Spanish access market has high growth potential, yet Terra added 1,000 new ADSL (broadband) subscribers in the last quarter," BNP Paribas Internet analyst Alexandra Lord said in a research note on Thursday. By comparison, Terra's rivals, including France's Wanadoo and Germany's T-Online, are experiencing double-digit quarterly growth levels for higher-margin broadband services in their home markets.
correct_subsidiary_00108
FactBench
2
81
https://security.stackexchange.com/questions/16361/how-to-prevent-my-website-from-getting-malware-injection-attacks
en
How to prevent my website from getting malware injection attacks?
https://cdn.sstatic.net/…g?v=497726d850f9
https://cdn.sstatic.net/…g?v=497726d850f9
[ "https://cdn.sstatic.net/Sites/security/Img/logo.svg?v=f9d04c44487b", "https://cdn.sstatic.net/Img/teams/overflowai.svg?v=d706fa76cdae", "https://i.sstatic.net/H0VuQ.jpg?s=64", "https://www.gravatar.com/avatar/88f9161851d3959dd251da7e3bd33eb9?s=64&d=identicon&r=PG", "https://i.sstatic.net/c0hE1.png?s=64", "https://www.gravatar.com/avatar/4c1796810a014e9ed1f3198c3edd103e?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/9c660cac6e83acb28b1f0905672cccb4?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/86e807609686105ad38c03e0c3f65a6e?s=64&d=identicon&r=PG", "https://security.stackexchange.com/posts/16361/ivc/71cc?prg=73bd25ea-af6f-4d8a-8e1f-2c2eb6c203d4" ]
[]
[]
[ "" ]
null
[]
2012-06-22T08:17:01
My website was banned as a malware website by Google. When I checked the code, I found out that some code injected many files on my server. I cleaned everything manually, edited all files on my ser...
en
https://cdn.sstatic.net/Sites/security/Img/favicon.ico?v=54cb45853e3c
Information Security Stack Exchange
https://security.stackexchange.com/questions/16361/how-to-prevent-my-website-from-getting-malware-injection-attacks
A late comment, but I suspect that you use FileZilla as your FTP client. Did you know that FileZilla stores your FTP site credentials (site/user/pass) in a plain text file in the %APPDATA% folder? And I also suspect there is a hidden malware on your computer. It grabbed your FileZilla credential files, and used them to change your header.php file in your theme folder. In fact, I suspect that you will find changed header.php in all of your themes folders. And if you are technical enough to look at your FTP log files, you will find the access to those files: a download, then an upload of the changed files. You might also find some random file names that were uploaded to your root ('home') folder, although those files were deleted by the hacker. And, you will find that the IP address in the FTP log of the hacker was from China.
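A quick way to check your own machine for the exposure described above is to look for stored passwords in FileZilla's configuration files. The sketch below assumes the commonly used layout (XML site files under %APPDATA%\FileZilla containing Pass elements); exact file and element names can vary by FileZilla version, so treat this as a starting point rather than a definitive audit.

    import os
    import xml.etree.ElementTree as ET

    appdata = os.environ.get("APPDATA", "")
    fz_dir = os.path.join(appdata, "FileZilla")

    if not os.path.isdir(fz_dir):
        print("No FileZilla profile folder found under %APPDATA%.")
    else:
        for name in os.listdir(fz_dir):
            if not name.lower().endswith(".xml"):
                continue
            path = os.path.join(fz_dir, name)
            try:
                root = ET.parse(path).getroot()
            except ET.ParseError:
                continue
            # Any <Pass> element with readable text means credentials sit on
            # disk where malware running under your account can copy them.
            for pw in root.iter("Pass"):
                if (pw.text or "").strip():
                    print(f"{name}: stored password found -- change those FTP credentials")

If this finds anything, assume the credentials are compromised (as in the incident above), change them, and consider switching to SFTP with key-based authentication or a client that encrypts its saved site passwords.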
correct_subsidiary_00108
FactBench
0
98
https://www.mytotalretail.com/article/special-report-search-engine-marketing-22135/all/
en
Special Report: Search Engine Marketing
https://www.mytotalretail.com/wp-content/themes/tr/images/favicon.ico?x12491
https://www.mytotalretail.com/wp-content/themes/tr/images/favicon.ico?x12491
[ "https://www.mytotalretail.com/wp-content/themes/tr/images/fb_circle.png?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/linkedin_circle.png?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/logo-totalretail-x2.png?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/logo-totalretail-white-x2.png?x12491", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2014/04/Paul_Miller.jpg&w=51&h=51&c=true", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2014/04/Paul_Miller.jpg&w=51&h=51&c=true", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2024/06/pd-hero-image-1.jpg&w=142&h=80", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2022/10/GettyImages-1283427261-e1721220240942.jpg&w=142&h=80", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2024/06/pd-hero-image-1.jpg&w=142&h=80", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2018/11/gift-card-stock-photo.jpg&w=142&h=80", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2024/07/0624_PRCOMMS_LI_Roblox_Commerce_Hero_1920x1080.jpg&w=142&h=80", "https://www.mytotalretail.com/thumb/?src=/wp-content/uploads/sites/14/2019/08/GettyImages-507746811.jpg&w=142&h=80", "https://www.mytotalretail.com/wp-content/themes/napco-editorial/images/trans.gif?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/icn-soc-fb-x2.png?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/icn-soc-li-x2.png?x12491", "https://www.mytotalretail.com/wp-content/themes/tr/images/logo-totalretail-x2.png?x12491" ]
[]
[]
[ "" ]
null
[ "Paul Miller" ]
2005-03-01T00:00:00+00:00
Introduction Like other forms of e-commerce, the possibilities in search engine marketing (SEM) are only just beginning to be fully explored. In its own way, the craft of SEM is a lot like other methods of direct marketing: It requires a steady dose of testing, and in the end, a favorable return on investment. According to a recent SEM survey by New York-based Jupiter Research, just 25 percent of search marketers use “sophisticated SEM tactics. Marketers must cultivate sophistication to remain successful,” the survey states. What’s more, the search numbers continue to multiply. For instance, in November, Google doubled, to more than
en
https://www.mytotalretail.com/wp-content/themes/tr/images/favicon.ico?x12491
Total Retail
https://www.mytotalretail.com/article/special-report-search-engine-marketing-22135/
Introduction Like other forms of e-commerce, the possibilities in search engine marketing (SEM) are only just beginning to be fully explored. In its own way, the craft of SEM is a lot like other methods of direct marketing: It requires a steady dose of testing, and in the end, a favorable return on investment. According to a recent SEM survey by New York-based Jupiter Research, just 25 percent of search marketers use “sophisticated SEM tactics. Marketers must cultivate sophistication to remain successful,” the survey states. What’s more, the search numbers continue to multiply. For instance, in November, Google doubled, to more than 8 billion, the number of Web pages its search engine can locate. MSN recently introduced a search engine that can find 5 billion, and Yahoo! can locate 4.2 billion. This special report is meant to help you navigate your way through the SEM landscape. In it, you’ll find articles on keyword selection, the bidding process, paid vs. organic search, business-to-business SEM and more. Find the Right Approach for You There’s certainly no one right way to attract prospects to your Web site through search engine marketing (SEM). But for most catalogers looking to better optimize SEM, or for those about to enter the SEM world for the first time, the following three action steps must be thoroughly evaluated. 1. Select Your Keywords When it comes to setting keywords, a niche marketer selling, say, men’s dress shirts can attract more convertible prospects to its site by using keywords such as “men’s cotton Oxford dress shirts” than simply “shirts.” Or a cataloger who is hoping to sell digital cameras online amidst the torrent of competition in consumer electronics can draw prospects who are ready to purchase by using product-specific keywords such as “Kodak EasyShare LS755” rather than simply “digital cameras.” “Being more specific gets you better return on investment,” says David Fischer, director of Mountain View, Calif.-based Google’s AdWords advertising program. “Think like your catalog customers: What would they search for? They might start with a simple keyword, but then use SKUs, product names or numbers.” The most heavily trafficked search engines offer easy keyword builder tools for their respective pay-per-click SEM programs. Google’s AdWords (https://adwords.google.com/select/) and Yahoo!-Overture’s Precision Match (www.content.overture.com/d/USm/ays/index.jhtml) give you simple tools to help choose appropriate keywords. Catalogers sometimes don’t realize they think of their products a lot differently from their customers, says Heather Lloyd-Martin, president/CEO of Bellingham, Wash.-based search marketing consulting firm SuccessWorks Search Marketing Solutions. So to get a better sense of how consumers search, she suggests you consult outside sources such as Wordtracker.com, a compiled database of terms for which people search. The tool can tell you how often people search for specific keywords, as well as how many competing sites use those keywords. What’s more, don’t forget about seasonality when it comes to keyword choices, says Diane Rinaldo, director of strategic alliances for the Pasadena, Calif.-based Overture. “You want to capture spring season keywords when you mail your spring catalogs and are introducing spring fashion, for instance,” she says. Another way to stand out from other marketers is to use deliberate misspellings in keyword searches. 
“Using common misspellings can perform very well and often get large volumes of searches,” says Google’s Fischer, “because it’s something competitors might not have thought of.” For example, try keywords such as “ladeys apparel” or “telefones.” Another tip: Attract customers to your site by using call-to-action keywords, such as “free shipping,” “overnight shipping” and “online specials.” 2. Compare Paid vs. Organic/Free Search Even if you come up with the most effective keywords, your efforts could go to waste if they aren’t steering browsers to the most appropriate landing pages, points out Overture’s Rinaldo. If consumers find a site by searching for “cashmere sweaters” and “sale” or “discount,” they should click through to a page from a marketer’s site that shows such sweaters up front. If you prefer to have browsers land on your homepage, that’s OK, as long as there’s a clear path to find cashmere sweaters, she says. “Be sure there’s promotional language on the homepage to convert those visitors to buyers.” While the right keywords will attract potential buyers to your site with organic SEM, a presence in paid search can qualify prospects better because they’re more inclined to click on paid search ads if they’re more serious about buying. And the higher the clickthrough rate you can get, the higher your ads will rank. Most marketers taking an aggressive approach to SEM incorporate a combination of paid and organic search. “The best thing is having a site that search engine spiders can [access] to get those free listings on the search engines,” Lloyd-Martin says. “But if a cataloger doesn’t use pay-per-click as well, it’s missing the boat. By using both, your prospects will see your name everywhere.” 3. Select the Best Possible Bidding Procedures On Google, the order of pay-per-click search ads is based on cost-per-click and the ad’s clickthrough rate — the number of clicks divided by the number of impressions, Fischer says. Let’s say Cataloger A bids on “men’s dress shirts” at $1 per click and Cataloger B bids on the same key phrase at $0.50 per click. If Cataloger B’s clickthrough ad rate is more than twice the rate of Cataloger A’s, the $0.50-per-click ad will get ranked higher on Google. In paid search bidding, not all marketers necessarily want to bid a rate that will be high enough to put them at the top. In determining how large a bid to place on a keyword, Rinaldo suggests ensuring you have an effective analytics tool. “Without that, you don’t know how any of your marketing is performing, and you wind up bidding in the dark,” she notes. Consider bidding to push your site higher up on the list when it’s the right time, she continues. For instance, if you sell chocolates or flowers, bid high enough to push your listing to the top in the weeks leading up to Valentine’s Day. While you must take into account how high up you want your listing to appear when placing your bids, you also should set bid limits. Says Fischer, if you know that $0.50 per keyword is what you’re willing to pay, set that as a maximum amount. “You also can set a daily budget,” he says. “Based on your keywords and cost-per-click, we have an idea of how much traffic you’ll get.” Challenges and Rewards of B-to-B SEM In comparison to consumer catalogers, search engine marketing (SEM) for business-to-business (b-to-b) mailers requires a considerably more technical approach. But the rewards for effectively tackling those technical challenges can be worthwhile. 
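The Cataloger A vs. Cataloger B example can be worked through numerically. The Python sketch below uses the simplified rule described above (rank score = bid multiplied by clickthrough rate) with hypothetical click and impression counts; Google's actual ranking formula is more involved, so treat this only as an illustration:

def ad_rank_score(max_cpc_bid, clicks, impressions):
    """Simplified rank score: bid times clickthrough rate (clicks / impressions)."""
    return max_cpc_bid * (clicks / impressions)

# Cataloger A: $1.00 bid with a 2 percent clickthrough rate.
score_a = ad_rank_score(max_cpc_bid=1.00, clicks=20, impressions=1000)
# Cataloger B: $0.50 bid with a 5 percent clickthrough rate -- more than double A's.
score_b = ad_rank_score(max_cpc_bid=0.50, clicks=50, impressions=1000)

print(f"Cataloger A score: {score_a:.3f}")   # 0.020
print(f"Cataloger B score: {score_b:.3f}")   # 0.025
assert score_b > score_a                     # the cheaper bid outranks the higher one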
“B-to-b catalogers need to understand that SEM is not just a matter of managing SEM traffic around a shopping cart,” cautions Kevin Lee, CEO of New York-based search engine campaign management firm Did-it.com. “You have more customer disconnects [with b-to-b SEM]. For instance, an engineer may do online research for a particular product looking at specifications for, say, a router. Then the engineer may tell a purchasing agent to go to the site and buy the product. So you have one person doing the research, and another giving the purchase order,” Lee recounts. Following are some ways you can better use SEM for b-to-b marketing: 1. To track the effectiveness of SEM campaigns, Lee says, b-to-b catalogers should use a customer relationship management system to match individual buyers to their companies. “You need some sort of sales tracking system to close the loop and understand what’s working,” he says. 2. The job of tracking b-to-b customers from their initial Web searches to actual purchase orders often is made more challenging by the fact that they tend to place large orders. “Your most valuable bulk orders don’t come through the ‘traditional’ Web shopping cart,” Lee says. “Businesses tend to place orders on the phone for, say, 75 laptops — after they do online research.” Therefore, map your telephone orders back to search marketing if you can. 3. Use search engine optimization (SEO) services. Because you undoubtedly carry many SKUs, you have key SEM-related decisions to make when it comes to managing your b-to-b product databases. Chicago-based electronic components cataloger Newark InOne, which carries 1.2 million products, has been focusing more on SEO of its most heavily searched product categories, says Mike Yantis, director of Web site sales and marketing. Newark InOne’s catalog comprises 10 major product sections, but the company puts a heavy emphasis on the two sections that give it the best return on investment (ROI) in paid search. The two categories, passive components and semiconductors, which combined make up 15 to 20 percent of Newark InOne’s offerings, play best on the search sites, says Yantis. “For the other eight categories, we have pretty good placement in organic search results. But we don’t feel we need to heavily invest in those, because the two top product sections are where we have our strongest reputation.” What’s more, the two segments Newark InOne focuses on cater primarily to design engineers, a younger segment of its customer base and people who “grew up using the Web,” he says. Newark InOne’s efforts have coincided with Google’s and Yahoo!’s ability during the past couple of years to use their algorithms to look through and break up PDF files from its site. “They can pull out manufacturer’s part numbers, which a lot of our customers look for in their searches,” Yantis says. 4. Partner with content aggregators. Newark InOne’s SEM activities aren’t solely focused on Yahoo! or Google. The cataloger also uses content aggregators, such as OEMsTrade.com and PartMiner.com, that serve its industry. Newark InOne gets its catalog displayed on these sites for free. “These sites send real-time queries into our database for manufacturer part matches and deliver them back on our site,” Yantis recounts. “Unlike Google and Yahoo! 
— which once or twice a week index our site, collect data on it and build it into their indexes — these aggregators look at our site in real time, giving our customers part numbers, product descriptions, inventory availability, and pricing.” 5. Try a keyword advertising program, which is what Newark InOne rolled out last year with Google’s AdWords. “We prefer to be in organic [free] results, but we need a careful balance in both organic and paid search,” Yantis notes. Lee of Did-it.com says all catalogers should understand what the true ROI of paid search will be. “Every keyword will have a different value — bidding higher to get more volume vs. bidding less and getting a lower search ranking to gain a higher ROI but lower volume.” But b-to-b catalogers using sites such as Google and Yahoo! naturally prefer to attract b-to-b customers buying in bulk rather than consumers searching for single items. For example: B-to-b buyers are more likely to type in “corrugated,” an industry term, while consumers may use “cardboard.” B-to-b mailers don’t want to sell just two boxes, Lee says. “Not that they won’t do it, but they want to sell 1,000 or 50,000 boxes. And it’s not always clear when someone types in keywords if they want bulk quantities or smaller amounts.” Marketers, he says, can’t control whether they’re attracting consumers or business buyers. And with paid search, you don’t necessarily want consumers to click through, because you’re paying per click, he says. Adding ‘wholesale’ to ‘cardboard boxes’ might turn away consumers and thus help you prequalify b-to-b buyers. Search Engines: Who Does What? Just about everyone “Googles” these days. And catalogers not only use Google for SEM, but they also use Yahoo!, Overture, MSN, and others. But the different search engines use different technology, and for that matter, have an entirely different reason for being. Confused? Below, with the help of Moorpark, Calif.-based SEM consulting firm Bruce Clay Inc., we’ve compiled brief descriptions of what each of the major search engines do: Google, which offers both organic/free search and paid search, spiders (or crawls) the Web to maintain its index, with emphasis on content and link popularity. The factors that determine your ranking on Google include: - the number of links that point to your site; - the quality (popularity) of the sites that link to your site; - what other sites you link to; and - the text in and around the links that point to your site. Overture Services, which was acquired by Yahoo! in July 2003, is the leader in paid search services on the Internet, says Clay. Overture’s mission is to offer new and powerful methods of helping businesses reach consumers on the Internet. Overture auctions its rankings, so the more you pay per click on your results link, the higher your ranking. For each keyword phrase, advertisers bid against one another for placement within the search results. Below the paid listings, Overture gives additional results that are provided by Inktomi. The main Yahoo! index results are compiled by spidering the Web or participating in the Yahoo! paid-inclusion program. Yahoo! has both a human-edited directory and a spider-based index provided by Google. Sponsored results are provided by Overture and appear at the top of the results page. Additional entries are presented at the bottom of the page. MSN gets its search results from myriad sources. The Web Pages section is provided by Yahoo!, although MSN drops all Yahoo! paid-inclusion results from the provided list. 
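Lee's point that every keyword has its own value, trading volume against ROI, can be illustrated with a back-of-the-envelope calculation. In the hypothetical Python sketch below, the same keyword is modeled at a high bid (more clicks, lower ROI) and a low bid (fewer clicks, higher ROI); all figures and the keyword_economics helper are invented:

def keyword_economics(bid, clicks, conversion_rate, avg_order_value):
    """Return cost, revenue, profit and ROI for one keyword at one bid level."""
    cost = bid * clicks
    revenue = clicks * conversion_rate * avg_order_value
    return {
        "bid": bid,
        "clicks": clicks,
        "cost": cost,
        "revenue": revenue,
        "profit": revenue - cost,
        "roi": revenue / cost if cost else 0.0,
    }

# The same hypothetical keyword under two bid strategies.
high_volume = keyword_economics(bid=1.00, clicks=2000, conversion_rate=0.02, avg_order_value=60)
high_roi = keyword_economics(bid=0.40, clicks=600, conversion_rate=0.02, avg_order_value=60)

for label, s in (("high volume", high_volume), ("high ROI", high_roi)):
    print(f"{label}: spend ${s['cost']:.0f}, revenue ${s['revenue']:.0f}, "
          f"profit ${s['profit']:.0f}, ROI {s['roi']:.1f}x")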
Although you can submit directly to MSN, Clay says, you’d be better off directly submitting to Yahoo! for the time being. Using a crawler to spider the Web, Ask Jeeves uses Teoma 2.0 to employ its branded Subject-Specific Popularity. This ranks a site based on the number of same-subject pages in other ranked sites that link to it (not just general popularity such as page count) to determine a site’s level of authority. As with Inktomi, Teoma has no free “add URL” page. But also like Inktomi, Teoma crawls the Web, so if you have links pointing at your site, you may get included naturally. Last June, Ask Jeeves discontinued its paid-inclusion program. Netscape gets its primary search results and sponsored section from Google. The Open Directory provides the results for Netscape’s Web sites categories and reviewed Web sites. If the Open Directory provides the reviewed Web sites section, you still can view Google results by simply clicking on “Search Again Using Google” in the left-hand column. You can no longer submit your site to Netscape. To be listed on it, submit your site to Google or the Open Directory, Clay advises. To rank well in HotBot, use keywords in the title and meta tags, Clay says. Use high-frequency keywords in the body, and use longer documents. Offering both organic and paid search, Lycos, one of the oldest search engines, changed from a straightforward search engine to a Yahoo!-type directory that includes news, shopping, personalized sections and a search engine component, Clay says. Lycos has partnered with Terra.com, a Spanish- and Portuguese-speaking online company. Terra Lycos now owns several Web properties, including HotBot.com, Matchmaker.com, Quote.com, Rumbo.com, Webmonkey.com and WhoWhere.com. Case Study: SEM Proponent Shares His Trade Secrets Sea Eagle, a $10 million multichannel merchant of inflatable boats and accessories, got into search engine marketing (SEM) before most marketers knew what SEM was — or for that matter, what the Web was. “We set up our Web site in 1996, and very early on we realized the site wouldn’t do us much good unless people could find it,” says John Hoge, vice president. “So we designed it to appeal to both customers and search engine spiders.” Founded as a print cataloger in 1967, today 75 percent of the Port Jefferson, N.Y.-based Sea Eagle’s sales come from customers who’ve had some interaction with its Web site. Paid and organic/free SEM combined account for one-third of the company’s sales. Hoge says Sea Eagle doesn’t have a specific budget for paid SEM. “We play everything by ear,” he says. “It’s so easy to measure — we just look to see if we’re getting sales and making money from SEM.” Hoge looks at each search bid; if it’s tracking in the black or is breaking even, he continues using it. If it’s in the red, “we cancel immediately.” Sea Eagle spent $11,000 in paid search on Overture in 2001, and $35,000 per year since then. It began advertising on Google in 2002, and spent $2,000 in 2002, and $37,000 in 2003 and ‘04. Hoge says the money is well spent. “For a good search term, we can sometimes make $10 to $20 per Web visitor,” he says. “But for random, untargeted traffic, that might be only 20 to 30 cents.” From the beginning, SEM went hand-in-hand with the company’s e-commerce efforts. “I never saw SEM as something different from Web marketing,” he notes. The company’s first SEM efforts came through Go2. As Overture built its pay-per-click model, Sea Eagle switched to that company and then to Google. 
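Hoge's keep-or-cancel discipline for each bid amounts to a simple rule: keep any term tracking in the black or at break-even, cancel anything in the red. A minimal Python sketch of that rule, with made-up keyword figures, might look like this:

def review_bids(keyword_results):
    """Split keyword bids into ones to keep and ones to cancel."""
    keep, cancel = [], []
    for keyword, revenue, spend in keyword_results:
        if revenue >= spend:          # in the black or breaking even
            keep.append(keyword)
        else:                         # in the red: cancel immediately
            cancel.append(keyword)
    return keep, cancel

# Hypothetical revenue and spend per paid-search term.
results = [
    ("inflatable kayaks", 1200.00, 300.00),
    ("inflatible kayak", 90.00, 15.00),   # cheap misspelling, still profitable
    ("boating", 40.00, 180.00),           # too general, loses money
]
keep, cancel = review_bids(results)
print("keep:", keep)
print("cancel:", cancel)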
“For a small company like ours in a niche market, pay-per-click is ideal,” says Hoge. “People looking for inflatable kayaks [often] can’t find them in their local malls. And we’re very targeted, so we can outbid most other companies and get in most of the top listings,” he adds. Tricks of the Trade Four of Sea Eagle’s proven tactics: 1. Repeat the keyword on your site for higher ranking. In addition to using keywords such as “inflatable boats” as often as it can, Sea Eagle puts its keywords in several places on its site so spiders can catch them. 2. Avoid links to lengthy Web addresses. Take all product descriptions and build them into Web pages. “If you link to pages with URLs that are a long, convoluted mess with all the symbols, that means the page is being generated dynamically from the database, and search engines may say there’s no point and not index that page,” Hoge cautions. “My theory is that if the search engines don’t think a page is dynamically generated, they don’t have a predictable way of knowing when visitors will visit the actual page. So we create pages in advance or recreate them periodically from a database, keeping the URLs basically static and simple.” 3. Stick to a few keywords germane to your business. “We tend to have the best of a very narrow range of products — inflatable boats and kayaks,” Hoge says. “If we try to use more general keywords, such as boating or kayaking, it’s harder for us to make money on pay-per-click.” 4. Try misspelled keywords. “It may not generate a lot of traffic,” Hoge says, “but it’s targeted traffic. And we can bid 15 cents instead of $1 for a misspelled word like ‘inflatible.’ It’s not huge, but it’s pure profit.” 5-minute Interview Michael Aronowitz, executive director of The Direct Marketing Association’s online wing, the Association for Interactive Marketing (AIM), shares his thoughts and suggestions on SEM. Catalog Success: Can you name some SEM best practices? Aronowitz: A lot of direct marketers ensure that the content management systems being used on their sites can export content in order to allow search engine spiders to crawl their sites. CS: How is SEM impacting merchants’ bottom lines? Aronowitz: It’s allowed smaller catalogers to compete where they haven’t been able to before. It can level the playing field for competing brands. CS: What mistakes do you see merchants making with SEM, and what’s the remedy? Aronowitz: Most often I see a failure to ensure that natural listings are highly ranked. On the flip side, when a company’s ranking is high, it may not [see the need to] purchase the keywords. Another big mistake: not tracking results, which might prevent merchants from reinvesting in a successful campaign. CS: For those catalogers not engaged in SEM, what are the compelling reasons to get started? Aronowitz: Traffic, sales and getting new customers at a cost you can control. CS: What does the future hold for SEM? Aronowitz: I see other areas of search — such as contextual and behavioral targeting — growing, since many have a pay-for-performance model. That allows for testing at a very low risk. I think the next big step is the migration of companies to the pay-for-performance model.
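Sea Eagle's second tactic, pre-generating simple static pages from the product database so URLs stay short and crawlable, could be approximated with a small script along the following lines. The product records, template, and output path are all hypothetical; this is only a sketch of the approach, not Sea Eagle's actual implementation:

from pathlib import Path

PRODUCTS = [
    {"slug": "inflatable-kayak-se330", "name": "SE330 Inflatable Kayak",
     "description": "A lightweight two-person inflatable kayak."},
    {"slug": "inflatable-boat-se9", "name": "SE9 Inflatable Motormount Boat",
     "description": "A four-person inflatable boat with motormount."},
]

TEMPLATE = """<html>
<head><title>{name} | Inflatable Boats &amp; Kayaks</title></head>
<body><h1>{name}</h1><p>{description}</p></body>
</html>
"""

def regenerate_pages(products, out_dir="site"):
    """Write one static HTML page per product with a short, stable URL."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for product in products:
        # e.g. site/inflatable-kayak-se330.html -- no query strings or symbols.
        (out / f"{product['slug']}.html").write_text(TEMPLATE.format(**product))

if __name__ == "__main__":
    regenerate_pages(PRODUCTS)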
correct_subsidiary_00108
FactBench
3
59
https://www.computerworld.com/article/1419467/terra-lycos-signs-ibm-for-exclusive-hardware-support.html
en
Terra Lycos signs IBM for exclusive hardware, support
https://www.computerworl…avicon-32x32.png
https://www.computerworl…avicon-32x32.png
[ "https://www.computerworld.com/wp-content/uploads/2024/07/482799-0-06396700-1721682924-author_photo_JR-Raphael_1712087894.jpg?quality=50&strip=all&w=350", "https://www.computerworld.com/wp-content/uploads/2024/07/johnny-evan-150-100412881-orig-26.png?w=150", "https://www.computerworld.com/wp-content/uploads/2024/07/3475880-0-17595600-1721657018-Screenshot-2024-07-22-at-10.00.54 AM.png?w=444", "https://www.computerworld.com/wp-content/uploads/2024/07/2520794-0-15985200-1721648268-5-tips-for-securing-your-remote-workspace.png?w=444", "https://www.computerworld.com/wp-content/uploads/2024/07/2889660-0-26296200-1721477379-System-Failure.jpg?quality=50&strip=all&w=375", "https://www.computerworld.com/wp-content/uploads/2024/07/cw-thumb-vision-pro-flop-1.jpg?quality=50&strip=all&w=320", "https://www.computerworld.com/wp-content/uploads/2024/07/cw-thumb-tiktok-ban-craig-albert-1.jpg?quality=50&strip=all&w=320", "https://www.computerworld.com/wp-content/uploads/2024/07/cw-thumb-abba-music-ai-1.jpg?quality=50&strip=all&w=320", "https://www.computerworld.com/wp-content/uploads/2024/07/2520726-0-72628300-1721326442-CW-thumb-ai-for-everyone-ryan-cox.jpg?quality=50&strip=all&w=320", "https://www.computerworld.com/wp-content/uploads/2024/07/cw-thumb-vision-pro-flop.jpg?quality=50&strip=all&w=320", "https://www.computerworld.com/wp-content/uploads/2024/07/cw-thumb-spatial-tech-mike-bechtel.jpg?quality=50&strip=all&w=320" ]
[]
[]
[ "" ]
null
[ "Todd Weiss", "JR Raphael Contributing Editor", "Jonny Evans", "Gyana Swain" ]
2001-07-27T13:28:57-04:00
en
https://www.computerworl…e-touch-icon.png
Computerworld
https://www.computerworld.com/article/1419467/terra-lycos-signs-ibm-for-exclusive-hardware-support.html
Computerworld online – Global Internet company Terra Lycos SA is shifting to a one-vendor IT strategy, signing on with IBM Corp. for hardware and support for the next two years and replacing equipment and software now supplied by Sun Microsystems Inc. and Compaq Computer Corp. The deal, announced today, is worth tens of millions of dollars, but exact figures weren’t released. Under the agreement, IBM will provide eServer pSeries Unix servers; eServer xSeries Intel Corp.-based servers; and software, including the WebSphere e-business software, Tivoli Systems Inc.’s enterprise management software and Lotus Development Corp.’s Lotus Notes. IBM will also provide all necessary support for new and existing systems and software, said Tim Wright, chief technical officer (CTO) at Terra Lycos. Terra Lycos had been using Compaq Intel-based hardware running Linux or Windows 2000 and Sun hardware running Solaris. The reason for the switch, Wright said, is that he’ll now have one vendor to deal with for his entire system. “I don’t want five vendors in my data center,” he said. “I’m into making my life simple.” Barcelona-based Terra Lycos, which has its U.S. headquarters in Waltham, Massachusetts, already uses IBM software, including Tivoli, Lotus Notes and Informix Corp.’s database, Wright said. So when the company sought proposals for additional support, it made sense to seek one vendor in an effort to streamline operations, he said. Terra Lycos’ Web infrastructure serves about 99 million Web visitors each month, according to the company. Under the deal, IBM will tie together the company’s Web-serving, enterprise resource planning and customer relationship management systems. Terra Lycos will also collaborate with scientists at the IBM Research Lab in Almaden, Calif., to develop new technologies and applications. Arch Currid, a spokesman for Compaq, said “it’s a normal day-to-day thing in the industry” when one IT vendor gets a major new customer at the expense of another. “We compete for those accounts all the time,” he said. While Compaq lost this round, Currid said, his company has recently reached deals with other users, including Charlotte, N.C.-based Bank of America Corp. A Sun spokeswoman declined to comment on the move.
correct_subsidiary_00108
FactBench
2
15
https://www.slideshare.net/slideshow/terra-lycos-go-get-itin-a-few-short-years-the-internet-hasdocx/255471114
en
Terra Lycos Go Get It! In a few short years, the Internet has.docx
https://cdn.slidesharecd…t=640&fit=bounds
https://cdn.slidesharecd…t=640&fit=bounds
[ "https://public.slidesharecdn.com/images/next/logo-slideshare-scribd-company.svg?w=128&q=75 1x, https://public.slidesharecdn.com/images/next/logo-slideshare-scribd-company.svg?w=256&q=75 2x", "https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/85/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-320.jpg 320w, https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/85/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-638.jpg 638w, https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/75/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-2048.jpg 2048w" ]
[]
[]
[ "" ]
null
[]
2023-01-23T04:16:44+00:00
Terra Lycos Go Get It!   In a few short years, the Internet has.docx - Download as a PDF or view online for free
en
https://public.slidesharecdn.com/_next/static/media/favicon.7bc3d920.ico
SlideShare
https://www.slideshare.net/slideshow/terra-lycos-go-get-itin-a-few-short-years-the-internet-hasdocx/255471114
Terra Lycos: Go Get It! In a few short years, the Internet has revolutionized the way companies do business. Of course, there have been huge successes as well as painful failures among the companies that have embraced the Internet, particularly those that have relied on the Internet for their very survival. But overall, the Internet offers global opportunities for a variety of individuals and organizations. One of those is Lycos Inc. Founded in 1995, Lycos Network was initially an Internet portal, an entryway much like its larger competitors Yahoo! and America Online. Within a few years, experts predicted that the company would capsize in the Web, swamped by its giant competitors. "We were in danger of being an afterthought in early 1998," recalls Lycos chief financial officer Edward Philip. But a series of changes has turned Lycos around. Today, according to industry watcher Media Metrix, the company's collection of sites is the fourth-largest destination for people using the Web. "We had less funding and were late to market, yet we beat the odds and have flourished," boasts CEO Bob Davis. The company also has a new name: Terra Lycos. More on that later. Lycos saved itself largely through a series of alliances and acquisitions, along with the introduction of new tools and services that benefit both consumers and business customers. One service, the "Lycos Daily 50 Report," helps marketers follow emerging consumer trends by tracking the topics that typical users search the Internet for. The report is simply a list of the fifty most popular search terms of the past seven days. It removes company names, porn sites, and Internet utility terms such as "chat room" and comes up with the fifty most useful words and phrases. "Our goal is to create an up-to-date list of the people, places, and things that Internet users are interested in," explains Jonathan Levine, director of content development. "It's a great way for people to stay current. For marketers, this tool can be used to get an idea about emerging consumer trends." This is just one way that the Lycos site helps create opportunities for other businesses. During the past few years, Lycos has allied with or acquired companies such as Tripod Inc. and HotBot. Lycos and Bell Canada created a new company called Sympatico-Lycos, which would provide Canadians with expanded Internet resources for the business-to-business market. In the fall of 2000, Lycos became the "exclusive community provider for the Olympic Games," hosting and managing all Olympic athlete chats, message boards, and fan clubs for the Sydney Olympics. McDonald's joined the party as a sponsor of the Lycos Olympic site, in exchange for featured advertising. "This is a powerful combination linking two global leaders in support of the Sydney Olympic Games, and we look forward to continuing to work with McDonald's to further leverage the strengths of both companies," stated Jeff Bennett, senior vice president of corporate development at Lycos. Later, Lycos Asia received a license from the Chinese government to operate one of China's first foreign-owned Web sites. Previously, foreign-owned Web companies could function only through partnerships with Chinese institutions that would exert control over operations. While all of these alliances are potential opportunities, they also increased the complexity of the company, and the complexity of its problems. So, Lycos hired its first chief information officer, Tim Wright. 
"They were looking for someone with experience in acquisitions, someone who knew how to handle multiple staffs of skilled people and knew how to blend disparate pieces together," Wright explains. In other words, Wright's job was to figure out how to weave technology and people together in a way that allowed workers and managers in the acquired companies to continue to do what they do best. He also showed them how their relationship with Lycos could actually increase their business. "We let [acquired companies] know right away that we can help them by redirecting our traffic to their site and re-circulating traffic back their way," says Wright. But the biggest deal for Lycos was still to come. The company agreed to be acquired by Spanish Internet service provider Terra Networks in a stock swap that valued Lycos at around $12.5 billion, with the idea that the merger would begin to create a megaportal to the Internet that would dominate Europe and Latin America. Pep Valles, the founder of Terra, views the deal as the global opportunity of a lifetime. "Who hits first hits twice," he remarks, repeating an old Spanish saying. "On the Internet, who hits first hits ten times." He sounds a bit like the first Lycos television commercial, which brought Lycos to the attention of many American consumers. The ad featured a black lab retriever named Lycos who streaked back and forth from the edge of the world to his owner, finding anything that his owner asked for. "Go get it!" the voice of Lycos's owner commanded. And Lycos did. QUESTIONS 1. Using information in the Text, outline three ways that you think Terra Lycos could help other businesses create opportunities for themselves using the Internet. 2. What methods might Terra Lycos use to measure the effectiveness of the various Web sites of its affiliates and subsidiaries? 3. Identify three challenges that managers of Terra Networks and Lycos will likely face as they merge the two organizations.
correct_subsidiary_00108
FactBench
2
42
https://corporate.beyond.com/corporate/bios
en
Beyond
https://ak1.ostkcdn.com/…avicon-32x32.png
https://ak1.ostkcdn.com/…avicon-32x32.png
[ "https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06212024-012524-Corporate-Pages-Beyond-Logo-3.svg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-1400x100.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=828 828w, 
https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-Corpbanner-whtbkg-767x100.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-about-executiveteam-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=16 16w, 
https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-MarcusLemonis.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-AdrianneLee.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=16 16w, 
https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-DaveNielsen.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-CarlishaRobinson.jpg?imwidth=3840 3840w", 
"https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-desktop-1400x100-2.svg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/06202024-about-boardofdirectors-headlineprimary-xsmall-mobile-767x100-1.svg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=32 32w, 
https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/JoannaBurkey_2024_1026x660.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-BarclayCorbus.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=32 32w, 
https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-WilliamNettles.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=16 16w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-RobertShapiro.jpg?imwidth=3840 3840w", "https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=16 16w, 
https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=32 32w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=48 48w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=64 64w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=96 96w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=120 120w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=128 128w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=200 200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=256 256w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=320 320w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=384 384w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=400 400w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=480 480w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=640 640w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=750 750w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=828 828w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=1080 1080w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=1200 1200w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=1920 1920w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=2048 2048w, https://ak1.ostkcdn.com/img/mxc/100523-ESG-About-JosephTabacco.jpg?imwidth=3840 3840w" ]
[]
[]
[ "" ]
null
[]
null
Welcome to Beyond, an innovative and technology-driven leader in online retail.
https://ak1.ostkcdn.com/…e-touch-icon.png
https://www.beyond.com/
Marcus Lemonis Executive Chairman In a world where business leaders are lauded for being cutthroat, Marcus does it all with heart and compassion for the people, process, and product he aspires to elevate. But as with many success stories, his road has not always been an easy one. At four days old, Marcus was left at the steps of a Lebanese orphanage by his birth mother. He was soon adopted and brought to Miami, Florida where he was raised by his loving adoptive parents, Sophia and Leo Lemonis. Although he struggled with weight issues, bullying and a lack of self-confidence as a child, it was his mother, Sophia, who taught him to embrace his unique qualities, allowing him to discover that he had a head for business and a gift for helping others. As a young man, Marcus honed his entrepreneurial spirit while working at his family’s automotive dealership. By the age of 25, Marcus seized upon an opportunity to reshape the way recreational vehicles and outdoor equipment were sold. Under his leadership and vision, Camping World would grow to become the Nation’s largest RV retailer and would make Marcus Lemonis one of the most successful businessmen in America. As a result, he has also become an in-demand motivational speaker. Next, Marcus set out to prove himself in another arena, as host of his own TV show. His hit series The Profit, in which Marcus helped turn around and improve businesses, ran for eight seasons on CNBC. It spawned spinoffs including the award-nominated Streets of Dreams, featuring Marcus educating himself and viewers about the financial workings of key industries, as well as special episodes of The Profit. With each great success for Marcus, the lessons of humility imparted by his mother have helped keep him grounded, reminding him that the true riches of life are found in giving back. With tens of millions of dollars donated to charitable organizations and invested in independent businesses, he is an advocate for the underdog and finds his strongest inspiration by investing in people. Marcus Lemonis may have been born in Lebanon, but he is 100% American-made. Adrianne Lee Chief Financial & Administrative Officer Adrianne Lee is the Chief Financial & Administrative Officer of Beyond, Inc., responsible for all financial-related matters for the leading furniture and home furnishings ecommerce company. In this role, she oversees financial planning and analysis, accounting and reporting, tax, treasury, internal audit, investor relations, help desk, IT security, and Corp Systems. Lee was instrumental in Overstock.com’s acquisition and subsequent relaunch of the Bed Bath & Beyond brand in 2023. Lee joined the company in 2020 when it was Overstock.com. Previously, she was Senior Vice President and CFO of Hertz's North American Rental Car unit, leading a team of 50+ finance professionals responsible for a $7 billion business unit that spanned the U.S. and Canada. Before joining The Hertz Corporation, Lee led financial planning and analysis for Best Buy's multi-billion-dollar ecommerce business, and held several roles in finance, strategic planning, accounting and financial reporting, investor relations and audit at PepsiCo, Allianz Life and PricewaterhouseCoopers. She is a current board member of CNO Financial Group (NYSE: CNO). Lee attended the University of St. Thomas in St. Paul, Minnesota, and received cum laude honors while earning a Bachelor of Arts degree in business administration with a focus on accounting. 
Dave Nielsen President Dave Nielsen is the President of Beyond, Inc., where he oversees company operations including merchandising, marketing, supply chain and customer service, digital product, technology and algorithms for the leading furniture and home furnishings online retailer. Nielsen played a pivotal role in Overstock.com’s acquisition and subsequent relaunch of Bed Bath & Beyond in 2023, breathing new life into the iconic retail brand. Prior to this role, Nielsen served as Overstock's Chief Sourcing and Operations Officer, responsible for leading the company’s merchandising, partner operations, category management, supply chain and logistics teams. Nielsen previously was co-president of Overstock, leading the company's merchandising, marketing, supply chain, analytics and pricing operations. He left Overstock for a brief time in 2015 to assume the role of CEO at Global Access, a leading global provider of logistics technology and cross-border expertise for domestic and international brands. Nielsen also held several leadership positions with Payless ShoeSource, Inc., eventually rising to the role of vice president of merchandise allocation, where he was responsible for the assortment planning and allocation of merchandise across 4,500 stores in the US, Canada, and Puerto Rico. Additionally, Nielsen served as president and CEO of Old Town Imports, LLC, where he created a product development, sourcing and omni-channel supply chain organization that sourced product to clients such as Costco, Target and regional restaurant and catering companies. Nielsen received his bachelor's degree in business management with an emphasis in marketing from Brigham Young University. Nielsen currently sits on the International Housewares Association Retail Advisory Council. Carlisha Robinson Chief Customer Officer Carlisha Robinson is the Chief Customer Officer of Beyond, Inc., responsible for the strategic roadmap of all product management and user experience (UX) functions across the company’s digital experience. In this role, she oversees the end-to-end customer experience including e-commerce platforms (mobile app, mobile web, and desktop), the customer file, loyalty programs, customer service, and all Beyond+ product and service offerings. Robinson played a pivotal role in Overstock.com’s acquisition of the Bed Bath & Beyond brand and corporate transition to Beyond, Inc. in 2023, and the recent acquisition of the Zulily brand in 2024. She joined the company when it was Overstock.com in 2022. Previously, Robinson held leadership roles at IBM, BMC Software, CPA Global (Innography), and Volusion. She has more than 30 years of experience in various computer software industries – from Information Technology (IT) to Intellectual Property (IP) to ecommerce – and is known for her ability to successfully distil the needs of customers into innovative solutions, transforming technology organizations to deliver award-winning products that delight customers and drive value. Robinson earned her “tiger stripes” from an HBCU - Grambling State University - with a Bachelor of Computer Science & Mathematics degree. She is passionate about advancing minority women in technology, education, and in providing justice for all. Robinson advocates and serves as a lifetime member of Delta Sigma Theta Sorority, Inc. and as a founding board member of the Excellence and Advancement Foundation, an organization focused on prevention and intervention programs to break the school-to-prison pipeline for minority youth. 
Joanna Burkey Director Joanna Burkey was appointed to the board of directors of Beyond, Inc. (formerly Overstock.com, Inc.) in March 2023. She is currently an independent director at Beyond, Inc., serving on both the audit committee and compensation committee. Ms. Burkey also sits on the board of ReliabilityFirst Corporation, based in Cleveland, Ohio, where she chairs the compliance and risk committee. Throughout her career, she has focused on cybersecurity, strategy, and engineering, most recently as Chief Information Security Officer at HP, Inc. She is passionate about increasing diversity within technology and using her decades of experience in security and risk to serve others. Throughout her career, Ms. Burkey has focused on leading functions and organizations that need turnaround and development to function at their best. This has provided multiple opportunities for Ms. Burkey to refine her strategic leadership skills to recognize unique situations, design appropriate outcomes, and enable the optimal path forward in each individual circumstance. Ms. Burkey is also a publicly recognized thought leader. She was named one of the Top Global CISOs of 2022 by Cyber Defense Awards and has been published in multiple articles as well as the book Tribe of Hackers: Security Leaders. She has a computer science and mathematics background from both Angelo State University and the University of Texas at Austin. She also holds both Directorship Certification from NACD (National Association of Corporate Directors) and Qualified Technology Executive certification from DDN (Digital Directors Network). Barclay F. Corbus Director Barclay F. Corbus has served as a director on the board of Beyond, Inc. (formerly Overstock.com, Inc.) since March 2007. He currently is a member of the nominating & corporate governance committee and the chairman of the compensation committee. Mr. Corbus is the Senior Vice President of Strategic Development and Head of Renewable Fuels for Clean Energy Fuels Corp., a provider of renewable fuels for heavy-duty fleet vehicles. Mr. Corbus has been with Clean Energy since 2007. Before his time with Clean Energy, he was Co-CEO of WR Hambrecht + Co., an investment banking firm, which he joined in 1999. Prior to that, Mr. Corbus was in the investment banking group at Donaldson, Lufkin and Jenrette, where he started in 1989. Mr. Corbus is currently a Trustee of the College of the Atlantic, and has previously served on the boards of Alaska Energy and Resources Co, Niman Ranch, WR Hambrecht + Co. and Goodwill of San Francisco. Mr. Corbus graduated from Dartmouth College with a bachelor of arts degree in government and has a master of business administration degree in finance from Columbia Business School. William Nettles Director William B. Nettles, Jr. was appointed to the board of directors of Beyond, Inc. (formerly Overstock.com, Inc.) in June 2020. He currently serves as chairman of the audit committee. Mr. Nettles is currently a co-founder and managing partner of Invictus Growth Partners. He previously was the executive vice president of Sungevity, based in Oakland, California, where he led the company out of bankruptcy and turned it around into a profitable business in less than a year. 
Before this, he was the director of investments at Pan African Investments (PIC), a New York City-based private investment firm, whose mission was to make an impact in Africa by identifying and investing in technology companies that promote growth and development in the region. Prior to PIC, Mr. Nettles worked at VeriFone for over ten years, where he initially served as vice president and head of corporate development and investor relations and later as the general manager of the Middle East and Africa. He was a corporate development executive at Lycos prior to this and helped lead the successful sale of Lycos to Terra Networks. Mr. Nettles began his career at Credit Suisse, where he was an investment banker focused on mergers, acquisitions, equity and debt financings. Mr. Nettles is also a founder and serves on the board of directors of Advanced Mobile Payments, a payment technology solutions company located in Newport Beach, California. Mr. Nettles is an active mentor with the Sponsors for Education Opportunities career program, a non-profit organization that provides underrepresented minorities with access to internship opportunities on Wall Street. Mr. Nettles is a graduate of the University of California at Berkeley, where he holds a bachelor of science degree in business administration. Robert Shapiro Director Robert J. Shapiro was appointed to the board of directors of Beyond, Inc. (formerly Overstock.com, Inc.) in February 2020. He currently is a member of both the audit committee and the nominating and corporate governance committee. Dr. Shapiro is the chairman and founder of Sonecon, LLC, a private firm that advises U.S. and foreign businesses, governments and non-profit organizations on economic matters. He has advised, among others, U.S. President Bill Clinton, British Prime Minister Tony Blair, Vice President Al Gore, Jr., U.K. Foreign Minister David Miliband, Secretary of State Hillary Clinton, Treasury Secretaries Robert Rubin and Timothy Geithner, and many senior members of the Obama administration, the U.S. Senate and the House of Representatives. He and Sonecon also have advised senior officials of the Departments of Defense and Energy; senior executives at private firms including AT&T, Elliott Management, Exxon-Mobil, Gilead Sciences and Google; and non-profit organizations including the International Monetary Fund, the Johns Hopkins University Applied Physics Laboratory, the U.S. Chamber of Commerce, and the Center for American Progress. Dr. Shapiro is also a senior fellow of the Georgetown University Center for Business and Public Policy, director of the NDN Center on Globalization, and a member of the advisory boards of Cote Capital and Civil Rights Defenders. From 1997 to 2001, he was U.S. Under Secretary of Commerce for Economic Affairs. Prior to that, he was co-founder and vice president of the Progressive Policy Institute and, before that, the legislative director and economic counsel to Senator Daniel P. Moynihan. Dr. Shapiro also served as the principal economic advisor to Bill Clinton in his 1991 to 1992 presidential campaign, and as a senior economic advisor to Hillary Rodham Clinton in 2016 and the presidential campaign of Joseph Biden in 2020. Before that, he advised the campaigns of Barack Obama, John Kerry and Al Gore, Jr. Dr. Shapiro has been a fellow of Harvard University, the Brookings Institution, the National Bureau of Economic Research, and the Fujitsu Institute. Dr. 
Shapiro holds a bachelor of arts degree from the University of Chicago, a master of science degree from the London School of Economics and Political Science, and a doctor of philosophy degree and master of arts degree from Harvard University.
correct_subsidiary_00108
FactBench
1
2
https://money.cnn.com/2000/05/16/europe/terra/
en
Lycos in $12.5B deal
[ "https://i.cdn.turner.com/money/images/dot.gif", "https://money.cnn.com/images/logo/cnnmoney_logo.gif", "https://i.cdn.turner.com/money/images/newhome/advertisement_120.gif", "https://i.cdn.turner.com/money/images/corner.gif", "https://i.cdn.turner.com/money/images/dot.gif", "https://money.cnn.com/2000/05/16/europe/terra/internet_deals.01.jpg", "https://i.cdn.turner.com/money/images/dot.gif", "https://i.cdn.turner.com/money/images/dot.gif", "https://i.cdn.turner.com/money/images/dot.gif", "https://i.cdn.turner.com/money/images/dot.gif", "https://money.cnn.com/images/camera.gif", "https://money.cnn.com/2000/05/16/europe/terra/telefonica.jpg", "https://money.cnn.com/2000/05/16/europe/terra/lycos2.jpg", "https://money.cnn.com/images/bug.gif", "https://i.cdn.turner.com/money/images/dot.gif", "https://money.cnn.com/images/advertisement.120.gif", "https://i.cdn.turner.com/money/.element/cnnm-3.0/img/logo/cnnmoney_blue.svg", "https://pixel.quantserve.com/pixel/p-D1yc5zQgjmqr5.gif", "https://i.cdn.turner.com/money/.element/cnnm-3.0/img/logo/cnnmoney_blue.svg", "https://pixel.quantserve.com/pixel/p-D1yc5zQgjmqr5.gif", "https://money.cnn.com/cookie.crumb" ]
[]
[]
[ "" ]
null
[]
2000-05-16T00:00:00
Spanish Internet service provider Terra Networks SA agreed to pay about $12.5 billion in stock for the U.S. Web portal Lycos Inc., giving Terra a valuable source of online content and ending more than a year of acquisition discussions surrounding Lycos.
null
NEW YORK (CNNfn) - Spanish Internet service provider Terra Networks SA agreed to pay about $12.5 billion in stock for the U.S. Web portal Lycos Inc., giving Terra a valuable source of online content and ending more than a year of acquisition discussions surrounding Lycos. Terra Networks is a fast-growing but money-losing Internet access provider that is majority-owned by Telefonica SA, the parent company of the largest telecommunications group in Spain and Latin America. Terra said Tuesday that it will acquire Lycos in a stock transaction valued at $97.55 per Lycos share, about 34 percent above Tuesday's closing price and 80 percent above last Friday's closing price for Lycos (LCOS: Research, Estimates). News reports of the likely deal had circulated since last Friday, when Terra confirmed that the two sides were in talks about a possible alliance. The transaction comes almost exactly one year after Barry Diller's USA Networks Inc. abandoned a bid for Lycos. The Web portal's shareholders, most notably the Internet venture company CMGI Inc. (CMGI: Research, Estimates), objected to the proposed transaction with USA Networks because they felt it assigned too low a value to Lycos compared with the premiums that had been paid for other Web properties. This time around, analysts said that Lycos brings much more to the table than Terra (TRRA: Research, Estimates), and that the $12.5 billion price tag Terra is offering is one of the only compelling reasons for Lycos to agree to the transaction. "It's a good move for Terra because it gives them the number four Web player in the U.S. and a top-10 player in some of the major European markets," said Michael Wallace, an analyst at UBS Warburg. "From the Lycos perspective, it helps them in Spain and Latin America and that's about it." Waltham, Mass.-based Lycos already has a significant presence in Europe through a joint venture with Bertelsmann, the third-largest media company in the world. Last March, Lycos completed an initial public offering of its European joint venture, raising $649 million. Terra and Lycos said they expect to report combined revenue of $500 million this year and currently have an estimated 50 million unique users and 175 million page views per day. The combined company, to be called Terra Lycos, will have operations in 37 countries in North America, Latin America, Asia, and Europe. Downside risk of the transaction Because Terra is purchasing Lycos for stock, rather than cash, Lycos shareholders are exposed to the risk that Terra's stock will decline before the transaction is completed. The way the transaction is structured, Lycos shareholders will receive a maximum of 2.15 Terra shares, which means the amount they will receive will be less than $97.55 if Terra stock drops below $45.37 per share. That risk is not entirely theoretical - Terra's stock has plunged to 53-9/16 from 135 last February. In after-hours trading Tuesday, Lycos stock rose 2-1/8 to 74-3/4, well below the price Terra is offering, suggesting investors are skeptical about the final dollar value of the deal. "I think people are trying to work their way through the deal, and there probably are some questions about the value of Terra's stock as currency," said Paul Noglows, an analyst at Chase H&Q. Nevertheless, analysts said that Terra was offering a good price for Lycos. "I think it's a fair price, when you consider that Lycos' all-time high was 93-5/8 and that there are Internet stocks trading 50 percent to 90 percent below their all-time highs," Noglows said. 
"It's also more than double the price that was contemplated last year during the USA Networks transaction." The CMGI reaction The boards of Terra and Lycos have unanimously approved the transaction, as has the board of Telefonica SA, which owns about 67 percent of Terra's stock. Terra said that it expects to close the transaction in the third quarter of this year, following votes by both companies' shareholders. CMGI declined to comment about the transaction to CNNfn television. However, Bob Davis, president and CEO of Lycos, said on CNNfn's Moneyline that he had spoken about the deal with CMGI Chief Executive David Wetherell. "David was very enthusiastic about the transaction," Davis said. As part of the agreement, Terra Lycos entered into an agreement with Bertelsmann, under which Bertelsmann will purchase $1 billion of advertising and placement on Terra Lycos over a five-year period. In addition, the combined company will gain access to Bertelsmann's catalog of books, movies, music, and other media content on preferred terms. "The combination of Terra and Lycos, supported by the strategic relationships with Telefonica and Bertelsmann, creates a global Internet and new media powerhouse with a scale and global footprint unmatched by any other media company in the world," said Juan Villalonga, chairman and CEO of Telefonica and chairman of Terra.� When the merger is completed, Villalonga will serve as chairman of Terra Lycos. Lycos' Davis will be the CEO of the combined company. Abel Linares, Terra's CEO, will be chief operating officer. $2 billion stock offering As part of the transaction, Terra will sell $2 billion of new stock to its existing shareholders at today's Madrid closing price of $56.13 per Terra share. After this stock sale, the combined company will have about $3.5 billion in cash, Terra said.�� The terms of the transaction call for Lycos shareholders to receive $97.55 worth of Terra ordinary shares, or their equivalent in Terra's American depository receipts (ADRs). However, Lycos shareholders will not receive less than 1.433 or more than 2.15 Terra shares. When the transaction is completed, Terra shareholders will own between 54 percent and 63 percent of the combined company, depending on the final share exchange ratio. While Terra has a larger market capitalization than Lycos, Lycos has much greater revenue and is profitable, while analysts had expected Terra to lose about $200 million on revenue of $150 million this year. Terra's main revenue source, Internet access fees, is being threatened by the spread of free Internet access in parts of Europe and Latin America. "Based on our strategic plan and the experience of other Internet portal companies and Internet service providers, we expect that our expenses will continue to exceed our revenues for at least the next three years," Terra said in a filing made at the Securities and Exchange Commission late last year. "After that, depending upon competitive conditions and the dynamics of the industry, we may continue to pursue a strategy that emphasizes strength of market share and market presence at the expense of profitability." Lycos, by contrast, reported a profit of $3 million, or 3 cents per share, on revenue of $68.6 million in the quarter ended January 31, 2000. Securities analysts expect the company's revenue to total more than $260 million this fiscal year, which ends in July, with net income rising to an estimated $15 million.
correct_subsidiary_00108
FactBench
1
56
https://publishing.insead.edu/case/terra-lycos-profiting-information-products
en
Terra Lycos: Profiting from Information Products
https://publishing.insea…s-2022-small.jpg
https://publishing.insea…s-2022-small.jpg
[ "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/png/icon_cart.png", "https://publishing.insead.edu/themes/custom/case_publishing/logo.svg", "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/png/icon_cart.png", "https://publishing.insead.edu/sites/publishing/files/Theodoros-Evgeniou-13201_52.jpg", "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/free@3x.png", "https://publishing.insead.edu/sites/publishing/files/logo-sorbonne-white_0.png", "https://publishing.insead.edu/sites/publishing/files/wharton-logo.png", "https://publishing.insead.edu/sites/publishing/files/tshinghua-logo.png" ]
[]
[]
[ "Information products", "Information economy", "Versioning", "Customer relationship management", "Portals", "Media", "Dynamic pricing. AR2003", "AR0203", "RD0503" ]
null
[]
null
The case discusses how Terra Lycos is reshaping itself during the economic slowdown by diversifying its revenue stream and attempting to start selling products once offered for free. As a portal, Terra Lycos' products were, by nature, mainly information based. Can Terra Lycos succeed in transforming its products from free to pay? This case can be used for MBA and executive education programmes to discuss key ideas on information economics and strategies for information products. This can be part of a class on managing IT, e-commerce, or strategy in the information economy. The case is also suitable for a session in an economics class on versioning and dynamic pricing, and for classes on media and/or portals.
en
/sites/publishing/files/favicon-16x16_2.png
https://publishing.insead.edu/case/terra-lycos-profiting-information-products
The case discusses how Terra Lycos is reshaping itself during the economic slowdown by diversifying its revenue stream and attempting to start selling products once offered for free. As a portal, Terra Lycos' products were, by nature, mainly information based. Can Terra Lycos succeed in transforming its products from free to pay? This case can be used for MBA and executive education programmes to discuss key ideas on information economics and strategies for information products. This can be part of a class on managing IT, e-commerce, or strategy in the information economy. The case is also suitable for a session in an economics class on versioning and dynamic pricing, and for classes on media and/or portals.
correct_subsidiary_00108
FactBench
3
19
https://www.economist.com/unknown/2001/08/07/learning-from-lycos
en
Learning from Lycos
https://www.economist.co…llback-image.png
https://www.economist.co…llback-image.png
[ "https://www.economist.com/cdn-cgi/image/width=256,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 256w, https://www.economist.com/cdn-cgi/image/width=360,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 360w, https://www.economist.com/cdn-cgi/image/width=384,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 384w, https://www.economist.com/cdn-cgi/image/width=480,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 480w, https://www.economist.com/cdn-cgi/image/width=600,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 600w, https://www.economist.com/cdn-cgi/image/width=834,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 834w, https://www.economist.com/cdn-cgi/image/width=960,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 960w, https://www.economist.com/cdn-cgi/image/width=1096,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1096w, https://www.economist.com/cdn-cgi/image/width=1280,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1280w, https://www.economist.com/cdn-cgi/image/width=1424,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1424w", "https://www.economist.com/cdn-cgi/image/width=256,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 256w, https://www.economist.com/cdn-cgi/image/width=360,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 360w, https://www.economist.com/cdn-cgi/image/width=384,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 384w, https://www.economist.com/cdn-cgi/image/width=480,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 480w, https://www.economist.com/cdn-cgi/image/width=600,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 600w, https://www.economist.com/cdn-cgi/image/width=834,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 834w, https://www.economist.com/cdn-cgi/image/width=960,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 960w, https://www.economist.com/cdn-cgi/image/width=1096,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1096w, https://www.economist.com/cdn-cgi/image/width=1280,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1280w, https://www.economist.com/cdn-cgi/image/width=1424,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1424w", "https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 256w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 360w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 384w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 480w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 600w, 
https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 834w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 960w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1096w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1280w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1424w", "https://www.economist.com/cdn-cgi/image/width=256,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 256w, https://www.economist.com/cdn-cgi/image/width=360,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 360w, https://www.economist.com/cdn-cgi/image/width=384,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 384w, https://www.economist.com/cdn-cgi/image/width=480,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 480w, https://www.economist.com/cdn-cgi/image/width=600,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 600w, https://www.economist.com/cdn-cgi/image/width=834,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 834w, https://www.economist.com/cdn-cgi/image/width=960,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 960w, https://www.economist.com/cdn-cgi/image/width=1096,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1096w, https://www.economist.com/cdn-cgi/image/width=1280,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1280w, https://www.economist.com/cdn-cgi/image/width=1424,quality=80,format=auto/sites/default/files/images/2016/05/articles/main/20160507_blp519.jpg 1424w", "https://www.economist.com/cdn-cgi/image/width=256,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 256w, https://www.economist.com/cdn-cgi/image/width=360,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 360w, https://www.economist.com/cdn-cgi/image/width=384,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 384w, https://www.economist.com/cdn-cgi/image/width=480,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 480w, https://www.economist.com/cdn-cgi/image/width=600,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 600w, https://www.economist.com/cdn-cgi/image/width=834,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 834w, https://www.economist.com/cdn-cgi/image/width=960,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 960w, https://www.economist.com/cdn-cgi/image/width=1096,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1096w, https://www.economist.com/cdn-cgi/image/width=1280,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1280w, https://www.economist.com/cdn-cgi/image/width=1424,quality=80,format=auto/sites/default/files/images/2017/07/articles/main/econlogo.png 1424w", 
"https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 256w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 360w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 384w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 480w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 600w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 834w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 960w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1096w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1280w, https://cdn.design-system.economist.com/assets/latest/common/static/images/image/image-placeholder.svg 1424w" ]
[]
[]
[ "" ]
null
[ "The Economist" ]
2001-08-07T00:00:00
The co-founder and former CEO sums up what he learned on the journey from plucky startup to international merger | Unknown
en
/favicon.ico
The Economist
https://www.economist.com/unknown/2001/08/07/learning-from-lycos
The co-founder and former CEO sums up what he learned on the journey from plucky startup to international merger. Aug 7th 2001 | SPEED IS LIFE: Street Smart Lessons from the Front Lines of Business.
correct_subsidiary_00108
FactBench
2
43
https://knowledge.hubspot.com/forms/what-domains-are-blocked-when-using-the-forms-email-domains-to-block-feature
en
Domains blocked from form submissions
https://knowledge.hubspo…ge_base_logo.jpg
https://knowledge.hubspo…ge_base_logo.jpg
[ "https://knowledge.hubspot.com/hubfs/assets/hubspot.com/global/Sprocket.svg", "https://knowledge.hubspot.com/hubfs/HubSpot_Logos/HSLogo_color.svg", "https://knowledge.hubspot.com/hubfs/Knowledge_Base_2023/subscription_key_icons/marketing_icon.svg", "https://knowledge.hubspot.com/hubfs/Knowledge_Base_2023_2024/subscription_key_icons/content_icon.svg", "https://no-cache.hubspot.com/cta/default/53/c10b047e-c9fb-4957-b0ed-7f3960efe752.png", "https://no-cache.hubspot.com/cta/default/53/21472a0f-6f98-4d81-907c-6ac264c03136.png", "https://knowledge.hubspot.com/hubfs/HubSpot_Logos/HSLogo_gray.svg?t=1477504449039" ]
[]
[]
[ "" ]
null
[]
2019-12-13T00:00:00
This article is a general list of all free email domains that are blocked when you turn on the 'Email Domains to Block' feature in your HubSpot form.
en
https://knowledge.hubspo…rsed-Favicon.png
https://knowledge.hubspot.com/forms/what-domains-are-blocked-when-using-the-forms-email-domains-to-block-feature
To block form submissions that contain an email address with a free domain, you can turn on the Block free email providers feature in a HubSpot form. This helps you maintain a healthy sending reputation by mitigating potential hard bounces to expired email addresses. You can review a list of all blocked email domain providers below. HubSpot regularly monitors free email domains. However, there may be domains that have been blocked but haven't been added to the list yet. If you encounter this, please try submitting the form with a different email address. You can also download a CSV file that includes all the domains in the list. If you need to make any manual modifications to the list, you can make these changes, then paste the resulting list into the Email domains to block field instead. Please note: by default, some domains below will be blocked regardless of whether the Block free email providers checkbox is selected. These are disposable email domains that become invalid after a short period of time. Keeping these emails can lead to hard bounces, which will damage your sending reputation and make it harder for legitimate mail to reach your recipients' inboxes.
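As a rough illustration of the kind of check described above (not HubSpot's actual implementation or API), a form handler can compare the domain portion of a submitted email address against a blocklist of free or disposable providers. The domain list and function names below are assumptions made for the example.

# Minimal Python sketch of a server-side free-email-domain check.
BLOCKED_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def is_blocked(email: str, blocked=BLOCKED_DOMAINS) -> bool:
    """Return True if the email's domain is on the blocklist (or the address is malformed)."""
    if "@" not in email:
        return True  # reject malformed addresses as well
    domain = email.rsplit("@", 1)[1].strip().lower()
    return domain in blocked

# Example: filter a batch of submissions before passing them downstream.
submissions = ["jane@acme-corp.com", "spam@gmail.com"]
accepted = [e for e in submissions if not is_blocked(e)]
print(accepted)  # ['jane@acme-corp.com']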
correct_subsidiary_00108
FactBench
1
16
https://www.infoworld.com/article/2205325/terra-lycos-finds-buyer-for-u-s-unit.html
en
Terra Lycos finds buyer for U.S. unit
https://www.infoworld.co…avicon-32x32.png
https://www.infoworld.co…avicon-32x32.png
[ "https://www.infoworld.com/wp-content/uploads/2024/07/3477228-0-09432400-1721855303-openai-100943386-orig.jpg?quality=50&strip=all&w=375", "https://www.infoworld.com/wp-content/uploads/2024/07/3475989-0-54023300-1721811749-shutterstock_2003176028.jpg?quality=50&strip=all&w=444", "https://www.infoworld.com/wp-content/uploads/2024/07/2267992-0-65443600-1721811675-shutterstock_2275464667.jpg?quality=50&strip=all&w=392", "https://www.infoworld.com/wp-content/uploads/2024/07/Youtube-Thumbnails_Template-OLD-1.png?w=444", "https://www.infoworld.com/wp-content/uploads/2024/06/youtube-thumbnails_template-old-100963211-orig.jpg?quality=50&strip=all&w=444", "https://www.infoworld.com/wp-content/uploads/2024/06/youtube-thumbnails_template-old-100963201-orig.jpg?quality=50&strip=all&w=444" ]
[]
[]
[ "" ]
null
[ "Juan Carlos Perez" ]
2004-07-28T15:52:07-04:00
Company could sell for $95-$115 million, far short of the $12.5 billion Terra paid for Lycos in May 2000
en
https://www.infoworld.com/wp-content/themes/iw-b2b-child-theme/src/static/img/favicon.ico
InfoWorld
https://www.infoworld.com/article/2205325/terra-lycos-finds-buyer-for-u-s-unit.html
Terra Lycos has chosen a potential buyer for its U.S. Lycos subsidiary, the Barcelona provider of Internet access, services and content disclosed Wednesday in a filing with Spain’s Comisión Nacional del Mercado de Valores, the agency in charge of supervising the country’s stock markets and related activities. The sale price will be in the range of $95 million to $115 million, the filing states. This is a far cry from the $12.5 billion Terra paid for the Waltham, Massachusetts-based Lycos in the heady days of the Internet frenzy in May 2000. Terra Lycos, with its largest base of operations in Madrid, will disclose the potential buyer’s name when the two parties reach a definitive agreement, according to Wednesday’s filing. Prior to the sale, some Lycos Inc. assets will be transferred over to Terra Lycos, including Lycos Inc.’s stakes in the Terra Networks USA and Lycos Europe units. Terra Networks USA’s most visible activity is the operation of the Terra.com portal for U.S. Hispanics, a Terra Lycos spokeswoman said. Meanwhile, Lycos Europe operates a network of European Web sites in various languages and provides a variety of Internet services and content, according to information found on its Web site (www.lycos-europe.com).
correct_subsidiary_00108
FactBench
3
58
https://www.dmnews.com/lycos-names-mktg-services-as-list-manager/
en
Lycos Names MKTG Services as List Manager
https://images.dmnews.co…/default-fea.jpg
https://images.dmnews.co…/default-fea.jpg
[ "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-110x110.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-110x110.jpg", "https://images.dmnews.com/wp-content/uploads/2022/07/ban-03.png", "https://images.dmnews.com/wp-content/uploads/2022/07/ban-03.png", "https://secure.gravatar.com/avatar/44d76c21879b6a0ae577a554819219a2?s=40&d=mm&r=g", "https://secure.gravatar.com/avatar/44d76c21879b6a0ae577a554819219a2?s=40&d=mm&r=g", "https://images.dmnews.com/wp-content/uploads/2021/11/default-fea.jpg", "https://images.dmnews.com/wp-content/uploads/2021/11/default-fea.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-80x80.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-80x80.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-2.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-2.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Alimony-Calculation.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Alimony-Calculation.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Underpayment-Correction.jpg", "https://secure.gravatar.com/avatar/e26ae011b410ba56611425a3ab86e692?s=40&d=mm&r=g", "https://secure.gravatar.com/avatar/e26ae011b410ba56611425a3ab86e692?s=40&d=mm&r=g", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/Digital-Services.jpg", 
"https://secure.gravatar.com/avatar/e26ae011b410ba56611425a3ab86e692?s=40&d=mm&r=g", "https://secure.gravatar.com/avatar/e26ae011b410ba56611425a3ab86e692?s=40&d=mm&r=g", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-380x250.jpg", "https://images.dmnews.com/wp-content/uploads/2024/07/SEO-Adaptation-1-380x250.jpg", "https://secure.gravatar.com/avatar/b404ad41b61d791f749cd44ece944bbf?s=40&d=mm&r=g", "https://secure.gravatar.com/avatar/b404ad41b61d791f749cd44ece944bbf?s=40&d=mm&r=g", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2023/08/DigitalMarketingNews_TransparentBackground.png", "https://images.dmnews.com/wp-content/uploads/2024/05/Digital-Marketing-Trends-in-2024-E-Book-3-e1716389138414.png", "https://images.dmnews.com/wp-content/uploads/2024/05/Digital-Marketing-Trends-in-2024-E-Book-3-e1716389138414.png" ]
[]
[]
[ "" ]
null
[ "Abby Miller" ]
2002-07-29T14:07:00+00:00
MKTG Services Inc. today announced its appointment as list manager of the multimillion-record, permission-based e-mail and postal database…
en
https://images.dmnews.co…icon-32x32-1.png
DMNews
https://www.dmnews.com/lycos-names-mktg-services-as-list-manager/
MKTG Services Inc. today announced its appointment as list manager of the multimillion-record, permission-based e-mail and postal database files from Lycos, a unit of global Internet network Terra Lycos. MKTG will manage Lycos' 5.2 million-name permission-based e-mail list as well as its 3.3 million permission-based postal records database. The data come from Web sites within the Lycos Network such as Gamesville.com, Tripod.com, Angelfire.com, Lycos Finance, FoxSports.com and HtmlGEAR.com. Selection criteria include unique user information from the individual sites as well as basic demographic information voluntarily provided by Lycos users during registration. MKTG, New York, said it would enhance the file with demographic, lifestyle and ethnic data overlays. “The addition of an enhanced Lycos customer list will add the opportunity for all advertisers to follow through on advertising campaigns through customized direct relationship marketing,” said Dean Macchi, director of direct marketing services of Barcelona, Spain-based Terra Lycos in a statement.
correct_subsidiary_00108
FactBench
2
80
https://www.cnet.com/home/empires-pay-billions-for-more-visitors/
en
Empires pay billions for more visitors
https://www.cnet.com/a/n…s/logos/cnet.png
https://www.cnet.com/a/n…s/logos/cnet.png
[ "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote7.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/deathtoll.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_video.gif", "https://www.cnet.com/i/ne/bb/2001/02/0227tv_hagel.jpg", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/gl/vid-w.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote8.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/arrow.gif", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_bargain.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/DeutscheB_ad.jpg", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote10.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_highfliers.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote9.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_video.gif", "https://www.cnet.com/i/ne/bb/2001/02/0228tv_miller.jpg", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/gl/vid-w.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/arrow.gif", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_collusion.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_video.gif", "https://www.cnet.com/i/ne/bb/2001/02/0208tv_davis.jpg", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/gl/vid-w.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote3.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_quote4.gif", "https://www.cnet.com/i/ne/pre/Graphics/end.gif", "https://www.cnet.com/i/ne/pre/bump.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/freeweb_cashingout.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/Jain.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/Greenberg.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/Krach.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/Bell.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/i/ne/me/2001/05/Freeweb/Goldman.gif", "https://www.cnet.com/b.gif", "https://www.cnet.com/b.gif" ]
[]
[]
[ "" ]
null
[ "CNET News staff" ]
2002-01-03T00:43:56+00:00
The unprecedented speed, number and price of Internet combinations has redefined the corporate merger--and, critics say, contributed to the decline of the industry.
en
/apple-touch-icon-v3.png
CNET
https://www.cnet.com/tech/services-and-software/empires-pay-billions-for-more-visitors/
By Jim Hu and Mike Yamamoto Staff Writers, CNET News.com June 5, 2001, 4:00 a.m. PT Excite@Home executives knew it might be a tough sell to investors. In October 1999, the high-speed Internet service agreed to pay as much as $1 billion in cash and stock for Blue Mountain Arts, an online greeting-card company that made no money. To help justify the purchase, Excite@Home issued a press release touting Blue Mountain's "strong differentiated content." But executives knew the primary reason was one of sheer numbers: Excite@Home was engaged in a bitter contest to claim the most visitors, and archrivals Yahoo, Lycos and AltaVista had made major traffic-boosting acquisitions that threatened to knock the company off the A-list of Web portals. "It was a market share play," acknowledged one Excite@Home source who requested anonymity. To many, the deal illustrates how far companies were willing to go to buy traffic at the time, even though the real value of those numbers remained unclear. Until the Internet economy began its steep descent a year ago, Web portals and other online companies were engaged in a kind of arms race through acquisition that produced multimillion-dollar deals seemingly every few days. Growth by acquisition is a fact of life in any industry, but the unprecedented pace and price of Internet deals redefined the corporate merger in America. Yet as today's investors seethe over their dwindling portfolios, critics from Washington to Silicon Valley have denounced many deals as foolish decisions that backfired on companies and arguably contributed to the decline of the overall industry by squandering resources. Driving this merger mania was the assumption--or hope--that raw traffic would eventually be converted to profits. Leading the charge were portals frenetically building empires throughout cyberspace in the belief that whoever had the highest numbers would win all the spoils. The fatal flaw in that strategy was an unrealistic reliance on advertising dollars, which companies hoped would increase indefinitely along with the number of people exposed to banner ads on Web pages. Even if the economy had not slowed, it is doubtful that ad revenue could have come close to supporting the inflated costs of megamergers--forcing companies to begin charging for their services. "It was the fundamental faith that if the audience was gathered in sufficient numbers it would be monetized," said Marty Yudkovitz of NBC Digital Media, who worked on the development of the NBC Internet portal. "But in fact, it was jumping the gun considerably because there was no truly rational business model that was supporting the cost of acquiring that audience." Let's go shopping Of all the portals, Excite@Home is often credited with starting the Internet buying spree that would make 1999 the year of the merger. The company was created by the estimated $6.7 billion combination of cable Internet service provider @Home and Web portal Excite.com in January of that year, marking one of the first major marriages of Net leaders. Just nine days after that deal was announced, Yahoo responded in kind. Worried that Excite@Home and Lycos were creeping up on it in the all-important traffic rankings, the leading Web portal announced plans to buy online community GeoCities for about $3 billion. Two months later, in March, Yahoo raised the stakes again with a deal to buy Web streaming media company Broadcast.com, a purchase that later closed at $5 billion. Yahoo's concerns were not without substance. 
The month after it announced plans to purchase Broadcast.com, rival Lycos issued a press release announcing that it surpassed Yahoo in "reach"--industry jargon meaning that more people had visited Lycos than Yahoo (51.8 percent to 50.8 percent of the total online population in the United States, respectively). By July, Internet investment company CMGI decided that it needed to get into the acquisitions business as well. The company, one of the best performers on the Nasdaq Stock Market that year, announced its intention to buy AltaVista in a deal estimated to be worth $2.3 billion at the time of the agreement. A pioneer of search engines, AltaVista had become a sort of Don Quixote of Web companies. It failed to evolve into a full-fledged portal like Yahoo, Excite.com or Lycos, in no small part because of corporate confusion with former owner Digital Equipment. CMGI launched a $120 million advertising campaign in hopes of turning AltaVista into a major Web portal to take on Yahoo, hiring multiple-Grammy winner Lauryn Hill to perform at its relaunch party. True to its unfortunate self, however, AltaVista pinned the date of its initial public offering on the week after the market crash in April 2000. Since then, the company has shelved its plans to go public, undergone rounds of layoffs, repositioned itself as a search company, and lost its CEO. Lycos, too, played heavily in the traffic game. The portal bought companies such as home-page community Tripod, financial service Quote.com, online gaming site Gamesville, tech information site Wired Digital, and Web yellow page service WhoWhere. The acquisitions "were meant to drive audience," said Bob Davis, former chief executive of Terra Lycos. That, in turn, was closely related to another goal at the time known within the industry as "stickiness": the ability to keep surfers on the site once they visited by enticing them with content and services. "Audience was meant to drive stickiness, stickiness was meant to drive the network at large, and the network at large was meant to drive earnings," said Davis, who has parlayed his entrepreneurial experiences into a career in publishing and will soon release a new book titled "Speed is Life." Those acquisitions, most of which were paid for in stock, helped keep the company in the highest of Media Metrix rankings through the dot-com crash last year. Lycos was then acquired by Terra Networks, a Spanish ISP looking for a portal partner, for $6.5 billion. But between the day the merger was announced in May 2000 and its closure that October, the price of the deal was halved by the sagging stock market. Enticing the offline giants The misguided urge to merge was not limited to the pure Internet companies. Frightened of being outmaneuvered by scrappy Web start-ups, Walt Disney in 1998 purchased a 43 percent stake in search engine Infoseek for $465 million. That laid the groundwork for Disney's ill-fated Go.com portal, which the entertainment giant shuttered in January after reporting a $790 million loss and laying off most of the staff. General Electric's NBC has encountered similar difficulties since taking a stake in the Snap.com portal created by CNET Networks, publisher of News.com, and combining it with online community Xoom.com and its flagship NBC.com to create NBC Internet. The company initially went public and fared well, but its heavy reliance on advertising took a toll. 
In April, the network bought back all outstanding shares of NBCi, laid off most of its staff, and recast its Internet strategy to tie it closer to TV programming. In many of these cases, the last link in the business-building chain--earnings--remained missing. And once evolutionary development of such Net ventures as Go.com and NBCi was stopped short, the value of traffic and the acquisitions made to increase it fell under wide criticism. Because of its stature and recent financial problems, Yahoo has been the subject of much scrutiny for its acquisitions and other business strategies. Under this microscope, the decision to pay 21.5 million shares for GeoCities appears questionable to some, especially if the main objective was to increase traffic. Home-page building "was a business that wasn't proved viable from an advertising standpoint," said Patrick Keane, an analyst at Jupiter Media Metrix. "The community sector is completely bankrupt as a revenue opportunity. It was a reach play." Yahoo declined to comment on its previous acquisitions. But even Tom Evans, the CEO of GeoCities at the time of the acquisition, criticized the strategy, saying the Web portal failed to follow through with effective use of his company's strengths. "Can you turn those eyeballs into dollars and those users into customers?" Evans asked. "I don't think Yahoo maintained fully the GeoCities model and all the things we were doing in GeoCities. What they determined to do was to integrate it into the Yahoo network." The Broadcast.com deal has been criticized as well. The multimedia company's core business of providing streaming technology for internal corporate Webcasts is a shell of its former self, according to people close to the company, and it remains unclear whether the division is drawing any significant advertising. The portals defend their actions as necessary to compete in a world turned upside-down by unrelenting pressure to expand at virtually any cost. Many executives acknowledged the flaws in acquisition strategies but said they were trying to keep up with an insatiable demand by investors to raise stock prices. "By acquiring you were able to add more tonnage into the network, keep your ranking high in Media Metrix, and that was a nice virtuous circle and it supported your stock price," said George Bell, former chief executive of Excite@Home. "It was a silly cycle in a sense that it had no basis in reality." That is a troubling observation, especially if applied to Excite@Home's acquisition of Blue Mountain, for which it agreed to pay $780 million in cash and stock and another $270 million if the site met goals largely measured in traffic gain. The company reasoned that the deal was a way to enlist paid subscribers for its high-speed Net service from the millions of people who sent Web greeting cards through Blue Mountain's site. The result has not been pretty. Excite@Home's stock traded around $40 a share at the time of the Blue Mountain deal but is around $4 a share this week. In January, the company wrote off $4.6 billion in intangible assets for the depreciation of value for both Blue Mountain and Excite.com. The company is rumored to be seeking a buyer for the two entities, though no obvious takers have emerged. Only two years separate 2001 and 1999, but in Internet time, it might as well be a lifetime. "It was shortsightedness," said Joshua Sinel, chief executive of Blue Barn Interactive, a New York community and chat company. "What they bought were eyeballs and traffic. 
But what they failed to realize was that buying people doesn't do much--it's what you do with them." Were underwriters really undertakers? By Sandeep Junnarkar Staff Writer, CNET News.com June 5, 2001, 4:00 a.m. PT As an Internet analyst at investment bank PaineWebber, James Preissler witnessed the birth of traffic as the currency that would fuel the Internet's early commercial history. He was part of the land rush of entrepreneurs, venture capitalists and investment bankers who latched onto traffic--the number of people visiting Web sites--as the measurement to gauge success in the absence of any established business precedents. Their thinking was simple, if not simplistic: Why bother with difficult matters such as revenue and earnings when big traffic was all that was needed for a successful initial public offering? "There was no experience--we were all shooting in the dark," Preissler, now an executive at HelloAsia, an Internet direct-marketing firm, said in an unusually frank interview. "Everyone was making a very tenuous connection between basic metrics they didn't fully understand and some nebulous projections that it would become revenue." Such was the dubious foundation for the house of cards that was to become the digital economy. Start-ups manufactured from business plans drawn on the backs of envelopes were rushed through the IPO process by banks and other institutions in the complicated procedure known as "underwriting." In shepherding a company's stock to the open market, underwriters buy the new securities in preparation for selling them to institutional and retail investors. In the past, underwriting a company meant taking on some risk. But as the Internet bubble grew, the process became akin to minting money at a time when companies routinely expected their stock prices to at least double or triple on their first day out. For the underwriters, the payoff comes from selling the stock at a higher price to the public than what they paid to the company--a practice that led Wall Street firms to reap record revenues from the IPO boom of the late '90s. "Undoubtedly there was hype, and lots of money was made," said one investment banker who requested anonymity. "It is really hard to tell people to make less money: 'Come into work every day and make less money.'" A lesson in objectivity That hype is what others find troubling. Most, if not all, of these underwriters were part of larger financial institutions charged with providing impartial advice on stocks to individual investors. The ability of these institutions to remain objective while benefiting from underwriting certain stocks has long raised questions involving potential conflict of interest. Some investment bankers say they served as the voice of reason, telling prospective companies to cut their projections in half and to create realistic goals. But others say such warnings were the rare exception at the height of the merger frenzy that gripped the industry. "I don't understand what (underwriters) mean by 'being the voice of reason,'" said Fred Taylor Isquith, an attorney at Wolf Haldenstein Adler Freeman & Herz, a law firm that specializes in securities class-action suits. "Underwriters are salesmen; they are committed to selling the stock." Although the issue is not new, the practice of underwriting has fallen under unprecedented scrutiny in no small part because so many investors lost such large amounts of money in the free fall of Internet stock prices. 
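The underwriting economics sketched in the preceding paragraphs come down to two gaps: the spread between what the bank pays the issuer and the offer price, and the "pop" between the offer price and the first-day close. The minimal Python sketch below illustrates that arithmetic; the share count, prices, spread and function name are hypothetical round numbers for illustration, not figures taken from this article.

# Hypothetical illustration of IPO underwriting economics; all inputs are assumed.
def underwriting_economics(shares_offered, offer_price, price_to_issuer, first_day_close):
    """Return the underwriter gross spread, issuer proceeds, money left on the table, and first-day pop (%)."""
    gross_spread = (offer_price - price_to_issuer) * shares_offered       # the underwriters' take
    issuer_proceeds = price_to_issuer * shares_offered                    # what the company actually receives
    money_left_on_table = (first_day_close - offer_price) * shares_offered  # value captured by first-day buyers
    first_day_pop = (first_day_close / offer_price - 1) * 100
    return gross_spread, issuer_proceeds, money_left_on_table, first_day_pop

# Example: 3 million shares at a $10 offer price, a 7% underwriting spread, and a $70 first-day close
# (a 600% pop, in the spirit of the first-day run-ups described in this article).
spread, proceeds, left_on_table, pop = underwriting_economics(
    shares_offered=3_000_000, offer_price=10.00, price_to_issuer=10.00 * 0.93, first_day_close=70.00)
print(f"Spread: ${spread:,.0f}  Proceeds: ${proceeds:,.0f}  "
      f"Left on table: ${left_on_table:,.0f}  First-day pop: {pop:.0f}%")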
Take the case of TheGlobe.com, an online community site, which soared about 606 percent the first day it traded back in November 1998, pumped up by its exuberant traffic numbers--the steroid of choice. The stock has plunged more than 96 percent from its offer price as traffic figures have failed to produce promised revenue. Wall Street's role in this kind of debacle has drawn the attention of Congress. Rep. Richard Baker, R-La., a member of the House Committee on Financial Services, on May 16 announced a hearing tentatively scheduled for mid-June to examine the possible conflict of interest between the investment banks' underwriting branches and their analysts, who purportedly provide unbiased opinions on stocks often held by their own banks. "While the agenda for the hearing has not been set, when you examine possible conflict-of-interest issues in the investment banking business, the IPO question is likely to come up," said Michael DiResto, Baker's press secretary. The voice of reason Not everyone, of course, is willing to indict the entire underwriting industry. Some Web executives defended the banks that helped bring their companies to the public market. "When we first met, (Morgan Stanley) told us we weren't ready to go public and set realistic goals for us," said Mark Cuban, the founder of Broadcast.com. "When we hit (the goals), we pushed forward on the IPO. They did a great job." Cuban's experience, however, may be colored by his eventual financial success: He was one of a handful of executives to cash out before the bubble burst, making him an instant billionaire. Most executives and venture capitalists expressed frustration at any delays, believing the time was ripe to collect a windfall from their investments. "The underwriters tried to keep some semblance of a financial model, but they underwent tremendous criticism for not pricing stocks higher," said David Menlow, president of the IPO Financial Network. Pricing remains a controversial issue that is moving from Wall Street to the courts. A growing number of companies are facing class-action lawsuits filed on behalf of shareholders alleging that preferential deals with underwriters led to artificial demand and pricing. "They decided that the best way to create a hot market is to make it look like a hot market--by creating great expectations of demand and excitement," Isquith said of the underwriters. "Whether they exercise their responsibilities in this market to people they were selling stocks for is a question of some import." Many underwriters said their actions were dictated by the companies they represented. For example, setting the share price of an IPO based on projected traffic growth was always a point of contention. If an investment bank pushed too hard to scale back a company's projections, the start-up could just as easily approach a rival bank during the IPO mania. "From a research point of view, you are trying to make sure the company can meet its projections. So you are trying to cut back on their projections, and that cuts down on their valuation," Preissler said. "That is where the biggest battle would lie--between the companies, the banks, the venture capitalists. That is where all the tension rose." Others explain the phenomenon in more basic terms, as a function of human nature. Few professions are as competitive as the financial world, they note, so the blind rush toward going public was just a matter of survival.
"You are judged against your peers," Andrea Williams Rice of Deutsche Banc Alex Brown said with a heavy sigh. "If your peers have coverage of a promising sector or company that is generating enormous profits for them and you don't, you are putting yourself at a disadvantage." Executives benefit from Street smarts By Larry Dignan Special to CNET News.com June 5, 2001, 4:00 a.m. PT Would you rather be Mark Cuban or Toby Lenk? For those who follow the dot-com world, the answer to that question is easy. Most would choose Cuban, the former Broadcast.com honcho who sold his When cashing out makes sense Bob Davis, vice chairman, Terra Lycos Web streaming site to Yahoo for about $5 billion two years ago and cashed in before the bottom fell out of the dot-com stock market. Cuban went from paper billionaire to the real thing after he swapped his stock certificates for greenbacks. He bought the Dallas Mavericks and now has so much money that he doesn't sweat the $500,000 in fines he owes the National Basketball Association for bad behavior. Lenk, on the other hand, did the opposite. The eToys CEO believed in his company so much that he hardly cashed in any stock options. His optimism cost him about $600 million in paper wealth as his online retail company descended into bankruptcy a little more than a year after its market peak. Although some people seem disappointed with Internet CEOs like Cuban for selling their companies and cashing out, financial planners say these executives are simply following the kind of prudent strategies recommended for any investor, large or small. In the old days, stockbrokers advised clients to sell if they made 20 percent on an investment, a fraction of the exponential gains seen at the height of the New Economy boom. Many top executives realized that the dot-com euphoria of 1999 and early 2000 couldn't last forever. They diversified some holdings to avoid keeping all their eggs in one basket--and to steer clear of the path taken by Lenk, who could not be reached for comment on his investment strategies. "When you garner financial independence, it makes no sense to put it at risk again," said David Diesslin, a financial planner in Fort Worth, Texas, who says it's senseless to begrudge those who took some profits. "Dot-com executives weren't the ones who bid up the stock prices," he said, alluding to day traders and individual investors who fed the frenzy. Perfect timing Lost in the relentless headlines about the dot-com meltdown is a notable list of winners who did fine, often benefiting from the merger mania that seized Web companies trying to grab as much traffic as possible at virtually any cost. Former Lycos CEO Bob Davis sold more than 3.45 million shares worth about $72 million late last year, just months after his search engine was bought by Spanish Internet company Terra Networks, according to regulatory filings. Also in 2000, Eric Greenberg, founder and director for Scient, sold nearly 3.2 million shares with a value of $168.5 million, a sum well above the Internet services company's market capitalization today. And chances are you'd do the same if you suddenly found $100 million in paper profits sitting in your lap. Why risk your future on one stock? As Diesslin said, "At some point you have to protect yourself." Cuban, who witnessed the software, networking and PC stock bubbles in the '80s and '90s, said he knew the dot-com euphoria couldn't last forever. 
Shortly after the Yahoo-Broadcast.com deal closed in July 1999, he used hedging techniques to minimize losses from his options and sell his shares. Contrary to popular belief, Cuban didn't cash out anywhere near the all-time highs. When he sold his stock, Yahoo shares were trading around $90, well short of the $250 high they hit a few months later. But Cuban's not shedding any tears. "It didn't take any genius to figure out what I needed to do," he said in an e-mail interview. "It wasn't so much a diversification strategy as an 'avoiding-the-crash' strategy." Harold Evensky, a financial planner in Coral Gables, Fla., said such first-hand experience with previous market busts is what saved some people who might otherwise have gambled their fortunes on the future. Many workers in Silicon Valley, especially younger ones, had never seen a recession and lacked the kind of risk meter that might have compelled them to sell at least some of their holdings before they collapsed. "The idea is to put a big chunk aside so if it all falls apart, you'll be set," said Evensky, who added that economic manias usually have ugly endings. Evensky recommends that executives not have more than 5 percent of their personal net worth riding on company stock. Financial planners acknowledge that not all executives can cash out at once. That can create political problems within corporations and can hurt a company's standing on Wall Street, as well as raise speculation about insider trading violations. Thus, many executives, such as Microsoft Chairman Bill Gates and eBay CEO Meg Whitman, exercise and sell stock options at regular intervals to minimize scrutiny and speculation. The key for financial planners is flexibility. Executives who took profits ahead of the dot-com train wreck now have other kinds of options--options to send their kids to college, options to choose a new career, and options to fund a new business venture. "Taking care of your family is far more important," Cuban said, adding that he would have second-guessed himself forever if he hadn't cashed out. "As someone who has traded stocks for a long time, I'm a big believer that no one ever got in trouble for taking a profit."
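The article says only that Cuban "used hedging techniques"; one common structure for locking in gains on a large, concentrated stock position is a collar, in which the holder buys a protective put and sells a call against the shares. The Python sketch below is a generic, assumed illustration of a collar's payoff at expiry--the share count and strike prices are hypothetical and this is not a reconstruction of any actual trade.

# Generic payoff of a collar (long stock + long put + short call) on a concentrated position.
# All numbers are hypothetical; this is an illustration, not a reconstruction of an actual trade.
def collar_value_at_expiry(shares, stock_price_at_expiry, put_strike, call_strike):
    """Value at expiry of holding the shares plus a put struck at put_strike and short a call at call_strike."""
    stock_value = shares * stock_price_at_expiry
    put_payoff = shares * max(put_strike - stock_price_at_expiry, 0.0)        # protection below the put strike
    call_obligation = shares * max(stock_price_at_expiry - call_strike, 0.0)  # upside given away above the call strike
    return stock_value + put_payoff - call_obligation

shares = 100_000                       # hypothetical position size
put_strike, call_strike = 85.0, 100.0  # hypothetical strikes around a $90 stock
for price in (40.0, 90.0, 250.0):      # crash, unchanged, and boom scenarios
    value = collar_value_at_expiry(shares, price, put_strike, call_strike)
    print(f"Stock at ${price:>6.2f}: position worth ${value:,.0f}")

Run across the three scenarios, the position's value is floored near the put strike in a crash and capped near the call strike in a rally, which is consistent both with "avoiding the crash" and with forgoing the later run-up to $250.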
correct_subsidiary_00108
FactBench
3
23
https://www.wired.com/2000/05/lycos-terra-merge/
en
Lycos, Terra Merge
https://media.wired.com/photos/669a5840ea323ec07ffe3042/1:1/w_350%2Ch_350%2Cc_limit/undefined
[ "https://www.wired.com/verso/static/wired/assets/logo-header.svg", "https://media.wired.com/photos/669a5840ea323ec07ffe3042/1:1/w_350%2Ch_350%2Cc_limit/undefined", "https://media.wired.com/photos/669afdd90f33b021ab3d7a68/1:1/w_350%2Ch_350%2Cc_limit/undefined", "https://media.wired.com/photos/668d8cb66e7a6e1090cf3509/1:1/w_350%2Ch_350%2Cc_limit/undefined", "https://media.wired.com/photos/66984bf6ff3e60a7d8e06457/1:1/w_350%2Ch_350%2Cc_limit/undefined", "https://media.wired.com/photos/66984bf6ff3e60a7d8e06457/16:9/w_800%2Ch_450%2Cc_limit/undefined", "https://media.wired.com/photos/66958889b908de73c50ff0ff/16:9/w_800%2Ch_450%2Cc_limit/undefined", "https://media.wired.com/photos/6675620b977914a7eef833b9/16:9/w_800%2Ch_450%2Cc_limit/undefined", "https://media.wired.com/photos/6699293c2081787c6288c8a5/16:9/w_800%2Ch_450%2Cc_limit/undefined", "https://media.wired.com/photos/668e9afe53da6047f098c514/16:9/w_800%2Ch_450%2Cc_limit/undefined", "https://www.wired.com/verso/static/wired/assets/logo-reverse.svg" ]
[]
[]
[ "" ]
null
[ "WIRED Staff", "Lily Hay Newman", "Julian Chokkattu", "Lauren Smiley", "Steven Levy", "Annie Gilbertson", "Nicola Twilley", "Will Knight", "Amanda Hoover", "Condé Nast" ]
2000-05-16T13:45:00-04:00
The Spanish online company acquires Lycos, one of the last remaining independent portals. Also: Online voter registration.... and more.
en
https://www.wired.com/verso/static/wired/assets/favicon.ico
WIRED
https://www.wired.com/2000/05/lycos-terra-merge/
NEW YORK -- Spanish Internet group Terra Networks SA agreed Tuesday to buy U.S. Internet search company Lycos Inc. for $12.5 billion in stock, or $97.55 a share, in a move to create one of the world's largest Internet companies and broaden its geographic reach. The combined company, which will be called Terra Lycos Inc., will have pro forma 2000 revenues of about $500 million; the two companies together currently have an estimated 50 million unique users and 175 million page views per day. Juan Villalonga, who is chairman of both Telefonica and Terra, will serve as chairman of the merged company, while Robert Davis, currently Lycos president and chief executive, will be its chief executive. As part of the agreement, Germany's Bertelsmann AG, the third-largest media company in the world, agreed to purchase $1 billion of advertising, placement and integration services from Terra Lycos worldwide over five years. Terra Lycos, meanwhile, will gain access to Bertelsmann's books, music, television, film, and other media content, on preferred terms. This alliance builds on the existing Lycos-Bertelsmann joint venture in Europe, Lycos Europe, of which Bertelsmann will remain a significant shareholder. Lycos is the parent company of Wired News. - - - Register to vote: ABCNEWS.com has reached an agreement with OnlineDemocracy.com that makes ABCNEWS.com the first news site to offer voter registration over the Internet. The deal, effective immediately, makes it easier for citizens in 46 states to either register for the first time or to change their voter registration information. Registration forms can be obtained at ABCNEWS.com's politics section. - - - Scotched: The maker of Scotchgard waterproofing products will stop manufacturing part of its line after tests revealed that the chemical compounds involved can linger in the environment and in the human body for several years. Minnesota Mining and Manufacturing, better known as 3M, said Tuesday that it would phase out product lines that contain perfluorooctanyl, including the Scotchgard that is commercially applied to carpets. The affected product lines account for about $300 million in annual sales, or about 2 percent of total 3M sales. - - - They sing, they talk: GetMusic said Tuesday that it will host a weekly, hour-long talk show featuring sit-down interviews with band members and music personalities. The show, with Rolling Stone magazine contributor Anthony DeCurtis doing the interviews, debuts on May 24 at 8 p.m. EDT. The band Phish will be first up for the webcast. Reuters contributed to this report.
correct_subsidiary_00108
FactBench
0
21
https://scripophily.net/lycos-inc-early-internet-search-engine-company-rare-delaware-1998/
en
Lycos, Inc. (Early internet search engine company) RARE - Delaware 1998
https://cdn11.bigcommerc….386.513.jpg?c=1
https://cdn11.bigcommerc….386.513.jpg?c=1
[ "https://www.facebook.com/tr?id=601854419985613&ev=PageView&noscript=1&a=plbigcommerce1.2&eid=store-10-prd-us-central1-144068179216", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/150x150/scripophily_com_logo_1628181008__88296.original.png", "https://cdn11.bigcommerce.com/s-gvae5krt3k/product_images/uploaded_images/rmsmythe-logo.jpg", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/original/image-manager/design-1720637128638.jpg?t=1720637334", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/6659/5606/lycos-inc-delaware-43__33137.1624995174.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/50x50/products/6659/5606/lycos-inc-delaware-43__33137.1624995174.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/6659/5606/lycos-inc-delaware-43__33137.1624995174.jpg?c=1", "https://www.scripophily.com/webcart/vigs/lycosincvig.jpg", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/3468/10591/ask-jeeves-inc-famous-search-engine-company-50__49279.1624995284.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/3557/8648/nbc-internet-inc-nbci-delaware-53__19817.1624995238.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/7845/10448/penske-motorsports-inc-delaware-1998-50__43417.1624995281.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/10185/1668/paraffine-companies-inc-specimen-delaware-50__30966.1624995063.jpg?c=1", "https://cdn11.bigcommerce.com/s-gvae5krt3k/images/stencil/500x659/products/3761/6708/daylin-inc-delaware-drug-company-50__81825.1624995198.jpg?c=1" ]
[]
[]
[ "scripophily", "stock certificates", "rm smythe", "antique stocks", "gifts", "old bonds", "old stocks", "old stock research", "liberty loan bonds", "autographs", "old stock exchange", "wall street", "paper money", "mining stocks", "aviation stocks", "war bonds", "gold mines", "mining", "confederate bonds", "" ]
null
[]
null
Lycos, Inc. (Early internet search engine company) RARE - Delaware 1998
en
https://cdn11.bigcommerc…png?t=1627328091
Scripophily.com | Collect Stocks and Bonds | Old Stock Certificates for Sale | Old Stock Research | RM Smythe |
https://scripophily.net/lycos-inc-early-internet-search-engine-company-rare-delaware-1998/
Lycos, Inc. (Early internet search engine company) RARE - Delaware 1998. MSRP: $250.00; current price: $195.00 (a saving of $55.00). SKU: newitem91822682drbs.
correct_subsidiary_00108
FactBench
3
74
https://www.ciol.com/ibm-terra-lycos-sign-pact-2
en
IBM, Terra Lycos sign pact
https://img-cdn.thepubli…6GM3VmrXNw5G.png
https://img-cdn.thepubli…6GM3VmrXNw5G.png
[ "https://www.facebook.com/tr?id=1620658645380866&ev=PageView&noscript=1", "https://img-cdn.thepublive.com/fit-in/580x326/filters:format(webp)/ciol/media/agency_attachments/c0E28gS06GM3VmrXNw5G.png", "https://www.ciol.com/static/images/svg%20icons/google.svg", "https://www.ciol.com/static/images/svg%20icons/facebook-logo.svg", "https://www.ciol.com/static/images/svg%20icons/cross_svg.svg", "https://www.ciol.com/static/images/svg%20icons/user-avatar.svg", "https://www.ciol.com/static/images/google_news.png", "https://www.ciol.com/static/images/svg%20icons/newsletter_new_icon.svg" ]
[]
[]
[ "" ]
null
[ "CIOL Bureau" ]
2001-07-27T00:54:13+05:30
en
https://img-cdn.thepubli…atars/ciol 2.png
CIOL
https://www.ciol.com/ibm-terra-lycos-sign-pact-2/
NEW YORK: IBM Corp. and Spain's Terra Lycos said on Thursday they struck a two-year agreement that makes the world's largest computer maker the main provider of servers, enterprise software and services for the Internet media company. In a separate deal, the companies said in a statement that IBM will advertise on Terra Lycos' network of sites. Terra Lycos was created by the merger of Spanish telecom giant Telefonica SA's Internet arm and US-based Internet company Lycos last year. The deal comes at a time when Internet media companies are aggressively trying to sign up blue-chip clients in the wake of a sharp decline in advertising spending. Financial terms of both deals were not disclosed, but the company said the hardware and services component of the deal will amount to "tens of millions of dollars" over the term of the pact. As part of the technology and marketing deal, IBM will provide the products and services for Terra Lycos' worldwide e-business infrastructure, helping the company serve its Web surfers and handle more customers while cutting operating costs. The IBM systems will tie together Terra Lycos' Web-serving, enterprise resource planning, and customer relationship management systems. The companies have also agreed to explore marketing opportunities in the future, as well as collaborate on research. "Today's announcement creates a powerful relationship between IBM and Terra Lycos," said Joaquim Agut, executive chairman of Terra Lycos, in a statement. "The pairing of the two global leaders provides a platform for technology exchanges and collaboration on strategic projects." The agreement is similar to deals struck by Terra Lycos rival AOL Time Warner Inc., which has become known for the myriad marketing deals it has struck this year with different partners, including some of its technology suppliers. The deal comes ahead of Terra Lycos' quarterly earnings report next week. The company, like many of its peers, has been hit by the ad slump created by the economic slowdown and dot-com bust, and investors will gauge the company on its cost-cutting efforts and progress toward profitability.
correct_subsidiary_00108
FactBench
3
1
https://www.cnet.com/tech/services-and-software/lycos-bought-in-first-foreign-portal-deal/
en
Lycos bought in first foreign portal deal
https://www.cnet.com/a/n…s/logos/cnet.png
https://www.cnet.com/a/n…s/logos/cnet.png
[ "https://www.cnet.com/i/ne/pre/Emed/2000/05/0516lcosstockchart.gif" ]
[]
[]
[ "" ]
null
[ "Jim Hu" ]
2002-01-03T00:43:56+00:00
Terra Networks buys Lycos in a deal valued at $12.5 billion, marking the first time a U.S. portal has been acquired by a foreign company.
en
/apple-touch-icon-v3.png
CNET
https://www.cnet.com/tech/services-and-software/lycos-bought-in-first-foreign-portal-deal/
Terra Networks said today it would buy Lycos in an all-stock deal valued at $12.5 billion, marking the first time a U.S. portal has been acquired by a foreign company. Terra, a Spanish Internet service provider, is offering a stock swap that values Lycos around $97.55 a share, assuming Terra's stock does not fall more than 20 percent. Bertelsmann, the European media giant, is going to buy advertising and services from the combined company valued at $1 billion over five years and will offer preferential access to its content. The combined company will be based in Waltham, Mass., with Lycos chief executive Bob Davis and chief financial officer Ted Philip remaining in their posts. Juan Villalonga, the chairman of Terra Networks' telecommunications parent, Telefonica, will be chairman. Called Terra Lycos, the company will have operations in 37 countries, including in high-growth markets in North America, Latin America, Asia and Europe. The deal is expected to close in the third quarter of this year, provided it receives shareholder and regulatory approval. "Overnight in one fell swoop, this company has jumped from strong Internet competitor to a global powerhouse," said Lycos' Davis. "In one transaction, we have truly transformed the industry." Added Villalonga: "The combination of Terra and Lycos, supported by strategic relationships with Telefonica and Bertelsmann, creates a new media powerhouse with a scale unmatched by any other Internet or media company." Despite the run-up in Lycos' shares in anticipation of the deal today, they are still trading well below the announced acquisition price. "It's too early to read into the price gap between the stocks," said Arthur Newman, an Internet analyst at ABN Amro. "There was a gap between Time Warner and AOL when that deal was announced, but it closed in a few days." Newman says that it takes some time for the news to filter into the markets, and that people prefer to mull over the terms of a deal before moving in either direction. He believes investors that specialize in arbitrage will hold back initially. "They are the people who will close the gap," he said. "The massive trades the arbitrage investors make, in this case shorting Terra Networks and hoarding Lycos, will eventually close the difference, but the deal-makers like to know the landscape before they dive in." Although a shakeout in the portal market has been expected for some time, the deal raises the stakes for global Internet competition. Terra Lycos also will own 49 percent of a new wireless joint venture being formed in partnership with Telefonica, the telecommunications giant that is Terra Networks' parent. Telefonica will underwrite a $2 billion rights offering by Terra Networks before the close of the deal. As a result, Terra Lycos is expected to have more than $3 billion in cash after the offering, making it "one of the most highly capitalized Internet companies in the world," Terra Lycos said in a statement. The Terra Lycos board will have 14 members, including Villalonga and 10 other Terra Networks designees, as well as Davis, Philip and one other Lycos designee. Davis also will join the board of Telefonica Media, the company's media subsidiary. Lazard Freres served as financial adviser to Terra Networks, and Credit Suisse First Boston served as financial adviser to Lycos. Terra and Lycos said they expect pro forma 2000 revenues of approximately $500 million; the companies said that together they have an estimated 50 million unique users and 175 million page views per day. 
Both companies lost money last year. Lycos lost $52 million on $135.5 million in sales, and Terra lost 173 million euros ($157 million) on sales of $78.5 million. Serious competition The Spanish- and Portuguese-speaking market is considered a choice field. In Latin America alone, analysts predict the number of Internet users to jump sevenfold by 2003. That potential has sparked intense competition among Internet companies, which are planting their flags all across Latin America. A host of smaller sites, such as New York-based StarMedia Network and Argentine El Sitio, aim to gain a foothold in this market, alongside major Latin media companies. Grupo Televisa, the biggest Spanish-language media group in the world and Mexico's dominant broadcaster, is set to launch its own Web portal, Esmas.com. The trend has even drawn major Latin stars to back some of these ventures. Earlier this year, Julio Iglesias and Latin American TV star Don Francisco invested in one of the new Latin America-focused Internet firms, Aplauso.com, which will go live this summer. These companies face increasing competition from U.S.-based giants such as Microsoft, Yahoo and America Online. AOL plans to take its Latin American unit public and to launch subscription-based online services and Internet portals throughout the region. In October, Microsoft said it would join with Telefonos de Mexico, the biggest Mexican telephone company, to develop a Spanish-language Internet portal for the Americas. Brazil is already the scene of a turf war between the region's largest Internet provider, Universo Online, and AOL's Latin American subsidiary. Last month, Universo, which has exclusive rights to many of the country's magazines, slashed prices to compete with AOL, which began flooding the country with free Internet access offers last month in a bid to win some 700,000 Brazilian customers.
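The arbitrage mechanics described above--shorting the acquirer and buying the target until the gap closes--reduce to simple arithmetic on the exchange ratio: each target share ultimately converts into a fixed number of acquirer shares, so its implied value is that ratio times the acquirer's price. The Python sketch below is a hypothetical illustration only; the article quotes just the headline value of roughly $97.55 a share, so the exchange ratio and prices used here are assumptions chosen to echo that figure, not the actual deal terms.

# Minimal merger-arbitrage arithmetic for a stock-for-stock deal; all inputs are hypothetical.
def merger_arb_spread(exchange_ratio, acquirer_price, target_price):
    """Return the implied value per target share and the gross spread as a percentage of the target price."""
    implied_value = exchange_ratio * acquirer_price          # what one target share converts into
    gross_spread_pct = (implied_value / target_price - 1) * 100
    return implied_value, gross_spread_pct

# Hypothetical figures: ratio and prices chosen to echo the ~$97.55 headline value.
implied, spread = merger_arb_spread(exchange_ratio=1.433, acquirer_price=68.06, target_price=72.00)
print(f"Implied deal value: ${implied:.2f} per Lycos share; gross spread: {spread:.1f}%")
# An arbitrageur would typically buy the target and short exchange_ratio acquirer shares per target share,
# earning the spread if the deal closes on the assumed terms.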
correct_subsidiary_00108
FactBench
3
62
https://hispanicad.com/news/terra-lycos-hosted-havana-film-festival-2001/
en
Terra Lycos Hosted Havana Film Festival 2001.
https://hispanicad.com/hispanicad.jpg
https://hispanicad.com/hispanicad.jpg
[ "https://hispanicad.com/wp-content/uploads/2021/06/hispanicad-logo.png", "https://hispanicad.com/wp-content/uploads/2021/08/hispanicad-logo-mobile-sm.png", "https://hispanicad.com/wp-content/uploads/2024/06/JULY-2024-JEFES-NN-728-x-180-px.gif", "https://hispanicad.com/wp-content/uploads/2021/07/Dex_driving_economy_970x250_LG.gif", "https://hispanicad.com/wp-content/uploads/2021/07/aimm-logo.gif", "https://hispanicad.com/wp-content/uploads/2024/06/JULY-2024-JEFES-NN-180-x-150-px.gif", "https://hispanicad.com/wp-content/uploads/2024/01/25celebration-2024-180.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/Hispanic-Ad-button.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/Dex_HispanicAd_Census_180x150.gif", "https://hispanicad.com/wp-content/uploads/2021/07/estrella-logo-2021_1.gif", "https://hispanicad.com/wp-content/uploads/2021/07/lopeznegrete.gif", "https://hispanicad.com/wp-content/uploads/2021/07/alma-2020_0.gif", "https://hispanicad.com/wp-content/uploads/2021/03/hmc-2021-logo-1.jpg", "https://hispanicad.com/wp-content/uploads/2024/02/MEL-2024.jpg", "https://hispanicad.com/wp-content/uploads/2024/04/DANGERTV_MULTICULTI_180.gif", "https://hispanicad.com/wp-content/uploads/2024/06/Los-40-Hipanic-Ad-180-x-150-px.gif", "https://hispanicad.com/wp-content/uploads/2021/08/unanimo-2022-1.gif", "https://hispanicad.com/wp-content/uploads/2021/12/report-cover-hcds-2023-464x600.jpg", "https://hispanicad.com/wp-content/uploads/2024/03/TL-HispanicAd-Button-C-1.gif", "https://hispanicad.com/wp-content/uploads/2024/01/25celebration-2024-180.jpg", "https://hispanicad.com/wp-content/uploads/2021/03/hmc-2021-logo-1.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/Video-gif-Entravision-is.gif", "https://hispanicad.com/wp-content/uploads/2021/07/Dex_HispanicAd_Census_180x150.gif", "https://hispanicad.com/wp-content/uploads/2021/12/hispanic-market-thought-leaders-2023-471x600.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/lopeznegrete.gif", "https://hispanicad.com/wp-content/uploads/2021/07/estrella-logo-2021_1.gif", "https://hispanicad.com/wp-content/uploads/2021/07/alma-2020_0.gif", "https://hispanicad.com/wp-content/uploads/2021/07/aimm-logo.gif", "https://hispanicad.com/wp-content/uploads/2023/01/ana-2023-logo.jpg", "https://hispanicad.com/wp-content/uploads/2021/06/HispanicAd-Button-copy.jpg", "https://hispanicad.com/wp-content/uploads/2023/08/CTV-HispanicAd-Button-180x150-1.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/estrella-logo-2021_1.gif", "https://hispanicad.com/wp-content/uploads/2024/06/report-cover-hispanic-market-guide-2024-464x600.jpg", "https://hispanicad.com/wp-content/uploads/2024/03/TL-HispanicAd-Button-C-1.gif", "https://hispanicad.com/wp-content/uploads/2021/07/2022-INFUSION-by-castells-Logo-CMYK-Black.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/Video-gif-Entravision-is.gif", "https://hispanicad.com/wp-content/uploads/2024/01/25celebration-2024-180.jpg", "https://hispanicad.com/wp-content/uploads/2021/12/report-cover-hispanic-tv-programming-2024-464x600.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/lopeznegrete.gif", "https://hispanicad.com/wp-content/uploads/2021/07/RP1290859_24_RH_BRAND_REFRESH_HISPANIC_AD_RTP.jpg", "https://hispanicad.com/wp-content/uploads/2021/03/hmc-2021-logo-1.jpg", "https://hispanicad.com/wp-content/uploads/2024/04/HMO-2024-400.jpg", "https://hispanicad.com/wp-content/uploads/2024/06/Los-40-Hipanic-Ad-180-x-150-px.gif", 
"https://hispanicad.com/wp-content/uploads/2021/07/2022-INFUSION-by-castells-Logo-CMYK-Black.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/estrella-logo-2021_1.gif", "https://hispanicad.com/wp-content/uploads/2021/07/alma-2020_0.gif", "https://hispanicad.com/wp-content/uploads/2021/06/HispanicAd-Button-copy.jpg", "https://hispanicad.com/wp-content/uploads/2021/07/RP1290859_24_RH_BRAND_REFRESH_HISPANIC_AD_RTP.jpg", "https://hispanicad.com/wp-content/uploads/2023/01/ana-2023-logo.jpg", "https://hispanicad.com/wp-content/uploads/2022/01/ttot-1.gif", "https://hispanicad.com/wp-content/uploads/2022/02/casanova-180-1.gif", "https://hispanicad.com/wp-content/uploads/2021/12/hispanic-market-thought-leaders-2023-471x600.jpg", "https://hispanicad.com/wp-content/uploads/2023/08/Copa-America-2024-Banners-3-1-1.jpg", "https://hispanicad.com/wp-content/uploads/2021/03/hmc-2021-logo-1.jpg", "https://hispanicad.com/wp-content/plugins/cookie-law-info/legacy/public/images/logo-cookieyes.svg" ]
[]
[]
[ "" ]
null
[]
null
en
/apple-touch-icon.png
https://hispanicad.com/news/terra-lycos-hosted-havana-film-festival-2001/
Terra Lycos announced the launch of the Havana Film Festival 2001 site. The Havana Film Festival took place in New York City April 16-23, and in exchange for hosting the site, Terra.com received extensive brand exposure as one of the main sponsors of the event. The site, located at http://www.terra.com/specials/havana/, features descriptions of the 30 Cuban films to be showcased during the festival, including streaming previews of some of the films. The site also includes ticket and special pass information, film schedules, actor and director biographies, and an additional 35 movies from 12 other Spanish-speaking countries. The site, a collaborative effort by the Terra and Lycos portals, is hosted on Terra, while the "Festival Community" links to the Chat, Club and Gallery sites on Lycos Communities http://clubs.lycos.com/live/Directory/CommunityHome.asp?CG=treppc4k3jah3bvh0188nkobks. The Gallery currently features photos from some of the films to be shown at this year's festival.
correct_subsidiary_00108
FactBench
3
35
https://www.slideshare.net/slideshow/terra-lycos-go-get-itin-a-few-short-years-the-internet-hasdocx/255471114
en
Terra Lycos Go Get It! In a few short years, the Internet has.docx
https://cdn.slidesharecd…t=640&fit=bounds
https://cdn.slidesharecd…t=640&fit=bounds
[ "https://public.slidesharecdn.com/images/next/logo-slideshare-scribd-company.svg?w=128&q=75 1x, https://public.slidesharecdn.com/images/next/logo-slideshare-scribd-company.svg?w=256&q=75 2x", "https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/85/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-320.jpg 320w, https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/85/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-638.jpg 638w, https://image.slidesharecdn.com/terralycosgogetitinafewshortyearstheinternethas-230123041644-3c52f0c9/75/Terra-Lycos-Go-Get-It-In-a-few-short-years-the-Internet-has-docx-1-2048.jpg 2048w" ]
[]
[]
[ "" ]
null
[]
2023-01-23T04:16:44+00:00
Terra Lycos Go Get It!   In a few short years, the Internet has.docx - Download as a PDF or view online for free
en
https://public.slidesharecdn.com/_next/static/media/favicon.7bc3d920.ico
SlideShare
https://www.slideshare.net/slideshow/terra-lycos-go-get-itin-a-few-short-years-the-internet-hasdocx/255471114
1. Terra Lycos: Go Get It! In a few short years, the Internet has revolutionized the way companies do business. Of course, there have been huge successes as well as painful failures among the companies that have embraced the Internet--particularly those that have relied on the Internet for their very survival. But overall, the Internet offers global opportunities for a variety of individuals and organizations. One of those is Lycos Inc. Founded in 1995, Lycos Network was initially an Internet portal--an entryway much like its larger competitors Yahoo! and America Online. Within a few years, experts predicted that the company would capsize in the Web, swamped by its giant competitors. "We were in danger of being an afterthought in early 1998," recalls Lycos chief financial officer Edward Philip. But a series of changes has turned Lycos around. Today, according to industry watcher Media Metrix, the company's collection of sites is the fourth-largest destination for people using the Web. "We had less funding and were late to market, yet we beat the odds and have flourished," boasts CEO Bob Davis. The company also has a new name: Terra Lycos. More on that later. Lycos saved itself largely through a series of alliances and acquisitions, along with the introduction of new tools and services that benefit both consumers and business customers. One service, the "Lycos Daily 50 Report," helps marketers follow emerging consumer trends by tracking the topics that typical users search the Internet for. The report is simply a list of the fifty most popular search terms of the past seven days. It removes company names, porn sites, and Internet utility terms such as "chat room" and comes up with the fifty most useful words and phrases. "Our goal is to create an up-to-date list of the people, places, and things that Internet users are interested in," explains Jonathan Levine, director of content development. "It's a great way for people to stay current. For marketers, this tool can be used to get an idea about emerging consumer trends." This is just one way that the Lycos site helps create opportunities for other businesses. During the past few years, Lycos has allied with or acquired companies such as Tripod Inc. and HotBot. Lycos and Bell Canada created a new company called Sympatico-Lycos, which would provide Canadians with expanded Internet resources for the business-to-business market. In the fall of 2000, Lycos became the "exclusive community provider for the Olympic Games," hosting and managing all Olympic athlete chats, message boards, and fan clubs for the Sydney Olympics. McDonald's joined the party as a sponsor of the Lycos Olympic site, in exchange for featured advertising. "This is a powerful combination linking two global leaders in support of the Sydney Olympic Games, and we look forward to continuing to work with McDonald's to further leverage the strengths of both companies," stated Jeff Bennett, senior vice president of corporate development at Lycos. Later, Lycos Asia received a license from the Chinese government to operate one of China's first foreign-owned Web sites. Previously, foreign-owned Web companies could function only through partnerships with Chinese institutions that would exert control over operations. While all of these alliances are potential opportunities, they also increased the complexity of the company--and the complexity of its problems. So, Lycos hired its first chief information officer, Tim Wright.
"They were looking for someone with experience in acquisitions, someone who knew how to handle multiple staffs of skilled people and knew how to blend disparate pieces together," Wright explains. In other words, Wright's job was to figure out how to weave technology and people together in a way that allowed workers and managers in the acquired companies to continue to do what they do best. 3. He also showed them how their relationship with Lycos could actually increase their business. "We let [acquired companies] know right away that we can help them by redirecting our traffic to their site and re-circulating traffic back their way," says Wright. But the biggest deal for Lycos was still to come. The company agreed to be acquired by Spanish Internet service provider Terra Networks in a stock swap that valued Lycos at around $12.5 billion, with the idea that the merger would begin to create a megaportal to the Internet that would dominate Europe and Latin America. Pep Valles, the founder of Terra, views the deal as the global opportunity of a lifetime. "Who hits first hits twice," he remarks, repeating an old Spanish saying. "On the Internet, who hits first hits ten times." He sounds a bit like the first Lycos television commercial, which brought Lycos to the attention of many American consumers. The ad featured a black lab retriever named Lycos who streaked back and forth from the edge of the world to his owner, finding anything that his owner asked for. "Go get it!" the voice of Lycos's owner commanded. And Lycos did. QUESTIONS 1. Using information in the Text, outline three ways that you think Terra Lycos could help other businesses create opportunities for themselves using the Internet. 2. What methods might Terra Lycos use to measure the effectiveness of the various Web sites of its affiliates and subsidiaries? 3. Identify three challenges that managers of Terra Networks and Lycos will likely face as they merge the two organizations.
correct_subsidiary_00108
FactBench
3
15
https://publishing.insead.edu/case/terra-lycos-creating-a-global-and-profitable-integrated-media-company
en
Terra Lycos: Creating a Global and Profitable Integrated Media Company
https://publishing.insea…s-2022-small.jpg
https://publishing.insea…s-2022-small.jpg
[ "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/png/icon_cart.png", "https://publishing.insead.edu/themes/custom/case_publishing/logo.svg", "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/png/icon_cart.png", "https://publishing.insead.edu/sites/publishing/files/Soumitra-Dutta-45_4.jpg", "https://publishing.insead.edu/sites/publishing/files/Theodoros-Evgeniou-13201_52.jpg", "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/popular@3x.png", "https://publishing.insead.edu/themes/custom/case_publishing/dist/images/extra@3x.png", "https://publishing.insead.edu/sites/publishing/files/logo-sorbonne-white_0.png", "https://publishing.insead.edu/sites/publishing/files/wharton-logo.png", "https://publishing.insead.edu/sites/publishing/files/tshinghua-logo.png" ]
[]
[]
[ "Information products", "Information economy", "Versioning", "Customer relationship management", "Portals", "Media", "Dynamic pricing" ]
null
[]
null
This case recounts the strategy of Terra Lycos, an integrated global media company formed by the October 2000 merger of Spain’s Terra Networks and US-based Lycos, to achieve profitability and a leading market position. At the time the case was written (November 2001), Terra Lycos trailed its three heavyweight contenders, AOL-Time Warner, Yahoo! and Microsoft/MSN.
en
/sites/publishing/files/favicon-16x16_2.png
https://publishing.insead.edu/case/terra-lycos-creating-a-global-and-profitable-integrated-media-company
This case recounts the strategy of Terra Lycos, an integrated global media company formed by the October 2000 merger of Spain’s Terra Networks and US-based Lycos, to achieve profitability and a leading market position. At the time the case was written (November 2001), Terra Lycos trailed its three heavyweight contenders, AOL-Time Warner, Yahoo! and Microsoft/MSN. It is intended to prompt discussion of online media and the strategy of a major industry player in uncertain and negative market conditions. The case traces Terra Lycos’ creation and its strategy to become an international player during the new economy slowdown through diversifying revenue streams and integrating online and offline media.
correct_subsidiary_00108
FactBench
2
59
https://news.brp.com/corporate-governance/board-of-directors/
en
Board of Directors
https://news.brp.com/sites/g/files/knoqqb88811/files/favicon.ico
https://news.brp.com/sites/g/files/knoqqb88811/files/favicon.ico
[ "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/1477526891938.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/1544647398747.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ecom-sidemenu-opener.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/1544647398747.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ecom-closebutton.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ecom-close.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ICON_Find_Store.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ICON_Find_Store.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ICON_Find_Store.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ecom-assistance.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Elaine_rev.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Pierre%20Beaudoin_headshot_resized.jpg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/bekenstein.jpg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Jos%C3%A9%20Boisjoli_headshot_resized.jpg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Bombardier%2C%20Charles%20headshot.PNG", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Ernesto%20Hern%C3%A1ndez.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Kathy%20Kountze.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/EstelleMetayer-Photo.jpg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Nomicos_nich_small.jpg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Edward%20Philip.jpeg", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Michael_Ross.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Barbara%20Samardzich%2C%20Corporate%20Director.jpg", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/ecom-email.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/facebook-icon-hover.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/twitter-icon-hover.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/youtube-icon-hover.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/linkedin-icon-hover.png", "https://news.brp.com/sites/g/files/knoqqb88811/themes/site/nir_pid3035/dist/images/logo-brp_v4.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/ski-doo_0.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/lynx.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/seadoo.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/canam.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/rotax.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/alumacraft_2.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/Manitou-logo.png", "https://news.brp.com/system/files-encrypted/nasdaq_kms/inline-images/QUINTREX-MY23-Logo-REVERSE-RGB.png" ]
[]
[]
[ "" ]
null
[]
null
The Investor Relations website contains information about BRP's business for stockholders, potential investors, and financial analysts.
en
/sites/g/files/knoqqb88811/files/favicon.ico
BRP
https://news.brp.com/corporate-governance/board-of-directors/
Ms. Élaine Beaudoin has been Vice-President and director of Beaudier, a private holding company which holds Multiple Voting Shares, since 2019. She is a member of several other boards of directors, including Armtex Inc., Hebdo-litho, Bodycad Inc. and the J. Armand Bombardier Foundation. She also sat on the board of directors of Canam Inc. from 2000 to 2017, chairing its Human Resources Committee and serving as a member of its Audit Committee. From 1989 to 1998, she acted as Chief Executive Officer of Unifix Inc., a company specializing in the manufacturing of light-weight concrete panels. Ms. Élaine Beaudoin is a graduate of McGill University and a member of the Ordre des comptables professionnels agréés du Québec (Québec CPA Order). She holds the ICD.D designation from the Institute of Corporate Directors. Member of the Human Resources and Compensation Committee Member of the Nominating, Governance and Social Responsibility Committee Mr. Beaudoin is a corporate director. Mr. Beaudoin joined the Marine Products division of Bombardier Inc. in 1985. In October 1990, he was appointed Vice-President, Product Development of the Sea-Doo/Ski-Doo division. In 1992, he was appointed Executive Vice-President of the Sea-Doo/Ski-Doo division and became President of Bombardier Inc. in January 1994. In April 1996, he was promoted to President and Chief Operating Officer of Bombardier Recreational Products. In February 2001, he was appointed President of Bombardier Aerospace Services Limited, Business Aircraft and he became President and Chief Operating Officer of Bombardier Aerospace Services Limited in October of the same year. On December 13, 2004, in addition to his duties as President and Chief Operating Officer of Bombardier Aerospace Services Limited, he was appointed Executive Vice-President of Bombardier Inc. and became a member of the board of directors of Bombardier Inc. On June 4, 2008, he was appointed President and Chief Executive Officer of Bombardier Inc. and served until 2015. He became Executive Chairman of the board of directors of Bombardier Inc. in February 2015 and Chairman of the board of directors in July 2017. He has also been a member of the board of directors of Power Corporation of Canada since 2005. Mr. Beaudoin studied Business Administration at Collège Jean-de-Brébeuf and Industrial Relations at McGill University in Montreal. Member of the Human Resources and Compensation Committee Member of the Nominating, Governance and Social Responsibility Committee Mr. Bekenstein is a Senior Advisor at Bain Capital Investors, LLC. Prior to joining BCI in 1984, Mr. Bekenstein spent several years at Bain & Company, Inc., where he was involved with companies in a variety of industries. Mr. Bekenstein is a member of the board of directors and the Human Resources and Compensation Committee of Dollarama Inc. He also serves as a director of Bright Horizons Family Solutions Inc., for which he is a member of the Compensation Committee. He was a member of the board of directors and the Nominating and Governance Committee of Canada Goose Holdings Inc. until 2023. Mr. Bekenstein received a Bachelor of Arts from Yale University and a Master of Business Administration (MBA) from Harvard Business School. Chair of the Board of Directors Member of the Investment and Risk Committee Mr. Boisjoli has been Chair of the Board of Directors of BRP since 2019 and President and Chief Executive Officer of BRP since December 2003, when BRP became a standalone company. In October 1998, Mr.
Boisjoli was named President of the Snowmobile and Watercraft division, the largest division of Bombardier Recreational Products Inc. In April 2001, he was given the added responsibility of managing the ATV division. Mr. Boisjoli joined Bombardier Recreational Products Inc. in 1989, after eight years in the pharmaceutical and road safety equipment industries. Mr. Boisjoli served on the board of directors of McCain Foods Group Inc. from January 2018 to February 2022. In April 2005, Mr. Boisjoli received the prestigious title of Executive of the Year from Powersports Magazine, the most important powersports magazine in the United States; he was named Entrepreneur of the Year, Québec, by EY in 2014, CEO of the Year 2017 by the Canadian business newspaper Les Affaires, and Global Visionary of the Year in 2023 by the Globe and Mail. Mr. Boisjoli received a Bachelor of Engineering from the Université de Sherbrooke. Member of the Investment and Risk Committee Mr. Bombardier is a corporate director. He was hired by BRP in 1989, and he later joined the R&D team to develop advanced vehicle concepts (Can-Am, Ski-Doo & Spyder). In 2006, he left the family business and created Jophem Holdings to finance startups, design new vehicle concepts and build prototypes in collaboration with universities. For 10 years, Mr. Bombardier also operated two BRP dealerships in Québec. Between 2017 and 2019, he worked as a senior consultant for the International Civil Aviation Organization (ICAO). He has also been a member of the board of directors of Bombardier Inc. since 2019. Mr. Bombardier is an engineer and holds bachelor's and master's degrees in science from the École de technologie supérieure and a certificate in board governance from Université Laval. Member of the Investment and Risk Committee Member of the Audit Committee Mr. Hernández is a corporate director who has over 40 years of engineering, sales, marketing and operations experience in the automotive industry. After starting his career at General Motors (Mexico) in 1980 as a Development Engineer, he worked in several positions including Engineering Manager, Executive Engineer, and Marketing Director. In 2003, he was appointed Vice-President of General Motors de México and Executive Director of Sales, Service and Marketing, where he successfully led the commercial operations of various brands including Chevrolet, Buick, GMC and Cadillac. In 2011, he took the helm as the first Mexican national to be appointed President and Managing Director. He held this role until September 2019 and retired in January 2020. During his tenure, Ernesto M. Hernández managed both the commercial and manufacturing sides of General Motors' operations in Mexico, Central America and the Caribbean. He sits on the board of directors of Constellation Brands, Inc. and is a member of its Human Resources Committee and its Governance, Nominating and Responsibility Committee. He also sits on the board of directors of Dana Incorporated and is a member of its Audit Committee as well as its Technology and Sustainability Committee. He currently serves in various Chambers of Commerce and Business Councils. Mr. Hernández was an independent director on the board of directors of Grupo KUO, S.A.B. de C.V., DINE, S.A.B. de C.V., and Corporación Zapata, S.A. de C.V.
He obtained a Bachelor of Science from Instituto Politécnico Nacional and he has also completed a Master of Science in Administration and a Master of Science in Management from the Instituto Tecnológico Autónomo de México and the Massachusetts Institute of Technology, respectively. Member of the Audit Committee Ms. Kountze is the Chief Information Officer (CIO) for Bose Corporation, a consumer retail company that develops sound solutions for entertainment, home audio, aviation, and automotive industries. She has held other various senior IT leadership positions across her over 25 years working in the technology field. Before joining Bose Corporation, Ms. Kountze was the Chief Information Officer for DentaQuest, a company that provides oral health care benefits and delivers oral care, from 2021 to 2022. Between 2012 and 2021, Ms. Kountze was also Senior Vice-President and Chief Information Officer (CIO) for Eversource Energy, the largest provider of electric, gas and water services in the New England area of the United States, where she held that position for 11 years and prior to that Ms. Kountze spent 2 years as the Vice-President and CIO for United Illuminating Company, an electric utility company in Connecticut. She is the Chair for the Boston CIO Leadership Council and a member of the Massachusetts Cybersecurity Council, a cybersecurity advisory group for the Governor of Massachusetts. Ms. Kountze serves on the board of The Children’s Place Inc. and is a member of its Audit Committee since November 2021. She has won several awards including 2021 Top Women in Energy, 2021 Diversity Women Elite 100, Most Impactful Black Women in Boston 2021, 2017 CIO of the Year, and 2015 Women Leading Stem Award. Ms. Kountze holds a bachelor’s degree in actuarial Math and Science and a master’s degree in Computer Science. She also received a certification in Risk and Information Security Controls (CRISC) in 2023. Member of the Audit Committee Member of the Nominating, Governance and Social Responsibility Committee Ms. Métayer is the president of EM Strategy Inc. and an adjunct professor at McGill University. Prior to that, she worked at the ING Bank (Netherlands, Poland), Bouygues Group (France, UK), and in Canada at McKinsey & Company, CAE Inc., and Competia Inc. which she founded and sold in 2004. She currently serves on the board of directors, sits on the Governance, Compensation and Human Resources Committee, the Strategy and Innovation Committee and chairs the Audemars Piguet Private Investment Committee of Audemars Piguet Holding S.A. (Switzerland). She also serves on the board of directors of Martur Fompak International (Republic of Türkiye) for which she is a member of the Audit Committee. Ms. Métayer joined the board of directors of Nortera Foods Inc. (Canada, U.S.A.) in December 2022 and chairs its Human Resources and Governance Committee as well as being chair of the board. In the last few years, she served on various advisory boards and boards of directors, including the board of directors of Ivanhoe Cambridge Inc. (Canada) for which she was a member of the Human Resources and Compensation Committee and chaired its Governance and Ethics Committee (Canada), and Agropur Cooperative (Canada) where she was a member of the Technology Committee, the Governance Committee and the Sustainable Development Committee. Ms. Métayer is a certified director of the Institut Français des Administrateurs and attended the High Performing Boards Program at Harvard Business School. 
She was trained in the Netherlands, where she obtained her MBA and Drs. from the University of Nijenrode. Ms. Métayer has also developed an expertise in ESG, including climate-related issues, notably through having chaired on several board committees overseeing ESG strategy, and having obtained a certificate on Sustainable Real Estate from Cambridge University in 2021. Member of the Audit Committee Member of the Investment and Risk Committee Mr. Nomicos is a Senior Advisor of Nonantum Capital Partners, LLC, a middle market private equity firm that he founded with other executives in 2018. Prior to that, Mr. Nomicos was at Bain Capital Investors, LLC where he worked from 1999 to 2016 as an Operating Partner focused on investments in the manufacturing and consumer product sectors and as a Managing Director of Bain Capital Credit, LP, the credit arm of BCI. Previously, Mr. Nomicos was a senior corporate development and manufacturing executive at Oak Industries Inc., and he spent several years at Bain & Company, Inc. where he was an engagement manager. Mr. Nomicos serves on the board of directors and is a member of the Audit Committee of Dollarama Inc. He received a Master of Business Administration (MBA) from Harvard Business School and a Bachelor of Science in Engineering from Princeton University. Chair of the Human Resources and Compensation Committee Chair of the Nominating, Governance and Social Responsibility Committee Mr. Philip is a corporate director. He served as the Chief Operating Officer of Partners in Health (a non-profit health care organization) from 2013 until 2017. In addition, Mr. Philip was a Special Partner at Highland Consumer Fund (consumer-oriented private equity fund), serving in this role from 2013 until 2017. He served as Managing General Partner at Highland Consumer Fund from 2006 to 2013. Prior thereto, Mr. Philip served as President and Chief Executive Officer of Decision Matrix Group, Inc. (research and consulting firm) from 2004 to 2005. Prior to joining Decision Matrix Group, Inc., he held several positions at Terra Networks, S.A. (global Internet company), Lycos, Inc. (an Internet service provider and search company), The Walt Disney Company, and prior thereto Mr. Philip spent a number of years in investment banking. He recently retired from the board of directors of Hasbro, Inc., of which he was a director from 2002 until 2023. Mr. Philip is also the Non-Executive Chairman of United Airlines Holdings, Inc. and sits on its Audit Committee, and is also Chairman of its Executive Committee and of its Nominating and Governance Committee. In addition, he is on the board of directors, a member of the Compensation Committee and Chairman of the Audit Committee of Blade Air Mobility, Inc., a technology-powered, global air mobility platform. Mr. Philip received a B.S. in Economics and Mathematics from Vanderbilt University and holds a Master of Business Administration from Harvard Business School. Chair of the Audit Committee Mr. Ross is a corporate director. He was Chief Financial Officer of Sesami Cash Management Technologies Corporation (“Sesami”) from 2022 to 2023. In this role, he was responsible for all financial activities, corporate development, and strategic planning. Prior to joining Sesami, Mr. Ross was Chief Financial Officer of Dollarama Inc. for over a decade. Prior to that, Mr. Ross was CFO of Sanimax Industries, a rendering services company, and spent over 20 years in senior financial roles in the television and broadcasting industry. 
He began his career as an auditor with Ernst & Young. Mr. Ross is a member of the board of directors of Pixcom Inc., the Fondation CHU Saint Justine and FEI – Quebec Chapter. He was previously a member of the board of directors of Investissement Québec, la Fondation Marie-Vincent, Fondation Dr Clown and Muscular Dystrophy Canada. Mr. Ross holds a bachelor’s degree in commerce and a graduate diploma in accounting from Concordia University. He received the Fellow of the Order distinction (FCPA) in 2012.
correct_subsidiary_00108
FactBench
0
56
https://ideamensch.com/gonzo-arzuaga/
en
Startups.com Founder
https://149363979.v2.pre…08/im512icon.png
https://149363979.v2.pre…08/im512icon.png
[ "https://149363979.v2.pressablecdn.com/wp-content/uploads/2024/06/mensch_2.png 1x , https://149363979.v2.pressablecdn.com/wp-content/uploads/2024/06/mensch-retina.png 2x ", "https://149363979.v2.pressablecdn.com/wp-content/uploads/2024/06/mensch_2.png", "https://149363979.v2.pressablecdn.com/wp-content/uploads/2024/06/mensch_2.png", "https://149363979.v2.pressablecdn.com/wp-content/uploads/2024/06/mensch_2.png", "https://149363979.v2.pressablecdn.com/wp-content/uploads/2011/07/Gonzo-Arzuaga.jpg", "https://149363979.v2.pressablecdn.com/wp-content/uploads/2011/07/Gonzo-Arzuaga.jpg" ]
[]
[]
[ "" ]
null
[ "Gonzo Arzuaga - Startups.com Founder", "Mario Schulzke" ]
2011-07-18T05:08:27-06:00
Gonzo Arzuaga is a serial internet entrepreneur, and the Founder of Startups.com.
en
https://149363979.v2.pre…12icon-32x32.png
ideamensch
https://ideamensch.com/gonzo-arzuaga/
“Determination defeats talent.”

Gonzo Arzuaga is a serial internet entrepreneur. It started in 1996 when he created a search engine in Latin America that was acquired by Terra/Lycos in 1999. He then created an internet incubator that failed miserably, going down with the internet bust of 2000. In 2007 he founded KillerStartups.com, which today gets roughly 1M visitors per month. In the meantime Gonzo lived in 5 countries, learning to speak Spanish, English, Portuguese, French and Mandarin, in that order. He has also published 5 books, about the business opportunities of the internet and about motivational quotes. He likes to write and to give motivational talks to support people's dream of becoming an entrepreneur. Recently he launched Startups.com, where 10,000 site owners get daily deals (50-90% off) on software, ebooks, web apps, gadgets and more to help them grow their online business. Startups.com's motto: “entrepreneurs should never pay retail again”.

What are you working on right now?
Startups.com is taking all my time. We launched 2 months ago, and having an impressive 10,000 site owners who want to get our daily deals keeps me moving. We want to go for more. Doing customer service and contacting merchants that want to be featured on Startups.com is my main activity and focus.

What does your typical day look like?
I've been doing business online since 1996 and I don't think there was ever a typical day! Since 2000 I work at home, and I just love it. So I get up, do a routine check of my stats, do some emailing, and plan the day. Then I do Skype calls all day, pretty much until the evening. I do some spinning 3 times a week, and I walk around when I need to think and to get away from the day-to-day. I go to bed with my iPad and check Zite and Twitter for all the important news of the day.

3 trends that excite you?
The whole “apps” universe (iPad, iPhone, Android, etc.), which I think is going to be revolutionary. Virtual currency: although I'm watching this from afar, I'm really excited about how it'll turn out. The Cost-Per-Deal model, to see how it's going to evolve into the next logical step after the different models the online advertising world went through: CPM/CPC/CPL.

How do you bring ideas to life?
As entrepreneurs, and particularly serial entrepreneurs, we have the “problem” of coming up with lots of ideas every day. The problem is, you have to focus on what you're doing right now, the project you have in your hands at this time. My belief is that for any given project to really succeed you have to give it at least 3 to 5 years, so basically the point here is: how to let go of ideas that are not related to your current business. And that's a hard job to do! Answering your question, when I get hit by an idea (and that can really be at ANY time: when taking a shower, in the elevator, when walking around the park), I try to use pen and paper first. Very old school, but it helps me think through the whole thing. Then I use Google Docs to be able to share it with the team. And only then do I try to see where it could fit into our roadmap. Easier said than done, because I want to see it implemented NOW. There's a great satisfaction in seeing ideas, only abstract imagination, brought to life. It still amazes me when it happens; it's an awesome thing.

What inspires you?
Short answer: other entrepreneurs. Reading biographies of successful business people and entrepreneurs keeps me going. It clearly shows me that it can be done; others have done it!
When the road gets rough (and it does more often than not!), reading biographies and interviews helps me a lot. That's one of the reasons I'd like to start doing interviews myself :).

What is one mistake you've made, and what did you learn from it?
Oh boy, only one? I've made so many I lost count. Lately it seems I've been bad at judging people, and I've learned that understanding human behavior is a very difficult craft.

What is one business idea that you're willing to give away to our readers?
It's an idea I've had for a long time; I'm not sure how good it is, or how big it'd become. All my life I've been fascinated by polls, don't ask me why because I have no clue. So I came up with this idea (you, dear reader, are more than welcome to take it away if you like it): offer websites a very powerful poll solution to paste into their site. The trick is that the results page could be monetized with ads. Fully customizable, API driven, etc., etc. The publisher decides how “greedy” he wants to be in terms of the number of ads being displayed. You can also provide websites looking to further monetize their site with the actual polls themselves. The idea goes on and on, but that's the basics of it.

What do you read every day, and why?
TechCrunch and Mashable to keep updated on daily news. I also like to read AVC.com (Fred Wilson's blog) every week for a more thoughtful read.

What is the one book that you recommend our community should read, and why?
“Rework” by the people at 37signals. Every entrepreneur should read that one.

What is your favorite gadget, app or piece of software that helps you every day?
Google Docs, iPad's Zite, Dropbox.

Three people we should follow on Twitter, and why?
@davemcclure Dave McClure: so disarmingly unassuming, and with a very laid-back approach to business.
@fredwilson Fred Wilson: he's just down to earth in everything he does, in the way he explains himself on his blog and on difficult issues, and in the personal touch he brings to all he does. He's humble, friendly, and I could go on and on.
@garyvee Gary V: he brings a fresh perspective to the entrepreneurial world; he's awesome.

Who would you love to see interviewed on IdeaMensch?
David Cancel, CEO of Performable (recently acquired by HubSpot). Dave McClure (500startups). David Hauser (Grasshopper co-founder). These people, among others, have clear drive in business and in life.

When is the last time you laughed out loud? What caused it?
It was recently, actually: watching a YouTube video of a guy shouting at the TV screen during a futbol match. It was hilarious, and it now has 5M views.

Determination beats talent?
Every time. I'd rather compete against a more talented individual than against one more determined to win.

Do you like motivational quotes?
I like them so much that I even write some myself! Heck, I even published a book with motivational quotes and pictures of kids, can you believe it? I have more motivational books written, but I wish I had more time to work on publishing them.
correct_subsidiary_00108
FactBench
1
36
https://variety.com/2000/digital/news/telefonica-likes-lycos-for-12-6-bil-1117781747/
en
Telefonica likes Lycos for $12.6 bil
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://sb.scorecardresearch.com/p?c1=2&c2=6035310&c4=&cv=3.9&cj=1", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://variety.com/wp-content/themes/pmc-variety-2020/assets/public/lazyload-fallback.gif", "https://pixel.quantserve.com/pixel?a.1=&a.2=p-31f3D02tYU8zY", "https://px.ads.linkedin.com/collect/?pid=1429113&fmt=gif" ]
[]
[]
[ "" ]
null
[ "Marc Graser" ]
2000-05-17T08:00:00+00:00
Terra Networks, the Internet arm of Spanish telco Telefonica de Espana, has acquired U.S. Internet search engine Lycos for $12.6 billion in stock, a move expected to create one of the world's largest dot-coms. Deal, announced Tuesday, marks the first time a foreign company has taken over a major U.S. portal.
en
https://variety.com/wp-c…e-touch-icon.png
Variety
https://variety.com/2000/digital/news/telefonica-likes-lycos-for-12-6-bil-1117781747/
Terra Networks, the Internet arm of Spanish telco Telefonica de Espana, has acquired U.S. Internet search engine Lycos for $12.6 billion in stock, a move expected to create one of the world’s largest dot-coms. Deal, announced Tuesday, marks the first time a foreign company has taken over a major U.S. portal. Telefonica has been on an aggressive buying spree as subsid Telefonica Media recently acquired Dutch-based TV producer Endemol Entertainment. The expected move will enable Telefonica to boost the global reach of its Internet efforts beyond Latin America and Europe and will give Lycos access to 30 million Spanish speakers in the U.S. Most important, however, the new Terra-Lycos will have access to books, music, film, television and other media properties owned by German conglom Bertelsmann. Lycos and Bertelsmann jointly operate Lycos Europe, the third-most-popular network of Web sites in Germany. Bertelsmann has said it will buy $1 billion in advertising and services from the combined company over the next five years. Despite Telefonica’s assets, acquisition pits the company against Web giants America Online and Yahoo! Web ‘powerhouse’ Concern’s execs aren’t bothered. “We have created a global Internet powerhouse,” Lycos CEO Bob Davis said. “This merger leads the way in the convergence between the Internet, next-generation forms of connectivity, and both traditional and new-media content.” As part of the deal, Terra plans to offer about 1.7 shares of its stock for each share of Lycos. Deal finally ends Lycos’ search for a partner or buyer after its deal with Barry Diller’s USA Networks fell through last year. That deal was valued at a much lower $6.5 billion. The newly combined Terra-Lycos, to be based in Waltham, Mass., will be headed by Davis and Lycos chief financial officer Ted Philip. Telefonica and Terra chairman Juan Villalonga will be chairman of the new company. Acquisition is expected to close in the third quarter of this year, provided it receives shareholder and regulatory approval. $3 bil in cash Telefonica will underwrite a $2 billion rights offering by Terra Networks before the close of the deal. As a result, Terra-Lycos is expected to have more than $3 billion in cash after the offering. The 14-member Terra-Lycos board will include Villalonga and 10 other Terra Networks designees, as well as Davis, Philip and one other Lycos designee. Davis will also join the board of Telefonica Media. Shares of Lycos rose $11 on Tuesday to close at $72.63 — a gain of nearly 18%.
correct_subsidiary_00108
FactBench
3
97
https://wordlesolver.pro/
en
Find the word or get inspired now!
https://wordlesolver.pro/solver.png
https://wordlesolver.pro/solver.png
[]
[]
[]
[ "" ]
null
[]
null
Solve your Wordle puzzle with five-letter words like a pro! supports all words in American, British, and Australian English!
en
/apple-touch-icon.png
https://wordlesolver.pro
First: it's useful to start with a word that has a lot of vowels and doesn't have any repeated letters. We have filtered the best starting words for you: ABOUT ADIEU AISLE AROSE AUDIO CLINT CONES CRANE CRONY HATES JUICY OCEAN OUIJA PIOUS POUTY ROATE ROUND SABER SALET SERAI SNORT SOARE STARE STERN STORE SULLY TALES TRACE TRAIL. These words are not just for the first try; you may also pick one of them on your second try. However, try your best not to repeat letters across guesses. For instance, SABER and CLINT are my starting guesses for almost every single Wordle game.
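The selection rule above (several distinct vowels, no repeated letters) is easy to automate. Below is a minimal Python sketch, not taken from the solver itself, that filters a word list down to candidate starting words; the file name words.txt and the three-vowel threshold are illustrative assumptions.

# Minimal sketch: pick Wordle starting words with many distinct vowels
# and no repeated letters, following the rule described above.
# Assumes a plain-text word list "words.txt" with one word per line.
def good_starters(words, min_vowels=3):
    vowels = set("aeiou")
    picks = []
    for w in words:
        w = w.strip().lower()
        if len(w) != 5:
            continue                 # Wordle uses five-letter words
        if len(set(w)) != 5:
            continue                 # skip words with repeated letters
        if len(set(w) & vowels) >= min_vowels:
            picks.append(w.upper())  # e.g. ADIEU, AUDIO, OUIJA
    return picks

if __name__ == "__main__":
    with open("words.txt") as f:     # hypothetical word-list file
        print(good_starters(f))

Lowering the min_vowels threshold to 1 would also admit consonant-heavy openers from the list above, such as CLINT or SNORT.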
correct_subsidiary_00108
FactBench
3
78
https://contracts.onecle.com/terra-lycos/telefonica.collab.2003.02.12.shtml
en
Strategic Alliance Framework Agreement
[]
[]
[]
[ "competitive intelligence", "business contract", "business forms", "SEC filings", "SEC EDGAR", "material contracts" ]
null
[]
2003-02-12T00:00:00
Strategic Alliance Framework Agreement - Terra Networks SA and Telefonica SA and Other Business Contracts, Forms and Agreeements. Competitive Intelligence for Investors.
en
null
printer-friendly Sample Business Contracts Strategic Alliance Framework Agreement - Terra Networks SA and Telefonica SA FREE TRANSLATION FROM SPANISH ORIGINAL Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. STRATEGIC ALLIANCE FRAMEWORK AGREEMENT signed by TERRA NETWORKS, S.A. and TELEFÓNICA, S.A. Madrid, February 12, 2003 1 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. In Madrid, on February 12, 2003 BETWEEN For one part, TELEFÓNICA, S.A. (hereinafter, “TELEFÓNICA”), a stock company incorporated by virtue of the deed of incorporation executed by the Notary Public of Madrid, Mr Alejandro Rosselló Pastor, on 19th April 1924, under number 141 of his records, domiciled in Madrid, calle Gran Vía, 28. It is registered at the Business Registry of Madrid, at volume 12534, folio 21, sheet number M-6164. It is assigned Tax Identification Number A-28015865. It is represented at this act by (*), of legal age, with domicile for these purposes in Madrid, Gran Vía, 28, holder of national identity document number (*), acting on behalf of TELEFÓNICA in his capacity as (*) of the company and making use of the faculties (*). And for the other, TERRA NETWORKS, S.A. (hereinafter, “TERRA”), a stock company incorporated under the company name of TELEFÓNICA COMUNICACIONES INTERACTIVAS, S.A., by means of the deed authorised by the Notary Public of Madrid, Mr José Antonio Escartín Ipiens, on 4th December 1998, under number 5,276 of his records, registered at the Business Registry of Madrid, at Volume 13,753, Folio 185, Sheet number M-224,449, Entry 1. it changed its company name to that of TELEFÓNICA INTERACTIVA, S.A. in the deed executed on 16th March 1999, before the Notary Public of Madrid Mr Francisco Arriola Garrote, under number 1269 of his records, leading to entry 9 at the Business Registry of Madrid. It is domiciled in Barcelona, calle Nicaragua 54. It has been assigned Tax Identification Code A-82/196080. TERRA appears in its own name and in name and on behalf of Lycos, Inc., (hereinafter “LYCOS”), acting as the sole partner of the latter. It is represented at this act by Mr Joaquim Agut Bonsfills, of legal age, with domicile for these purposes in Barcelona, calle Nicaragua 54, holder of national identity document number 39.134.364-W. He is acting on behalf of TERRA in his capacity as Executive Chairman of the firm, as set forth in the deed executed by the Notary Public of Madrid Mr Carlos Rives Gracia on 9th October 2001, under number 3,402 of his records. Hereinafter, the expression “Parties” shall be used jointly for TELEFÓNICA and TERRA. The expression “Party” shall refer individually to either of them. FREE TRANSLATION FROM SPANISH ORIGINAL 2 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. The Parties reciprocally recognise their capacity and legitimisation to grant this document. 
WITNESSETH I.Whereas, on May 16th, 2000, the Parties, along with BERTELSMANN AG, a corporation of German nationality, and LYCOS Inc., a corporation of United States nationality, the sole shareholder of which is TERRA, entered into an agreement called Strategic Alliance Framework Agreement (hereinafter the “Agreement”), which regulated the conditions under which BERTELSMANN and, if appropriate, TELEFÓNICA, would hire TERRA to provide certain products and services. The conditions established in that Agreement were on preferential terms. II.Whereas, a change has taken place in the business models created at the end of the 90s and beginning of 2000 to capture the growth value of the Internet, mainly based (although not exclusively) on income obtained by monetisation of the audience through access and advertising, , and one must also point out the worldwide crisis on the advertising market. On the other hand, the wave of offers in narrow band as well as broad band through the media provided by the telephone access or cable television operators has led to such operators developing their own business lines on the Internet, or having sought alliances with specific Internet access suppliers and/or Internet portals, in which sense one must point out the clearly complementary nature of TELEFÓNICA and TERRA due to their coinciding presence in Spain and Latin America. There is also a clear difference between the catalogue of services and products demanded by BERTELSMANN AG. and TELEFÓNICA, and BERTELSMANN AG. also focuses on the United States market which is more linked to LYCOS, Inc. These circumstances that have arisen have a great effect on articulation of the new phase of the relation, just as has been publicised on the markets since the last month of October. In this new context, the Parties, along with LYCOS, Inc. and BERTELSMANN AG., have considered it more convenient to continue their relations, leaving said Agreement without effect, it being fully replaced by this Strategic Alliance Framework Agreement and a new Memorandum of Understanding entered into with this same date (hereinafter “Agreement II”), which regulate the general terms of the new framework of relations between BERTELSMANN AG., TERRA and TELEFÓNICA, considering as the essential aspect for the purposes of its development the execution of this strategic alliance between TELEFÓNICA and TERRA. III.Whereas, indeed, the trend shown by telephone access operators to seek alliances with specific Internet access providers has shown the convenience of taking advantage of the complementary factors there are between TELEFÓNICA and TERRA on all the markets in which both are present. In this sense, while TELEFÓNICA, as a telecommunications operator, has a clearly competitive position in matters of connectivity and provision of access to the international Internet nodes promoted by the development of broad band, TERRA is now the leader on the Spanish and Portuguese speaking market as a specific supplier of Internet access, being the reference portal in Latin America and Spain (as a leading aggregator and contents manager and provider of value added products and services related to the Internet), having developed a brand image which is unquestionably recognised in the New Technologies and Internet sector that contributes the experience and knowledge required for sustained joint construction of a sophisticated offer of narrow and broad band contents and services. 
IV.Whereas, by virtue of this complementary nature in its objectives, needs and business plans and identification of an improvement in the opportunities of growth, the Parties intend to establish a long term strategic alliance whose main objective will be to improve the joint development of both companies on the narrow and broad band Internet markets in all the countries in which both Groups are present, preferably in the residential segment, SOHO and, if appropriate, SMEs. V.[***] FREE TRANSLATION FROM SPANISH ORIGINAL 3 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. VI.Whereas, due to all the foregoing, the Parties hereby enter into this Framework Agreement, according to the following: CLAUSES ONE. OBJECT OF THIS FRAMEWORK AGREEMENT. The object of this Framework Agreement is the creation of a strategic alliance between TELEFÓNICA and TERRA (establishing the relevant model of relations between them in each one of the phases, just as mentioned hereunder, that take the maximum advantage of the capacity of TELEFÓNICA as a provider of narrow and broad band connectivity and access, and of TERRA as a portal, aggregator, provider and manager of Internet contents and services, in fixed telephony (including the wireless system used in this sector) and in mobile telephony (only if thus agreed in the Contracts and Subsequent Contracts, just as both terms are defined in Section Second and Fourth paragraph four ), all aimed at the residential and SOHO markets, and if agreed in the Contracts and Subsequent Contracts, SMEs, taking advantage of synergies and creating value for both Parties: (i)aggregation of the offer of products and services in the Internet business; (ii)provision of the offer of products and services in the Internet business, (iii) developmentand construction of new advanced services in the Internet business; (iv)connectivity and access to Internet business. Moreover, the Parties shall use best effort to identify additional elements to those described in Section Two below that allow value to be generated for both Groups, with special emphasis on electronic commerce, and the relationships with the different mobile operators of the TELEFÓNICA Group (as such term is defined in Section Two below). TWO. DEVELOPMENTOF THE MODEL OF RELATIONS IN EACH ONE OF THE PHASES OF BUSINESS FORMING THE OFFER FOR THE END CLIENT IN THE INTERNET BUSINESS. The model of relations referred to in the above Section will be subject to development through the relevant contracts entered into between certain companies in the TELEFÓNICA Group and the TERRA Group, on the terms and conditions established in this Framework Agreement, adapted, if appropriate, to the legal and statutory requisites that may be applicable. For that purpose, each Party hereby undertakes, as controlling shareholder of its respective Group, that each one of the contracts foreseen in this Section Two (hereinafter the “Contracts”) shall be formalised and executed by the companies determined by it among those in its respective Group, this being understood in the sense of article 4 of Act 24/1988 of 24th July, of the Stock Market, and, for the purpose of this Framework Agreement, the TERRA Group and Telefónica Publicidad e Información, S.A. 
shall not be considered an integral part of Telefónica Group (hereinafter “TELEFÓNICA Group” and “TERRA Group”, as appropriate, and the companies thus determined by each Party among those of its respective Group, the “Companies in the TELEFÓNICA Group” or the “Companies in the TERRA Group”, as appropriate), in order to generate: (x) in financial year 2003, at least, the global value in Euros (78,5MM) set forth in Exhibit I, arising from the breakdown described, simply as estimations, on Exhibits II, III, IV, and V of this Framework Agreement, and, (z) for each one of the following financial years, an equivalent value at least to that established for 2003, as indicated in subparagraph (x) above, and as defined in the Note of the Exhibit I. FREE TRANSLATION FROM SPANISH ORIGINAL 4 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. The Parties hereby agree that, to the extent require by virtue of the object of the relevant Contracts, they must be entered into by companies in the TELEFÓNICA Group and Companies in the TERRA Group with presence in the same geographic territories. 2.1.Exclusivity of the Companies in the TERRA Group as a portal, service aggregator and supplier of added value services. 2.1.1.TELEFÓNICA and TERRA undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group grant exclusivity to the Companies in the TERRA Group as supplier of the essential elements of the portal and aggregator of the narrow and broad band Internet services targeting the residential segment, SOHO and, if appropriate SMEs, all to the satisfaction of TELEFÓNICA according to the contractual parameters established in the terms set forth in Section 2.1.5 below. TELEFÓNICA hereby undertakes, on the terms and conditions of this Framework Agreement, for the Companies in the TELEFÓNICA Group to provide the Companies in the TERRA Group the exclusive commission to develop all the portals for aggregation of the Internet contents and services on narrow and broad band, targeting the residential segment, SOHO and, if appropriate, SMEs, developed by the Companies in the TERRA Group, whether Internet access portals in connectivity offers and Internet access by Companies in the TELEFÓNICA Group, and include the trademark and other distinctive signs of the Companies in the Terra Group (the use of said trademark and other distinctive signs will comply with the provisions of the agreements to be entered under Section 2.1.5 below). 2.1.2.TELEFÓNICA and TERRA undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group shall acquire the necessary narrow and broad band Internet value added services exclusively from the Companies in the TERRA Group as required to construct the offer for end users, preferably residential, SOHO and, if appropriate, SMEs, listing, for purely illustrative although not limiting purposes, the following: e-mail, instant message platforms, unified message platforms, chats, etc. 2.1.3. Commercialisationor provision to third parties of packaged services that are identical or substantially similar to those supplied to the Companies in the Telefónica Group as set forth in Sections 2.1.1 and 2.1.2 above shall require prior written consent from TELEFÓNICA, which will not be unjustifiably denied. 
2.1.4.TELEFÓNICA undertakes, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group assign to the Companies in the TERRA Group the exclusive management and operation of the advertising spaces that include third party advertising on the portals developed according to Section 2.1.1. above by the Companies in the TERRA Group. In order to avoid doubt, it is expressly recorded that the decision on existence of such advertising space shall be taken entirely by the Companies in the Telefónica Group. 2.1.5.For the purposes of formalising what is set forth in Sections 2.1.1, 2.1.2, 2.1.3 and 2.1.4 above, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant licence contracts for brands and other distinctive signs and intellectual property, management of the portal, aggregation and access to value added services and use of software and maintenance, within the terms specified in Section 4.1 below, on the terms and conditions foreseen in general terms in this Framework Agreement and, specifically, in Exhibit II. FREE TRANSLATION FROM SPANISH ORIGINAL 5 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 2.2.Acquisition, development and/or commercialisation of contents, service provision agreements and exchanges of shares and/or assets with vertical portals, advertising spaces, provision of on-line integral marketing spaces and provision of auditing, consultancy, management and maintenance services for country portals. 2.2.1.Acquisition, development and commercialisation of contents and agreements to provide services and share and/or asset exchanges with vertical portals. 2.2.1.1.TERRA and TELEFÓNICA hereby undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TERRA Group preferentially focus their demand for acquisition, development and/or commercialisation of content arising from the construction of the offer of products and services foreseen in Section Two through Companies in the TELEFÓNICA Group. For this purposees, “preferential focussing” shall be understood as the option for Companies in the TELEFÓNICA Group to offer the Companies in the TERRA Group the aforementioned contents and the obligation of the Companies in the TERRA Group to commission those contents from the Companies in the TELEFÓNICA Group, as long as the offer presented by the latter to the Companies in the TERRA Group is not objectively worse than the offer provided by any third party. Moreover, TERRA and TELEFÓNICA undertake to make their best effort to ensure that the Companies in the TELEFÓNICA Group distribute their on-line contents preferentially through the Companies in the TERRA Group. For that purposes, on the terms established in Section 4.1 below, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant contracts for acquisition, development and/or commercialisation of content and rendering on-line training services, on the terms and conditions set forth in general terms in this Framework Agreement and specifically in Exhibit II, mentioned in Section 2.1.5 above. 
2.2.1.2.The Parties undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group enter into exclusive contracts to provide on-line training services to their respective employees through the company Educaterra, S.L. The Parties undertake to use best efforts, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group will engage Maptel, S.A. for their needs of products, solutions and services based on on-line localisation. In order to facilitate fulfilment of the commitments undertaken in Section 2.2.1.1 above and to optimise their respective portfolios, the Parties hereby undertake, on the terms and conditions foreseen in general terms under this Framework Agreement that, within the deadlines set in Section 4.1 below, Telefónica de España, S.A. shall transmit to Educaterra, S.A., and the latter shall acquire at market value, which according to the Parties valuations is equivalent to its book value, the platform A+ owned by it, on the terms and conditions established specifically in Exhibit III. FREE TRANSLATION FROM SPANISH ORIGINAL 6 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 2.2.2.Acquisition of advertising spaces. TELEFÓNICA and TERRA hereby undertake, on the terms and conditions of this Framework Agreement, that the annual amount assigned by the TELEFÓNICA Group to acquisition of advertising spaces in the Companies in the TERRA Group shall not be less than [***]% of the total annual budget assigned by that Group to advertising. For that purposes, within the deadlines foreseen in Section 4.1 below, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant contracts to acquire advertising spaces, on the terms and conditions foreseen in general terms in this Framework Agreement and specifically in Exhibit IV. 2.2.3.Provision of integral on-line marketing services. The Parties hereby undertake, on the terms and conditions of this Framework Agreement, for the Companies in the TERRA Group to provide preferentially integral on-line marketing services to the Companies in the TELEFÓNICA Group. To these purposes, “preferential contracting” shall be understood as the option granted to the Companies in the TERRA Group to offer the Companies in the TELEFÓNICA Group the integral on-line marketing services of the Companies in the TERRA Group and the obligation of the Companies in the TELEFÓNICA Group to hire those integral on-line marketing services with the Companies in the TERRA Group, as long as the offer presented by the latter to the Companies in the TELEFÓNICA Group is not objectively worse than the offer presented by any third party. For that purposes, within the terms foreseen in Section 4.1 below, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant service provision contracts for integral on-line marketing services, on the terms and conditions foreseen in general terms in this Framework Agreement and specifically in Exhibit IV, mentioned under Section 2.2.2 above. 2.2.4.Preferential contracting of the auditing, consultancy, management and maintenance services of the country portals of the Companies in the TELEFÓNICA Group by the Companies in the TERRA Group. 
To these purposes, “preferential contracting” shall be understood to be the option of the Companies in the TERRA Group to offer the Companies in the TELEFÓNICA Group auditing, consultancy, management and maintenance services for its country portals and the obligation of the Companies in the TELEFÓNICA Group to hire those auditing, consultancy, management and maintenance services for country portals from the Companies in the TERRA Group, as long as the offer presented by the latter to the Companies in the Telefónica Group is not objectively worse than the offer presented by any third party. The Parties hereby undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TERRA Group shall provide the Companies in the TELEFÓNICA Group the auditing, consultancy, management and maintenance services of their respective country portals. For that purposes, within the deadlines foreseen in Section 4.1 below, the Companies in the TERRA Group and the Companies in the TELEFÓNICA Group shall grant the relevant auditing, consultancy, management and maintenance contracts for country portals, on the terms and conditions foreseen in general terms in this Framework Agreement and specifically in Exhibit VI. FREE TRANSLATION FROM SPANISH ORIGINAL 7 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 2.3.Wholesale Internet connectivity and access. 2.3.1.The Companies in the TELEFÓNICA Group, as exclusive suppliers of wholesale connectivity and access services. The Parties undertake, respectively, on the terms and conditions of this Framework Agreement, that the Companies in the TERRA Group shall acquire the wholesale Internet connectivity and access services exclusively from the Companies in the Telefónica Group, as long as that acquisition is made in the conditions of the most favoured client allowed by market regulations. For that purposes, within the deadlines provided in Section 4.1 below, the Companies in the TERRA Group and the Companies in the TELEFÓNICA Group shall grant the relevant auditing, consultancy, management and maintenance contracts for country portals, on the terms and conditions foreseen in general terms in this Framework Agreement and specifically in Exhibit II, mentioned in Section 2.1.5 and 2.2.1.1 above. 2.3.2.Outsourcing regime or equivalent in operation of the network access elements. The Parties undertake, on the terms and conditions of this Framework Agreement, that the Companies in the TELEFÓNICA Group shall provide the outsourcing or equivalent service to operate all or part of the services and/or operation of the elements for excess to the network required by the Companies in the TERRA Group to provide the Internet access service to their preferentially residential customers, SOHO and, if appropriate, SMEs, as long as such services may be provided in the conditions of the most favoured client allowed by the regulations. For that purposes, within the deadlines set in Section 4.1 below, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant contracts for operation, hosting and management, on the terms and conditions foreseen in general terms in this Framework Agreement and, specifically, in Exhibit II, mentioned in Section 2.1.5, 2.2.1.1 and 2.3.1 above. 
2.3.3.The Companies in the TELEFÓNICA Group, as exclusive providers of the network services required to build up the offer for end users. The Parties undertake, respectively, on the terms and conditions of this Framework Agreement, that the Companies in the TERRA Group shall acquire exclusively from the Companies in the TELEFÓNICA Group the advanced network services and platforms required to build up the offer for preferentially residential customers, SOHO and, if appropriate, SMEs on narrow and broad band, as long as such acquisition is performed in the conditions of the most favoured client that the regulation allows. For that purposes, on the terms and conditions foreseen in Section 4.1 below, the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group shall grant the relevant exclusive supply contracts, on the terms and conditions foreseen in general terms in this Framework Agreement and specifically in Exhibit II, mentioned in Section 2.1.5, 2.2.1.1, 2.3.1, and 2.3.2 above. FREE TRANSLATION FROM SPANISH ORIGINAL 8 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. THREE. GENERAL TERMS APPLICABLE TO THE CONTRACTS. 3.1.Scope of application. The general terms agreed in this Section Three shall be applicable to all the Contracts, as well as to the Subsequent Contracts (as to the latter, on the terms foreseen in Section 4.4 and in Section Six below). However, the content of Section 3.3 below shall only be applicable to the successive term contracts. The Parties hereby undertake to include in the Contracts and, if appropriate, in the Subsequent Contracts, the service level agreements that shall determine and guarantee a quality for the services, in keeping with the standards in the sector. Moreover, the Contracts, and as appropriate, the Subsequent Contracts, shall establish the appropriate protection mechanisms for the Parties in the event of early termination, to allow an adequate, respectful transition as to the daily activities of the Parties under the relevant Contract (or Subsequent Contract). 3.2.Modifications imposed by administrative bodies. In the event of any of the entities or bodies regulating the telecommunications market, the right to competition and/or any other competent bodies, require substantial amendment of any of the Contracts or Subsequent Contracts, the Monitoring Committee will have to decide on the convenience of performing such an amendment or of making the relevant Contract or Subsequent Contract void. The Monitoring Committee must decide within the term of one month from the date on which the matter is submitted to it. However, if the requirement by the relevant entity or body were to establish a lower term for fulfilment thereof, the term of one month shall be reduced appropriately. If the Monitoring Committee were to resolve to make the relevant Contract or Subsequent Contract void, it will comply with the terms set forth in Section 4.3 below and, if appropriate, the terms foreseen in Section Six below. 3.3.Termination. 
Notwithstanding the cases of extension foreseen in the last paragraph of Sections 4.3.1 and 4.4 below and in the antepenultimate paragraph of Section Six below, the Contracts and Subsequent Contracts shall end and be left without any effect whatsoever in any of the following cases: (i)on 31st December 2008 or, if appropriate, on expiration of any renewal of a Contract or Subsequent Contract in the event that one of the parties had notified the other party in writing with, at least, two months in advance prior written notice, its intention to terminate the relevant Contract or Subsequent Contract. In absence of the relevant prior written notice, the Contract or Subsequent Contract shall be deemed automatically and successively renewed for additional periods of one year, or (ii)on the sixtieth calendar day following the date on which the Change in Control takes place at TERRA in the event of TELEFÓNICA having notified it within the term of its decision to terminate the Framework Agreement, on the terms set forth in Section Seven below; or Termination of any of the Contracts in the cases foreseen in this Section 3.3 shall not give rise to any kind of liability being required among the relevant Parties, notwithstanding those strictly arising from execution thereof and fulfilment prior to the date of termination. FREE TRANSLATION FROM SPANISH ORIGINAL 9 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 3.4.Confidentiality and public announcements. The Contracts and Subsequent Contracts shall be confidential and none of the respective Parties shall reveal their existence and content without the consent of the other. An exception is made in cases in which the obligation to disclose information is imposed by a competent judicial or administrative authority or by Law, in which case the part under obligation shall previously inform the other. The Parties shall agree on the content of any notification or public release related to the relevant Contracts and Subsequent Contracts. 3.5.Expenses and taxes. Each part shall pay the expenses arising for it from formalisation and execution of the Contracts and Subsequent Contracts. The obligations of a tax nature shall be paid by each part pursuant to the applicable laws and provisions. 3.6.Resolution of conflicts. Binding nature of the resolutions by the Monitoring Committee. 3.6.1.Resolutions of conflicts. Any discrepancy in the interpretation as well as the execution of the Contracts and the Subsequent Contracts must be submitted by any of the Parties to the Monitoring Committee and, should a resolution not be provided, by it to the Chairmen of TELEFÓNICA and TERRA, resorting for the relevant judicial or arbitration proceedings only in the event of failure to reach an agreement by the Chairmen. To those ends, the terms of Section 15.1 below shall be applicable, with the particular specification that reference to the Parties must be understood to refer to the relevant Parties. 3.6.2.Binding nature of the resolutions by the Monitoring Committee In any case, all agreements or resolutions by the Monitoring Committee, on the terms of Section Four below, shall be binding upon the relevant Parties, who undertake to fully comply with them, it thus being considered that the final decisions by those Parties may not be impugned. FOUR. PROCEDURES AND SCHEDULE FOR DEVELOPMENT. 4.1.Terms to formalise the Contracts. 
The Parties undertake that the Contracts, on the terms and conditions of this Framework Agreement and its Exhibits, are formalised between the Companies in the TELEFÓNICA Group and the Companies in the TERRA Group within the maximum term of one month from the date of this Framework Agreement, adapted as appropriate to the applicable legal and statutory requisites, on the terms that are determined for that purpose by the Monitoring Committee foreseen in Section 4.2 below. That Committee shall also set, according to the legal and business circumstances, the exact dates on which each Contract or group of Contracts must be formalised. As an exception, the Monitoring Committee may postpone formalisation of some of the Contracts for proper reasons, so these are granted once the aforementioned term of one month has expired, in which case the terms of Section 4.3 below shall be applicable. FREE TRANSLATION FROM SPANISH ORIGINAL 10 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 4.2.Monitoring Committee. 4.2.1.Composition. The Parties hereby agree to create a Monitoring Committee for execution, development and monitoring of this Framework Agreement, which must be formed by six members, three of which, who shall include the person acting as Chairman, shall be appointed by TELEFÓNICA, and the remaining three shall be appointed by TERRA. Each Party may replace any member or members appointed by it at any time, by serving the relevant written notice to the Monitoring Committee. The Parties hereby resolve that the appointment of the members of the Monitoring Committee and their formal constitution shall take place within the term of fifteen days from signing this Framework Agreement. 4.2.2.Meetings: calling, frequency and minutes. The Chairman shall call the meetings. The Monitoring Committee shall meet at least once every quarter. However, the Chairman must call the Monitoring Committee whenever required to do so in writing by any of its members and, likewise, may call it on any occasion he may deem fit. The Parties agree that the first Monitoring Committee shall meet no later than 28th February 2003. The meetings of the Monitoring Committee may be attended by executives from either of the Parties with the right to speak but not vote, in order to provide information on the progress of the alliance and to facilitate decision making by the Monitoring Committee. The Secretary to the Monitoring Committee, who shall be appointed by TERRA from among the members of the Monitoring Committee appointed by TERRA, shall take the minutes of the content of each meeting, which shall be signed by the Secretary with the approval of the Chairman once it is approved by those attending. 4.2.3.Quorum, passing resolutions and binding nature of these. The Monitoring Committee shall be understood to be validly met as long as it is attended by four of its members and the resolutions are passed unanimously. The agreements and resolutions by the Monitoring Committee shall be binding upon the Parties, who undertake to fully comply with them directly or through the companies in their relevant Group, as appropriate. In this sense, the agreements and resolutions by the Monitoring Committee shall be considered final decisions by the Parties in relation to the matters concerned and, thus, not liable to be impugned. 
Failing an agreement on any matter of its competence, the Monitoring Committee shall submit the matter to the Chairmen of TELEFÓNICA and TERRA, who shall have a term of fifteen days to issue their finding. If that term expires without an agreement being reached, any of the Parties may initiate arbitration proceedings on the terms foreseen in Section Fifteen below. FREE TRANSLATION FROM SPANISH ORIGINAL 11 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. Securities and Exchange Commission and is marked “[***]” herein. 4.2.4.Duties. The Monitoring Committee is to perform the following duties. (i)to co-ordinate, direct and supervise performance and development of this Framework Agreement, through the process of formalisation of the Contracts, determining the dates of granting and integrating their content, as provided in Section 4.1 above. (ii)resolve on the convenience of extending, modifying or terminating any of the Contracts or Subsequent Contracts, or of formalising Subsequent Contracts, especially in the cases foreseen in Section 3.2 above and 4.4 below. (iii)resolve on possible amendments to be made in the Contracts or Subsequent Contracts, if their performance were to affect the profitability or strategic interests of the Telefónica Group or Terra Group, and this will only be when the circumstance may not compensated in terms of value through the mechanisms foreseen in this Framework Agreement. (iv)resolve on extension of this Framework Agreement or granting preemptive rights, as well as concerning the degree of fulfilment of the alliance and to take the relevant corrective measures, [***] and determining the cases in which the factual cases foreseen in Section Six below arise to apply the TELEFÓNICA Guarantee. (v)resolve on the convenience of modifying the Contracts and Subsequent Contracts according to the evolution of the applicable legislation, on the terms of Section 4.5 below. (vi)resolve on any discrepancy in interpretation as well as performance of this Framework Agreement, of the Contracts and of the Subsequent Contracts; and (vii)any others specifically entrusted to it under this Framework Agreement. (viii)resolve on creation of working subcommittees on any of the areas of specific activity in Spain, Brazil, Rest of Latin America and Relations with Telefónica Data, as well as suppression, modification, substitution or creation of additional subcommittees. (ix)analyze and review the amounts and criteria for allocation referred to in the Note of Annex I of the present Framework Agreement. When performing its duties, the Monitoring Committee shall ensure that its resolutions comply with the principle of tax efficiency for the Parties, as well as for the relevant Parties of the Contracts and Subsequent Contracts. 4.3.Review of the degree of fulfilment of the alliance and extension of the Framework Agreement. 4.3.1.The Monitoring Committee must validate fulfilment of the objectives of the alliance between TELEFÓNICA and TERRA in November every year. For that purposes, it must review the degree of fulfilment of the Contracts and, if appropriate, of the Subsequent Contracts, in order to determine whether the services have been rendered to the satisfaction of the relevant part and check that they have been fulfilled. FREE TRANSLATION FROM SPANISH ORIGINAL 12 Any text removed pursuant to Terra Networks’ confidential treatment request has been separately filed with the U.S. 
Securities and Exchange Commission and is marked “[***]” herein. (x)in financial year 2003, at least, the global value in Euros (78,5MM) set forth in Exhibit I, arising from the breakdown described, simply as estimations, on Exhibits II, III, IV, and V of this Framework Agreement, and, (z)for each one of the following financial years, an equivalent value at least to that established for 2003, as indicated in subparagraph (x) above. In order to facilitate that annual validation, on a quarterly basis, on the second have of the second month of the following quarter, the Monitoring Committee shall review the degree of fulfilment of the objectives of the alliance between TELEFÓNICA and TERRA, the degree of fulfilment of the Contracts and, if appropriate, Subsequent Contracts, adopting the measures it may deem fit in each case. On the other hand, in November 2008, the Monitoring Committee shall resolve on the extension or not, for the term deemed convenient, of this Framework Agreement and, where applicable, of the Contracts and Subsequent Contracts. If it were to decide not to extend them, the Monitoring Committee would determine the services, whether foreseen or not in Section Two above, in relation to which the Companies in the TELEFÓNICA Group would grant the Companies in the TERRA Group with presence in the same geographic territory, or vice-versa, a pre-emptive right to equal offers by third parties. 4.3.2.However, the Monitoring Committee may perform the reviews foreseen in Section 4.3.1 above as an extraordinary measure in any other month of the year. 4.3.3.The Monitoring Committee shall impose the measures it deems fit, [***] to correct the breaches detected during its annual validation duties and, if appropriate, shall determine whether it is appropriate to apply the TELEFÓNICA Guarantee foreseen in Section Six below. 4.4.Adaptation and updating of the services and products. According to the technological advances, the evolution of the specific needs of both Groups, the general market conditions, the evolution of the reference costs, and any other considerations related to achieving the objectives of this alliance, the Monitoring Committee may agree the modifications it may deem appropriate as to the services and products that, by virtue of this Framework Agreement and the Contracts or Subsequent Contracts, must be rendered to the relevant companies in the TELEFÓNICA Group and in the TERRA Group (even inclusion of new catalogues of services and products), making, when appropriate, the appropriate decisions for that purposes in relation to (i) amendment of this Framework Agreement, (ii) extension, amendment or suppression of certain Contracts or Subsequent Contracts, and (iii) formalisation of new contracts (hereinafter, along with the contracts mentioned in the paragraph preceding the penultimate one of Section Six below, the “Subsequent Contracts”) which shall be subject to the terms and conditions foreseen in general within this Framework Agreement, except in the aspects which the Monitoring Committee may consider appropriate to modify. In cases in which the Monitoring Committee may resolve to extend only certain Contracts or Subsequent Contracts, it shall also resolve on the Sections of this Framework Agreement that, by virtue of their applicability, must continue in force, taking full effects as to the Contracts or Subsequent Contracts extended. 
4.5. Adaptation of the Contracts and Subsequent Contracts to the evolution of the applicable legislation.
According to the evolution of the applicable laws and, in particular, liberalisation of the regulatory framework for the telecommunications market and competition law, the Monitoring Committee may agree the modifications it may deem fit in relation to the Contracts and Subsequent Contracts, in order to overcome the restrictions contained in their clauses due to the limitations imposed by the by-laws in force at the time of granting.

FIVE. GOOD FAITH.
The Parties undertake to act in good faith, abstaining from performing any action that may prevent or hinder performance of this Framework Agreement, and giving complete fulfilment, directly or through the companies in their respective Groups, to the obligations arising from this Framework Agreement, especially those related to (i) fulfilment of the resolutions by the Monitoring Committee, and (ii) formalisation of the Contracts and Subsequent Contracts, as well as their execution, by providing the services in accordance with the set quality parameters and payment of the relevant agreed prices.

SIX. GUARANTEE BY TELEFÓNICA.
TELEFÓNICA hereby guarantees fulfilment by the Companies in the TELEFÓNICA Group of the obligation to formalise and fulfil the Contracts on the terms of this Framework Agreement.
In all cases in which, due to reasons other than breach by TERRA and/or the Companies in the TERRA Group, especially for legal or statutory reasons, it were not possible to formalise any of the Contracts, or they should become fully or partially void, or if there were to be any breach by TELEFÓNICA and/or the Companies in the TELEFÓNICA Group of these, making it impossible to achieve:
(x) in financial year 2003, at least, the global value in Euros (78,5MM) set forth in Exhibit I, arising from the breakdown described, simply as estimations, on Exhibits II, III, IV, and V of this Framework Agreement, and,
(z) for each one of the following financial years, a value equivalent at least to that established for 2003, as indicated in subparagraph (x) above (it being duly understood that in the event of the minimum value mentioned not being reached in any financial year, the deficit arising will be considered to be fully compensated by the necessary proportion by the surplus to that minimum figure that may have been obtained in any of the previous financial years).
TELEFÓNICA shall propose alternative mechanisms to the satisfaction of TERRA to provide services and acquire products that, within the setting of the complementary nature of both Groups, will allow the relevant compensation to take place.
In the presentation of alternative mechanisms by TELEFÓNICA, it shall take the following aspects, among others, into account: (i) contracts to render services or acquire products not foreseen in Section Two above that may have been entered into by the Parties, or between the companies in their respective Groups after the date on which this Framework Agreement came into force, or after that date, as long as these are services to be provided thereafter; (ii) newly available services and products that may arise by virtue of the technological advances in the evolution of the specific needs of both Groups and the market conditions; and (iii) possibilities arising from inclusion of any companies in the TELEFÓNICA Group or the TERRA Group.
New contracts which are formalised by virtue of the terms set forth in the preceding paragraph (hereinafter, along with the contracts foreseen in Section 4.4 above, the “Subsequent Contracts”) shall be subject to the terms and conditions foreseen in general terms in this Framework Agreement, except in the aspects the Parties find fit to modify. In cases in which the Parties agree to extend only certain Contracts or Subsequent Contracts, they shall also resolve on the Stipulations of this Framework Agreement which, by virtue of its application to these, must continue in force, taking full effect in relation to the Contract or Subsequent Contracts extended.
The obligations of TELEFÓNICA within the framework of this Section shall be decreased by the relevant proportional part to show possible partial breaches by TERRA and/or by the Companies in the TERRA Group that may have taken place. [***]

SEVEN. TERMINATION.
Notwithstanding the cases of extension foreseen in the last paragraph of Sections 4.3.1 and 4.4 above, and in the antepenultimate paragraph of Section Six above, this Framework Agreement shall become void in any of the following cases:
(i) on 31st December 2008 or, if appropriate, on expiration of any relevant renewal pursuant to Section 4.3.1, in the event that one of the Parties had notified the other Party in writing, with at least two months' prior written notice, its intention to terminate the Framework Agreement. In the absence of the relevant prior written notice, the Framework Agreement shall be deemed automatically and successively renewed for additional periods of one year, or
(ii) at the option of TELEFÓNICA in the event of a Change in Control taking place in TERRA (on the terms foreseen in the last two paragraphs of this Section).
Termination of the Framework Agreement in the cases foreseen in this Section Seven shall not give rise to any kind of liability between the Parties being demanded, notwithstanding those arising strictly from the actual performance of the Framework Agreement prior to the date of termination.
A Change in Control of TERRA shall be understood to be any event or situation except for (i) the transfer by TELEFÓNICA of all or part of its participation in TERRA or (ii) an act or agreement of TELEFÓNICA with a third party which leads to a Change of Control that entitles a shareholder other than TELEFÓNICA to direct the management and administration of that company directly or indirectly, as the holder of the majority voting rights or by virtue of agreements entered into with other shareholders.
If there is a Change in Control of TERRA and TELEFÓNICA opts to terminate this Framework Agreement, it will have a term of one month from the date on which this took place to notify TERRA that the Framework Agreement shall terminate, and it shall cease to be effective on the sixtieth calendar day from the date on which that Change of Control of TERRA took place. All the Contracts and Subsequent Contracts shall also terminate on that date.
Likewise, should the Change of Control take place before 27 October 2005 and TELEFÓNICA chooses to terminate the present Framework Agreement, TELEFÓNICA must acquire products and services from TERRA during each quarter included between the termination date foreseen in Section 3.3 ii and 26 October 2005, in the following terms: (i) US$ 50 MM for each quarter the first year, (ii) US$ 56.25 MM for each quarter the second year and (iii) US$ 62.5 MM for each quarter the third year. Said amounts will be reduced by the revenues equivalent to the possible non-compensated excess values referred to in Clause Section (z) and shall be paid as an advance payment in the first ten days of each quarter [***].

EIGHT. CONFIDENTIALITY AND PUBLIC ANNOUNCEMENTS.
8.1. This Framework Agreement and all the documentation or negotiations linked thereto are confidential and neither of the Parties shall reveal their existence and content without the consent of the other, with the exception of cases in which the obligation to disclose information is imposed by a competent judicial or administrative authority or by Law, in which case the Party obliged shall previously inform the other.
8.2. The Parties shall agree the content of any public notice or release related to this Framework Agreement or its performance.

NINE. NOTICES.
9.1. All notices between the Parties concerning this Contract may be made either in writing delivered at the domicile of the addressee, or by facsimile, in which case the original must be sent by registered mail with acknowledgement of receipt to the domicile of the party addressed.
9.2. For all the purposes of notice, delivery shall be considered to take place on the same working day on which the letter was delivered, or the facsimile was transmitted to the party addressed.
9.3. For all the purposes related to this document, the Parties hereby provide the following fax numbers and domiciles:
TELEFÓNICA, S.A.
Attention of the Secretary General
Address: Gran Vía, 28, Madrid
Fax: 91 521 45 81
TERRA
Attention of the Secretary General
Address: Paseo de la Castellana, 92, Madrid
Fax: 91 452 38 81

TEN. ASSIGNMENT.
This Framework Agreement and the obligations assumed by each one of the Parties by virtue thereof may not be transferred, nor may be subject to assignment to any third party without the prior, express, written consent of the other Party.

ELEVEN. VOIDNESS AND INEFFECTIVENESS OF THE CLAUSES.
In the event that any Section or part thereof were to be declared void or ineffective, that nullity or ineffectiveness would affect only that provision or the relevant part thereof, while the Framework Agreement shall subsist in all other aspects, it being understood that such a Section or part thereof that may be affected shall be omitted.
This shall all be considered notwithstanding the obligations of the Parties foreseen in Section Six above, as to it being applicable.

TWELVE. EXPENSES AND TAXES.
12.1. Each Party shall pay the expenses directly borne by it from formalisation of this Framework Agreement.
12.2. The obligations of a tax nature shall be paid by each Party according to the applicable laws and provisions.

THIRTEEN. ENACTMENT AND PREVALENCE OF THE AGREEMENT.
13.1. This Framework Agreement shall be effective as of 1st January 2003.
13.2. This Framework Agreement substitutes any other agreement, document or previous contract entered into between the Parties that may contradict it and, specifically, in the event of discrepancy between the content of this Framework Agreement and Agreement II, in all cases the terms of this Framework Agreement shall prevail between the Parties. [***].

FOURTEEN. APPLICABLE LAW.
This Framework Agreement shall be governed and interpreted pursuant to the laws of the Kingdom of Spain.

FIFTEEN. DISPUTE RESOLUTION.
15.1. Any discrepancy in the interpretation as well as the performance of this Framework Agreement must be submitted by the Parties, prior to initiating eventual arbitration proceedings as foreseen in Section 15.2 below, to the Monitoring Committee and, if appropriate, by it to the Chairmen of TELEFÓNICA and TERRA. If the Monitoring Committee has not solved the controversy within the maximum term of one month from the date on which its intervention is requested by either of the Parties, the Monitoring Committee itself shall submit the matter to the Chairmen of TELEFÓNICA and TERRA within the 10 calendar days following expiry of the term mentioned. Only if no agreement is reached by the Chairmen, within the fifteen working days following the date on which the controversy is submitted to them, shall the arbitration proceedings foreseen in Section 15.2 be initiated.
15.2. The Parties hereby grant this Section the status of an arbitration clause, pursuant to all the terms established and recognised in article 6.1 in relation to 5.1 of the current Act 36/1988 of 5th December, and submit all matters that may arise as to the validity, effectiveness, interpretation and/or performance of this Framework Agreement to the finding issued by an Arbitration Tribunal formed by three arbitrators, who shall decide on the matters of litigation subject to Law, and the Parties undertake to attend and abide by the finding issued.
The Arbitration Tribunal shall be formed by appointing an arbitrator for the plaintiff and another arbitrator for the defendant, both of whom shall then appoint a third arbitrator by reaching an agreement within the term stated in the following paragraph. Should an agreement not be reached within the said term, His Honour the Dean of the Honourable Law Association of Madrid, or the Lawyer of that Association chosen by the Dean, shall be automatically appointed.
In order to constitute the arbitration tribunal and carry out the arbitration, the Parties shall proceed as follows:
(i) The Party calling the arbitration shall serve authentic notice to the others stating the matter of litigation and appointing an arbitrator, of whose acceptance it has record.
(ii) Within the unextendable term of eight calendar days from the date on which notice is received, the Parties summoned must state whether they accept the matter of litigation raised or extend it, and in all cases must appoint another arbitrator, of whose acceptance it has record.
(iii) The Party calling the arbitration shall have a further eight calendar days to accept or reject, fully or partially, the extension of the matter of litigation. If it accepts, the other Party shall be served authentic notice. If it does not accept the extension, the matter of litigation shall be considered just as it was raised, notwithstanding the right the other party has to bring further arbitration.
(iv) When the said term for extension of the arbitration has elapsed, with or without agreement between the Parties, both arbitrators shall proceed to appoint a third one, or if they do not agree, shall appoint the Dean of the Honourable Law Association of Madrid, who may, at his discretion, accept the appointment or appoint a third arbitrator.
(v) If more than ninety calendar days elapse from the notice calling the arbitration as provided in (i) hereabove and if the Arbitration Tribunal has not been formed for any reason, the Parties may resort to judicial formalisation of the arbitration through the jurisdictional intervention foreseen in articles 38 and following of Act 36/1988 of 5th December.
(vi) The Party that may be proven to be to blame for it being impossible to constitute the Arbitration Tribunal shall pay the other Party, in settlement of an accumulative conventional penalty, that is to say, notwithstanding other liabilities which it may have incurred, the sum of 6,000 euros per day until the date on which the cause leading to such an impossibility is removed.
(vii) Madrid, the capital, is named the venue for the arbitration for all the effects of announcement thereof.
The procedure shall be determined by the Arbitration Tribunal according to the applicable imperative regulations and the organisational principles set forth in articles 21 and following of Act 36/1988 of 5th December. In any case, the Parties shall declare Spanish Law by common agreement to be the applicable legislation for resolution of any of the controversies that may arise in all the cases in which this arbitration clause may be applicable.
As to the content, form and term of the finding, the terms of articles 30 to 37 of the current Act 36/1988 of 5th December shall be applicable, and in all matters not foreseen by this Arbitration Section, the terms of the said Act shall also be applicable.
In all matters that may not be resolved by arbitration, as well as in judicial formalisation of this, if necessary, the Parties expressly submit, renouncing any other Jurisdiction, to the Tribunals of Madrid, the capital.

SIXTEEN. EXHIBITS.
The five Exhibits to this Framework Agreement, a list of which is provided hereunder, form an integral part hereof to all effects, and it is understood by the Parties that all concepts included therein are equivalent to those included within this main body of the Framework Agreement, and, therefore, do not alter them in any way.
List of Exhibits:
Exhibit I
Exhibit II: Value Added Services, Portal and Related Infrastructure
Exhibit III: Corporate Services and Assets
Exhibit IV: Advertising
Exhibit V: Consulting

In witness whereof, the Parties sign double copies of this Framework Agreement, to a sole end, in the place and on the date first written above.

TELEFÓNICA, S.A.
TERRA NETWORKS, S.A.

EXHIBIT I [***]
EXHIBIT II [***]
EXHIBIT III [***]
EXHIBIT IV [***]
EXHIBIT V [***]
correct_subsidiary_00108
FactBench
2
22
https://www.asianinvestor.net/article/new-cfo-for-lycos-asia/29392
en
New CFO for Lycos Asia
https://cdn.i.haymarketmedia.asia/?n=asian-investor%2Fcontent%2Fno-image.png&c=1&h=675&q=75&v=20190520&w=1200
https://cdn.i.haymarketmedia.asia/?n=asian-investor%2Fcontent%2Fno-image.png&c=1&h=675&q=75&v=20190520&w=1200
[ "https://www.facebook.com/tr?id=298747275402724&ev=PageView&noscript=1", "https://www.asianinvestor.net/images/AI-Logo.svg", "https://www.asianinvestor.net/images/AI-Icon.svg", "https://www.asianinvestor.net/images/AI-Logo.svg", "https://www.asianinvestor.net/images/AI-Icon.svg", "https://www.asianinvestor.net/assets/articlePics/June_Wong_v2.JPG", "https://www.asianinvestor.net/images/AI-Logo.svg", "https://www.asianinvestor.net/images/haymarket-Logo.svg" ]
[]
[]
[ "asia", "cfo", "for", "lycos", "new" ]
null
[]
2001-03-21T00:00:00
Internet company Lycos Asia has appointed June Wong as its new chief financial officer.
https://cdn.i.haymarketmedia.asia/?n=%2fasian-investor%2fseo%2fAI-icon.png&h=1024&w=1024&q=100&v=20190520&c=1
AsianInvestor
https://www.asianinvestor.net/article/new-cfo-for-lycos-asia/29392
In her new position, Wong will be responsible for the overall strategic financial planning, accounting, budgeting, treasury and tax functions of Lycos Asia's network, which currently has a presence in nine countries: Singapore, Malaysia, Hong Kong, Taiwan, China, Indonesia, India, the Philippines and Thailand. Wong will also oversee legal affairs for the company. Before joining Lycos, Wong (pictured right) spent five years with EasyCall International Limited, where she held the position of CFO. Wong also spent 14 years with Esso Singapore and five years with the Inland Revenue Department of Singapore (now known as the Inland Revenue Authority of Singapore). Wong graduated with an Honours Bachelor of Accountancy degree from the University of Singapore in 1977. Lycos Asia is a $50 million joint venture between Terra Lycos and Singapore Telecommunications Ltd formed in September 1999.
8585
dbpedia
2
7
https://github.com/timsutton/python-macadmin-tools
en
Python-based Mac sysadmin tools
https://opengraph.githubassets.com/8c110b8c70282fd1d08d6f744c24513811d6277b3107f7ac6af261e1430ebbd0/timsutton/python-macadmin-tools
https://opengraph.githubassets.com/8c110b8c70282fd1d08d6f744c24513811d6277b3107f7ac6af261e1430ebbd0/timsutton/python-macadmin-tools
[]
[]
[]
[ "" ]
null
[]
null
List of open-source Python-based Mac sysadmin tools - timsutton/python-macadmin-tools
en
https://github.com/fluidicon.png
GitHub
https://github.com/timsutton/python-macadmin-tools
Python Macadmin Tools This repository aims to collect a list of open-source Python-based tools for Mac systems administration tasks. Why is this list limited to Python? Why not include all projects in this space? Python is an especially popular language among Mac sysadmins; this restriction is partly so that those learning Python for Mac-specific tasks have a mostly-complete list of known code and approaches from which to learn. It's also to help those more experienced with Python to discover projects that they may be able to adapt, extend and/or contribute to. See something missing or incorrect? Please feel free to edit or clone this file and submit a pull request. This repo was inspired by R.I. Pienaar's popular free-for-dev repo. Table of Contents Munki Imagr Servers Deployment automation, imaging, packaging Client-side management: utilities Client-side management: libraries and modules Mobile Device Management (MDM) Misc. utilities and modules Scripts and gists Configuration management autopromoter - Automatically promote (or demote) Munki pkginfo catalogs. BananaEndocarp - BananaEndocarp is a scripted GUI for interacting with MunkiWebAdmin2's API, for creating per-machine manifests. BananaPeels - A framework for testing the deployement of packages via Munki wrapped in a CLI tool. Requires VMWare Fusion. CloudFront-Middleware - Securely access a munki repo with Amazon CloudFront. Moscargo - Flask-based Munki repo browser used for listing and downloading current versions of curated packages. MunkiCatalogPromote - Promotes Munki pkginfo catalogs that haven't been promoted in X number of days. MunkiGenericIcons - Copies your own custom Generic.png to any Munki items missing a corresponding icon. MunkiModulePackager - CLI tool for downloading and packaging PyPi module sources for distribution via Munki. Munki Enrollment Server - A server that works in coordination with a GUI client that provides a method of enrolling a Mac with Munki for certificate-based communication and a custom manifest. Munki project - Managed software installations for Mac clients. Supports all popular software distribution formats. This is the de facto project repository. Munki Promote - Another script for promoting items from one catalog to another. Munki Sysadmin Usability Improvement Toolkit - CLI tools for maintaining workflows for managing Munki catalogs along with AutoPkg. MunkiWebAdmin - A Django-based reporting app for Munki - support for licensing, manifest editing. Munki-Do - A fork of MunkiWebAdmin with many new repo-editing features. munki-facts - A framework for "admin-provided conditionals" for Munki. munki-rebrand - Scripts used by University of Oxford IT Services to rebrand Munki. munki-staging - A fork of the munki-trello project with several additional major features. munki-trello - A script that utilises a Trello board to manage the promotion of Munki items through development to testing to production catalogs. OldMunkiPackages - Script to automatically remove older versions of packages that share the same catalogs. PrinterGenerator - Generate specific 'nopkg' pkginfos for printer configurations. printer-pkginfo - Another script for generating specific 'nopkg' pkginfos for printer configurations. Sal - Another Django-based reporting app for Munki, integrates with Facter facts on clients. Simian - Custom Munki service based on GAE, by Google. Spruce for Munki - Generates lists/reports, including orphaned icons or unused products. 
TweetCatalogUpdates - Python script that watches for catalog changes in your munki respository, and tweets them. Imagr - Mac app that performs imaging and deployment workflows fetched from a remote server, built with PyObjC. Imagr Server - Imagr reporting server built on Django. ImagrConfigCreator - Interactive script for generating or editing Imagr workflow plists. bsdpy - BSDP server with support for multiple netboot images, model/MAC filtering and an API. Crypt - Client and server for a Django-based Filevault key escrow solution. Macnamer - Django-based solution for managing Mac computer names. Margarita - Flask-based web interface for Reposado. pybsdpy - Another BSDP server. Reposado - Replacement for Apple's Software Update Service, supports multiple 'branches' of catalogs and offering cached updates no longer offered by Apple. Deployment automation, imaging, packaging aamporter - Tool for automating the download and importing of Adobe CS/CC updates into Munki. AutoDMG - Mac app to create never-booted, restorable OS X system images, optionally with system updates and additional packages/applications, built with PyObjC. AutoNBI - Tool for automated creation of Netboot image bundles using System Image Utility's automation tools. AutoPkg - Tool and community for automating common deployment tasks using sharable 'recipes', for example: discovering new application updates, preparing them for deployment, importing into popular management platforms. Brigadier - Tool for fetch and install model-specific Boot Camp images, can be used to bootstrap drivers during Windows deployment. can_haz_image - Tool for creating never-booted OS X system images with additional packages. createOSXInstallPkg - Tool for converting an OS X installer app/ESD to a package that can trigger the OS X install on the next boot, optionally with additional packages added in the install. CreateUserPkg - Mac app to create a package that installs or updates a user on an OS X system, built with PyObjC. easy_rider - Automatically create overrides for a list of AutoPkg recipes, using current production Munki pkginfo and templates to override final recipe. first-boot-pkg - Tool for creating a single package that installs a series of packages automatically upon first boot. appleLoops - Utility for downloading essential and optional audio content for Apple GarageBand, Logic Pro X, and MainStage 3. JSSImporter - Framework for connecting AutoPkg to JSS, for administrators running JAMF's Casper Suite. JSSRecipeCreator - Tool that enables Casper administrators to quickly create JSSImporter-compatbile AutoPkg recipes. MacNamer - Combination of a Django web app and a companion script to run on client Macs for automatically setting computer names. make-adobe-cc-license-pkg - Tool for building packages and Munki pkginfos for CC for Teams device and Enterprise serial licenses. make-profile-pkg - Convert a Configuration Profile to an installer package that can be installed to both booted and non-booted volumes. munkipkg - Tool for building packages in a consistent, repeatable manner from source files and scripts in a project directory. quickpkg - Quickly and easily builds a one-off package from an installed application, a disk image, or a zip file. Recipe Robot - A Python script and companion Mac app that is able to automatically generate AutoPkg recipes. stew - Creation of never-booted, restorable OS X system images with additional packages. vfuse - Tool for converting an OS X system DMG to a VMware Fusion VM. 
Client-side management: utilities auto_logout - PyObjC app to automatically log out users, designed for Mac computer labs. customdisplayprofiles - Programmatic configuration of display ColorSync profiles. dockutil - Programmatic access to a user's dock. Extinguish - Generates profiles that disable Sparkle updates for specified apps. installapplications - A tool for dynamic use of InstallApplication with DEP. LoginLog - Cocoa/PyObjC app that display a log of your choice over the loginwindow, useful during deployment tasks. NCUtil - Programmatic access to Notification Center via direct manipulation of the NC database. Nomadize - Tool to help move a local account or home folder to an Active Directory Mobile account. offset - Script and launchd combo for executing admin-defined scripts at logout (based on Outset). outset - Script and launchd combo for executing admin-defined scripts after logins and startup. OutsetDockProfiler - Script that creates a package to use with Outset that will install a user-level profile for a specific user of your choice. PredicateInstaller - Programmatic invocation of Software Update client tasks such as printer drivers, dictation voices, CLI tools, Boot Camp drivers, via the private SoftwareUpdate framework. Privacy Services Manager - Programmatic access to privacy, location, etc. services via direct manipulation of the TCC database. pyLoginItems - Management of a user's login items list via PyObjC. Service Discovery Tool - CLI diagnostics tool for reporting DHCP and NetBoot services. Client-side management: libraries and modules ChromeBookmarkEditor - Python module for easily adding, removing, and moving positions of bookmarks on the Chrome bookmark menu in the context of the logged in user. DockEditor - Python module for easily adding, removing, and moving positions of Dock items in the context of the logged in user. Facebook IT-CPE - A suite of tools that Facebook uses to manage their fleet of over 10,000 client machines. FinderSidebarEditor - Python module for programmatically editing the Favorites entries of the Finder sidebar. gpymacutil - Vast library of Python modules and tools for client management developed by Google. MacModelShelf - Returns human-readable Mac model names when given a serial number or model code. OSXcollector - A forensic evidence collection & analysis toolkit for OS X, developed by Yelp. pinpoint - A python script for finding Macs using the CoreLocation framework. pyfacts - Returns various 'facts' about a Mac. PyMacAdmin and crankd - Collection of Python utilities for interfacing to directory services and system configuration state changes, Leopard-era, developed at Google. SavingThrow - Returns information on whether a Mac has adware/malware installed, and includes an option for automatic removal. SafariBookmarkEditor - Python module for easily adding, removing, and moving positions of Safari bookmarks in the context of the currently logged in user. Stethoscope - Web application that collects information for a given user's devices and gives them clear and specific recommendations for securing their systems. U. of Utah Marriott Library Management Tools - Python module for client management. Zentral - Framework that allows administrators to configure automatic actions based on changes detected by osquery. Mobile Device Management (MDM) Commandment - MDM server with support for managing iOS and OS X devices implemented in Python. DEPy - Python module for interacting with Apple's DEP service. 
mdmvendorsign - Create a CSR as a "vendor" of Apple's MDM push notification service. mk_pkg_manifest.py - Script for creating an Apple software distribution manifest for an Apple pkg installer. Misc. utilities and modules APInfo - Obtain information about iOS/macOS applications and optionally output the results to Slack. appleseed - Automate downloading os x seed packages. edify - Stores a customizable library of command line syntax examples, with short descriptions. JSS Import - Pulls data from Casper 9 to a Postgres database for purpose of importing into WebHelpDesk. (Not to be confused with JSSImporter.) JSS Asset Tag Importer - Allows Casper administrators to quickly import asset tags into their JSS inventory. mcxToProfile - Convert preference plists and MCX nodes to Configuration Profiles for OS X management. ProfileSigner - A script that will encrypt and/or sign a .mobileconfig profile. pyMacWarranty - Retrieve warranty information given a Mac's serial number, estimates of manufacture date info and more. pyMASreceipt - Module for parsing Mac App Store receipts files. Python-JSS - Library that allows administrators to interact with a JSS using Python. Included with JSSImporter. Serveralerts - Manage the Server alerts DB of Server.app. Service Discovery Tool - Broadcasts DHCP Request and BSDP Inform packets on the local network and reports reponses for NetBoot/DHCP diagnostics. vserv - Service to monitor one or more vmx path[s] and restart the vmx[s] if necessary. warranty - Another warranty information retrieval script. Xcode Cocoa-Python Templates - Xcode 6 templates for Cocoa-Python development: Also Xcode 5, Xcode 4. py-gsxws - Library for communicating with Apple's GSX API SimplePySSH - Module for executing and reading output from simple shell commands on remote machines via SSH using only built-in modules. precache - Used to cache available Apple updates into an OS X Server running the Caching Service. Scripts and gists Graham Gilbert - Client management, Munki, Puppet server automation Hannes Juutilainen - Collection of client attributes, client management and meta-packaging admin tasks Michael Lynn - Many small scripts and modules demonstrating the use of PyObjC and ctypes for native use of OS X system frameworks within Python. Configuration management salt-osx - SaltStack grains, modules, and states to manage OS X, largely using PyObjC and ctypes. stronghold - Easily configure MacOS security settings from the terminal. U. of Utah Marriott Library Firmware Password Manager - Python script to automate the management of firmware passwords.
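Editor's note: many of the client-side utilities catalogued above follow the same basic pattern of wrapping a built-in macOS command or framework in a small Python helper. As a rough, hypothetical illustration of that pattern only (the function name and layout below are made up and are not taken from any listed project), this sketch reads the Mac's hardware model identifier by calling the system's sysctl command:

#!/usr/bin/env python3
# Hypothetical sketch: wrap a built-in macOS command in a small Python helper.
# Reads the hardware model identifier via `sysctl -n hw.model`.
import subprocess

def hardware_model() -> str:
    """Return the model identifier reported by the kernel, e.g. 'MacBookPro16,1'."""
    result = subprocess.run(
        ["/usr/sbin/sysctl", "-n", "hw.model"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(hardware_model())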
8585
dbpedia
0
14
https://www.neverhadtofight.com/blog/2021/12/22/autopkg-storage-on-external-drive/
en
AutoPKG storage on external drive
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://www.neverhadtofight.com/wp-content/uploads/2013/07/cropped-rectangle-logo.png", "https://www.neverhadtofight.com/wp-content/uploads/2013/07/cropped-rectangle-logo.png", "https://secure.gravatar.com/avatar/53902e4b92ddfcab95b30fa51f5653fc?s=50&d=monsterid&r=x" ]
[]
[]
[ "" ]
null
[]
2021-12-22T00:00:00
Ran into a quick problem that I thought I’d quickly blog about. AutoPKG’s data folders are all sitting on an external drive. First off, “Ignore ownership on this volume” was…
en
Never Had To Fight
https://www.neverhadtofight.com/blog/2021/12/22/autopkg-storage-on-external-drive/
Ran into a quick problem that I thought I’d quickly blog about. AutoPKG’s data folders are all sitting on an external drive. First off, “Ignore ownership on this volume” was checked off, and AutoPKG doesn’t like that. That was a first for me, I’ve always had AutoPKG running on the internal drive. I turned that on, gave myself ownership and read & write and then propagated permissions down. On the next run I got: Error in local.munki.Zoom: Processor: PkgCreator: Error: Coudln't copy pkgroot from /Volumes/path/to/Cache/local.munki.Zoom/Zoom to /tmp/pathto/Zoom: ditto /Volumes/path/to/Cache/local.munki.Zoom/Zoom/.: Operation not permitted Got some quick help from MacAdmins #AutoPKG channel. Suggested I give python full disk access. That solved the problem. Python was already in the PPPC panel for Full Disk Access, so I checked it off, but if someone needs to find AutoPKG’s python, at time of writing, it lives at /Library/AutoPkg/Python3/Python.framework/Versions/3.7/bin/python3.7
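For anyone hitting the same wall, one quick way to separate a plain POSIX-permissions or ownership problem from a TCC (Full Disk Access) denial is to test whether the account running AutoPkg can actually create a file inside the cache directory on the external volume. A minimal sketch, assuming a made-up cache path that you would swap for your own; note that a TCC denial can still surface as "Operation not permitted" even when this test passes, so it is only a partial check:

# Minimal sketch: verify the current user can write inside the AutoPkg cache
# directory on the external drive. CACHE is a hypothetical path; adjust it.
import os
import tempfile

CACHE = "/Volumes/ExternalDrive/AutoPkg/Cache"  # hypothetical, not the path from the post

def can_write(path: str) -> bool:
    """Try to create and remove a temporary file inside `path`."""
    try:
        fd, tmp = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(tmp)
        return True
    except OSError as err:
        print(f"write test failed: {err}")
        return False

if __name__ == "__main__":
    print("writable" if can_write(CACHE) else "not writable -- check ownership and Full Disk Access")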
8585
dbpedia
3
16
https://docs.python.org/3.11/distutils/setupscript.html
en
2. Writing the Setup Script
https://docs.python.org/…tic/og-image.png
https://docs.python.org/…tic/og-image.png
[ "https://docs.python.org/3.11/_static/py.svg", "https://docs.python.org/3.11/_static/py.svg", "https://docs.python.org/3.11/_static/py.svg" ]
[]
[]
[ "" ]
null
[]
null
The setup script is the centre of all activity in building, distributing, and installing modules using the Distutils. The main purpose of the setup script is to describe your module distribution to...
en
../_static/py.svg
Python documentation
https://docs.python.org/3/distutils/setupscript.html
2. Writing the Setup Script¶ Note This document is being retained solely until the setuptools documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html independently covers all of the relevant information currently included here. The setup script is the centre of all activity in building, distributing, and installing modules using the Distutils. The main purpose of the setup script is to describe your module distribution to the Distutils, so that the various commands that operate on your modules do the right thing. As we saw in section A Simple Example above, the setup script consists mainly of a call to setup(), and most information supplied to the Distutils by the module developer is supplied as keyword arguments to setup(). Here’s a slightly more involved example, which we’ll follow for the next couple of sections: the Distutils’ own setup script. (Keep in mind that although the Distutils are included with Python 1.6 and later, they also have an independent existence so that Python 1.5.2 users can use them to install other module distributions. The Distutils’ own setup script, shown here, is used to install the package into Python 1.5.2.) #!/usr/bin/env python from distutils.core import setup setup(name='Distutils', version='1.0', description='Python Distribution Utilities', author='Greg Ward', author_email='gward@python.net', url='https://www.python.org/sigs/distutils-sig/', packages=['distutils', 'distutils.command'], ) There are only two differences between this and the trivial one-file distribution presented in section A Simple Example: more metadata, and the specification of pure Python modules by package, rather than by module. This is important since the Distutils consist of a couple of dozen modules split into (so far) two packages; an explicit list of every module would be tedious to generate and difficult to maintain. For more information on the additional meta-data, see section Additional meta-data. Note that any pathnames (files or directories) supplied in the setup script should be written using the Unix convention, i.e. slash-separated. The Distutils will take care of converting this platform-neutral representation into whatever is appropriate on your current platform before actually using the pathname. This makes your setup script portable across operating systems, which of course is one of the major goals of the Distutils. In this spirit, all pathnames in this document are slash-separated. This, of course, only applies to pathnames given to Distutils functions. If you, for example, use standard Python functions such as glob.glob() or os.listdir() to specify files, you should be careful to write portable code instead of hardcoding path separators: glob.glob(os.path.join('mydir', 'subdir', '*.html')) os.listdir(os.path.join('mydir', 'subdir')) 2.1. Listing whole packages¶ The packages option tells the Distutils to process (build, distribute, install, etc.) all pure Python modules found in each package mentioned in the packages list. In order to do this, of course, there has to be a correspondence between package names and directories in the filesystem. The default correspondence is the most obvious one, i.e. package distutils is found in the directory distutils relative to the distribution root. Thus, when you say packages = ['foo'] in your setup script, you are promising that the Distutils will find a file foo/__init__.py (which might be spelled differently on your system, but you get the idea) relative to the directory where your setup script lives. 
If you break this promise, the Distutils will issue a warning but still process the broken package anyway. If you use a different convention to lay out your source directory, that’s no problem: you just have to supply the package_dir option to tell the Distutils about your convention. For example, say you keep all Python source under lib, so that modules in the “root package” (i.e., not in any package at all) are in lib, modules in the foo package are in lib/foo, and so forth. Then you would put package_dir = {'': 'lib'} in your setup script. The keys to this dictionary are package names, and an empty package name stands for the root package. The values are directory names relative to your distribution root. In this case, when you say packages = ['foo'], you are promising that the file lib/foo/__init__.py exists. Another possible convention is to put the foo package right in lib, the foo.bar package in lib/bar, etc. This would be written in the setup script as package_dir = {'foo': 'lib'} A package: dir entry in the package_dir dictionary implicitly applies to all packages below package, so the foo.bar case is automatically handled here. In this example, having packages = ['foo', 'foo.bar'] tells the Distutils to look for lib/__init__.py and lib/bar/__init__.py. (Keep in mind that although package_dir applies recursively, you must explicitly list all packages in packages: the Distutils will not recursively scan your source tree looking for any directory with an __init__.py file.) 2.2. Listing individual modules¶ For a small module distribution, you might prefer to list all modules rather than listing packages—especially the case of a single module that goes in the “root package” (i.e., no package at all). This simplest case was shown in section A Simple Example; here is a slightly more involved example: py_modules = ['mod1', 'pkg.mod2'] This describes two modules, one of them in the “root” package, the other in the pkg package. Again, the default package/directory layout implies that these two modules can be found in mod1.py and pkg/mod2.py, and that pkg/__init__.py exists as well. And again, you can override the package/directory correspondence using the package_dir option. 2.3. Describing extension modules¶ Just as writing Python extension modules is a bit more complicated than writing pure Python modules, describing them to the Distutils is a bit more complicated. Unlike pure modules, it’s not enough just to list modules or packages and expect the Distutils to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.). All of this is done through another keyword argument to setup(), the ext_modules option. ext_modules is just a list of Extension instances, each of which describes a single extension module. Suppose your distribution includes a single extension, called foo and implemented by foo.c. If no additional instructions to the compiler/linker are needed, describing this extension is quite simple: Extension('foo', ['foo.c']) The Extension class can be imported from distutils.core along with setup(). 
Thus, the setup script for a module distribution that contains only this one extension and nothing else might be: from distutils.core import setup, Extension setup(name='foo', version='1.0', ext_modules=[Extension('foo', ['foo.c'])], ) The Extension class (actually, the underlying extension-building machinery implemented by the build_ext command) supports a great deal of flexibility in describing Python extensions, which is explained in the following sections. 2.3.1. Extension names and packages¶ The first argument to the Extension constructor is always the name of the extension, including any package names. For example, Extension('foo', ['src/foo1.c', 'src/foo2.c']) describes an extension that lives in the root package, while Extension('pkg.foo', ['src/foo1.c', 'src/foo2.c']) describes the same extension in the pkg package. The source files and resulting object code are identical in both cases; the only difference is where in the filesystem (and therefore where in Python’s namespace hierarchy) the resulting extension lives. If you have a number of extensions all in the same package (or all under the same base package), use the ext_package keyword argument to setup(). For example, setup(..., ext_package='pkg', ext_modules=[Extension('foo', ['foo.c']), Extension('subpkg.bar', ['bar.c'])], ) will compile foo.c to the extension pkg.foo, and bar.c to pkg.subpkg.bar. 2.3.2. Extension source files¶ The second argument to the Extension constructor is a list of source files. Since the Distutils currently only support C, C++, and Objective-C extensions, these are normally C/C++/Objective-C source files. (Be sure to use appropriate extensions to distinguish C++ source files: .cc and .cpp seem to be recognized by both Unix and Windows compilers.) However, you can also include SWIG interface (.i) files in the list; the build_ext command knows how to deal with SWIG extensions: it will run SWIG on the interface file and compile the resulting C/C++ file into your extension. This warning notwithstanding, options to SWIG can be currently passed like this: setup(..., ext_modules=[Extension('_foo', ['foo.i'], swig_opts=['-modern', '-I../include'])], py_modules=['foo'], ) Or on the commandline like this: > python setup.py build_ext --swig-opts="-modern -I../include" On some platforms, you can include non-source files that are processed by the compiler and included in your extension. Currently, this just means Windows message text (.mc) files and resource definition (.rc) files for Visual C++. These will be compiled to binary resource (.res) files and linked into the executable. 2.3.3. Preprocessor options¶ Three optional arguments to Extension will help if you need to specify include directories to search or preprocessor macros to define/undefine: include_dirs, define_macros, and undef_macros. 
For example, if your extension requires header files in the include directory under your distribution root, use the include_dirs option: Extension('foo', ['foo.c'], include_dirs=['include']) You can specify absolute directories there; if you know that your extension will only be built on Unix systems with X11R6 installed to /usr, you can get away with Extension('foo', ['foo.c'], include_dirs=['/usr/include/X11']) You should avoid this sort of non-portable usage if you plan to distribute your code: it’s probably better to write C code like #include <X11/Xlib.h> If you need to include header files from some other Python extension, you can take advantage of the fact that header files are installed in a consistent way by the Distutils install_headers command. For example, the Numerical Python header files are installed (on a standard Unix installation) to /usr/local/include/python1.5/Numerical. (The exact location will differ according to your platform and Python installation.) Since the Python include directory—/usr/local/include/python1.5 in this case—is always included in the search path when building Python extensions, the best approach is to write C code like #include <Numerical/arrayobject.h> If you must put the Numerical include directory right into your header search path, though, you can find that directory using the Distutils distutils.sysconfig module: from distutils.sysconfig import get_python_inc incdir = os.path.join(get_python_inc(plat_specific=1), 'Numerical') setup(..., Extension(..., include_dirs=[incdir]), ) Even though this is quite portable—it will work on any Python installation, regardless of platform—it’s probably easier to just write your C code in the sensible way. You can define and undefine pre-processor macros with the define_macros and undef_macros options. define_macros takes a list of (name, value) tuples, where name is the name of the macro to define (a string) and value is its value: either a string or None. (Defining a macro FOO to None is the equivalent of a bare #define FOO in your C source: with most compilers, this sets FOO to the string 1.) undef_macros is just a list of macros to undefine. For example: Extension(..., define_macros=[('NDEBUG', '1'), ('HAVE_STRFTIME', None)], undef_macros=['HAVE_FOO', 'HAVE_BAR']) is the equivalent of having this at the top of every C source file: #define NDEBUG 1 #define HAVE_STRFTIME #undef HAVE_FOO #undef HAVE_BAR 2.3.4. Library options¶ You can also specify the libraries to link against when building your extension, and the directories to search for those libraries. The libraries option is a list of libraries to link against, library_dirs is a list of directories to search for libraries at link-time, and runtime_library_dirs is a list of directories to search for shared (dynamically loaded) libraries at run-time. For example, if you need to link against libraries known to be in the standard library search path on target systems Extension(..., libraries=['gdbm', 'readline']) If you need to link with libraries in a non-standard location, you’ll have to include the location in library_dirs: Extension(..., library_dirs=['/usr/X11R6/lib'], libraries=['X11', 'Xt']) (Again, this sort of non-portable construct should be avoided if you intend to distribute your code.) 2.3.5. Other options¶ There are still some other options which can be used to handle special cases. 
The optional option is a boolean; if it is true, a build failure in the extension will not abort the build process, but instead simply not install the failing extension. The extra_objects option is a list of object files to be passed to the linker. These files must not have extensions, as the default extension for the compiler is used. extra_compile_args and extra_link_args can be used to specify additional command line options for the respective compiler and linker command lines. export_symbols is only useful on Windows. It can contain a list of symbols (functions or variables) to be exported. This option is not needed when building compiled extensions: Distutils will automatically add initmodule to the list of exported symbols. The depends option is a list of files that the extension depends on (for example header files). The build command will call the compiler on the sources to rebuild extension if any on this files has been modified since the previous build. 2.4. Relationships between Distributions and Packages¶ A distribution may relate to packages in three specific ways: It can require packages or modules. It can provide packages or modules. It can obsolete packages or modules. These relationships can be specified using keyword arguments to the distutils.core.setup() function. Dependencies on other Python modules and packages can be specified by supplying the requires keyword argument to setup(). The value must be a list of strings. Each string specifies a package that is required, and optionally what versions are sufficient. To specify that any version of a module or package is required, the string should consist entirely of the module or package name. Examples include 'mymodule' and 'xml.parsers.expat'. If specific versions are required, a sequence of qualifiers can be supplied in parentheses. Each qualifier may consist of a comparison operator and a version number. The accepted comparison operators are: < > == <= >= != These can be combined by using multiple qualifiers separated by commas (and optional whitespace). In this case, all of the qualifiers must be matched; a logical AND is used to combine the evaluations. Let’s look at a bunch of examples: Requires Expression Explanation Now that we can specify dependencies, we also need to be able to specify what we provide that other distributions can require. This is done using the provides keyword argument to setup(). The value for this keyword is a list of strings, each of which names a Python module or package, and optionally identifies the version. If the version is not specified, it is assumed to match that of the distribution. Some examples: Provides Expression Explanation A package can declare that it obsoletes other packages using the obsoletes keyword argument. The value for this is similar to that of the requires keyword: a list of strings giving module or package specifiers. Each specifier consists of a module or package name optionally followed by one or more version qualifiers. Version qualifiers are given in parentheses after the module or package name. The versions identified by the qualifiers are those that are obsoleted by the distribution being described. If no qualifiers are given, all versions of the named module or package are understood to be obsoleted. 2.5. Installing Scripts¶ So far we have been dealing with pure and non-pure Python modules, which are usually not run by themselves but imported by scripts. Scripts are files containing Python source code, intended to be started from the command line. 
8585
dbpedia
0
43
https://macadmins.psu.edu/workshops/
en
Workshops
https://macadmins.psu.ed…dd5c06f66b58.png
https://macadmins.psu.ed…dd5c06f66b58.png
[ "https://macadmins.psu.edu/files/2015/10/dmtbanner3.png" ]
[]
[]
[ "" ]
null
[]
2014-03-03T21:07:57+00:00
The MacAdmins Conference registration includes a day of workshops, held the first day of the conference. In 2024, this will be July 9th!  Breakfast, lunch, light snacks, and dinner will be served d…
en
https://macadmins.psu.ed…f66b58-32x32.png
MacAdmins Conference
https://macadmins.psu.edu/workshops/
The MacAdmins Conference registration includes a day of workshops, held the first day of the conference. In 2024, this will be July 9th! Breakfast, lunch, light snacks, and dinner will be served during the day. These workshops are built to help attendees gain a solid foundation in a focused aspect of system administration. Full Day Workshops: Dive into PowerShell on the Mac Homelab 101: the alchemy, art, and science of a home lab Learn How You Manage Mac Clients with GitOps – a Hands-On Walkthrough Managing Macs for n00bs Half-Day Workshops: (Offered once either in the morning or afternoon, actual times TBD) Create autopkg recipes for software from scratch Get started with deploying Apple devices Git for Mac Admins ITIL 4 – Is It Right For Me? Create AutoPkg recipes for software from scratch – James Stewart Half Day Workshop (150 min) – Intermediate – Hands-on Learn how to create AutoPkg recipes for software from scratch. Autopkg can distribute software updates from vendor websites to your distribution tool of choice, saving thousands of dollars a week. Being comfortable on the command line is recommended, but previous knowledge of AutoPkg is optional. While AutoPkg is written in Python, writing Python code is not required to write recipes and will not be covered. Recipes use YAML text files. Any prerequisites: Yes – but optional. A laptop with Visual Studio Code and Python 3.9+ (Mac or Win or Linux)Watch previous talk ( Using AutoPKG for Windows Software 2.0 ) https://www.youtube.com/watch?v=BDdcXtjv6y4 Dive into PowerShell on the Mac – John Welch Full Day Workshop (300 min) – Intermediate – Hands-on A workshop that will help you learn about Microsoft’s cross-platform scripting language, PowerShell, and how Mac admins can make use of its features to make their day so much easier Any prerequisites: Yes – it’s required. A reasonably current MacBook, Visual Studio Code, and the current version of PowerShell for macOS. Scripting experience, especially in non-Mac environments is a big help. Get started with deploying Apple devices – Apple Education Half Day Workshop (150 min) – Fundamental – Presentation This session is ideal for admins who are new to deploying and managing Apple devices, expanding from a pilot, or needing a refresh in IT best practices. Join us to learn the basics of zero-touch deployment, device configuration, federated authentication, managing software updates, and more. Any prerequisites: None needed. Git for Mac Admins – Weldon Dodd Half Day Workshop (150 min) – Fundamental – Hands-on Git is the most popular distributed version control system on the planet for software development. It also has broad application for Mac admins to use with managing scripts, configuration profiles, AutoPkg recipes, munki configs, and other files that form the backbone of admin tooling. In this workshop you’ll learn the basics of installing git on your computer, using git locally to manage version control, and using git with remote repositories (think GitHub, GitLab, etc.). Any prerequisites: Yes – but optional. Attendees will need a computer with the latest release version of Xcode installed and should be conversant with using Terminal.app. Familiarity with Bash or ZSH will be helpful. Homelab 101: the alchemy, art, and science of a home lab – Adam Wickert, Bryan Heinz Full Day Workshop (300 min) – All Levels – Hands-on In this workshop, we’ll work through setting up a small lab environment for testing out new software and services. 
We’ll work through different virtualization environments such as XCP-ng, Proxmox, Apple’s hypervisor framework, and containerization. We’ll discuss hardware and networking. Finally, we’ll be working through setting up a service or two using these methods. Any prerequisites: None needed. ITIL 4 – Is It Right For Me? – Pam Lefkowitz Half Day Workshop (150 min) – All Levels – Presentation Because you don’t have enough certifications yet, we give you ITIL 4. Normally taught over 20-ish hours, ITIL is a hefty course of study. In this workshop we will do a high-level overview of the framework. NOTE: this workshop is NOT an official ITIL 4 Foundations training course. Any prerequisites: None needed. Learn How You Manage Mac Clients with GitOps – a Hands-On Walkthrough – Henry Stamerjohann, Éric Falconnier Full Day Workshop (300 min) – Intermediate – Hands-on Are you GitOps curious? In this hands-on workshop we will walk you through your own, fictional rollout of a Mac client based on GitOps workflows: Complete with patch management, software updates and compliance checks. By the end of day, you will have a first-hand impression about what it’s like to enroll and manage Macs with GitOps automations. As we go along, you can learn useful techniques and tools you can apply to your day-to-day, whether GitOps or not. Any prerequisites: Yes – but optional. You do not need to be an expert in Git to follow along, but familiarity with this tool will definitely help. We will edit text files and run some commands in the Terminal, so bring a Mac if you want to participate in the hands-on parts of the workshop. It would be great if you could bring macOS or iOS test devices. Managing Macs for n00bs – Damien Barrett, Robert Hammen, Adam Anklewicz Full Day Workshop (300 min) – Fundamental – Presentation New to managing Macs? This is the workshop for you. We’ll discuss and share basic, intermediate, and some advanced management techniques, tools, and best practices. While there will be info relevant to everyone, this workshop’s audience is for people new to Mac administration. Any prerequisites: None needed.
8585
dbpedia
1
19
https://docs.python.org/3.11/distutils/setupscript.html
en
2. Writing the Setup Script
https://docs.python.org/…tic/og-image.png
https://docs.python.org/…tic/og-image.png
[ "https://docs.python.org/3.11/_static/py.svg", "https://docs.python.org/3.11/_static/py.svg", "https://docs.python.org/3.11/_static/py.svg" ]
[]
[]
[ "" ]
null
[]
null
The setup script is the centre of all activity in building, distributing, and installing modules using the Distutils. The main purpose of the setup script is to describe your module distribution to...
en
../_static/py.svg
Python documentation
https://docs.python.org/3/distutils/setupscript.html
2. Writing the Setup Script¶ Note This document is being retained solely until the setuptools documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html independently covers all of the relevant information currently included here. The setup script is the centre of all activity in building, distributing, and installing modules using the Distutils. The main purpose of the setup script is to describe your module distribution to the Distutils, so that the various commands that operate on your modules do the right thing. As we saw in section A Simple Example above, the setup script consists mainly of a call to setup(), and most information supplied to the Distutils by the module developer is supplied as keyword arguments to setup(). Here’s a slightly more involved example, which we’ll follow for the next couple of sections: the Distutils’ own setup script. (Keep in mind that although the Distutils are included with Python 1.6 and later, they also have an independent existence so that Python 1.5.2 users can use them to install other module distributions. The Distutils’ own setup script, shown here, is used to install the package into Python 1.5.2.) #!/usr/bin/env python from distutils.core import setup setup(name='Distutils', version='1.0', description='Python Distribution Utilities', author='Greg Ward', author_email='gward@python.net', url='https://www.python.org/sigs/distutils-sig/', packages=['distutils', 'distutils.command'], ) There are only two differences between this and the trivial one-file distribution presented in section A Simple Example: more metadata, and the specification of pure Python modules by package, rather than by module. This is important since the Distutils consist of a couple of dozen modules split into (so far) two packages; an explicit list of every module would be tedious to generate and difficult to maintain. For more information on the additional meta-data, see section Additional meta-data. Note that any pathnames (files or directories) supplied in the setup script should be written using the Unix convention, i.e. slash-separated. The Distutils will take care of converting this platform-neutral representation into whatever is appropriate on your current platform before actually using the pathname. This makes your setup script portable across operating systems, which of course is one of the major goals of the Distutils. In this spirit, all pathnames in this document are slash-separated. This, of course, only applies to pathnames given to Distutils functions. If you, for example, use standard Python functions such as glob.glob() or os.listdir() to specify files, you should be careful to write portable code instead of hardcoding path separators: glob.glob(os.path.join('mydir', 'subdir', '*.html')) os.listdir(os.path.join('mydir', 'subdir')) 2.1. Listing whole packages¶ The packages option tells the Distutils to process (build, distribute, install, etc.) all pure Python modules found in each package mentioned in the packages list. In order to do this, of course, there has to be a correspondence between package names and directories in the filesystem. The default correspondence is the most obvious one, i.e. package distutils is found in the directory distutils relative to the distribution root. Thus, when you say packages = ['foo'] in your setup script, you are promising that the Distutils will find a file foo/__init__.py (which might be spelled differently on your system, but you get the idea) relative to the directory where your setup script lives. 
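For instance, the layout promised by packages = ['foo'] might look like this (a minimal sketch; the package name foo and the module bar.py are only illustrations, not taken from any real distribution):

setup.py
foo/
    __init__.py
    bar.py

with the corresponding call being, for example:

setup(...,
      packages=['foo'],
)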
If you break this promise, the Distutils will issue a warning but still process the broken package anyway. If you use a different convention to lay out your source directory, that’s no problem: you just have to supply the package_dir option to tell the Distutils about your convention. For example, say you keep all Python source under lib, so that modules in the “root package” (i.e., not in any package at all) are in lib, modules in the foo package are in lib/foo, and so forth. Then you would put package_dir = {'': 'lib'} in your setup script. The keys to this dictionary are package names, and an empty package name stands for the root package. The values are directory names relative to your distribution root. In this case, when you say packages = ['foo'], you are promising that the file lib/foo/__init__.py exists. Another possible convention is to put the foo package right in lib, the foo.bar package in lib/bar, etc. This would be written in the setup script as package_dir = {'foo': 'lib'} A package: dir entry in the package_dir dictionary implicitly applies to all packages below package, so the foo.bar case is automatically handled here. In this example, having packages = ['foo', 'foo.bar'] tells the Distutils to look for lib/__init__.py and lib/bar/__init__.py. (Keep in mind that although package_dir applies recursively, you must explicitly list all packages in packages: the Distutils will not recursively scan your source tree looking for any directory with an __init__.py file.) 2.2. Listing individual modules¶ For a small module distribution, you might prefer to list all modules rather than listing packages—especially the case of a single module that goes in the “root package” (i.e., no package at all). This simplest case was shown in section A Simple Example; here is a slightly more involved example: py_modules = ['mod1', 'pkg.mod2'] This describes two modules, one of them in the “root” package, the other in the pkg package. Again, the default package/directory layout implies that these two modules can be found in mod1.py and pkg/mod2.py, and that pkg/__init__.py exists as well. And again, you can override the package/directory correspondence using the package_dir option. 2.3. Describing extension modules¶ Just as writing Python extension modules is a bit more complicated than writing pure Python modules, describing them to the Distutils is a bit more complicated. Unlike pure modules, it’s not enough just to list modules or packages and expect the Distutils to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.). All of this is done through another keyword argument to setup(), the ext_modules option. ext_modules is just a list of Extension instances, each of which describes a single extension module. Suppose your distribution includes a single extension, called foo and implemented by foo.c. If no additional instructions to the compiler/linker are needed, describing this extension is quite simple: Extension('foo', ['foo.c']) The Extension class can be imported from distutils.core along with setup(). 
Thus, the setup script for a module distribution that contains only this one extension and nothing else might be: from distutils.core import setup, Extension setup(name='foo', version='1.0', ext_modules=[Extension('foo', ['foo.c'])], ) The Extension class (actually, the underlying extension-building machinery implemented by the build_ext command) supports a great deal of flexibility in describing Python extensions, which is explained in the following sections. 2.3.1. Extension names and packages¶ The first argument to the Extension constructor is always the name of the extension, including any package names. For example, Extension('foo', ['src/foo1.c', 'src/foo2.c']) describes an extension that lives in the root package, while Extension('pkg.foo', ['src/foo1.c', 'src/foo2.c']) describes the same extension in the pkg package. The source files and resulting object code are identical in both cases; the only difference is where in the filesystem (and therefore where in Python’s namespace hierarchy) the resulting extension lives. If you have a number of extensions all in the same package (or all under the same base package), use the ext_package keyword argument to setup(). For example, setup(..., ext_package='pkg', ext_modules=[Extension('foo', ['foo.c']), Extension('subpkg.bar', ['bar.c'])], ) will compile foo.c to the extension pkg.foo, and bar.c to pkg.subpkg.bar. 2.3.2. Extension source files¶ The second argument to the Extension constructor is a list of source files. Since the Distutils currently only support C, C++, and Objective-C extensions, these are normally C/C++/Objective-C source files. (Be sure to use appropriate extensions to distinguish C++ source files: .cc and .cpp seem to be recognized by both Unix and Windows compilers.) However, you can also include SWIG interface (.i) files in the list; the build_ext command knows how to deal with SWIG extensions: it will run SWIG on the interface file and compile the resulting C/C++ file into your extension. This warning notwithstanding, options to SWIG can be currently passed like this: setup(..., ext_modules=[Extension('_foo', ['foo.i'], swig_opts=['-modern', '-I../include'])], py_modules=['foo'], ) Or on the commandline like this: > python setup.py build_ext --swig-opts="-modern -I../include" On some platforms, you can include non-source files that are processed by the compiler and included in your extension. Currently, this just means Windows message text (.mc) files and resource definition (.rc) files for Visual C++. These will be compiled to binary resource (.res) files and linked into the executable. 2.3.3. Preprocessor options¶ Three optional arguments to Extension will help if you need to specify include directories to search or preprocessor macros to define/undefine: include_dirs, define_macros, and undef_macros. 
For example, if your extension requires header files in the include directory under your distribution root, use the include_dirs option: Extension('foo', ['foo.c'], include_dirs=['include']) You can specify absolute directories there; if you know that your extension will only be built on Unix systems with X11R6 installed to /usr, you can get away with Extension('foo', ['foo.c'], include_dirs=['/usr/include/X11']) You should avoid this sort of non-portable usage if you plan to distribute your code: it’s probably better to write C code like #include <X11/Xlib.h> If you need to include header files from some other Python extension, you can take advantage of the fact that header files are installed in a consistent way by the Distutils install_headers command. For example, the Numerical Python header files are installed (on a standard Unix installation) to /usr/local/include/python1.5/Numerical. (The exact location will differ according to your platform and Python installation.) Since the Python include directory—/usr/local/include/python1.5 in this case—is always included in the search path when building Python extensions, the best approach is to write C code like #include <Numerical/arrayobject.h> If you must put the Numerical include directory right into your header search path, though, you can find that directory using the Distutils distutils.sysconfig module: from distutils.sysconfig import get_python_inc incdir = os.path.join(get_python_inc(plat_specific=1), 'Numerical') setup(..., Extension(..., include_dirs=[incdir]), ) Even though this is quite portable—it will work on any Python installation, regardless of platform—it’s probably easier to just write your C code in the sensible way. You can define and undefine pre-processor macros with the define_macros and undef_macros options. define_macros takes a list of (name, value) tuples, where name is the name of the macro to define (a string) and value is its value: either a string or None. (Defining a macro FOO to None is the equivalent of a bare #define FOO in your C source: with most compilers, this sets FOO to the string 1.) undef_macros is just a list of macros to undefine. For example: Extension(..., define_macros=[('NDEBUG', '1'), ('HAVE_STRFTIME', None)], undef_macros=['HAVE_FOO', 'HAVE_BAR']) is the equivalent of having this at the top of every C source file: #define NDEBUG 1 #define HAVE_STRFTIME #undef HAVE_FOO #undef HAVE_BAR 2.3.4. Library options¶ You can also specify the libraries to link against when building your extension, and the directories to search for those libraries. The libraries option is a list of libraries to link against, library_dirs is a list of directories to search for libraries at link-time, and runtime_library_dirs is a list of directories to search for shared (dynamically loaded) libraries at run-time. For example, if you need to link against libraries known to be in the standard library search path on target systems Extension(..., libraries=['gdbm', 'readline']) If you need to link with libraries in a non-standard location, you’ll have to include the location in library_dirs: Extension(..., library_dirs=['/usr/X11R6/lib'], libraries=['X11', 'Xt']) (Again, this sort of non-portable construct should be avoided if you intend to distribute your code.) 2.3.5. Other options¶ There are still some other options which can be used to handle special cases. 
The optional option is a boolean; if it is true, a build failure in the extension will not abort the build process, but instead simply not install the failing extension. The extra_objects option is a list of object files to be passed to the linker. These files must not have extensions, as the default extension for the compiler is used. extra_compile_args and extra_link_args can be used to specify additional command line options for the respective compiler and linker command lines. export_symbols is only useful on Windows. It can contain a list of symbols (functions or variables) to be exported. This option is not needed when building compiled extensions: Distutils will automatically add initmodule to the list of exported symbols. The depends option is a list of files that the extension depends on (for example header files). The build command will call the compiler on the sources to rebuild the extension if any of these files has been modified since the previous build.

2.4. Relationships between Distributions and Packages¶ A distribution may relate to packages in three specific ways: It can require packages or modules. It can provide packages or modules. It can obsolete packages or modules. These relationships can be specified using keyword arguments to the distutils.core.setup() function. Dependencies on other Python modules and packages can be specified by supplying the requires keyword argument to setup(). The value must be a list of strings. Each string specifies a package that is required, and optionally what versions are sufficient. To specify that any version of a module or package is required, the string should consist entirely of the module or package name. Examples include 'mymodule' and 'xml.parsers.expat'. If specific versions are required, a sequence of qualifiers can be supplied in parentheses. Each qualifier may consist of a comparison operator and a version number. The accepted comparison operators are: < > == <= >= != These can be combined by using multiple qualifiers separated by commas (and optional whitespace). In this case, all of the qualifiers must be matched; a logical AND is used to combine the evaluations. Let’s look at a bunch of examples:

Requires Expression | Explanation
==1.0 | Only version 1.0 is compatible
>1.0, !=1.5.1, <2.0 | Any version after 1.0 and before 2.0 is compatible, except 1.5.1

Now that we can specify dependencies, we also need to be able to specify what we provide that other distributions can require. This is done using the provides keyword argument to setup(). The value for this keyword is a list of strings, each of which names a Python module or package, and optionally identifies the version. If the version is not specified, it is assumed to match that of the distribution. Some examples:

Provides Expression | Explanation
mypkg | Provide mypkg, using the distribution version
mypkg (1.1) | Provide mypkg version 1.1, regardless of the distribution version

A package can declare that it obsoletes other packages using the obsoletes keyword argument. The value for this is similar to that of the requires keyword: a list of strings giving module or package specifiers. Each specifier consists of a module or package name optionally followed by one or more version qualifiers. Version qualifiers are given in parentheses after the module or package name. The versions identified by the qualifiers are those that are obsoleted by the distribution being described. If no qualifiers are given, all versions of the named module or package are understood to be obsoleted.

2.5. Installing Scripts¶ So far we have been dealing with pure and non-pure Python modules, which are usually not run by themselves but imported by scripts. Scripts are files containing Python source code, intended to be started from the command line.
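To make this concrete, a script shipped with a distribution is just an ordinary Python file with a #! line; the file below is a hypothetical sketch, not taken from any real package:

#!/usr/bin/env python
# scripts/greet: a minimal, hypothetical command-line script
import sys

def main():
    print('Hello, ' + ' '.join(sys.argv[1:]))

if __name__ == '__main__':
    main()

It would be listed in the setup script as scripts=['scripts/greet'].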
Scripts don’t require Distutils to do anything very complicated. The only clever feature is that if the first line of the script starts with #! and contains the word “python”, the Distutils will adjust the first line to refer to the current interpreter location. By default, it is replaced with the current interpreter location. The --executable (or -e) option will allow the interpreter path to be explicitly overridden. The scripts option simply is a list of files to be handled in this way. From the PyXML setup script: setup(..., scripts=['scripts/xmlproc_parse', 'scripts/xmlproc_val'] ) Changed in version 3.1: All the scripts will also be added to the MANIFEST file if no template is provided. See Specifying the files to distribute. 2.6. Installing Package Data¶ Often, additional files need to be installed into a package. These files are often data that’s closely related to the package’s implementation, or text files containing documentation that might be of interest to programmers using the package. These files are called package data. Package data can be added to packages using the package_data keyword argument to the setup() function. The value must be a mapping from package name to a list of relative path names that should be copied into the package. The paths are interpreted as relative to the directory containing the package (information from the package_dir mapping is used if appropriate); that is, the files are expected to be part of the package in the source directories. They may contain glob patterns as well. The path names may contain directory portions; any necessary directories will be created in the installation. For example, if a package should contain a subdirectory with several data files, the files can be arranged like this in the source tree: setup.py src/ mypkg/ __init__.py module.py data/ tables.dat spoons.dat forks.dat The corresponding call to setup() might be: setup(..., packages=['mypkg'], package_dir={'mypkg': 'src/mypkg'}, package_data={'mypkg': ['data/*.dat']}, ) Changed in version 3.1: All the files that match package_data will be added to the MANIFEST file if no template is provided. See Specifying the files to distribute. 2.7. Installing Additional Files¶ The data_files option can be used to specify additional files needed by the module distribution: configuration files, message catalogs, data files, anything which doesn’t fit in the previous categories. data_files specifies a sequence of (directory, files) pairs in the following way: setup(..., data_files=[('bitmaps', ['bm/b1.gif', 'bm/b2.gif']), ('config', ['cfg/data.cfg'])], ) Each (directory, files) pair in the sequence specifies the installation directory and the files to install there. Each file name in files is interpreted relative to the setup.py script at the top of the package source distribution. Note that you can specify the directory where the data files will be installed, but you cannot rename the data files themselves. The directory should be a relative path. It is interpreted relative to the installation prefix (Python’s sys.prefix for system installations; site.USER_BASE for user installations). Distutils allows directory to be an absolute installation path, but this is discouraged since it is incompatible with the wheel packaging format. No directory information from files is used to determine the final location of the installed file; only the name of the file is used. 
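To make the destinations concrete, here is a small illustrative sketch (separate from the setup script itself) of where the files from the data_files example above would land for a system installation; the exact prefix depends on your platform and Python build:

import os
import sys

# bm/b1.gif and bm/b2.gif are installed under <prefix>/bitmaps/,
# cfg/data.cfg under <prefix>/config/; only the file names are kept.
print(os.path.join(sys.prefix, 'bitmaps', 'b1.gif'))
print(os.path.join(sys.prefix, 'config', 'data.cfg'))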
You can specify the data_files options as a simple sequence of files without specifying a target directory, but this is not recommended, and the install command will print a warning in this case. To install data files directly in the target directory, an empty string should be given as the directory. Changed in version 3.1: All the files that match data_files will be added to the MANIFEST file if no template is provided. See Specifying the files to distribute.

2.8. Additional meta-data¶ The setup script may include additional meta-data beyond the name and version. This information includes:

Meta-Data | Description | Value | Notes
name | name of the package | short string | (1)
version | version of this release | short string | (1)(2)
author | package author’s name | short string | (3)
author_email | email address of the package author | email address | (3)
maintainer | package maintainer’s name | short string | (3)
maintainer_email | email address of the package maintainer | email address | (3)
url | home page for the package | URL | (1)
description | short, summary description of the package | short string |
long_description | longer description of the package | long string | (4)
download_url | location where the package may be downloaded | URL |
classifiers | a list of classifiers | list of strings | (6)(7)
platforms | a list of platforms | list of strings | (6)(8)
keywords | a list of keywords | list of strings | (6)(8)
license | license for the package | short string | (5)

Notes:
(1) These fields are required.
(2) It is recommended that versions take the form major.minor[.patch[.sub]].
(3) Either the author or the maintainer must be identified. If maintainer is provided, distutils lists it as the author in PKG-INFO.
(4) The long_description field is used by PyPI when you publish a package, to build its project page.
(5) The license field is a text indicating the license covering the package where the license is not a selection from the “License” Trove classifiers. See the Classifier field. Notice that there’s a licence distribution option which is deprecated but still acts as an alias for license.
(6) This field must be a list.
(7) The valid classifiers are listed on PyPI.
(8) To preserve backward compatibility, this field also accepts a string. If you pass a comma-separated string 'foo, bar', it will be converted to ['foo', 'bar']. Otherwise, it will be converted to a list of one string.

The value types are as follows:
‘short string’: A single line of text, not more than 200 characters.
‘long string’: Multiple lines of plain text in reStructuredText format (see https://docutils.sourceforge.io/).
‘list of strings’: See below.

Encoding the version information is an art in itself. Python packages generally adhere to the version format major.minor[.patch][sub]. The major number is 0 for initial, experimental releases of software. It is incremented for releases that represent major milestones in a package. The minor number is incremented when important new features are added to the package. The patch number increments when bug-fix releases are made. Additional trailing version information is sometimes used to indicate sub-releases. These are “a1,a2,…,aN” (for alpha releases, where functionality and API may change), “b1,b2,…,bN” (for beta releases, which only fix bugs) and “pr1,pr2,…,prN” (for final pre-release release testing). Some examples:
0.1.0: the first, experimental release of a package
1.0.1a2: the second alpha release of the first patch version of 1.0

classifiers must be specified in a list:

setup(...,
      classifiers=[
          'Development Status :: 4 - Beta',
          'Environment :: Console',
          'Environment :: Web Environment',
          'Intended Audience :: End Users/Desktop',
          'Intended Audience :: Developers',
          'Intended Audience :: System Administrators',
          'License :: OSI Approved :: Python Software Foundation License',
          'Operating System :: MacOS :: MacOS X',
          'Operating System :: Microsoft :: Windows',
          'Operating System :: POSIX',
          'Programming Language :: Python',
          'Topic :: Communications :: Email',
          'Topic :: Office/Business',
          'Topic :: Software Development :: Bug Tracking',
      ],
)

Changed in version 3.7: setup now warns when classifiers, keywords or platforms fields are not specified as a list or a string.

2.9. Debugging the setup script¶ Sometimes things go wrong, and the setup script doesn’t do what the developer wants.
Distutils catches any exceptions when running the setup script, and prints a simple error message before the script is terminated. The motivation for this behaviour is to avoid confusing administrators who don’t know much about Python and are trying to install a package. If they get a long traceback from deep inside the guts of Distutils, they may think the package or the Python installation is broken, because they don’t read all the way down to the bottom and see that it’s a permission problem. On the other hand, this doesn’t help the developer to find the cause of the failure. For this purpose, the DISTUTILS_DEBUG environment variable can be set to anything except an empty string, and distutils will then print detailed information about what it is doing, dump the full traceback when an exception occurs, and print the whole command line when an external program (like a C compiler) fails.
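For example, on a Unix-like shell the variable can be enabled for a single run like this (a sketch; the build command is only an illustration, and any non-empty value works):

DISTUTILS_DEBUG=1 python setup.py build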
8585
dbpedia
0
96
https://codelabs.developers.google.com/vertex-training-autopkg
en
Vertex AI: Use autopackaging to fine tune Bert with Hugging Face on Vertex AI Training
[ "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/codelabs/images/lockup.svg", "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/codelabs/images/lockup.svg", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/vertex-product-overview_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/enable-vertex_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_1440.png 
1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/workbench-menu_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/managed-notebooks_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/new-notebook_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_1920.png 
1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/create-notebook_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/idle-shutdown_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/open-jl_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/authenticate_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_36.png 
36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/launcher-terminal_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/local-training_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-started_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_48.png 
48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/training-job_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/model-output_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_96.png 96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-nb_2880.png 2880w", "https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_36.png 36w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_48.png 48w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_72.png 72w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_96.png 
96w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_480.png 480w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_720.png 720w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_856.png 856w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_960.png 960w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_1440.png 1440w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_1920.png 1920w,https://codelabs.developers.google.com/static/vertex-training-autopkg/img/delete-storage_2880.png 2880w", "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/codelabs/images/lockup-google-for-developers.svg" ]
[]
[]
[ "" ]
null
[]
null
en
https://www.gstatic.com/…ages/favicon.png
Google Codelabs
https://codelabs.developers.google.com/vertex-training-autopkg
1. Overview In this lab, you'll learn how to run a custom training job on Vertex AI Training with the autopackaging feature. Custom training jobs on Vertex AI use containers. If you do not want to build your own image, you can use auotpackaging, which will build a custom Docker image based on your code, push the image to Container Registry, and start a CustomJob based on the image. What you learn You'll learn how to: Use local mode to test your code. Configure and launch a custom training job with autopackaging. The total cost to run this lab on Google Cloud is about $2. 2. Use Case Overview Using libraries from Hugging Face, you'll fine tune a Bert model on the IMDB dataset. The model will predict whether a movie review are positive or negative. The dataset will be downloaded from the Hugging Face datasets library, and the Bert model from the Hugging Face transformers library. 3. Intro to Vertex AI This lab uses the newest AI product offering available on Google Cloud. Vertex AI integrates the ML offerings across Google Cloud into a seamless development experience. Previously, models trained with AutoML and custom models were accessible via separate services. The new offering combines both into a single API, along with other new products. You can also migrate existing projects to Vertex AI. If you have any feedback, please see the support page. Vertex AI includes many different products to support end-to-end ML workflows. This lab will focus on Training and Workbench. 4. Set up your environment You'll need a Google Cloud Platform project with billing enabled to run this codelab. To create a project, follow the instructions here. Step 1: Enable the Compute Engine API Navigate to Compute Engine and select Enable if it isn't already enabled. Step 2: Enable the Vertex AI API Navigate to the Vertex AI section of your Cloud Console and click Enable Vertex AI API. Step 3: Enable the Container Registry API Navigate to the Container Registry and select Enable if it isn't already. You'll use this to create a container for your custom training job. Step 4: Create a Vertex AI Workbench instance From the Vertex AI section of your Cloud Console, click on Workbench: From there, click MANAGED NOTEBOOKS: Then select NEW NOTEBOOK. Give your notebook a name, and then click Advanced Settings. Under Advanced Settings, enable idle shutdown and set the number of minutes to 60. This means your notebook will shutdown automatically when not in use so you don't incur unnecessary costs. You can leave all of the other advanced settings as is. Next, click Create. Once the instance has been created, select Open JupyterLab. The first time you use a new instance, you'll be asked to authenticate. 5. Write training code To start, from the Launcher menu, open a Terminal window in your notebook instance: Create a new directory called autopkg-codelab and cd into it. mkdir autopkg-codelab cd autopkg-codelab From your Terminal, run the following to create a directory for the training code and a Python file where you'll add the code: mkdir trainer touch trainer/task.py You should now have the following in your autopkg-codelab/ directory: + trainer/ + task.py Next, open the task.py file you just created and copy the code below. 
import argparse import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer from transformers import TFAutoModelForSequenceClassification CHECKPOINT = "bert-base-cased" def get_args(): '''Parses args.''' parser = argparse.ArgumentParser() parser.add_argument( '--epochs', required=False, default=3, type=int, help='number of epochs') parser.add_argument( '--job_dir', required=True, type=str, help='bucket to store saved model, include gs://') args = parser.parse_args() return args def create_datasets(): '''Creates a tf.data.Dataset for train and evaluation.''' raw_datasets = load_dataset('imdb') tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) tokenized_datasets = raw_datasets.map((lambda examples: tokenize_function(examples, tokenizer)), batched=True) # To speed up training, we use only a portion of the data. # Use full_train_dataset and full_eval_dataset if you want to train on all the data. small_train_dataset = tokenized_datasets['train'].shuffle(seed=42).select(range(1000)) small_eval_dataset = tokenized_datasets['test'].shuffle(seed=42).select(range(1000)) full_train_dataset = tokenized_datasets['train'] full_eval_dataset = tokenized_datasets['test'] tf_train_dataset = small_train_dataset.remove_columns(['text']).with_format("tensorflow") tf_eval_dataset = small_eval_dataset.remove_columns(['text']).with_format("tensorflow") train_features = {x: tf_train_dataset[x] for x in tokenizer.model_input_names} train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["label"])) train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8) eval_features = {x: tf_eval_dataset[x] for x in tokenizer.model_input_names} eval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset["label"])) eval_tf_dataset = eval_tf_dataset.batch(8) return train_tf_dataset, eval_tf_dataset def tokenize_function(examples, tokenizer): '''Tokenizes text examples.''' return tokenizer(examples['text'], padding='max_length', truncation=True) def main(): args = get_args() train_tf_dataset, eval_tf_dataset = create_datasets() model = TFAutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2) model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) model.fit(train_tf_dataset, validation_data=eval_tf_dataset, epochs=args.epochs) model.save(f'{args.job_dir}/model_output') if __name__ == "__main__": main() A few things to note about the code: CHECKPOINT is the model we want to fine tune. In this case, we use Bert. The TFAutoModelForSequenceClassification method will load the specified language model architecture + weights in TensorFlow and add a classification head on top with randomly initialized weights. In this case, we have a binary classification problem (positive or negative) so we specify num_labels=2 for this classifier. 6. Containerize and run training code locally You can use the gcloud ai custom-jobs local-run command to build a Docker container image based on your training code and run the image as a container on your local machine. Running a container locally executes your training code in a similar way to how it runs on Vertex AI Training, and can help you debug problems with your code before you perform custom training on Vertex AI. In our training job, we'll export our trained model to a Cloud Storage Bucket. 
From your Terminal, run the following to define an env variable for your project, making sure to replace your-cloud-project with the ID of your project:

PROJECT_ID='your-cloud-project'

Then, create a bucket. If you have an existing bucket, feel free to use that instead.

BUCKET_NAME="gs://${PROJECT_ID}-bucket"
gsutil mb -l us-central1 $BUCKET_NAME

When we run the custom training job on Vertex AI Training, we'll make use of a GPU. But since we did not provision our Workbench instance with GPUs, we'll use a CPU-based image for local testing. In this example, we use a Vertex AI Training pre-built container. Run the following to set the URI of a Docker image to use as the base of the container:

BASE_CPU_IMAGE=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-7:latest

Then set a name for the resulting Docker image built by the local run command:

OUTPUT_IMAGE=$PROJECT_ID-local-package-cpu:latest

Our training code uses the Hugging Face datasets and transformers libraries. These libraries are not included in the image we have selected as our base image, so we will need to provide them as requirements. To do this, we will create a requirements.txt file in our autopkg-codelab directory. Ensure you are in the autopkg-codelab directory and type the following in your terminal:

touch requirements.txt

You should now have the following in your autopkg-codelab directory:

+ requirements.txt
+ trainer/
    + task.py

Open up the requirements file and paste in the following:

datasets==1.18.2
transformers==4.16.2

Finally, execute the gcloud ai custom-jobs local-run command to kick off training on our Workbench managed instance:

gcloud ai custom-jobs local-run \
--executor-image-uri=$BASE_CPU_IMAGE \
--python-module=trainer.task \
--output-image-uri=$OUTPUT_IMAGE \
-- \
--job_dir=$BUCKET_NAME

You should see the Docker image being built. The dependencies we added to the requirements.txt file will be pip installed. This may take a few minutes to complete the first time you execute this command. Once the image is built, the task.py file will start running and you'll see the model training output in your terminal.

Because we are not using a GPU locally, model training will take a long time. You can press Ctrl+C to cancel local training instead of waiting for the job to complete.

Note that if you want to do further testing, you can also directly run the image built above, without repackaging:

gcloud beta ai custom-jobs local-run \
--executor-image-uri=$OUTPUT_IMAGE \
-- \
--job_dir=$BUCKET_NAME \
--epochs=1

7. Create a custom job

Now that we have tested out local mode, we'll use the autopackaging feature to launch our custom training job on Vertex AI Training. With a single command, this feature will:

Build a custom Docker image based on your code.
Push the image to Container Registry.
Start a CustomJob based on the image.

Return to the terminal and cd up one level above your autopkg-codelab directory:

+ autopkg-codelab
    + requirements.txt
    + trainer/
        + task.py

Specify the Vertex AI Training pre-built TensorFlow GPU image as the base image for the custom training job:

BASE_GPU_IMAGE=us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-7:latest

Next, execute the gcloud ai custom-jobs create command. First, this command will build a custom Docker image based on the training code. The base image is the Vertex AI Training pre-built container we set as BASE_GPU_IMAGE. The autopackaging feature will then pip install the datasets and transformers libraries as specified in our requirements.txt file.
gcloud ai custom-jobs create \
--region=us-central1 \
--display-name=fine_tune_bert \
--args=--job_dir=$BUCKET_NAME \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,accelerator-type=NVIDIA_TESLA_V100,executor-image-uri=$BASE_GPU_IMAGE,local-package-path=autopkg-codelab,python-module=trainer.task

Let's take a look at the worker-pool-spec argument. This defines the worker pool configuration used by the custom job. You can specify multiple worker pool specs in order to create a custom job with multiple worker pools for distributed training. In this example, we only specify a single worker pool, as our training code is not configured for distributed training. Here are some of the key fields of this spec:

machine-type (Required): The type of the machine. Click here for supported types.
replica-count: The number of worker replicas to use for this worker pool. The default value is 1.
accelerator-type: The type of GPUs. Click here for supported types. In this example, we specified one NVIDIA Tesla V100 GPU.
accelerator-count: The number of GPUs for each VM in the worker pool to use. The default value is 1.
executor-image-uri: The URI of a container image that will run the provided package. This is set to our base image.
local-package-path: The local path of a folder that contains training code.
python-module: The Python module name to run within the provided package.

Similar to when you ran the local command, you will see the Docker image being built and then the training job kick off. Instead of the output of the training job, you'll see a message confirming that your training job has launched. Note that the first time you run the custom-jobs create command, it may take a few minutes for the image to be built and pushed.

Return to the Vertex AI Training section of the cloud console, and under CUSTOM JOBS you should see your job running. The job will take around 20 minutes to complete. Once complete, you should see the saved model artifacts in the model_output directory in your bucket.

🎉 Congratulations! 🎉

You've learned how to use Vertex AI to:

Containerize and run training code locally
Submit training jobs to Vertex AI Training with autopackaging

To learn more about different parts of Vertex AI, check out the documentation.

8. Cleanup

Because we configured the notebook to time out after 60 idle minutes, we don't need to worry about shutting the instance down. If you would like to manually shut down the instance, click the Stop button on the Vertex AI Workbench section of the console. If you'd like to delete the notebook entirely, click the Delete button. To delete the Storage Bucket, use the Navigation menu in your Cloud Console to browse to Storage, select your bucket, and click Delete.
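Before deleting the bucket, you may want to confirm that the model_output directory really contains a loadable SavedModel. This is an optional sketch, not part of the codelab; it assumes the TensorFlow install in your notebook can read gs:// paths directly and that you substitute your own bucket name:

# Optional: confirm the exported artifacts form a loadable SavedModel.
import tensorflow as tf

BUCKET_NAME = "gs://your-cloud-project-bucket"   # replace with your bucket

model_dir = f"{BUCKET_NAME}/model_output"
print(tf.io.gfile.listdir(model_dir))            # expect saved_model.pb, variables/, assets/

loaded = tf.saved_model.load(model_dir)
print(list(loaded.signatures.keys()))            # e.g. ['serving_default'], depending on how it was traced
infer = loaded.signatures.get("serving_default")
if infer is not None:
    print(infer.structured_input_signature)      # shows the expected input names and dtypes

If the listing and the serving signature look sensible, the training job exported the model as intended and you can clean up the resources as described above.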
8585
dbpedia
0
79
https://tex.stackexchange.com/questions/110501/auto-package-download-for-texlive
en
Auto Package download for TeXLive
https://cdn.sstatic.net/…g?v=eaf26b461720
https://cdn.sstatic.net/…g?v=eaf26b461720
[ "https://cdn.sstatic.net/Sites/tex/Img/logo.svg?v=43890f90cb01", "https://www.gravatar.com/avatar/f7ab1a2cd3140f688ca1a7286f4b3c03?s=64&d=identicon&r=PG", "https://i.sstatic.net/XuZ4a.png?s=64", "https://i.sstatic.net/4r8yp.png?s=64", "https://i.sstatic.net/Y6tnm.png?s=64", "https://tex.stackexchange.com/posts/110501/ivc/3f38?prg=0c045943-c7cb-435d-aeb1-8b03d39fef6c" ]
[]
[]
[ "" ]
null
[]
2013-04-24T14:19:53
I use MiKTeX on Windows and quite satisfied with it. Recently I started switching all my tasks toward open-source alternatives, and in the course I would love to use Linux. In Linux TeXLive is avai...
en
https://cdn.sstatic.net/Sites/tex/Img/favicon.ico?v=91427af8e60a
TeX - LaTeX Stack Exchange
https://tex.stackexchange.com/questions/110501/auto-package-download-for-texlive
While in MiKTeX an installation process is automatically triggered if you have, say, \usepackage{beamer} in a document preamble without the corresponding package installed, there is no such feature in TeX Live. The last statement is not true actually; as pointed out by wasteofspace in the comments, there is the texliveonfly package that implements on-demand installation in TeX Live 2010 and later. I never tested it and don't know if it has drawbacks.

However, if you install the full (or almost full) TeX Live collection of packages (~2400) you will not need to add new packages: a periodic tlmgr update -all will take care of everything, including the installation of packages added to the TeX Live collection after your first full installation. This feature is explained in the tlmgr manual:

Analogously, if a package has been added to a collection on the server that is also installed locally, it will be added to the local installation. This is called auto-install and is announced as such when using the option --list. This auto-installation can be suppressed using the option --no-auto-install.

The manual has lots of info on useful commands and it is recommended reading for every user. The downside is of course that you need the full set of packages installed on your machine, which may be a problem if you don't have enough free space. If you really can't spare 2GB from your HD, it is also possible to install TeX Live on a, say, 4GB USB key and live happily ever after :) Everything I just wrote requires that you install TeX Live with one of the methods described here. If you decide to use the TeX packages from your distro, you are forced to follow their update policy, which is different for different distros.

texliveonfly

As mentioned in the comments, there is a TeX Live package called texliveonfly which you can use with texliveonfly filename.tex, and it will automatically download the right TeX Live packages. This also works for packages for which the LaTeX package name and the TeX Live package name don't match (for example the LaTeX rubikrotation package is contained in the rubik TeX Live package), and it also takes package dependencies into account.

Usage

Installing: It is a Python script, so it requires Python to be installed. You can then install it as usual with tlmgr install texliveonfly. If you have to use sudo tlmgr here, you will have to use sudo texliveonfly later.

Running: If you go in your terminal to the directory of your filename.tex file, you can run it with texliveonfly filename.tex.

Other compilers: At the moment it uses pdflatex by default, but you can configure it to run with other compiler engines by using the --compiler (or -c) flag, like texliveonfly --compiler=lualatex filename.tex.

Compiler flags: You can pass flags for the compiler you use to texliveonfly using the --arguments (or -a) flag, so for example if you previously used latexmk -shell-escape -pdf filename.tex then you now use texliveonfly --compiler=latexmk --arguments='-shell-escape -pdf' filename.tex.

Known problems

There are some cases of missing packages which fail with a non-standard error message, for example babel when it's missing languages, in which case texliveonfly doesn't download them. At the moment the following packages are known to have to be installed manually (please edit if you find more): Babel languages, for example for european languages install the collection-langeuropean package; Biblatex styles, e.g. for the nature style you need the biblatex-nature package; fontenc encodings, e.g.
to get t2aenc.def you need the cyrillic package, and to get the ly1enc.def you need the ly1 package. Packages involved when using the minted package, which are minted fvextra upquote lineno xstring framed caption (thanks to pablgonz for testing) When running external programs like texcount in your LaTeX file, texliveonfly does not detect that you need the texcount package. When giving options to texliveonfly, for example for a different compiler, it sometimes hangs for no apparent reason when installing packages. You can most probably work around it by first running texliveonfly without options, so texliveonfly main.tex (so it will download the packages) and then running whatever you wanted to, for example latexmk main.tex. Background Essentially texliveonfly is a build tool like latexmk (which is a Perl script), it wraps the TeX engine. Note however that you can chain them with texliveonfly --compiler=latexmk filename.tex. It is a python script which works by trying to run your LaTeX file, and if it fails because a package is missing it will try to install that package. Besides on ctan.org/pkg/texliveonfly you can view the source at ctan.org/tex-archive/support/texliveonfly or on latex.org/forum PS I tested this on Arch Linux 4.19.4 and on Travis CI (Ubuntu 14.04).
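To make the "try to compile, install what's missing, retry" idea concrete, here is a deliberately simplified Python sketch in the spirit of the description above. It is not the real texliveonfly source; in particular, mapping a missing file name to a TeX Live package name (attempted here with tlmgr search --global --file) is exactly the part the real script handles more carefully:

# Simplified illustration of the texliveonfly idea; not its real source code.
import re
import subprocess
import sys

MISSING = re.compile(r"! LaTeX Error: File `([^']+)' not found")

def compile_once(tex_file, engine="pdflatex"):
    # Run one compile pass without stopping at errors.
    return subprocess.run([engine, "-interaction=nonstopmode", tex_file],
                          capture_output=True, text=True)

def find_package_for(filename):
    # Ask tlmgr which package ships the missing file; package header lines
    # in the output end with a colon. The real script is more robust.
    out = subprocess.run(["tlmgr", "search", "--global", "--file", filename],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.endswith(":"):
            return line.rstrip(":").strip()
    return None

def compile_on_the_fly(tex_file, max_rounds=10):
    for _ in range(max_rounds):
        result = compile_once(tex_file)
        match = MISSING.search(result.stdout + result.stderr)
        if not match:
            return result.returncode
        package = find_package_for(match.group(1))
        if package is None:
            sys.exit("no TeX Live package found for " + match.group(1))
        subprocess.run(["tlmgr", "install", package], check=True)
    return 1

if __name__ == "__main__":
    sys.exit(compile_on_the_fly(sys.argv[1]))

This also makes the known problems above easier to understand: anything that fails with an error message other than the standard "File not found" line (babel languages, biblatex styles, external programs) simply never matches the pattern, so nothing gets installed.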
8585
dbpedia
0
5
https://medium.com/%40nickuzmenkov/build-your-first-python-package-and-automate-it-74053ed1f535
en
Build your first python package… and automate it!
https://miro.medium.com/…X0P_aIRkSvQ.jpeg
https://miro.medium.com/…X0P_aIRkSvQ.jpeg
[ "https://miro.medium.com/v2/resize:fill:64:64/1*dmbNkD5D-u45r44go_cf0g.png", "https://miro.medium.com/v2/resize:fill:88:88/1*M3v-KXG0tuLEMeQqexxIdQ.jpeg", "https://miro.medium.com/v2/resize:fill:144:144/1*M3v-KXG0tuLEMeQqexxIdQ.jpeg" ]
[]
[]
[ "" ]
null
[ "Nick Kuzmenkov", "medium.com" ]
2021-11-27T08:04:13.433000+00:00
But let’s take a peek at a few steps ahead. Now having your very own piece of software, you have to maintain it, e.g. release bug fixes and/or some new helpful features. Each time you implement a new…
en
https://miro.medium.com/v2/5d8de952517e8160e40ef9841c781cdc14a5db313057fa3c3de41c6f5b494b19
Medium
https://medium.com/@nickuzmenkov/build-your-first-python-package-and-automate-it-74053ed1f535
Building your first open-source package is so much fun, even if it’s just a learning side-project. But let’s take a peek at a few steps ahead. Now having your very own piece of software, you have to maintain it, e.g. release bug fixes and/or some new helpful features. Each time you implement a new idea or fix something, you have to make sure that the upcoming changes won’t break the code. And when you’re done, you have to rebuild the package and redeploy it. The longer this takes, the greater the chances you start asking yourself: is there a way to automate this routine? This question is so common, that there are two special terms referring to it: CI (Continuous Integration) and CD (Continuous deployment). Those are rarely meant standalone, so you would see them both throughout like this: CI/CD. The former relates to automated developing routine (e.g. testing, maintaining code style, etc.), and the latter relates to automated deployment. A proper CI/CD, being a nice feature for a small solo project, turns out to be a must-have for long-running team projects. Here I’ll show you just one simple way for setting up CI/CD for your python package with automated tests, code style checks, and deployment. Furthermore, it’s built into version control and has a pretty web interface like this: What I’m talking about is BitBucket pipelines — a powerful yet easy-to-use CI/CD tool. If you’re interested only in automation, just go straight to section 3. You will need some basic knowledge of git, and also BitBucket, PyPI, and TestPyPI accounts. 1. Setup your repository First things first. Go to BitBucket and make sure your two-step authentication is on: go to “Personal settings” — “Security” — “Two-step verification”. If this page looks like this: you are ready to go for the next steps. If not, do not worry — just follow the instructions on the page. Two-step verification will keep your account as safe as possible, and it is also required to enable pipelines. Once you’ve done, go ahead and create a repository. You can fill the form like this: Think of “Project” as a folder to store similar repositories. If you don’t have any existing projects, type whatever you would like to call it, so BitBucket can create one for you. Once created, go to the “Branches” page, click on “Create branch”, and name it “develop”. We will separate the concerns using the master branch as a release branch and keep all the development inside the develop branch. Go ahead to the repository main page, click on “Repository settings”, and find a section called “Pipelines”. Go to “Settings” in that section and toggle the “Enable Pipelines” switch: Our pipelines will need TestPyPI and PyPI credentials to deploy your packages, but you should never keep them as plain text inside the repository. Thankfully, BitBucket has a solution for this: secret variables. From the same page, go to “Repository variables” and add four secured variables: PYPI_USERNAME, PYPI_PASSWORD, TEST_PYPI_USERNAME, and TEST_PYPI_PASSWORD: Now go to the “Workflow” — “Branching model” section and set the development branch to develop, and the production branch to master. 2. Setup your package Now it’s time to clone your repository and add in some code. Firstly check out the develop branch: Now you need to initialize the project file structure. Go ahead and create those files and folders one by one: Feel free to pick your own names for files and folders as long as you keep them consistent throughout the project. However, do not rename files under the root directory. 
Those are named by convention, so renaming them would break some features and make your code much less readable.

Next place your package under the autodeploy_template directory. If your package is split into multiple files, just copy all of them. I will add a placeholder class named BarCounter with a single count method which simply shows a progress bar. After you're done, mention all the classes, functions, and variables from all the module files in the __init__.py.

We would usually like to cover the code with tests to make sure that any time you add a new feature or fix something, you won't break something else. If you have a unittest ready, copy it into tests/test.py; if not, just leave it blank for now. Writing tests is a somewhat boring yet very important part of your project. The sooner you set up proper testing, the more time you save in the future.

Now update README.md to make your package's homepage nice and pretty.

Another important part is the license. If you're deploying your package for learning purposes, an MIT license is a good choice. It grants full access to your code to anyone who finds it useful without any warranties from your side. If you find that's true, copy this to the LICENSE.txt file and add your name and surname.

Our setup script will produce some extra files which should not pollute our commits. Place these three lines at the bottom of the .gitignore file to exclude build artifacts from version control.

Now the most important part: the setup.py file. It configures all your package's contents and all sorts of auxiliary information. Though it seems complicated at first, it turns out to be pretty straightforward. Let's quickly look through the main parameters (see the sketch at the end of this section):

the name is how your package would be named in PyPI: pip install <name>;
the packages is a list of folders containing python files. It is also how you reference your package when you import it: from <package>.<file> import <Any>;
the version typically starts from 0.1.0 and increments each time you make a new release. By convention you increment the last digit each time you make a bug fix, the second digit each time you add a new feature (e.g. BarCounter now accepts a bar length parameter), and the first digit only when you make a backward-incompatible change (e.g. BarCounter is renamed to ProgressBarCounter);
the install_requires is a list of the package dependencies. Make sure you list all the dependencies with the versions used (e.g. tqdm>=4.62.0 means your package requires tqdm version 4.62.0 or newer).

If manually listing all the dependencies seems annoying, pipreqs is a great tool to automate it. Just install it (pip install pipreqs) and run pipreqs . (note the dot at the end) inside the project directory. Unlike pip freeze > requirements.txt, which lists all the packages installed, this command lists only those packages that are used inside .py files in the current or subsequent directories, saving you a lot of time.

That's it! Now all we have to do is test the installation. First make sure the setuptools, wheel, and twine packages are installed and up to date. Then create a source distribution and the wheels, and then run checks. If you feel confused about the terms source distribution and wheel format, don't worry, I don't know much about it either. It's just two ways to build the package so that twine can publish it to the python package index and pip can handle its download and installation.

Once the checks have passed, push your changes to the develop branch: git commit -am 'initial version' && git push.
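The article's own setup.py is shown as an embedded gist that did not survive extraction. Purely as an illustrative sketch of the parameters discussed above (the name, description, and dependency pin are placeholders drawn from the article, not the author's exact file), a minimal version could look like this:

# Minimal illustrative setup.py matching the parameters discussed above.
from setuptools import find_packages, setup

with open("README.md", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="autodeploy-template",            # what users type after `pip install`
    packages=find_packages(exclude=("tests",)),
    version="0.1.0",                       # bump per the convention described above
    description="Progress-bar counter used as a CI/CD packaging demo",
    long_description=long_description,
    long_description_content_type="text/markdown",
    license="MIT",
    install_requires=["tqdm>=4.62.0"],     # list every runtime dependency with a version
)

With a file like this in place, python setup.py sdist bdist_wheel produces the source distribution and wheel that twine later checks and uploads.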
3. Setup the pipelines

At this point in some other tutorials you would probably see the next line: twine upload dist/*. And that's OK, but we want the magic of automation. So go ahead and create the bitbucket-pipelines.yml file in the root directory with the very first step in it.

If you're not familiar with the YAML format, it's very similar to JSON: both can be read into a Python dict and hold serialized data. Unlike JSONs, YAML files don't have any braces. A colon represents a key-value pair, strings can be placed both with and without quotation marks, and sequence elements are marked with hyphens.

The first key is pipelines, followed by the key default storing a sequence of operations, or steps. All steps defined in the default section will be applied to every single commit pushed to origin. BitBucket then copies your repository to a remote server and runs each step in a separate Docker container.

Docker is arguably the most powerful tool for virtual environments. Unlike virtualenv (or similar tools) it not only creates a local version of python with all its dependencies but also places your project folder into a virtual machine (called a container) starting from scratch each time you run your code. This prevents all system-related inconsistency and makes sure your code runs equally on any machine.

Let's break down the step contents:

the name is a string with a step summary. It can be whatever you wish as long as all step names are unique;
the image is the name of a Docker image to pull from the Docker Hub public image storage. The convention is <image_name>:<version>-<type>. If you need to run only python and pip commands, then python-slim is the best option. Here we use python-slim version 3.9;
the script is a sequence of bash commands to run within that step. Here we simply run the tests.py file and exit with error code 1 if the tests were unsuccessful.

But enough talking. Make a commit, push it, go to the repository main page, click on “Pipelines” and see the magic!

Congrats on your first successful pipeline! Let's go ahead and add another step with code style and linting checks. Here we use flake8 for linting and black for code style checks. Notice that this step depends on corresponding libraries which aren't built-in. We provide the caches: pip instruction to make pipelines run faster. All the rest is the same. Once again, make a commit and push it.

Now that we've set up the CI part, let's focus on CD. Add these two final steps to the end of your bitbucket-pipelines.yml file. Notice that instead of going on in the default section we add another section: branches: master. This means that all the subsequent steps will only apply to new commits in the master branch. As long as we don't deploy our package from any branch except master, that is exactly what we want!

First copy the previous two steps (tests and codestyle) from the default section and add two others: Test PyPI and PyPI. These steps are a bit more complex, so I won't cover them in detail. Make sure your complete pipeline looks like this:

Once you've pushed the changes to develop, we're ready to deploy. Merge your develop branch with master (either via git merge or a pull request). Go to the pipelines of the last merge commit. Notice the two new steps in the pipeline. But wait, why weren't they executed straight after the previous ones? It's because we provided the trigger: manual instruction, preventing them from running without your permission. Go ahead and click on the “Deploy” button.
After this step executes, go to TestPyPI and make sure the new package has popped up in your profile. Great! We can now test the package by installing it (make sure you add the --index-url instruction). Fine, our package is installable and works as expected. But remember that anything deployed to TestPyPI doesn't affect the real index. So once you are ready, return to the repository and click on the second “Deploy” button to deploy it. After it finishes, your project can be installed simply with pip install autodeploy-template.

Further reading

Oh! It's been quite a lot to do at once. If you've followed me along all the way here, it makes me happy. The full project code is available at BitBucket. Actually, CI/CD is a very complex topic both in terms of the concepts and the technologies involved. We've only touched some of the core concepts and tools and there's still much, much more you can learn about it! Here are some links for further reading:
8585
dbpedia
3
94
https://developer.android.com/cars
en
Android Developers
https://developer.androi…d-developers.png
https://developer.androi…d-developers.png
[ "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/android/images/lockup.svg", "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/android/images/lockup.svg", "https://developer.android.com/static/images/auto/design-for-cars.svg", "https://developer.android.com/_static/android/images/logo-x.svg", "https://www.gstatic.com/images/icons/material/product/2x/youtube_48dp.png", "https://developer.android.com/_static/android/images/logo-linkedin.svg", "https://www.gstatic.com/devrel-devsite/prod/v20ab951cf37b43fc7a428ae75ce91d8269f391204ca16525bc8a5ececea0ab56/android/images/lockup-google-for-developers.svg" ]
[]
[]
[ "collection_distributebestpracticeslaunchdistributeautorelated" ]
null
[]
null
Let drivers listen to and control content in your music and other media apps and hear and respond to your messaging service via the car's controls and screen.
en
https://www.gstatic.com/…ages/favicon.svg
Android Developers
https://developer.android.com/cars
Android for Cars Build apps that help users connect on the road through Android Automotive OS and Android Auto. Users who have a vehicle with Android Automotive OS can install your app onto their vehicle's infotainment system. Android Auto lets users connect their phone, Android 9 or higher, to a compatible vehicle to display a driver-optimized version of your app directly on the console. Android for cars Design Learn how to design apps that are optimized for drivers and vehicles. Design apps that delight your users with experiences that are simple, consistent, and free from distractions. Build media apps Build media apps such as music, radio, and audiobook players that users can install in their Android-powered vehicle, project into their vehicle from their phone, or use on their phone while on the road. Build messaging apps Build messaging apps that receive incoming notifications, read messages using text-to-speech, and let users reply through the Android Auto app using their voice while on the road. Build point of interest, internet of things, and navigation apps Build point of interest, internet of things, and navigation apps for Android Auto and Android Automotive OS that help users find where they want to go and get directions there. Build video apps Port your existing Android streaming video app to Android Automotive OS for users to enjoy in their parked car. Building an ecosystem We partnered with manufacturers around the world to bring the Android platform to cars. Starting in 2020, Android-powered vehicles using Android Automotive OS let users enjoy a dedicated Android experience in the car. And Android Auto gives millions of users a safe and convenient way to enjoy app experiences on the road through hundreds of compatible cars and aftermarket stereo systems. Users can also download the Android Auto app to experience the magic of Android Auto without a compatible vehicle.
8585
dbpedia
1
74
https://forum.bigfix.com/t/rfe-firefox-on-mac/13246
en
RFE: FIrefox on Mac
https://forum.bigfix.com…89dfb255e536.png
https://forum.bigfix.com…89dfb255e536.png
[ "https://github.githubassets.com/favicons/favicon.svg", "https://opengraph.githubassets.com/032135e0a804a3e14ad6c4eeba47d95fabc5c038efe4597ca73346747601c1b5/autopkg/autopkg", "https://github.githubassets.com/favicons/favicon.svg", "https://opengraph.githubassets.com/64fb69e4cc93c307e153dc3aae0ddb51b9685d55318a909ff313aea96c501bd9/autopkg/hansen-m-recipes", "https://github.githubassets.com/favicons/favicon.svg", "https://opengraph.githubassets.com/7cdf36419eeae31d4347c0956a9d6e4f40a50a6f07a2f16cf9c3b1d9f7b2639a/CLCMacTeam/AutoPkgBESEngine", "https://forum.bigfix.com/user_avatar/forum.bigfix.com/hansen_m/48/83_2.png", "https://forum.bigfix.com/images/emoji/twitter/slight_smile.png?v=5" ]
[]
[]
[ "" ]
null
[]
2015-05-01T06:18:18+00:00
I stumbled onto this one in the RFE listings: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&amp;CR_ID=65788 (voted)
en
https://forum.bigfix.com…e536_2_32x32.png
BigFix Forum
https://forum.bigfix.com/t/rfe-firefox-on-mac/13246
@hansen_m has most of our software updates for the Mac fully automated through AutoPkg and a processor that helps it create BigFix tasks. Most of his work is open source. AutoPkg runs nightly and checks if new software has been released from vendors through download recipes, most of which are already available on GitHub for most products. Then a recipe that knows how to take that particular product and put it into a BigFix task is run, and the final task is placed into the BigFix Console automatically. A separate job runs and finds all the newly created tasks and pushes them out to our test machines automatically. We validate the results and then publish them internally to our organization. Now @hansen_m is using AutoPkg to do the same with Windows software. References: http://www.bigfix.me/analysis/details/2994767 Very cool! Shame you can’t share your work, I’d love to collaborate on this stuff! By its nature IEM doesn’t make it incredibly easy to share content, but I’m working on a few ideas to make that easier. Another huge benefit for us is the fact that all this work is in source control management, so we can easily track changes to fixlets/tasks and use more of a devops style approach to systems management.
8585
dbpedia
2
21
https://python-obd.readthedocs.io/
en
python
[]
[]
[]
[ "" ]
null
[]
null
None
en
img/favicon.ico
null
Welcome Python-OBD is a library for handling data from a car's On-Board Diagnostics port (OBD-II). It can stream real time sensor data, perform diagnostics (such as reading check-engine codes), and is fit for the Raspberry Pi. This library is designed to work with standard ELM327 OBD-II adapters. NOTE: Python-OBD is below 1.0.0, meaning the API may change between minor versions. Consult the GitHub release page for changelogs before updating. Installation Install the latest release from pypi: $ pip install obd Note: If you are using a Bluetooth adapter on Linux, you may also need to install and configure your Bluetooth stack. On Debian-based systems, this usually means installing the following packages: $ sudo apt-get install bluetooth bluez-utils blueman Basic Usage import obd connection = obd.OBD() # auto-connects to USB or RF port cmd = obd.commands.SPEED # select an OBD command (sensor) response = connection.query(cmd) # send the command, and parse the response print(response.value) # returns unit-bearing values thanks to Pint print(response.value.to("mph")) # user-friendly unit conversions OBD connections operate in a request-reply fashion. To retrieve data from the car, you must send commands that query for the data you want (e.g. RPM, Vehicle speed, etc). In python-OBD, this is done with the query() function. The commands themselves are represented as objects, and can be looked up by name or value in obd.commands. The query() function will return a response object with parsed data in its value property. Module Layout import obd obd.OBD # main OBD connection class obd.Async # asynchronous OBD connection class obd.commands # command tables obd.Unit # unit tables (a Pint UnitRegistry) obd.OBDStatus # enum for connection status obd.scan_serial # util function for manually scanning for OBD adapters obd.OBDCommand # class for making your own OBD Commands obd.ECU # enum for marking which ECU a command should listen to obd.logger # the OBD module's root logger (for debug) License GNU General Public License V2
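Returning to the module layout above: the quick-start only demonstrates the blocking obd.OBD request-reply flow. As a rough sketch of the asynchronous interface (obd.Async) based on the library's documented watch/start pattern, the loop below shows how continuously streamed values are typically consumed; it assumes an ELM327 adapter is actually connected, and exact behaviour may differ between versions:

# Rough sketch of the asynchronous interface mentioned in the module layout.
import time
import obd

def new_rpm(response):
    # Called by the background worker every time a fresh RPM value arrives.
    if not response.is_null():
        print(response.value)

connection = obd.Async()                      # same auto-connect behaviour as obd.OBD()
connection.watch(obd.commands.RPM, callback=new_rpm)
connection.start()                            # begin polling in the background

time.sleep(10)                                # let values stream in for a while
connection.stop()

The design difference from the blocking API is that query() pulls one value on demand, while an Async connection keeps the watched commands refreshed in a background thread and hands each new response to your callback.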
8585
dbpedia
1
23
https://managingosx.wordpress.com/2015/07/30/using-autopkg-for-general-purpose-packaging/
en
Using autopkg for “general purpose” packaging
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://1.gravatar.com/avatar/dfd8aecb09520679ecbb7faaf0a85350394c501ea61961477c996fe4fd55d308?s=50&d=identicon&r=G", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2015-07-30T00:00:00
A few days ago I made a simple tool for building packages available: munkipkg. https://github.com/munki/munki-pkg I got many comments and suggestions for additional features and all sorts of cool additions. Some have even been added to the tool already. But I would like to keep munkipkg a pretty simple, basic tool. The Luggage (https://github.com/unixorn/luggage) has…
en
https://s1.wp.com/i/favicon.ico
Managing OS X
https://managingosx.wordpress.com/2015/07/30/using-autopkg-for-general-purpose-packaging/
A few days ago I made a simple tool for building packages available: munkipkg. https://github.com/munki/munki-pkg I got many comments and suggestions for additional features and all sorts of cool additions. Some have even been added to the tool already. But I would like to keep munkipkg a pretty simple, basic tool. The Luggage (https://github.com/unixorn/luggage) has been around for a while; if munkipkg is too simple for your needs, please have look at that. I also suggested to several people that if they had more complex needs than munkipkg could handle, it might make more sense to use autopkg, which supports very complex, customizable workflows. I could tell by the awkward silence that my suggestion was confusing to some — that they had trouble grokking how to use autopkg to build packages “from scratch”, using files and scripts on the local disk. So I created a GitHub repo demonstrating how to use autopkg in this manner. It’s here: https://github.com/gregneagle/autopkg-packaging-demo munkipkg comes with three demo package projects. Two of the packages install files, the third is a “payload-free” package that simply runs a script when installed. The autopkg-packaging-demo duplicates these packages, but uses autopkg to build them instead of munkipkg. (One could also imagine building these packages using either tool: the payload and scripts directories would be the same — in other words, you could have both a build-info.plist for munkipkg and a recipe for autopkg in the same package project directory.) Assuming you have autopkg installed, you can `git clone` the repo, or download and expand the zip file, and run the autopkg recipes within. I hope this clears up some confusion, and sparks some new ideas!
8585
dbpedia
0
80
https://docs.aws.amazon.com/lambda/latest/dg/golang-package.html
en
Deploy Go Lambda functions with .zip file archives
https://docs.aws.amazon.com/assets/images/favicon.ico
https://docs.aws.amazon.com/assets/images/favicon.ico
[ "https://d1ge0kk1l5kms0.cloudfront.net/images/G/01/webservices/console/warning.png" ]
[]
[]
[ "Lambda", "AWS Lambda", "serverless", "serverless applications", "cloud computing" ]
null
[]
null
This page describes how to create a .zip file as your deployment package for Go using an (the provided runtime family).
en
/assets/images/favicon.ico
https://docs.aws.amazon.com/lambda/latest/dg/golang-package.html
Your AWS Lambda function's code consists of scripts or compiled programs and their dependencies. You use a deployment package to deploy your function code to Lambda. Lambda supports two types of deployment packages: container images and .zip file archives. This page describes how to create a .zip file as your deployment package for the Go runtime, and then use the .zip file to deploy your function code to AWS Lambda using the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS Serverless Application Model (AWS SAM). Note that Lambda uses POSIX file permissions, so you may need to set permissions for the deployment package folder before you create the .zip file archive. Creating a .zip file on macOS and Linux The following steps show how to compile your executable using the go build command and create a .zip file deployment package for Lambda. Before compiling your code, make sure you have installed the lambda package from GitHub. This module provides an implementation of the runtime interface, which manages the interaction between Lambda and your function code. To download this library, run the following command. go get github.com/aws/aws-lambda-go/lambda If your function uses the AWS SDK for Go, download the standard set of SDK modules, along with any AWS service API clients required by your application. To learn how to install the SDK for Go, see Getting Started with the AWS SDK for Go V2. Using the provided runtime family Go is implemented differently than other managed runtimes. Because Go compiles natively to an executable binary, it doesn't require a dedicated language runtime. Use an OS-only runtime (the provided runtime family) to deploy Go functions to Lambda. Creating a .zip file on Windows The following steps show how to download the build-lambda-zip tool for Windows from GitHub, compile your executable, and create a .zip deployment package. Before compiling your code, make sure you have installed the lambda library from GitHub. To download this library, run the following command. go get github.com/aws/aws-lambda-go/lambda If your function uses the AWS SDK for Go, download the standard set of SDK modules, along with any AWS service API clients required by your application. To learn how to install the SDK for Go, see Getting Started with the AWS SDK for Go V2. Using the provided runtime family Go is implemented differently than other managed runtimes. Because Go compiles natively to an executable binary, it doesn't require a dedicated language runtime. Use an OS-only runtime (the provided runtime family) to deploy Go functions to Lambda. Creating and updating Go Lambda functions using .zip files Once you have created your .zip deployment package, you can use it to create a new Lambda function or update an existing one. You can deploy your .zip package using the Lambda console, the AWS Command Line Interface, and the Lambda API. You can also create and update Lambda functions using AWS Serverless Application Model (AWS SAM) and AWS CloudFormation. The maximum size for a .zip deployment package for Lambda is 250 MB (unzipped). Note that this limit applies to the combined size of all the files you upload, including any Lambda layers. The Lambda runtime needs permission to read the files in your deployment package. In Linux permissions octal notation, Lambda needs 644 permissions for non-executable files (rw-r--r--) and 755 permissions (rwxr-xr-x) for directories and executable files. 
In Linux and MacOS, use the chmod command to change file permissions on files and directories in your deployment package. For example, to give an executable file the correct permissions, run the following command.

chmod 755 <filepath>

To change file permissions in Windows, see Set, View, Change, or Remove Permissions on an Object in the Microsoft Windows documentation.

Creating and updating functions with .zip files using the console

To create a new function, you must first create the function in the console, then upload your .zip archive. To update an existing function, open the page for your function, then follow the same procedure to add your updated .zip file. If your .zip file is less than 50MB, you can create or update a function by uploading the file directly from your local machine. For .zip files greater than 50MB, you must upload your package to an Amazon S3 bucket first. For instructions on how to upload a file to an Amazon S3 bucket using the AWS Management Console, see Getting started with Amazon S3. To upload files using the AWS CLI, see Move objects in the AWS CLI User Guide.

Note: You cannot convert an existing container image function to use a .zip archive. You must create a new function.

Creating and updating functions with .zip files using the AWS CLI

You can use the AWS CLI to create a new function or to update an existing one using a .zip file. Use the create-function and update-function-code commands to deploy your .zip package. If your .zip file is smaller than 50MB, you can upload the .zip package from a file location on your local build machine. For larger files, you must upload your .zip package from an Amazon S3 bucket. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see Move objects in the AWS CLI User Guide.

Note: If you upload your .zip file from an Amazon S3 bucket using the AWS CLI, the bucket must be located in the same AWS Region as your function.

To create a new function using a .zip file with the AWS CLI, you must specify the following: You must also specify the location of your .zip file. If your .zip file is located in a folder on your local build machine, use the --zip-file option to specify the file path, as shown in the following example command.

aws lambda create-function --function-name myFunction \
--runtime provided.al2023 --handler bootstrap \
--role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
--zip-file fileb://myFunction.zip

To specify the location of the .zip file in an Amazon S3 bucket, use the --code option as shown in the following example command. You only need to use the S3ObjectVersion parameter for versioned objects.

aws lambda create-function --function-name myFunction \
--runtime provided.al2023 --handler bootstrap \
--role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
--code S3Bucket=amzn-s3-demo-bucket,S3Key=myFileName.zip,S3ObjectVersion=myObjectVersion

To update an existing function using the CLI, you specify the name of your function using the --function-name parameter. You must also specify the location of the .zip file you want to use to update your function code. If your .zip file is located in a folder on your local build machine, use the --zip-file option to specify the file path, as shown in the following example command.
aws lambda update-function-code --function-name myFunction \
--zip-file fileb://myFunction.zip

To specify the location of the .zip file in an Amazon S3 bucket, use the --s3-bucket and --s3-key options as shown in the following example command. You only need to use the --s3-object-version parameter for versioned objects.

aws lambda update-function-code --function-name myFunction \
--s3-bucket amzn-s3-demo-bucket --s3-key myFileName.zip --s3-object-version myObjectVersion

Creating and updating functions with .zip files using the Lambda API

To create and update functions using a .zip file archive, use the following API operations (an illustrative boto3 sketch follows the layer example below):

Creating and updating functions with .zip files using AWS SAM

The AWS Serverless Application Model (AWS SAM) is a toolkit that helps streamline the process of building and running serverless applications on AWS. You define the resources for your application in a YAML or JSON template and use the AWS SAM command line interface (AWS SAM CLI) to build, package, and deploy your applications. When you build a Lambda function from an AWS SAM template, AWS SAM automatically creates a .zip deployment package or container image with your function code and any dependencies you specify. To learn more about using AWS SAM to build and deploy Lambda functions, see Getting started with AWS SAM in the AWS Serverless Application Model Developer Guide. You can also use AWS SAM to create a Lambda function using an existing .zip file archive. To create a Lambda function using AWS SAM, you can save your .zip file in an Amazon S3 bucket or in a local folder on your build machine. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see Move objects in the AWS CLI User Guide. In your AWS SAM template, the AWS::Serverless::Function resource specifies your Lambda function. In this resource, set the following properties to create a function using a .zip file archive: With AWS SAM, if your .zip file is larger than 50MB, you don't need to upload it to an Amazon S3 bucket first. AWS SAM can upload .zip packages up to the maximum allowed size of 250MB (unzipped) from a location on your local build machine. To learn more about deploying functions using .zip files in AWS SAM, see AWS::Serverless::Function in the AWS SAM Developer Guide.

Creating and updating functions with .zip files using AWS CloudFormation

You can use AWS CloudFormation to create a Lambda function using a .zip file archive. To create a Lambda function from a .zip file, you must first upload your file to an Amazon S3 bucket. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see Move objects in the AWS CLI User Guide. In your AWS CloudFormation template, the AWS::Lambda::Function resource specifies your Lambda function. In this resource, set the following properties to create a function using a .zip file archive: The .zip file that AWS CloudFormation generates cannot exceed 4MB. To learn more about deploying functions using .zip files in AWS CloudFormation, see AWS::Lambda::Function in the AWS CloudFormation User Guide.

Creating a Go layer for your dependencies

Note: Using layers with functions in a compiled language like Go may not provide the same amount of benefit as with an interpreted language like Python. Since Go is a compiled language, your functions still have to manually load any shared assemblies into memory during the init phase, which can increase cold start times.
Instead, we recommend including any shared code at compile time to take advantage of any built-in compiler optimizations. The instructions in this section show you how to include your dependencies in a layer. Lambda automatically detects any libraries in the /opt/lib directory, and any binaries in the /opt/bin directory. To ensure that Lambda properly finds your layer content, create a layer with the following structure:

custom-layer.zip
└ lib
  | lib_1
  | lib_2
└ bin
  | bin_1
  | bin_2
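The "Lambda API" operations referenced earlier are not spelled out in the extracted text. As an illustration only, and not an excerpt from the AWS documentation, a Python (boto3) sketch of that path for a Go function on the provided runtime family might look like the following; the function name, role ARN, and zip path reuse the placeholders from the CLI examples above:

# Illustrative boto3 sketch of the Lambda API path described above.
import boto3

lambda_client = boto3.client("lambda")

with open("myFunction.zip", "rb") as f:
    zip_bytes = f.read()

# Create a Go function on an OS-only (provided) runtime; the compiled binary
# inside the zip must be named "bootstrap".
lambda_client.create_function(
    FunctionName="myFunction",
    Runtime="provided.al2023",
    Handler="bootstrap",
    Role="arn:aws:iam::111122223333:role/service-role/my-lambda-role",
    Code={"ZipFile": zip_bytes},
)

# Later, push a new build of the same zip to update the function code.
lambda_client.update_function_code(
    FunctionName="myFunction",
    ZipFile=zip_bytes,
)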
8585
dbpedia
0
18
https://python-poetry.org/
en
Python dependency management and packaging made easy
https://python-poetry.or…n-origami-32.png
https://python-poetry.or…n-origami-32.png
[ "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg" ]
[]
[]
[ "" ]
null
[]
null
Python dependency management and packaging made easy
/images/favicon-origami-32.png
https://python-poetry.org/
Libraries

This chapter will tell you how to make your library installable through Poetry.

Versioning

Poetry requires PEP 440-compliant versions for all projects. While Poetry does not enforce any release convention, it used to encourage the use of semantic versioning within the scope of PEP 440 and supports version constraints that are especially suitable for semver.

Note: As an example, 1.0.0-hotfix.1 is not compatible with PEP 440.

Configuration

Poetry can be configured via the config command (see more about its usage here) or directly in the config.toml file that will be automatically created when you first run that command.

Repositories

Poetry supports the use of PyPI and private repositories for discovery of packages as well as for publishing your projects. By default, Poetry is configured to use the PyPI repository for package installation and publishing. So, when you add dependencies to your project, Poetry will assume they are available on PyPI. This represents most cases and will likely be enough for most users.

Private Repository Example: Installing from private package sources. By default, Poetry discovers and installs packages from PyPI.

Dependency specification

Dependencies for a project can be specified in various forms, which depend on the type of the dependency and on the optional constraints that might be needed for it to be installed.

Version constraints

Caret requirements: Caret requirements allow SemVer-compatible updates to a specified version.

Plugins

Poetry supports using and building plugins if you wish to alter or expand Poetry's functionality with your own. For example, if your environment poses special requirements on the behaviour of Poetry which do not apply to the majority of its users, or if you wish to accomplish something with Poetry in a way that is not desired by most users, you could consider creating a plugin to handle your specific logic.

Contributing to Poetry

First off, thanks for taking the time to contribute! The following is a set of guidelines for contributing to Poetry on GitHub.

FAQ

Why is the dependency resolution process slow? While the dependency resolver at the heart of Poetry is highly optimized and should be fast enough for most cases, with certain sets of dependencies it can take time to find a valid solution. This is due to the fact that not all libraries on PyPI have properly declared their metadata and, as such, they are not available via the PyPI JSON API.
8585
dbpedia
1
0
https://pypi.org/project/autopackage/
en
autopackage
https://pypi.org/static/…er.abaf4b19.webp
https://pypi.org/static/…er.abaf4b19.webp
[ "https://pypi.org/static/images/logo-small.8998e9d1.svg", "https://pypi-camo.freetls.fastly.net/d0ad7ce78df62f23469b31df433d79dd48f16adf/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f61323563333063663963363562636361396533353165623430333161313136653f73697a653d3530", "https://pypi-camo.freetls.fastly.net/d0ad7ce78df62f23469b31df433d79dd48f16adf/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f61323563333063663963363562636361396533353165623430333161313136653f73697a653d3530", "https://pypi.org/static/images/blue-cube.572a5bfb.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi-camo.freetls.fastly.net/ed7074cadad1a06f56bc520ad9bd3e00d0704c5b/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6177732d77686974652d6c6f676f2d7443615473387a432e706e67", "https://pypi-camo.freetls.fastly.net/8855f7c063a3bdb5b0ce8d91bfc50cf851cc5c51/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f64617461646f672d77686974652d6c6f676f2d6668644c4e666c6f2e706e67", "https://pypi-camo.freetls.fastly.net/df6fe8829cbff2d7f668d98571df1fd011f36192/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f666173746c792d77686974652d6c6f676f2d65684d3077735f6f2e706e67", "https://pypi-camo.freetls.fastly.net/420cc8cf360bac879e24c923b2f50ba7d1314fb0/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f676f6f676c652d77686974652d6c6f676f2d616734424e3774332e706e67", "https://pypi-camo.freetls.fastly.net/524d1ce72f7772294ca4c1fe05d21dec8fa3f8ea/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6d6963726f736f66742d77686974652d6c6f676f2d5a443172685444462e706e67", "https://pypi-camo.freetls.fastly.net/d01053c02f3a626b73ffcb06b96367fdbbf9e230/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f70696e67646f6d2d77686974652d6c6f676f2d67355831547546362e706e67", "https://pypi-camo.freetls.fastly.net/67af7117035e2345bacb5a82e9aa8b5b3e70701d/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f73656e7472792d77686974652d6c6f676f2d4a2d6b64742d706e2e706e67", "https://pypi-camo.freetls.fastly.net/b611884ff90435a0575dbab7d9b0d3e60f136466/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f737461747573706167652d77686974652d6c6f676f2d5467476c6a4a2d502e706e67" ]
[]
[]
[ "" ]
null
[]
2022-07-22T11:41:23+00:00
Tool for easy and fast packaging in Python wheels.
en
/static/images/favicon.35549fe8.ico
PyPI
https://pypi.org/project/autopackage/
Tool with which you can package code quickly and have it ready for distribution, either as an installable wheel or as a portable program.
8585
dbpedia
2
17
https://en.wikipedia.org/wiki/List_of_software_package_management_systems
en
List of software package management systems
https://en.wikipedia.org/static/favicon/wikipedia.ico
https://en.wikipedia.org/static/favicon/wikipedia.ico
[ "https://en.wikipedia.org/static/images/icons/wikipedia.png", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en.svg", "https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en.svg", "https://upload.wikimedia.org/wikipedia/en/thumb/d/db/Symbol_list_class.svg/16px-Symbol_list_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/16px-Symbol_category_class.svg.png", "https://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/12px-Commons-logo.svg.png", "https://login.wikimedia.org/wiki/Special:CentralAutoLogin/start?type=1x1", "https://en.wikipedia.org/static/images/footer/wikimedia-button.svg", "https://en.wikipedia.org/static/images/footer/poweredby_mediawiki.svg" ]
[]
[]
[ "" ]
null
[ "Contributors to Wikimedia projects" ]
2010-09-10T16:13:01+00:00
en
/static/apple-touch/wikipedia.png
https://en.wikipedia.org/wiki/List_of_software_package_management_systems
This is a list of notable software package management systems, categorized first by package format (binary, source code, hybrid) and then by operating system family.[1] The following package management systems distribute apps in binary package form; i.e., all apps are compiled and ready to be installed and use. dpkg: Originally used by Debian and now by Ubuntu. Uses the .deb format and was the first to have a widely known dependency resolution tool, APT. The ncurses-based front-end for APT, aptitude, is also a popular package manager for Debian-based systems; Entropy: Used by and created for Sabayon Linux. It works with binary packages that are bzip2-compressed tar archives (file extension: .tbz2), that are created using Entropy itself, from tbz2 binaries produced by Portage: From ebuilds, a type of specialized shell script; Flatpak: A containerized/sandboxed packaging format previously known as xdg-app; GNU Guix: Used by the GNU System. It is based on the Nix package manager with Guile Scheme APIs and specializes in providing exclusively free software; Homebrew: a port of the MacOS package manager of the same name (see below), formerly referred to as 'Linuxbrew'; ipkg: A dpkg-inspired, very lightweight system targeted at storage-constrained Linux systems such as embedded devices and handheld computers. Used on HP's webOS; netpkg: The package manager used by Zenwalk. Compatible with Slackware package management tools; Nix package manager: Nix is a package manager for Linux and other Unix-like systems that makes package management reliable and reproducible. It provides atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management and easy setup of build environments; OpenPKG: Cross-platform package management system based on RPM Package Manager; opkg: Fork of ipkg lightweight package management intended for use on embedded Linux devices; Pacman: Used in Arch Linux, Frugalware and DeLi Linux. Its binary package format is a compressed tar archive (default file extension: .pkg.tar.zst) built using the makepkg utility (which comes bundled with pacman) and a specialized type of shell script called a PKGBUILD; PETget: Used by Puppy Linux; PISI: Stands for "Packages Installed Successfully as Intended". Pisi package manager is used by Pisi Linux.[2] Pardus used to use Pisi, but migrated to APT in 2013;[3] pkgsrc: A cross-platform package manager, with binary packages provided for Enterprise Linux, macOS and SmartOS by Joyent and other vendors; Portage: A package management system ran by the emerge command, originally created for and used by Gentoo Linux; RPM Package Manager: Created by Red Hat. RPM is the Linux Standard Base packaging format and the base of a number of additional tools, including apt4rpm, Red Hat's up2date, Mageia's urpmi, openSUSE's ZYpp (zypper), PLD Linux's poldek, Fedora's DNF, and YUM, which is used by Red Hat Enterprise Linux, and Yellow Dog Linux; slackpkg; slapt-get: An APT-like package manager for Slackware; Smart Package Manager: Used by CCux Linux; Snap: Cross-distribution package manager, non-free on the server-side, originally developed for Ubuntu; Swaret; xbps (X Binary Package System): Used by Void Linux; apk-tools: Used by Alpine Linux. Originally a collection of shell scripts, but has been since rewritten in C; Amazon Appstore: Alternative app store for Android devices; Aptoide: application for installing mobile applications which runs on the Android operating system. 
In Aptoide there is no single centralized store; instead, each user manages their own store. Cafe Bazaar: Alternative app store for Android. F-Droid: Alternative app store for Android, whose official repository contains only free software; Samsung Galaxy Store: An app store developed by Samsung for Android, Tizen, Windows Mobile and Bada devices. GetJar: An independent mobile phone app store founded in Lithuania in 2004; Google Play: Online app store developed by Google for Android devices that license the proprietary Google Application set; Huawei AppGallery: An app store developed by Huawei for Android devices and HarmonyOS devices. SlideME: Alternative app store for Android; Mac App Store: Official digital distribution platform for OS X apps. Part of OS X 10.7 and available as an update for OS X 10.6; Fink: A port of dpkg, it is one of the earliest package managers for macOS; Homebrew: Command-line-interface-based package manager, known for its ease of use and extensibility. MacPorts: Formerly known as DarwinPorts, based on FreeBSD Ports (as is macOS itself); Joyent: Provides a repository of 10,000+ binary packages for macOS based on pkgsrc;[4] FreeBSD pkg – FreeBSD binary packages are built on top of the source-based FreeBSD Ports and managed with the pkg tool; OpenBSD ports: The infrastructure behind the binary packages on OpenBSD; pkgsrc: A cross-platform package manager, with regular binary packages provided for NetBSD, Linux and macOS by multiple vendors; dpkg: Used as part of Debian GNU/kFreeBSD; OpenPKG: Cross-platform package management system based on rpm; PC-BSD: Up to and including version 8.2[5] uses files with the .pbi (Push Button Installer) filename extension which, when double-clicked, bring up an installation wizard program. Each PBI is self-contained and uses de-duplicated private dependencies to avoid version conflicts. An autobuild system tracks the FreeBSD ports collection and generates new PBIs daily. PC-BSD also uses the FreeBSD pkg binary package system; new packages are built approximately every two weeks from both a stable and rolling release branch of the FreeBSD ports tree. Image Packaging System (IPS, also known as "pkg(5)"): Used by Solaris, OpenSolaris and Illumos distributions like OpenIndiana and OmniOS; pkgsrc: SmartOS, an Illumos distribution from Joyent, uses pkgsrc, which can also be bootstrapped for use on OpenIndiana;[6] OpenCSW: Community-supported collection of packages in SysV format for SunOS 5.8-5.11 (Solaris 8-11); OpenPKG: Cross-platform package management system based on RPM Package Manager. App Store: Official app store for iOS apps; Cydia: Frontend to a port of APT. Maintained by the jailbreak community. Microsoft Store: Official app store for Universal Windows Platform apps on Windows NT and Windows 10 Mobile. As of Windows 11, it distributes video games and films as well; Windows Package Manager (aka winget): Free and open-source package manager designed for Microsoft Windows; Chocolatey: Open-source decentralized package manager for Windows in the spirit of Yum and apt-get. Usability wrapper for NuGet; Cygwin: Free and open-source software repository for Windows NT.
Provides many Linux tools and an installation tool with a package manager; Homebrew: a port of the macOS package manager meant for use with Windows Subsystem for Linux, using the already existing Linux port as its base; Ninite: Proprietary package manager for Windows NT; NuGet: A Microsoft-official free and open-source package manager for Windows, available as a plugin for Visual Studio, and extendable from the command line; Pacman: MSYS2-ported Windows version of the Arch Linux package manager; Scoop Package Manager: free and open-source package manager for Windows; wpkg: Open-source package manager that handles Debian packages on Windows. Started as a clone of dpkg, and has many apt-get-like features too; Superseded: Windows Phone Store: Former official app store for Windows Phone. Now superseded by Microsoft Store; SMP/E. The following package management systems distribute the source code of their apps. Either the user must know how to compile the packages, or they come with a script that automates the compilation process. For example, in GoboLinux a recipe file contains information on how to download, unpack, compile and install a package using its Compile tool. In both cases, the user must provide the computing power and time needed to compile the app, and is legally responsible for the consequences of compiling the package. FreeBSD Ports is the original implementation of a source-based software management system, commonly referred to as a ports collection. It gave rise to and inspired many other systems; OpenBSD ports is a Perl-based reimplementation of the ports collection; ABS is used by Arch Linux to automate building binary packages from source or even other binary archives, with automatic download and dependency checking; apt-build is used by distributions which use deb packages, allowing automatic compiling and installation of software in a deb source repository; Sorcery is Sourcemage GNU/Linux's bash-based package management program that automatically downloads software from its original site and compiles and installs it on the local machine; Fink, for OS X, derives partially from dpkg/apt and partially from ports; MacPorts, formerly called DarwinPorts, originated from the OpenDarwin project; Homebrew, with close Git integration; pkgsrc can be used to install software directly from source code, or to use the binary packages provided by several independent vendors. vcpkg:[7] A Microsoft C++ package manager for Windows, Linux and macOS. Nix package manager: Package manager that manages software in a purely functional way, featuring multi-user support, atomic upgrades and rollbacks. Allows multiple versions or variants of a piece of software to be installed at the same time. It has support for macOS and is cross-distribution in its Linux support; Portage and emerge are used by Gentoo Linux, Funtoo Linux, and Sabayon Linux. Portage is inspired by the BSD ports system and uses text-based "ebuilds" to automatically download, customize, build, and update packages from source code. It has automatic dependency checking and allows multiple versions of a software package to be installed into different "slots" on the same system. Portage also employs "use flags" to allow the user to fully customize a software build to suit the needs of their platform in an automated fashion.
While source code distribution and customization is the preferred methodology, some larger packages that would take many hours to compile on a typical desktop computer are also offered as pre-compiled binaries in order to ease installation; Upkg: Package management and build system based on Mono and XML specifications. Used by paldo and previously by ExTiX Linux; MacPorts (for OS X); NetBSD's pkgsrc works on several Unix-like operating systems, with regular binary packages for macOS and Linux provided by multiple independent vendors; Collective Knowledge Framework is a cross-platform package and workflow framework with a JSON API that can download binary packages or build them from sources for Linux, Windows, macOS and Android platforms.[8] The following systems unify package management for several or all Linux distributions and sometimes other Unix variants. These, too, are based on the concept of a recipe file. AppImage (previously klik and PortableLinuxApps) aims to provide an easy way to get software packages for most major distributions without the dependency problems so common in many other package formats. Autopackage uses .package files. PackageKit is a set of utilities and libraries for creating applications that can manage packages across multiple package managers using back-ends to call the correct program. The following package management systems are geared toward developing and distributing video games. Steam: A cross-platform video game distribution, licensing and social gameplay platform, developed and maintained by Valve. Used to shop for, download, install, update, uninstall and back up video games. Works on Windows NT, OS X and Linux; Uplay: A cross-platform video game distribution, licensing and social gameplay platform, developed and maintained by Ubisoft. Used to shop for, download, install and update video games. Works on Windows NT and Windows Phone, as well as PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Wii U, iOS and Android. Xbox Live: A cross-platform video game distribution platform by Microsoft. Works on Windows NT, Windows Phone and Xbox. Initially called Games for Windows – Live on Windows 7 and earlier. On Windows 10, the distribution function is taken over by Windows Store; A wide variety of package management systems are in common use today by proprietary software operating systems, handling the installation of both proprietary and free packages. Software Distributor is the HP-UX package manager. Bitnami: a library of installers or software packages for web applications; Cargo: Rust's build system and package manager.
It downloads, compiles, distributes, and uploads packages—called crates; CocoaPods: a dependency manager for Swift and Objective-C Cocoa projects; Composer: a dependency manager for PHP; Conda: a package manager for the Python and R open data science ecosystem; CPAN: a programming library and package manager for Perl; CRAN: a programming library and package manager for R; CTAN: a package manager for TeX; Docker: a system for managing containers that serves as a package manager for deploying containerized applications; Enthought Canopy: a package manager for a Python scientific and analytic computing distribution and analysis environment; Gradle: a build system and package manager for Groovy and other JVM languages, and also C++; Ivy: a package manager for Java, integrated into the Ant build tool, also used by sbt; Leiningen: a project automation tool for Clojure; LuaRocks: a programming library and package manager for Lua; Maven: a package manager and build tool for Java; npm: a programming library and package manager for Node.js and JavaScript; NuGet: the package manager for the Microsoft development platform, including .NET Framework and Xamarin; PAR::Repository and Perl package manager: binary package managers for Perl; PEAR: a programming library for PHP; pip: a package manager for Python, installing packages from the PyPI repository; RubyGems: a package manager and repository for Ruby; sbt: a build tool for Scala that uses Ivy for dependency management; yarn: an alternative to npm for Node.js and JavaScript.
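Most of the language-level package managers above are driven from the command line or from build tooling rather than imported as libraries. As a rough illustration only (not part of the list above, and the package name is just an example), a commonly recommended pattern when driving pip from Python is to invoke it as a module of the target interpreter in a subprocess:

import subprocess
import sys

def pip_install(package: str) -> None:
    # Invoke pip as a module of the current interpreter so the package
    # lands in the same environment this script is running in.
    subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)

if __name__ == "__main__":
    pip_install("requests")  # "requests" is only an example package name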
8585
dbpedia
0
9
https://groups.google.com/g/munki-discuss/c/xapas_bA9LE
en
Pkginfo for Managed-python3
https://www.gstatic.com/…/groups_32dp.png
https://www.gstatic.com/…/groups_32dp.png
[ "https://fonts.gstatic.com/s/i/productlogos/groups/v9/web-48dp/logo_groups_color_1x_web_48dp.png", "https://lh3.googleusercontent.com/a-/ALV-UjXGP7P3-o_MudMMs8fhL5c51FWcPmcxcVLTpN9iOTwRS0vVeufK=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjVl-SzPNhQmH5w6byDkdpQwpsIuKRPiqNrxUYoAzT1LC5c_qbqO=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjWDrbAFe2Cjo21BkIuXIipKhl52POgVoKkWMuo59-gyifCWBls=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjXGP7P3-o_MudMMs8fhL5c51FWcPmcxcVLTpN9iOTwRS0vVeufK=s40-c", "https://lh3.googleusercontent.com/a-/ALV-UjU0RKaUSOReYWOMz8pKdoOdxxjni0foHASCV9Kikghj_Bi81T87vQ=s40-c" ]
[]
[]
[ "" ]
null
[]
null
en
//www.gstatic.com/images/branding/product/1x/groups_32dp.png
https://groups.google.com/g/munki-discuss/c/xapas_bA9LE
* Processing manifest item Managed-Python3 for update Looking for detail for: Managed-Python3, version latest... Considering 1 items with name Managed-Python3 from catalog testing Considering item Managed-Python3, version 3.10.2.80694 with minimum os version required 10.5.0 Our OS version is 11.7 Found Managed-Python3, version 3.10.2.80694 in catalog testing * Processing manifest item Managed-Python3 for install Looking for detail for: Managed-Python3, version latest... Considering 1 items with name Managed-Python3 from catalog testing Considering item Managed-Python3, version 3.10.2.80694 with minimum os version required 10.5.0 Our OS version is 11.7 Found Managed-Python3, version 3.10.2.80694 in catalog testing Managed-Python3 version 3.10.2.80694 (or newer) is already installed. Looking for updates for: Managed-Python3 Looking for updates for: Managed-Python3-3.10.2.80694 Looking for updates for: Managed-Python3--3.10.2.80694 This is as I described. Managed Software Centre has the install button greyed out; underneath, it says 'Installed'. I just added it to a manifest for a second Mac also running macOS 12.6 and Munki, and it shows the same result. :( This second Mac has absolutely never run this particular installer pkg. On the second Mac, whilst Managed Software Centre says the same thing, there are fewer entries in the log output in Terminal.
8585
dbpedia
0
75
https://tljh.jupyter.org/en/latest/howto/user-env/user-environment.html
en
Install conda, pip or apt packages
https://tljh.jupyter.org…minal-button.png
https://tljh.jupyter.org…minal-button.png
[ "https://tljh.jupyter.org/en/latest/_static/logo.png", "https://tljh.jupyter.org/en/latest/_images/new-terminal-button.png", "https://tljh.jupyter.org/en/latest/_images/new-terminal-button.png", "https://tljh.jupyter.org/en/latest/_images/new-terminal-button.png" ]
[]
[]
[ "" ]
null
[]
null
TLJH (The Littlest JupyterHub) starts all users in the same conda environment. Packages / libraries installed in this environment are available to all users on the JupyterHub. Users with admin righ...
en
../../_static/favicon.ico
The Littlest JupyterHub
https://tljh.jupyter.org/howto/user-env/user-environment.html
Installing conda packages Conda lets you install new languages (such as new versions of python, node, R, etc) as well as packages in those languages. For lots of scientific software, installing with conda is often simpler & easier than installing with pip - especially if it links to C / Fortran code. We recommend installing packages from conda-forge, a community maintained repository of conda packages. Log in as an admin user and open a Terminal in your Jupyter Notebook. If you already have a terminal open as an admin user, that should work too! Install a package! sudo -E conda install -c conda-forge gdal This installs the gdal library from conda-forge and makes it available to all users. gdal is much harder to install with pip. Note If you get an error message like sudo: conda: command not found, make sure you are not missing the -E parameter after sudo. Installing apt packages apt is the official package manager for the Ubuntu Linux distribution. You can install utilities (such as vim, sl, htop, etc), servers (postgres, mysql, nginx, etc) and a lot more languages than present in conda (haskell, prolog, INTERCAL). Some third party software (such as RStudio) is distributed as .deb files, which are the files apt uses to install software. You can search for packages with Ubuntu Package search - make sure to look in the version of Ubuntu you are using! Log in as an admin user and open a Terminal in your Jupyter Notebook. If you already have a terminal open as an admin user, that should work too! Update the list of packages available. This makes sure you get the latest version of the packages possible from the repositories. sudo apt update Install the packages you want. sudo apt install mysql-server git This installs (and starts) a MySQL database server and git. User environment location The user environment is a conda environment set up in /opt/tljh/user, with a python3 kernel as the default. It is readable by all users, but writeable only by users who have root access. This makes it possible for JupyterHub admins (who have root access with sudo) to install software in the user environment easily. Accessing the user environment outside JupyterHub We add /opt/tljh/user/bin to the $PATH environment variable for all JupyterHub users, so everything installed in the user environment is available to them automatically. If you are using ssh to access your server instead, you can get access to the same environment with: export PATH=/opt/tljh/user/bin:${PATH} Whenever you run any command now, the user environment will be searched first before your system environment is. So if you run python3 <somefile>, it'll use the python3 installed in the user environment (/opt/tljh/user/bin/python3) rather than the python3 installed in your system environment (/usr/bin/python3). This is usually what you want! To make this change 'stick', you can add the line to the end of the .bashrc file in your home directory. When using sudo, the $PATH environment variable is usually reset, for security reasons. This leads to error messages like: sudo conda install -c conda-forge gdal sudo: conda: command not found The most common & portable way to fix this when using ssh is: sudo PATH=${PATH} conda install -c conda-forge gdal
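The PATH precedence described above can also be sanity-checked from Python. A minimal sketch, using only the /opt/tljh/user prefix mentioned in this page; run it from a JupyterHub terminal, or over ssh after exporting the PATH as shown:

import shutil
import sys

# Which executables will the shell resolve first on the current PATH?
for tool in ("python3", "conda", "pip"):
    print(f"{tool:8s} -> {shutil.which(tool)}")

# The interpreter running this script; inside the TLJH user environment
# it should live under /opt/tljh/user/bin.
print("running under:", sys.executable)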
8585
dbpedia
3
20
https://hackage.haskell.org/package/auto
en
auto
https://hackage.haskell.…atic/favicon.png
https://hackage.haskell.…atic/favicon.png
[ "https://img.shields.io/static/v1?label=Build&message=InstallOk&color=success", "https://img.shields.io/static/v1?label=Documentation&message=Available&color=success", "https://img.shields.io/hackage/v/auto.svg?maxAge=2592000", "http://stackage.org/package/auto/badge/lts", "http://stackage.org/package/auto/badge/nightly", "https://travis-ci.org/mstksg/auto.svg?branch=master", "https://badges.gitter.im/Join%20Chat.svg" ]
[]
[]
[ "" ]
null
[]
null
Denotative, locally stateful programming DSL & platform
/static/favicon.png
Hackage
https://hackage.haskell.org/package/auto
$ cabal install auto Check it out! -- Let's implement a PID feedback controller over a black box system. import Control.Auto import Prelude hiding ((.), id) -- We represent a system as `System`, an `Auto` that takes a stream of `Double`s -- as input and transforms it into a stream of `Double`s as output. The `m` -- means that a `System IO` might do IO in the process of creating its outputs, -- for instance. -- type System m = Auto m Double Double -- A PID controller adjusts the input to the black box system until the -- response matches the target. It does this by adjusting the input based on -- the current error, the cumulative sum, and the consecutive differences. -- -- See http://en.wikipedia.org/wiki/PID_controller -- -- Here, we just lay out the "concepts"/time-varying values in our system as a -- recursive/cyclic graph of dependencies. It's a feedback system, after all. -- pid :: MonadFix m => (Double, Double, Double) -> System m -> System m pid (kp, ki, kd) blackbox = proc target -> do -- proc syntax; see tutorial rec -- err :: Double -- the difference of the response from the target let err = target - response -- cumulativeSum :: Double -- the cumulative sum of the errs cumulativeSum <- sumFrom 0 -< err -- changes :: Maybe Double -- the consecutive differences of the errors, with 'Nothing' at first. changes <- deltas -< err -- adjustment :: Double -- the adjustment term, from the PID algorithm let adjustment = kp * err + ki * cumulativeSum + kd * fromMaybe 0 changes -- the control input is the cumulative sum of the adjustments control <- sumFromD 0 -< adjustment -- the response of the system, feeding the control into the blackbox response <- blackbox -< control -- the output of this all is the value of the response id -< response What is it? Auto is a Haskell DSL and platform providing an API with declarative, compositional, denotative semantics for discrete-step, locally stateful, interactive programs, games, and automations, with implicitly derived serialization. It is suited for any domain where your program's input or output is a stream of values, input events, or output views. At a high level, it allows you to describe your interactive program or simulation as a value stream transformer, by composition and transformation of other stream transformers. So, things like: Chat bots Turn-based games GUIs Numerical simulations Process controllers Text-based interfaces (Value) stream transformers, filters, mergers, processors It's been called "FRP for discrete time contexts". Intrigued? Excited? Start at the tutorial! It's a part of this package directory and also on github at the above link. The current development documentation server is found at https://mstksg.github.io/auto. From there, you can check out my All About Auto series on my blog, where I break down sample projects and show how to approach projects in real life. You can also find examples and demonstrations in the auto-examples repo on github. Buzzwords explained! Haskell DSL/library: It's a Haskell library that provides a domain-specific language for composing and declaring your programs/games. Why Haskell? Well, Haskell is one of the only languages that has a type system expressive enough to allow type-safe compositions without getting in your way. Every composition and component is checked at compile-time to make sure they even make sense, so you can work with an assurance that everything fits together in the end --- and also in the correct way. The type system can also guide you in your development as well.
All this without the productivity overhead of explicit type annotations. In all honesty, it cuts the headache of large projects down --- and what you need to keep in your head as you develop and maintain --- by at least 90%. Platform: Not only gives the minimal tools for creating your programs, but also provides a platform to run and develop and integrate them, as well as many library/API functions for common processes. Declarative: It's not imperative. That is, unlike in other languages, you don't program your program by saying "this happens, then this happens...and then in case A, this happens; in case B, something else happens". Instead of specifying your program/game by a series of state-changing steps and procedures (a "game loop"), you instead declare "how things are". You declare fixed or evolving relationships between entities and processes and interactions. And this declaration process is high-level and pure. Denotative: Instead of your program being built of pieces that change things and execute things sequentially, your entire program is composed of meaningful semantic building blocks that "denote" constant relationships and concepts. The composition of such building blocks also denotes new concepts. Your building blocks are well-defined ideas. Compositional: You build your eventually complex program/game out of small, simple components. These simple components compose with each other; and compositions of components compose as well with other components. Every "layer" of composition is seamless. It's the scalable program architecture principle in practice: If you combine an A with an A, you don't get a B; you get another A, which can combine with any other A. Like unix pipes, where you can build up complex programs by simply piping together simple, basic ones. Discrete-step: This library is meant for things that step discretely; there is no meaningful concept of "continuous time". Good examples include turn-based games, chat bots, and cellular automata; bad examples include real-time games and day trading simulations. Locally stateful: Every component encapsulates its own local (and "hidden") state. There is no global or implicitly shared state. This is in contrast to those "giant state monad" libraries/abstractions where you carry around the entire game/program state in some giant data type, and have your game loop simply be an update of that state. If you have a component representing a player, and a component representing an enemy --- the two components do not have to ever worry about the state of the other, or the structure of their shared state. Also, you never have to worry about something reading or modifying a part of the shared/global state it wasn't meant to read or modify! (Something you cannot guarantee in the naive implementation of the "giant state monad" technique). Interactive: The behavior and structure of your program can respond and vary dynamically with outside interaction. I'm not sure how else to elaborate on the word "interactive", actually! Interactive programs, games and automations: Programs, games, and automations/simulations. If you're making anything discrete-time that encapsulates some sort of internal state, especially if it's interactive, this is for you!! :D Implicitly derived serialization: All components and their compositions by construction are automatically "freezable" and serializable, and can be re-loaded and resumed with all internal state restored. As it has been called by ertes, it's "save states for free".
Support The official support and discussion channel is #haskell-auto on freenode. You can also usually find me (the maintainer and developer) as jle` on #haskell-game or #haskell. There's also a gitter channel if IRC is not your cup of tea. Also, contributions to documentation and tests are welcome! :D Why Auto? Auto is distinct from a "state transformer" (state monad, or explicit state passing) in that it gives you the ability to implicitly compose and isolate state transformers and state. That is, imagine you have two different state monads with different states, and you can compose them together into one giant loop, and: You don't have to make a new "composite type"; you can add a new component dealing with its own state without changing the total state type. You can't write anything cross-talking. You can't write anything that can interfere with the internal state of any components; each one is isolated. So --- Auto is useful over a state monad/state transformer approach in cases where you like to build your problem out of multiple individual components, and compose them all together at once. Examples include a multiple-module stateful chat bot, where every module of the chat bot consists of its own internal state. If you used a state monad approach, every time you added a new module with its own state, you'd have to "add it into" your total state type. This simply does not scale. Imagine a large architecture, where every composition adds more and more complexity. Now, imagine you can just throw in another module with its own state without any other component even "caring". Or be able to limit access implicitly, without explicit "limiting through lifting" with zoom from lens, etc. (Without that, you basically have "global state" --- the very thing that we went to Functional Programming/Haskell to avoid in the first place! And the thing that languages have been trying to prevent in the last twenty years of language development. Why go "backwards"?) In addition to all of these practical reasons, State imposes a large imperative shift in your design. State forces you to begin modeling your problem as "this happens, then this happens, then this happens". When you choose to use a State monad or State passing approach, you immediately begin to frame your entire program from an imperative approach. Auto lets you structure your program denotatively and declaratively. It gives you that awesome style that functional programming promised in the first place. Instead of saying "do this then that", you say "this is how things...just are. This is the structure of my program, and this is the nature of the relationship between each component". If you're already using Haskell...I shouldn't have to explain to you the benefits of a high-level declarative style over an imperative one :) Why not Auto? That being said, there are cases where Auto is either the wrong tool or not very helpful. Cases involving inherently continuous time. Auto is meant for situations where time progresses in discrete ticks --- integers, not reals. You can "fake" it by faking continuous time with discrete sampling...but FRP is a much, much more powerful and safe abstraction/system for handling this than Auto is. See the later section on FRP. Cases where you really don't have interactions/compositions between different stateful components. If all your program is just one foldr or scanl or iterate, and you don't have multiple interacting parts of your state, Auto really can't offer much. 
If, however, you have multiple folds or states that you want to run together and compose, then this might be useful! Intense IO stuff and resource handling. Auto is not pipes or conduit. All IO is done "outside" of the Auto components; Auto can be useful for file processing and stream modification, but only if you separately handle the IO portions. Auto works very well with pipes or conduit; those libraries are used to "connect" Auto to the outside world, and provide a safe interface. In other words, Auto handles "value streams", while pipes/conduit handle "effect streams". Relation to FRP Auto borrows a lot of concepts from Functional Reactive Programming --- especially arrowized, locally stateful libraries like netwire. At best, Auto can be said to bring a lot of API ideas and borrow certain aspects of the semantic model of FRP, incorporating them as a part of a broader semantic model more suitable for discrete-time, discrete-step contexts. But, users of such libraries would likely be able to quickly pick up Auto, and the reverse is (hopefully) true too. Note that this library is not meant to be any sort of meaningful substitution for implementing situations which involve concepts of continuous ("real number-valued", as opposed to "integer valued") time (like real-time games); you can "fake" it using Auto, but in those situations, FRP provides a much superior semantics and set of concepts for working in such contexts. That is, you can "fake" it, but you then lose almost all of the benefits of FRP in the first place. A chatbot import qualified Data.Map as M import Data.Map (Map) import Control.Auto import Prelude hiding ((.), id) -- Let's build a big chat bot by combining small chat bots. -- A "ChatBot" is going to be an `Auto` taking in a stream of tuples of -- incoming nick, message, and timestamps; the result is a "blip stream" that -- emits with messages whenever it wants to respond. type Message = String type Nick = String type ChatBot m = Auto m (Nick, Message, UTCTime) (Blip [Message]) -- Keeps track of last time a nick has spoken, and allows queries seenBot :: Monad m => ChatBot m seenBot = proc (nick, msg, time) -> do -- proc syntax; see tutorial -- seens :: Map Nick UTCTime -- Map containing last time each nick has spoken seens <- accum addToMap M.empty -< (nick, time) -- query :: Blip Nick -- blip stream emits whenever someone queries for a last time seen; -- emits with the nick queried for query <- emitJusts getRequest -< words msg -- a function to get a response from a nick query let respond :: Nick -> [Message] respond qry = case M.lookup qry seens of Just t -> [qry ++ " last seen at " ++ show t ++ "."] Nothing -> ["No record of " ++ qry ++ "."] -- output is, whenever the `query` stream emits, map `respond` to it.
id -< respond <$> query where addToMap :: Map Nick UTCTime -> (Nick, UTCTime) -> Map Nick UTCTime addToMap mp (nick, time) = M.insert nick time mp getRequest ("@seen":request:_) = Just request getRequest _ = Nothing -- Users can increase and decrease imaginary internet points for other users karmaBot :: Monad m => ChatBot m karmaBot = proc (_, msg, _) -> do -- karmaBlip :: Blip (Nick, Int) -- blip stream emits when someone modifies karma, with nick and increment karmaBlip <- emitJusts getComm -< msg -- karmas :: Map Nick Int -- keeps track of the total karma for each user by updating with karmaBlip karmas <- scanB updateMap M.empty -< karmaBlip -- function to look up a nick, if one is asked for let lookupKarma :: Nick -> [Message] lookupKarma nick = let karm = M.findWithDefault 0 nick karmas in [nick ++ " has a karma of " ++ show karm ++ "."] -- output is, whenever `karmaBlip` stream emits, look up the result id -< lookupKarma . fst <$> karmaBlip where getComm :: String -> Maybe (Nick, Int) getComm msg = case words msg of "@addKarma":nick:_ -> Just (nick, 1 ) "@subKarma":nick:_ -> Just (nick, -1) "@karma":nick:_ -> Just (nick, 0) _ -> Nothing updateMap :: Map Nick Int -> (Nick, Int) -> Map Nick Int updateMap mp (nick, change) = M.insertWith (+) nick change mp -- Echos inputs prefaced with "@echo"...unless flood limit has been reached echoBot :: Monad m => ChatBot m echoBot = proc (nick, msg, time) -> do -- echoBlip :: Blip [Message] -- blip stream emits when someone wants an echo, with the message echoBlip <- emitJusts getEcho -< msg -- newDayBlip :: Blip UTCTime -- blip stream emits whenever the day changes newDayBlip <- onChange -< utctDay time -- echoCounts :: Map Nick Int -- `countEchos` counts the number of times each user asks for an echo, and -- `resetOn` makes it "reset" itself whenever `newDayBlip` emits. echoCounts <- resetOn countEchos -< (nick <$ echoBlip, newDayBlip) -- has this user flooded today...? let hasFlooded = M.lookup nick echoCounts > Just floodLimit -- output :: Blip [Message] -- blip stream emits whenever someone asks for an echo, limiting flood output | hasFlooded = ["No flooding!"] <$ echoBlip | otherwise = echoBlip -- output is the `output` blip stream id -< output where floodLimit = 5 getEcho msg = case words msg of "@echo":xs -> Just [unwords xs] _ -> Nothing countEchos :: Auto m (Blip Nick) (Map Nick Int) countEchos = scanB countingFunction M.empty countingFunction :: Map Nick Int -> Nick -> Map Nick Int countingFunction mp nick = M.insertWith (+) nick 1 mp -- Our final chat bot is the `mconcat` of all the small ones...it forks the -- input between all three, and mconcats the outputs. chatBot :: Monad m => ChatBot m chatBot = mconcat [seenBot, karmaBot, echoBot] -- Here, our chatbot will automatically serialize itself to "data.dat" -- whenever it is run. chatBotSerialized :: ChatBot IO chatBotSerialized = serializing' "data.dat" chatBot Open questions "Safecopy problem"; serialization schemes are implicitly derived, but if your program changes, it is unlikely that the new serialization scheme will be able to resume something from the old one. Right now the solution is to only serialize small aspects of your program that you can manage and manipulate directly when changing your program. A better solution might exist. In principle very little of your program should be over IO as a monad...but sometimes, it becomes quite convenient for abstraction purposes. 
Handling IO errors in a robust way isn't quite my strong point, and so while almost all auto idioms avoid IO and runtime, for some applications it might be unavoidable. auto is not and will never be about streaming IO effects...but knowing what parts of IO fit into the semantic model of value stream transformers would yield a lot of insight. Also, most of the Auto "runners" (the functions that translate an Auto into IO that executes it) might be able to benefit from a more rigorous look too. Tests; tests aren't really done yet, sorry! Working on those :)
8585
dbpedia
0
22
https://macintoshguy.wordpress.com/2020/05/14/autopkg-repo-list-fiddling-again/
en
AutoPkg Repo List Fiddling Again
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://1.gravatar.com/avatar/ab080d00dcf010792212471d1a77b3f15bfb8a6dc35f722da61ffb3730e25f8b?s=52&d=identicon&r=G", "https://i0.wp.com/www.linkedin.com/img/webpromo/btn_viewmy_160x33.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-05-14T00:00:00
After my last post Graham Pugh mentioned that the AutoPkg repository list is stored in the AutoPkg preference file as RECIPE_REPOS with the search order in RECIPE_SEARCH_DIRS. He suggested doing a while loop on the defaults read output but I thought it was just fiddly enough a task in the shell that I might resort…
en
https://s1.wp.com/i/favicon.ico
The Macintosh Guy
https://macintoshguy.wordpress.com/2020/05/14/autopkg-repo-list-fiddling-again/
After my last post Graham Pugh mentioned that the AutoPkg repository list is stored in the AutoPkg preference file as RECIPE_REPOS with the search order in RECIPE_SEARCH_DIRS. He suggested doing a while loop on the defaults read output but I thought it was just fiddly enough a task in the shell that I might resort to a few lines of Python, so here it is, a Python script to dump out your repository list in search order. Tiny but it does the job. (Thanks to Graham for taking the time to comment on the previous post, it was just what I needed to get me to spend the few minutes doing this.)
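The scraped text above does not include the script itself, so what follows is only a minimal sketch of the kind of Python the post describes, not the author's actual script. It assumes the AutoPkg preferences live at ~/Library/Preferences/com.github.autopkg.plist, and that RECIPE_REPOS is a dictionary keyed by local repo path while RECIPE_SEARCH_DIRS is the ordered list of search directories; those structural details are assumptions.

#!/usr/bin/env python3
# Rough sketch (not the author's script): print AutoPkg recipe repos in search order.
import os
import plistlib

# Assumed location of the AutoPkg preference file mentioned in the post.
PREFS = os.path.expanduser("~/Library/Preferences/com.github.autopkg.plist")

with open(PREFS, "rb") as f:
    prefs = plistlib.load(f)

repos = prefs.get("RECIPE_REPOS", {})              # assumed: dict keyed by repo path
search_dirs = prefs.get("RECIPE_SEARCH_DIRS", [])  # the search order

for path in search_dirs:
    info = repos.get(path)
    if info:
        # Checked-out repos typically record their remote URL (assumption).
        print(path, "->", info.get("URL", "no URL recorded"))
    else:
        print(path)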
8585
dbpedia
3
2
https://github.com/autopkg/autopkg
en
autopkg/autopkg: Automating packaging and software distribution on macOS.
https://opengraph.githubassets.com/67518126e50698c690451b23ac900eb1a273ed27c3a5d64ae241b4e7f545b5b3/autopkg/autopkg
https://opengraph.githubassets.com/67518126e50698c690451b23ac900eb1a273ed27c3a5d64ae241b4e7f545b5b3/autopkg/autopkg
[ "https://camo.githubusercontent.com/7d770c433d6198d89f8c1e2f187b904a9721d176259d0e97157337741cc8e837/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f636f64652532307374796c652d626c61636b2d3030303030302e737667", "https://github.com/autopkg/autopkg/actions/workflows/tests.yaml/badge.svg", "https://avatars.githubusercontent.com/u/700560?s=64&v=4", "https://avatars.githubusercontent.com/u/1202655?s=64&v=4", "https://avatars.githubusercontent.com/u/119358?s=64&v=4", "https://avatars.githubusercontent.com/u/7801391?s=64&v=4", "https://avatars.githubusercontent.com/u/3882687?s=64&v=4", "https://avatars.githubusercontent.com/u/2439367?s=64&v=4", "https://avatars.githubusercontent.com/u/24377?s=64&v=4", "https://avatars.githubusercontent.com/u/19357?s=64&v=4", "https://avatars.githubusercontent.com/u/694298?s=64&v=4", "https://avatars.githubusercontent.com/u/3740088?s=64&v=4", "https://avatars.githubusercontent.com/u/969572?s=64&v=4", "https://avatars.githubusercontent.com/u/2464974?s=64&v=4", "https://avatars.githubusercontent.com/u/1134568?s=64&v=4", "https://avatars.githubusercontent.com/u/202334?s=64&v=4" ]
[]
[]
[ "" ]
null
[]
null
Automating packaging and software distribution on macOS. - autopkg/autopkg
en
https://github.com/fluidicon.png
GitHub
https://github.com/autopkg/autopkg
Latest release is here. AutoPkg is an automation framework for macOS software packaging and distribution, oriented towards the tasks one would normally perform manually to prepare third-party software for mass deployment to managed clients. These tasks typically involve at least several of the following steps: downloading an application and/or updates for it, usually via a web browser; extracting them from a multitude of archive formats; adding site-specific configuration; adding sane versioning information; "fixing" poorly-written installer scripts; saving these modifications back to a compressed disk image or installer package; importing these into a software distribution system like Munki, Jamf Pro, FileWave, etc.; customizing the associated metadata for such a system with site-specific data, post-installation scripts, version info or other metadata. Often these tasks follow similar patterns for each individual application, and when managing many applications this becomes a daily task full of sub-tasks that one must remember (and/or maintain documentation for) about exactly what had to be done for a successful deployment of every update for every managed piece of software. With AutoPkg, we define these steps in a "Recipe" file in plist or yaml format, so they can be run automatically instead of by hand, and shared with others. Install the latest release. AutoPkg requires macOS, and Git is highly recommended so that autopkg can use git to manage recipe repositories. Knowledge of Git itself is not required. AutoPkg is tested on the current macOS release. It may work on older releases, but is not actively tested on them. Git can be installed via Apple's command-line developer tools package, which can be prompted for installation by simply typing 'git' in a Terminal window (OS X 10.9 or later). Since AutoPkg 2.0, Python 2 is no longer supported. The installer linked above contains a bundled version of Python 3 and all needed dependencies. A getting started guide is available here. Frequently Asked Questions (and answers!) are here. See the wiki for more documentation.
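To make the "Recipe" idea concrete, here is an illustrative sketch only: a tiny download recipe written out as a plist from Python. The keys (Identifier, Input, MinimumVersion, Process) and the URLDownloader / EndOfCheckPhase processor names are drawn from general AutoPkg usage rather than from this README, and the app name and URL are placeholders.

import plistlib

# Hypothetical minimal download recipe; every value below is a placeholder.
recipe = {
    "Description": "Downloads the latest ExampleApp disk image.",
    "Identifier": "com.example.download.ExampleApp",
    "Input": {"NAME": "ExampleApp"},
    "MinimumVersion": "2.0",
    "Process": [
        {"Processor": "URLDownloader",
         "Arguments": {"url": "https://example.com/ExampleApp.dmg"}},
        {"Processor": "EndOfCheckPhase"},
    ],
}

with open("ExampleApp.download.recipe", "wb") as f:
    plistlib.dump(recipe, f)  # writes an XML plist skeleton in recipe form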
8585
dbpedia
1
81
https://blenderartists.org/t/autopackage/333651
en
Autopackage
https://blenderartists.o…5efe2f52b59d.png
https://blenderartists.o…5efe2f52b59d.png
[ "https://blenderartists.org/images/emoji/twitter/frowning.png?v=5", "https://blenderartists.org/images/emoji/twitter/frowning.png?v=5", "https://blenderartists.org/images/emoji/twitter/slight_smile.png?v=5" ]
[]
[]
[ "Blender 3D Computer Graphics Animation Modeling Texturing Gameengine multimedia Rendering Raytracing art artists" ]
null
[ "galaxitor (galaxitor)", "olivS (olivS)", "Mr_Wonka (Mr Wonka)", "Eric (Eric)" ]
2005-03-28T01:57:59+00:00
Hi there! Does any of you know if there are any plans to use autopackage ( www.autopackage.org ) to make an installer for Linux? It would be useful, because sometimes I want to install Blender system-wide, instead of ha…
en
https://blenderartists.org/uploads/default/optimized/4X/6/3/8/63805ad38248f37fb2d562b897e24871dd8e974f_2_32x32.ico
Blender Artists Community
https://blenderartists.org/t/autopackage/333651
I did that. The problem was, for some reason, everything that used Python didn't work. Aside from that it worked OK. I use Ubuntu. They now have an updated version of Blender in the repositories. But the only version they had for a long time was 2.34. When I installed Blender from the repositories, I checked where it had placed the files. Blender was all over the place! /usr/bin, /usr/lib, /usr/whatever. Personally I don't find it a very organized way to place the files. Maybe a Linux guru can answer me why, since I am a relative newbie trying to kick the Windows habit. Now I use the latest version of Blender, but when the next update comes, it would be interesting to see it packaged with autopackage, because I never had problems installing programs with autopackage in several Linux distros I tried. Cheers Eduardo I had never heard of autopackage. I just checked the site and that looks like a really useful piece of software. :o I use Gentoo at the moment so I don't need it now but it'll really come in useful when I change. Thanks for the heads up. Mr Wonka
8585
dbpedia
3
98
https://proandroiddev.com/android-auto-tutorial-step-by-step-guide-50bb6b73e2b8
en
Android Auto Tutorial Step by Step Guide
https://miro.medium.com/…yRk1cRfJ-Rg.jpeg
https://miro.medium.com/…yRk1cRfJ-Rg.jpeg
[ "https://miro.medium.com/v2/resize:fill:64:64/1*dmbNkD5D-u45r44go_cf0g.png", "https://miro.medium.com/v2/resize:fill:88:88/1*Sw4BRyBLfXvNHNWDdnNebQ.jpeg", "https://miro.medium.com/v2/resize:fill:48:48/1*XVtdl45m8YaYrPI4buJ5yQ.png", "https://miro.medium.com/v2/resize:fill:144:144/1*Sw4BRyBLfXvNHNWDdnNebQ.jpeg", "https://miro.medium.com/v2/resize:fill:64:64/1*XVtdl45m8YaYrPI4buJ5yQ.png" ]
[]
[]
[ "" ]
null
[ "Akshaya Narayan Dikshit", "medium.com" ]
2023-07-18T17:44:47.282000+00:00
Android Auto provides a driver-optimized app experience for users who have an Android phone and the Android Auto app. It’s an extension of a connected Android smartphone to a compatible car that can…
en
https://miro.medium.com/…Uf_MG6hm_Dlw.png
Medium
https://proandroiddev.com/android-auto-tutorial-step-by-step-guide-50bb6b73e2b8
Akshaya Narayan Dikshit · Published in ProAndroidDev · 7 min read · Jul 18, 2023 What is Android Auto? Android Auto provides a driver-optimized app experience for users who have an Android phone and the Android Auto app. It's an extension of a connected Android smartphone to a compatible car that can display some apps and entertainment, and mirror messages on a car's dashboard. We can connect our device using USB or Bluetooth. Android Auto is only compatible with phones running Android 6.0 (API level 23) or higher. When you connect your phone to the car, all your Android Auto-compatible apps will be accessible. What is Android Automotive OS? Android Automotive OS is an Android-based infotainment system that is built into vehicles. The car's system is a standalone Android device that is optimized for driving. With Android Automotive OS, users install your app directly onto the car instead of their phones. Android Auto and/or Android Automotive OS support the following types of apps: Media apps — audio: Media apps let users browse and play music, radio, audiobooks, and other audio content in the car. Messaging apps: Messaging apps let users receive incoming notifications, read messages aloud using text-to-speech, and send replies via voice input in the car. Navigation apps: Navigation apps, including providers of driver and delivery services, help users get where they want to go by providing turn-by-turn directions. Point of Interest (POI) apps: POI apps let the user discover and navigate to points of interest and take relevant actions, such as parking, charging, and fueling. Internet of Things (IoT) apps: IoT apps let users take relevant actions on connected devices from within the car. Video apps (for use while parked): Video apps let users view streaming videos while the car is parked. Games (for use while parked): Game apps let users play games while the car is parked. Your app must be in one of the categories above only. You will declare it in the Manifest and this will be reviewed by Google. Configuration to start exploring an Android Auto device and setting up the emulator The Desktop Head Unit (DHU) enables your development machine to emulate an Android Auto head unit, so that you can run and test Android Auto apps. The DHU runs on Windows, macOS, and Linux systems. Follow the steps below to enable the Android Auto emulator: 1. Enable Developer mode on a mobile device running Android 6.0 (API level 23) or higher. 2. Compile and install your app on the device. 3. Install Android Auto on the device. If Android Auto is already installed, make sure that you are using the latest version. 4. Open SDK Manager and navigate to the SDK Tools tab, then download the Android Auto Desktop Head Unit Emulator package. 5. The DHU is installed in the SDK_LOCATION/extras/google/auto/ directory. 6. On Linux or macOS systems, run the following command in that directory to ensure the DHU binary is executable: chmod +x ./desktop-head-unit ./desktop-head-unit --usb 7. The emulator will start working; check your Android device for any popup related to an update, click the update option, and restart the emulator. Android Auto Design Templates In Android Auto, we can't create our own custom UI; we can only use a set of templates that are allowed for Android Auto apps. In my view, the predefined UI templates provided by Google give the driver better usability when coordinating with the Android Auto device while driving.
List of available Templates: Tab Container Template — Tab bar with app icon and up to 4 tabs (no back button) — Embedded template, which can be any of the following types: List, Grid, Search, Pane, or Message; List or Grid Template; Message or Long Message Template; Search Template; Place List (map) Template; Navigation Template. Read more about UI Templates. Steps to build media apps for cars: Declare Android Auto support in the Manifest file; declare your media browser service. How Android Auto interacts with your media browser service: The user launches your app on Android Auto and Android Auto contacts your app's media browser service using the onCreate() method. In your implementation of the onCreate() method, you must create and register a MediaSessionCompat object and its callback object. Android Auto calls your service's onGetRoot() method to get the root media item in your content hierarchy. Everything starts at the root and you must return a non-null BrowserRoot to allow connections to your MediaBrowserServiceCompat. Android Auto calls your service's onLoadChildren() method to get the children of the root media item. Android Auto displays these media items as the top level of content items. We have two available flags, FLAG_PLAYABLE and FLAG_BROWSABLE, which indicate whether a media item can be directly played or has children of its own. If the user selects a browsable media item, your service's onLoadChildren() method is called again to retrieve the children of the selected menu item. If the user selects a playable media item, Android Auto calls the appropriate media session callback method to perform that action. Example: it will start playing the selected music item. Mandatory steps to support Android Auto in media apps: Set standard playback actions. Android Auto displays playback controls based on the actions that are enabled in the PlaybackStateCompat object. By default, your app must support the following actions: ACTION_PLAY ACTION_PAUSE ACTION_STOP ACTION_PLAY_FROM_MEDIA ACTION_PLAY_FROM_SEARCH Your app can additionally support the following actions if they are relevant to the app's content: ACTION_SKIP_TO_PREVIOUS ACTION_SKIP_TO_NEXT Media Controller Test is useful if you want to test your media app controls. The Media Controller Test (MCT) app allows you to test the intricacies of media playback on Android and helps verify your media session implementation. The MCT includes tests for the following media actions: Play, Play From Search, Play From Media ID, Play From URI, Pause, Stop, Skip To Next, Skip To Previous, Skip To Queue Item, Seek To. Support voice actions: Your media app must support voice actions to help provide drivers with a safe and convenient experience that minimizes distractions. When Android Auto detects and interprets a voice action, that voice action is delivered to the app through onPlayFromSearch(). On receiving this callback, the app finds content matching the query string and starts playback. Custom playback actions: You can add custom playback actions to display additional actions that your media app supports. Each custom action that you create requires an icon resource. Apps in cars can run on many different screen sizes and densities, so icons that you provide must be vector drawables. Example: Shuffle 🔀, Repeat 🔁, Repeat Single song 🔂, etc.
Voice command support is useful when users want to avoid looking at the screen while driving. Well, that's all for now. In future articles, I will share a sample app with Android Auto and Android Automotive implementations. The UAMP Media App is a very useful repository which you can explore for both Android Auto and Android Automotive OS. Thank you for taking the time to read this article. If you found this post to be useful and interesting, please clap 👏 and recommend it. You can reach me on social media and other platforms, stay tuned: https://linktr.ee/droiddikshit 🤝 References: https://github.com/android/uamp (Android Media app UAMP); car-samples/car_app_library at main · android/car-samples (github.com); Android for Cars overview | Android Developers (developer.android.com); Build media apps for cars | Android Developers (developer.android.com); Android Auto | Android (www.android.com); Using the media controller test app | Android Developers (developer.android.com)
8585
dbpedia
0
34
https://pypi.org/project/packit/
en
packit
https://pypi.org/static/…er.abaf4b19.webp
https://pypi.org/static/…er.abaf4b19.webp
[ "https://pypi.org/static/images/logo-small.8998e9d1.svg", "https://pypi-camo.freetls.fastly.net/46b1b67c59f02e43a71853aa3169e39531d268e5/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f62663166646361616639383132303363313731363865353738643965613062353f73697a653d3530", "https://pypi-camo.freetls.fastly.net/0ee10e0fd627356c7c76193af525c091c181ba29/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f32636666623235326130373536643961313931346631366362303766343633653f73697a653d3530", "https://pypi-camo.freetls.fastly.net/46b1b67c59f02e43a71853aa3169e39531d268e5/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f62663166646361616639383132303363313731363865353738643965613062353f73697a653d3530", "https://pypi-camo.freetls.fastly.net/0ee10e0fd627356c7c76193af525c091c181ba29/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f32636666623235326130373536643961313931346631366362303766343633653f73697a653d3530", "https://pypi.org/static/images/blue-cube.572a5bfb.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi-camo.freetls.fastly.net/ed7074cadad1a06f56bc520ad9bd3e00d0704c5b/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6177732d77686974652d6c6f676f2d7443615473387a432e706e67", "https://pypi-camo.freetls.fastly.net/8855f7c063a3bdb5b0ce8d91bfc50cf851cc5c51/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f64617461646f672d77686974652d6c6f676f2d6668644c4e666c6f2e706e67", 
"https://pypi-camo.freetls.fastly.net/df6fe8829cbff2d7f668d98571df1fd011f36192/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f666173746c792d77686974652d6c6f676f2d65684d3077735f6f2e706e67", "https://pypi-camo.freetls.fastly.net/420cc8cf360bac879e24c923b2f50ba7d1314fb0/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f676f6f676c652d77686974652d6c6f676f2d616734424e3774332e706e67", "https://pypi-camo.freetls.fastly.net/524d1ce72f7772294ca4c1fe05d21dec8fa3f8ea/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6d6963726f736f66742d77686974652d6c6f676f2d5a443172685444462e706e67", "https://pypi-camo.freetls.fastly.net/d01053c02f3a626b73ffcb06b96367fdbbf9e230/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f70696e67646f6d2d77686974652d6c6f676f2d67355831547546362e706e67", "https://pypi-camo.freetls.fastly.net/67af7117035e2345bacb5a82e9aa8b5b3e70701d/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f73656e7472792d77686974652d6c6f676f2d4a2d6b64742d706e2e706e67", "https://pypi-camo.freetls.fastly.net/b611884ff90435a0575dbab7d9b0d3e60f136466/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f737461747573706167652d77686974652d6c6f676f2d5467476c6a4a2d502e706e67" ]
[]
[]
[ "" ]
null
[]
2021-08-25T23:31:23+00:00
Python packaging in declarative way (wrapping pbr to make it flexible)
en
/static/images/favicon.35549fe8.ico
PyPI
https://pypi.org/project/packit/
Contents: Rationale, Overview, Usage, Facilities, Including Files Other than Python Libraries, Further Development

Rationale: Creating Python packages is a routine operation that involves a lot of actions that could be automated. Although there are pretty good tools like pbr for that purpose, they miss some features and lack flexibility by trying to enforce strongly opinionated decisions upon you. PacKit tries to solve this by providing a simple, convenient, and flexible way to create and build packages while aiming for the following goals: a simple declarative way to configure your package through setup.cfg following the distutils2 setup.cfg syntax; reasonable defaults; openness to extension.

Overview: PacKit is a wrapper around pbr, though it only uses pbr for interaction with setuptools/distutils through a simplified interface. None of pbr's functions are exposed; instead, PacKit provides its own interface.

Available facilities: Here's a brief overview of the currently implemented facilities; the list will be extended as new ones are added. auto-version - set the package version depending on the selected versioning strategy. auto-description - set the package long description. auto-license - include the license file in the distribution. auto-dependencies - populate install_requires and test_requires from requirement files. auto-packages - discover packages to include in the distribution. auto-extra-meta - add useful options to the metadata config section. auto-package-data - include all files tracked by git from package dirs only. auto-tests - make python setup.py test run tests with tox or pytest (depending on tox.ini presence). On top of that, PacKit forces easy_install to honor the following pip fetch directives: index_url, find_links.

Planned facilities: auto-plate - integration with platter. auto-license - fill out license information. auto-pep8 - produce style-check reports. auto-docs - API docs generation. auto-clean - configurable clean jobs. auto-coverage (?) - produce coverage reports while running tests. If you don't see a desired facility or have cool features in mind, feel free to contact us and tell us about your ideas.

Usage: Create a setup.py in your project dir: from setuptools import setup setup(setup_requires='packit', packit=True) That was the first and the last time you touched that file for your project. Now let's create a setup.cfg that you will use to configure your package: [metadata] name = cool-package And... if you're not doing anything tricky in your package, then that's enough! If you are, take a look at the sections below.

Facilities: Currently, all available facilities are enabled by default, though you can easily turn them off by using the facilities section in your setup.cfg: [facilities] auto-version = 0 auto-dependencies = f auto-packages = false auto-package-data = n auto-tests = no If a facility is explicitly disabled, it won't be used even if a facility-specific configuration section is present. Facility-specific defaults and configuration options are described below.

auto-version: When enabled, auto-version will generate and set the package version according to the selected versioning strategy. The versioning strategy can be selected using the type field under the auto-version section within setup.cfg. The default is: [auto-version] type = git-pep440 output = src/templates/version.html You can use the output field to ask PacKit to write the generated version value into the specified filename. The specified filename does not need to exist, but the parent directories should. The provided path should always use forward slashes.
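Since auto-version can write the computed version to the file named by output, application code can read it back at runtime. A minimal Python sketch, assuming a hypothetical output path of src/myapp/_version.txt (the path and module name are illustrative, not from the packit docs):

    from pathlib import Path

    def read_version(version_file="src/myapp/_version.txt"):
        # The output file written by auto-version holds a single line: the version value.
        return Path(version_file).read_text(encoding="utf-8").strip()

    if __name__ == "__main__":
        print(read_version())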
git-pep440: Generate a PEP440-compliant version from annotated git tags. It's expected that you are using git tags that follow the public version identifier description; git-pep440 will just append the number of commits since the tag was applied to your tag value (the N in the public version identifier description). If the number of commits since the tag is 0 (you're building the tagged version), the N value won't be appended. Otherwise, it will be appended, and a local version identifier equal to the first 7 chars of the commit hash will also be added. Please note: you must create an annotated tag, otherwise it will be ignored. Example: <git tag -a 1.2.3.dev -m "dev release 1.2.3.dev"> -> version is 1.2.3.dev <git commit> -> version is 1.2.3.dev.post1 <git commit> -> version is 1.2.3.dev.post2 <git tag -a 1.2.3.a -m "Release 1.2.3.a"> -> version is 1.2.3.a <git commit> -> version is 1.2.3.a.post1 <git tag -a 1.2.3 -m "Release 1.2.3"> -> version is 1.2.3 <git commit> -> version is 1.2.3.post1 <git commit> -> version is 1.2.3.post2

fixed: Use the value specified in value (it's required when this strategy is used) under the auto-version section in setup.cfg: [auto-version] type = fixed value = 3.3

file: Read a line using UTF-8 encoding from the file specified in value (it's required when this strategy is used) under the auto-version section in setup.cfg, strip it, and use it as the version. [auto-version] type = file value = VERSION.txt

shell: Execute the command specified in value (it's required when this strategy is used) under the auto-version section in setup.cfg, read a line from stdout, strip it, and use it as the version.

composite: The most advanced version strategy, designed for special cases. It allows you to generate complex version values based on other version strategies. The usage is pretty simple though: [auto-version] type = composite value = {foo}.{bar}+{git} output = main.version [auto-version:foo] type = fixed value = 42 output = 1st.version [auto-version:bar] type = shell value = echo $RANDOM [auto-version:git] type = git-pep440 output = 3rd.version The value field in the composite version strategy should be a valid string format expression. Please note that the output directives used here are only for reference (to show that they can be used anywhere) and are not required. It's OK to define 'extra' version components and not use them, but it's an error to not define any of the components mentioned in the composite version template.

auto-description: When enabled, fills out the long_description for the package from a readme. The readme file name can be specified with the file field under the auto-description section. If no file name is provided, it will be discovered automatically by trying the following list of files: README readme CHANGELOG changelog Each of these files will be tried with the following extensions: <without extension> .md .markdown .mkdn .text .rst .txt The readme file will be included in the package data.

auto-license: When enabled, includes the license file in the distribution. The license file name can be specified by the file field within the auto-license section. If the license file name is not provided, the facility will try to discover it in the current dir, trying the following file names: LICENSE license Each of these files will be tried with the following extensions: <without extension> .md .markdown .mkdn .text .rst .txt

auto-dependencies: When enabled, fills install_requires and test_requires from requirement files. Requirement files can be specified by the install and test fields under the auto-dependencies section of setup.cfg.
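As a rough illustration of what this facility does (not PacKit's actual code), the snippet below reads a requirements file into a list suitable for install_requires, skipping comments and -r include lines; the file name is just an example, and the discovery rules described next handle the real lookup:

    def read_requirements(path="requirements.txt"):
        # Collect one requirement per non-empty line, ignoring comments and -r includes.
        requirements = []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or line.startswith("-r"):
                    continue
                requirements.append(line)
        return requirements

    if __name__ == "__main__":
        print(read_requirements())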
If requirement file names are not provided, the facility will try to discover them automatically. For installation requirements, the following paths will be tried: requires requirements requirements/prod requirements/release requirements/install requirements/main requirements/base For testing requirements, the following paths will be tried: test-requires test_requires test-requirements test_requirements requirements_test requirements-test requirements/test For each path, the following extensions will be tried: <without extension> .pip .txt Once a file is found, PacKit stops looking for more files. You can use VCS project URLs and/or archive URLs/paths as described in pip usage - they will be split into dependency links and package names during package creation and will be properly handled by pip/easy_install during installation. Remember that you can also make "includes" relationships between requirements.txt files by including a line like -r other-requires-file.txt.

auto-packages: When enabled, and no packages are provided in setup.cfg through the packages option under the files section, this facility will try to automatically find all packages in the current dir recursively. It operates using the exclude and include values that can be specified under the auto-packages section within setup.cfg. If exclude is not provided, the following defaults will be used: test, docs, .tox and env. If include is not provided, auto-packages will try the following steps in order to generate it: If a packages_root value is provided under the files section in setup.cfg, it will be used. Otherwise, the current working dir will be scanned for any Python packages (dirs with __init__.py) while honoring the exclude value. These packages will also be included in the resulting list of packages. Once the include value is determined, the resulting packages list will be generated using the following algorithm: for path in include: found_packages |= set(find_packages(path, exclude))

auto-extra-meta: When enabled, adds a number of additional options to the 'metadata' section. Right now, only one extra option is supported: is_pure - allows you to override the 'purity' flag for the distribution, i.e. you can explicitly say whether your distribution is platform-specific or not.

auto-tests: Has no additional configuration options [yet]. When enabled, python setup.py test is equal to running: tox if tox.ini is present; otherwise pytest with the pytest-gitignore and teamcity-messages plugins enabled by default (if you need any other plugins, just add them to the test requirements and activate them with additional options - see below). The facility automatically downloads the underlying test framework and installs it - you don't need to worry about it. You can pass additional parameters to the underlying test framework with '-a' or '--additional-test-args='.

auto-package-data: See the next section.

Including Files Other than Python Libraries: Often, you need to include a data file, or another program, or some other kind of file, with your Python package. Here are a number of common situations, and how to accomplish them using packit.

Placing data files with the code that uses them: auto-package-data. The default is that the auto-package-data facility is enabled. In this configuration, you can include data files for your Python library very easily by just: placing them inside a Python package directory (so next to an __init__.py or in a subdirectory), and adding them to git version control.
setup.cfg
src/
src/nicelib/
src/nicelib/__init__.py
src/nicelib/things.py
src/nicelib/somedata.csv
No change in setup.cfg is required.
Putting the files here will cause the packaging system to notice them and install them in the same arrangement next to your Python files, but inside the virtualenv where your package is installed. Once this is done, you have several easy options for accessing them, and all of these should work the same way in development and once installed: The least magical way is pathlib.Path(__file__).parent / 'somedata.csv', or some equivalent with os.path calls. This makes your package non-zip-safe, so it can't be used in a pex or zipapp application. The new hotness is importlib.resources.open_text('nicelib', 'somedata.csv') and related functions, available in the stdlib in Python 3.7+ or as a backport in the importlib_resources PyPI package. One limitation is this does not support putting resources deeper in subdirectories. The previous standard has been pkg_resources.resource_stream('nicelib', 'somedata.csv') and related functions. This supports deeper subdirectories, but is much slower than importlib.resources. You shouldn't need to install pkg_resources, it's part of setuptools, which is always available these days. You can turn off the auto-package-data facility if you don't want this file inclusion mechanism to happen: [facilities] auto-package-data = no auto-package-data will not work if your Python package is not at the root of your git repository (setup.py is not next to .git). Placing data files relative to the virtual environment You can also place files relative to the virtualenv, rather than inside the package hierarchy (which would be in virtualenv/lib/python*/site-packages/something). This is often used for things like static files in a Django project, so that they are easy to find for an external web server. The syntax for this is: [files] data_files = dest_dir = src_dir/** dest_dir = file_to_put_there In this example, dest_dir will be created within the top level of the virtualenv. The contents of src_dir will be placed inside it, along with file_to_put_there. If you need to include a compiled executable file in your package, this is a convenient way to do it - include bin = bin/** for example. See the fastatools package for an example of this. There is also a confluence page with more details on including compiled programs. Including Python scripts Scripts need to be treated specially, and not just dropped into bin using data_files, because Python changes the shebang (#!) line to match the virtualenv's python interpreter. This means you can directly run a script without activating a virtualenv - e.g. env/bin/pip install attrs will work even if env isn't activated.[1] If you have some scripts already, the easiest thing is to collect them in one directory, then use scripts: [files] scripts = bin/* Alternatively, setuptools has a special way to directly invoke a Python function from the command line, called the console_scripts entry point. pull-sp-sub is an internal package that uses this: [entry_points] console_scripts = pull-sp-sub = pull_sp_sub:main To explain that last line, it's name-of-the-script = dotted-path-of-the-python-module:name-of-the-python-function. So with this configuration, once the package is installed, setuptools creates a script at $VIRTUAL_ENV/bin/pull-sp-sub which activates the virtualenv and then calls the main function in the pull_sp_sub module. Scripts created this way are slightly slower to start up than scripts that directly run a Python file. 
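The module side of a console_scripts entry point is just an ordinary function. A minimal, hypothetical sketch of what the pull_sp_sub:main target from the example above could look like; the argument handling is invented for illustration and is not the real internal tool:

    # pull_sp_sub.py
    import argparse

    def main():
        parser = argparse.ArgumentParser(prog="pull-sp-sub")
        parser.add_argument("--verbose", action="store_true", help="print extra detail")
        args = parser.parse_args()
        if args.verbose:
            print("pull-sp-sub starting up")
        return 0  # the return value becomes the script's exit status

    if __name__ == "__main__":
        raise SystemExit(main())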
Also, setuptools seems to do more dependency checking when starting a script like this, so if you regularly live with broken dependencies inside your virtualenv, this will be frustrating for you. On the other hand, scripts made this way will work better on Windows, if that's one of your target environments. Including compiled shared libraries in both source and binary packages This works because the NCBI Python/Linux environment is so homogeneous, but it does cause problems - these compiled items are linux- and architecture-specific, but this doesn't tell Python's packaging system about that. So for example if you run pip install applog on a Mac, it will claim to succeed, but the library won't work. See the next section for how to do this in a more robust way. This includes things that use the C++ Toolkit (see python-applog and cpp-toolkit-validators for examples). These .so files should get placed inside the python package hierarchy. Presumably, if you're compiling them, they are build artifacts that won't be tracked by git, so they won't be included automatically by auto-package-data. Instead, once they are there, use extra_files to have the packaging system notice them: [files] extra_files = ncbilog/libclog.so ncbilog/libclog.version If your packages live inside a src directory, you do need to include that in the extra_files path: [files] extra_files = src/mypkg/do_something_quickly.so Notice that extra_files is different from data_files which we used above. Including uncompiled C extensions (including Cython) Packit can coexist with setuptools's support for C extensions. Here is an example with a C file that will be compiled on the user's system. In that particular package, the author chose to require Cython for developers but not for end users, so the distribution and the git repo include both the .pyx file and the .c file it's translated to. Known Issues If your Python package is not in the root of your Git repository (so setup.py is not in the same directory as .git), then auto-package-data will not work. The auto-package-data section has configuration options, but they don't do anything right now (PY-504). Further Development Add tests Improve docs More configuration options for existing facilities New facilities Allow extension through entry points
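To round out the data-file discussion above, here is how the three access styles described earlier look side by side when written inside nicelib/things.py from the example layout. It assumes the nicelib package is installed with somedata.csv alongside it; the calls themselves are the ones named in the text:

    # nicelib/things.py
    import pathlib
    import importlib.resources
    import pkg_resources

    # 1. Path arithmetic relative to this module's file (simple, but not zip-safe).
    csv_path = pathlib.Path(__file__).parent / "somedata.csv"
    first_line = csv_path.read_text(encoding="utf-8").splitlines()[0]

    # 2. importlib.resources (stdlib in Python 3.7+, or the importlib_resources backport).
    with importlib.resources.open_text("nicelib", "somedata.csv") as fh:
        header = fh.readline()

    # 3. pkg_resources from setuptools (supports deeper subdirectories, but slower).
    stream = pkg_resources.resource_stream("nicelib", "somedata.csv")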
8585
dbpedia
2
95
https://unix.stackexchange.com/questions/80734/creating-deb-and-rpm-from-the-same-source
en
Creating deb and rpm from the same source
https://cdn.sstatic.net/…g?v=32fb07f7ce26
https://cdn.sstatic.net/…g?v=32fb07f7ce26
[ "https://cdn.sstatic.net/Sites/unix/Img/logo.svg?v=eb6eb2b9e73c", "https://i.sstatic.net/TcGUp.jpg?s=64", "https://www.gravatar.com/avatar/1594d12477526fb4cce0223be9f0f077?s=64&d=identicon&r=PG", "https://www.gravatar.com/avatar/33f0e33470776483ac21fbad04d0c0ab?s=64&d=identicon&r=PG", "https://unix.stackexchange.com/posts/80734/ivc/3f38?prg=d3ff60ec-3824-4127-8ab7-40cdb7058e1b" ]
[]
[]
[ "" ]
null
[]
2013-06-26T13:12:25
Is there a standard for source packages to be able to build rpms, debs (and perhaps others) without too much customization? I'm talking mostly about Python, PyQt programs.
en
https://cdn.sstatic.net/Sites/unix/Img/favicon.ico?v=fb86ccabb921
Unix & Linux Stack Exchange
https://unix.stackexchange.com/questions/80734/creating-deb-and-rpm-from-the-same-source
FPM can build debs/rpms from Python packages on PyPI or from a local setup.py file. You can build a deb with fpm -s python -t deb $package-name-on-pypi or fpm -s python -t deb setup.py Building packages in other formats only requires you to change the -t (target type) parameter. To produce debs I can also recommend python-stdeb.

It looks like you are looking for something like PyInstaller. It can package the application for you in a very simple way. Please have a look at the site: http://www.pyinstaller.org/ http://sourceforge.net/projects/pyinstaller/ The downside is that it can only handle up to Python 2.7.

Autopackage: If you want to package for different Linux distributions you can try autopackage http://code.google.com/p/autopackage/ I do not have any experience with it, so I do not know the details, and it seems to be unmaintained. I have looked through the code and it can be updated easily. Brief explanation: If you want to package for different distributions, then there is no real tool that can do that flawlessly for you. Even PyInstaller has its issues. If you really want to support different distros, the best way to go is to make packages for each distro you want to support and maintain/update these as your program grows.
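Since only the -t parameter changes between formats, building several package types can be scripted. A small Python sketch, assuming fpm is installed and a setup.py sits in the current directory:

    import subprocess

    def build_packages(targets=("deb", "rpm")):
        # One fpm invocation per target format; only -t changes.
        for target in targets:
            subprocess.run(["fpm", "-s", "python", "-t", target, "setup.py"], check=True)

    if __name__ == "__main__":
        build_packages()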
8585
dbpedia
3
36
https://dart.dev/tools/pub/automated-publishing
en
Automated publishing of packages to pub.dev
https://dart.dev/assets/…for-shares.png?2
https://dart.dev/assets/…for-shares.png?2
[ "https://dart.dev/assets/img/logo/logo-white-text.svg", "https://dart.dev/assets/img/tools/pub/pub-dev-gh-setup.png", "https://dart.dev/assets/img/tools/pub/audit-log-pub-gh.png", "https://dart.dev/assets/img/tools/pub/pub-dev-gh-env-setup.png", "https://dart.dev/assets/img/tools/pub/gh-pending-review.png", "https://dart.dev/assets/img/tools/pub/pub-dev-gcb-config.png", "https://dart.dev/assets/img/tools/pub/gcb-trigger-configuration.png", "https://dart.dev/assets/img/tools/pub/gcb-approval-checkbox.png", "https://dart.dev/assets/img/tools/pub/gcp-waiting-for-approval.png", "https://dart.dev/assets/img/logo/logo-white-text.svg" ]
[]
[]
[ "" ]
null
[]
null
Publish Dart packages to pub.dev directly from GitHub Actions.
en
/assets/img/logo/dart-64.png
https://dart.dev/tools/pub/automated-publishing/
You can automate publishing from: GitHub Actions, Google Cloud Build or, Anywhere else using a GCP service account. The following sections explain how automated publishing is configured, and how you can customize publishing flows in line with your preferences. When configuring automated publishing you don't need to create a long-lived secret that is copied into your automated deployment environment. Instead, authentication relies on temporary OpenID-Connect tokens signed by either GitHub Actions (See OIDC for GitHub Actions) or Google Cloud IAM. You can use exported service account keys for deployment environments where an identity service isn't present. Such exported service account keys are long-lived secrets, they might be easier to use in some environments, but also pose a larger risk if accidentally leaked. Publishing packages using GitHub Actions # You can configure automated publishing using GitHub Actions. This involves: Enabling automated publishing on pub.dev, specifying: The GitHub repository and, A tag-pattern that must match to allow publishing. Creating a GitHub Actions workflow for publishing to pub.dev. Pushing a git tag for the version to be published. The following sections outline how to complete these steps. Configuring automated publishing from GitHub Actions on pub.dev # To enable automated publication from GitHub Actions to pub.dev, you must be: An uploader on the package, or, An admin of the publisher (if the package is owned by a publisher). If you have sufficient permission, you can enable automated publishing by: Navigating to the Admin tab (pub.dev/packages/<package>/admin). Find the Automated publishing section. Click Enable publishing from GitHub Actions, this prompts you to specify: A repository (<organization>/<repository>, example: dart-lang/pana), A tag-pattern (a string containing {{version}}). The repository is the <organization>/<repository> on GitHub. For example, if your repository is https://github.com/dart-lang/pana you must specify dart-lang/pana in the repository field. The tag pattern is a string that must contain {{version}}. Only GitHub Actions triggered by a push of a tag that matches this tag pattern will be allowed to publish your package. Example: a tag pattern like v{{version}} allows GitHub Actions (triggered by git tag v1.2.3 && git push v1.2.3) to publish version 1.2.3 of your package. Thus, it's also important that the version key in pubspec.yaml matches this version number. If your repository contains multiple packages, give each a separate tag-pattern. Consider using a tag-pattern like my_package_name-v{{version}} for a package named my_package_name. Configuring a GitHub Action workflow for publishing to pub.dev # When automated publishing from GitHub Actions is enabled on pub.dev, you can create a GitHub Actions workflow for publishing. This is done by creating a .github/workflows/publish.yml file as follows: Make sure to match the pattern in on.push.tags with the tag pattern specified on pub.dev. Otherwise, the GitHub Action workflow won't work. If publishing multiple packages from the same repository, use a per-package tag pattern like my_package_name-v{{version}} and create a separate workflow file for each package. The workflow file above uses dart-lang/setup-dart/.github/workflows/publish.yml to publish the package. This is a reusable workflow that allows the Dart team to maintain the publishing logic and enables pub.dev to know how the package was published. Using this reusable workflow is strongly encouraged. 
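As a local sanity check before pushing a tag, you can verify that the tag matches your tag pattern and that the embedded version equals the version declared in pubspec.yaml. This Python sketch is purely illustrative and is not part of pub.dev or the Dart tooling:

    import re

    def tag_matches(tag, tag_pattern, pubspec_text):
        # Turn a pattern such as 'v{{version}}' into a regex with a named version group.
        regex = re.escape(tag_pattern).replace(r"\{\{version\}\}", r"(?P<version>.+)")
        match = re.fullmatch(regex, tag)
        if not match:
            return False
        declared = re.search(r"^version:\s*(\S+)", pubspec_text, re.MULTILINE)
        return declared is not None and declared.group(1) == match.group("version")

    if __name__ == "__main__":
        print(tag_matches("v1.2.3", "v{{version}}", "name: my_package\nversion: 1.2.3\n"))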
If you need generated code in your package, then it is preferable to check this generated code into your repository. This simplifies verifying that the files published on pub.dev match the files from your repository. If checking generated or built artifact into your repository is not reasonable, you can create a custom workflow along the lines of: The workflow authenticates to pub.dev using a temporary GitHub-signed OIDC token, the token is created and configured in the dart-lang/setup-dart step. To publish to pub.dev, subsequent steps can run dart pub publish --force. Triggering automated publishing from GitHub Actions # After you've configured automated publishing on pub.dev and created a GitHub Actions workflow, you can publish a new version of your package. To publish, push a git tag matching the configured tag pattern. Once pushed, review the workflow logs at https://github.com/<organization>/<repository>/actions. If the Action didn't trigger, check that the pattern configured in .github/workflows/publish.yml matches the pushed git tag. If the Action failed, the logs might contain clues as to why it failed. Once published, you can see the publication event in the audit-log on pub.dev. The audit-log entry should contain a link to the GitHub Action run that published the package version. If you don't like using the git CLI to create tags, you can create releases on GitHub from https://github.com/<organization>/<repository>/releases/new. To learn more, check out managing releases in a repository from GitHub. Hardening security with tag protection rules on GitHub # Configuring automated publishing from GitHub Actions allows anyone who can push a tag to your repository to trigger publishing to pub.dev. You can restrict who can push tags to your repository using tag protection rules on GitHub. By limiting who can create tags matching your tag pattern, you can limit who can publish the package. At this time, the tag protection rules lack flexibility. You might want to restrict who can trigger publishing using GitHub Deployment Environments, as outlined in the next section. Hardening security with GitHub Deployment Environments # When configuring automated publishing from GitHub Actions on pub.dev, you can require a GitHub Actions environment. To require a GitHub Actions environment for publishing you must: Navigate to the Admin tab (pub.dev/packages/<package>/admin). Find the Automated publishing section. Click Require GitHub Actions environment. Specify an Environment name, (pub.dev is typically a good name) When an environment is required on pub.dev, GitHub Actions won't be able to publish unless they have environment: pub.dev. Thus, you must: Create an environment with the same name on GitHub (typically pub.dev) Alter your .github/workflows/publish.yml workflow file to specify environment: pub.dev, as follows: The environment is reflected in the temporary GitHub-signed OIDC token used for authentication with pub.dev. Thus, a user with permission to push to your repository cannot circumvent environment protection rules by modifying the workflow file. In GitHub repository settings, you can use environment protection rules to configure required reviewers. If you configure this option, GitHub prevents actions with the environment from running until one of the required reviewers have approved the run. Publishing from Google Cloud Build # You can configure automated publishing from Google Cloud Build. 
This involves: Register a Google Cloud Project (or using an existing project), Create a service account for publishing to pub.dev, Enable automated publishing in the admin tab for the package on pub.dev, specifying the email of the service account created for publishing. Grant the default Cloud Build service account permission to impersonate the service account created for publishing. Create a cloudbuild.yaml file that obtains a temporary OIDC id_token and uses it for publishing to pub.dev Configure a Cloud Build trigger, for running the steps in cloudbuild.yaml in your project on Google Cloud Build. The following sections outline how to complete these steps. Creating a service account for publishing # For publishing to pub.dev you are going to create a service account that is granted permission to publish your package on pub.dev. You are then going to grant Cloud Build permission to impersonate this service account. Create a cloud project, if you don't have an existing project. Create a service account as follows: $ gcloud iam service-accounts create pub-dev \ --description='Service account to be impersonated when publishing to pub.dev' \ --display-name='pub-dev' This creates a service account named pub-dev@$PROJECT_ID.iam.gserviceaccount.com. Grant the service account permission to publish your package. To complete this step, you must have uploader permission on the package or be an admin of the publisher that owns the package. a. Navigate to the Admin tab (pub.dev/packages/<package>/admin). a. Click Enable publishing with Google Cloud Service account. a. Type the email of the service account into the Service account email field. You created this account in the previous step: pub-dev@$PROJECT_ID.iam.gserviceaccount.com With this procedure complete, anyone who can impersonate the service account can publish new versions of the package. Make sure to review who has permissions to impersonate the service account and change permissions in the cloud project as needed. Granting Cloud Build permission to publish # To publish from Cloud Build you must give the default Cloud Build service account permission to impersonate the service account created for publishing in the previous section. Enable the IAM Service Account Credentials API in the cloud project. Attempts to impersonate a service account will fail without this API. # Enable IAM Service Account Credentials API $ gcloud services enable iamcredentials.googleapis.com Find the project number. # The PROJECT_NUMBER can be obtained as follows: $ gcloud projects describe $PROJECT_ID --format='value(projectNumber)' Grant the permission to impersonate the publishing service account. # Grant default cloud $ gcloud iam service-accounts add-iam-policy-binding \ pub-dev@$PROJECT_ID.iam.gserviceaccount.com \ --member=serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \ --role=roles/iam.serviceAccountTokenCreator Writing a Cloud Build configuration file # To publish from Cloud Build, you must specify steps for Cloud Build to: Impersonate the service account to obtain a temporary OIDC token. Provide the temporary OIDC token to dart pub for use when publishing. Calling dart pub publish to publish the package. Steps for Google Cloud Build are provided in a cloudbuild.yaml file, see build configuration file schema for full documentation of the format. 
For publishing to pub.dev from Google Cloud Build, a cloudbuild.yaml file as follows will do: The gcloud auth print-identity-token creates an OIDC id_token impersonating the specified service account. This id_token is signed by Google, with a signature that expires within 1 hour. The audiences parameter lets pub.dev know that it is the intended recipient of the token. The --include-email option is necessary for pub.dev to recognize the service account. Once the id_token is created, it's written to a file that resides in a volume; this mechanism is used to pass data between steps. Don't store the token in /workspace, since /workspace is where the repository from which you wish to publish is checked out. Not using /workspace for storing the token reduces the risk that you accidentally include it in your package when publishing.

Creating a Cloud Build trigger # With service accounts configured and a cloudbuild.yaml file in the repository, you can create a Cloud Build trigger using the console.cloud.google.com dashboard. To create a build trigger, you need to connect to a source repository and specify which events should trigger a build. You can use GitHub, Cloud Source Repository, or one of the other options. To learn how to configure a Cloud Build trigger, check out creating and managing build triggers. To use the cloudbuild.yaml from the previous step, configure the Cloud Build trigger type as "Cloud Build Configuration" located in the repository in the /cloudbuild.yaml file. Do not specify a service account for the build to be triggered with. Instead, you'll want to use the default service account for Cloud Build. When configuring your Cloud Build trigger, consider who can trigger the build, because triggering a build might publish a new version of your package. Consider only allowing manual builds, or use Cloud Build approvals to gate builds as outlined in the next section.

Hardening security with Cloud Build Approvals # When configuring a Cloud Build trigger, you can select require approval before build executes. If a Cloud Build trigger requires approval, it won't run when triggered. Instead, it'll wait for approval. This can be used to limit who can publish new versions of your package. Only a user with the Cloud Build Approver role can give approval. When giving an approval, the approver can specify a URL and comment. You can also configure notifications for pending approvals. To learn more, check out gate build on approval.

Publish from anywhere using a Service Account # To allow automated publishing outside of GitHub Actions, you can authenticate using service accounts in a way similar to Cloud Build. This usually involves: creating a service account for publishing, then impersonating the publishing service account in one of two ways: Workload Identity Federation or Exported Service Account Keys. The section for Cloud Build outlined how to create a service account for publishing. This should provide a service account, such as pub-dev@$PROJECT_ID.iam.gserviceaccount.com.

Publish using Workload Identity Federation # When running on a cloud service that supports OIDC or SAML, you can impersonate a GCP service account using Workload Identity Federation. This enables you to leverage your cloud provider's identity services. For example, if deploying on EC2, you can configure workload identity federation with AWS, allowing temporary AWS tokens from the EC2 metadata service to impersonate a service account. To learn how to configure these flows, check out workload identity federation.
Publish using Exported Service Account Keys # When running on a custom system without identity services, you can export service account keys. Exported service account keys allow you to authenticate as that service account. To learn more, check out how to create and manage service account keys. Export service account keys # Create exported service account keys for an existing service account. $ gcloud iam service-accounts keys create key-file.json \ --iam-account=pub-dev@$PROJECT_ID.iam.gserviceaccount.com Save the key-file.json file for later use. Publish packages using exported service account keys # To publish a package using exported service account keys:
8585
dbpedia
3
37
https://jumpcloud.com/blog/install-macos-software-autopkg-jumpcloud
en
Install macOS Software with JumpCloud & AutoPkg
https://jumpcloud.com/wp…kg-jumpcloud.png
https://jumpcloud.com/wp…kg-jumpcloud.png
[ "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/search-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/close-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/logos/jumpcloud-wordmark-tm-oceanblue.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/logos/jumpcloud-wordmark-tm-oceanblue.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/logos/jumpcloud-wordmark-tm-oceanblue.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/close-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directories.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/hybrid-work-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/identity-lifecycle.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/infrastructure-security-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/zero-trust.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/compliance.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/unify-stack.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/workspace.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directories.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/hybrid-work-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/identity-lifecycle.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/infrastructure-security-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/zero-trust.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/compliance.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/unify-stack.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/workspace.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directories.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/identity-lifecycle.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/password-manager.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/hris.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/sso.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-ldap.png", 
"https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-radius.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/password-manager.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directory-insights.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/app-catalog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/device-management.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mdm.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/remote-work.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/patch-management.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/system-insights.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/windows-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/apple-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/linux-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/android-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directories.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/identity-lifecycle.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/password-manager.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/hris.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/sso.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-ldap.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-radius.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/password-manager.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/directory-insights.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/app-catalog.png", 
"https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/api-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/device-management.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mdm.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/remote-work.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/mfa.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/conditional-access.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/patch-management.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/system-insights.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/windows-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/apple-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/linux-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/android-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/technology-partners.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/resources.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/jcu.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/login.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/connect.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/cloud-and-msps.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/technology-partners.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/resources.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/jcu.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", 
"https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/login.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/connect.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/demo.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/it-hour.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/webinar.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/events.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/guided-sims.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/resources.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/jcu.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/youtube.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/support.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/pro-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/demo.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/it-hour.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/webinar.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/events.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/guided-sims.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/resources.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/blog.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/jcu.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/youtube.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/case-studies.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/support.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/pro-services.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-icons/community.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/user-login-icon.png", 
"https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/search-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/close-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/navigation/nav-menu/navigation-language-icon.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/blog-general/social-share-twitter.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/blog-general/social-share-linkedin.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/blog-general/social-share-facebook.svg", "https://jumpcloud.com/wp-content/uploads/2023/08/202304-Web-Sidebar-ExploreJumpCloudPricing.png", "https://secure.gravatar.com/avatar/5e7294544cdffdcc7fe5b166ac7f6450?s=65&d=mm&r=g", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/logos/jumpcloud-wordmark-tm-oceanblue.svg", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/social-logos/logo-twitter.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/social-logos/logo-facebook.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/social-logos/logo-linkedIn.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/social-logos/logo-youtube.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/social-logos/logo-g2-crowd.png", "https://jumpcloud.com/wp-content/themes/jumpcloud/assets/images/icons/privacyoptions.svg" ]
[]
[]
[ "" ]
null
[ "Joe Workman" ]
2020-09-09T13:00:00+00:00
A new integration gives admins application management capabilities for macOS systems using AutoPkg, PowerShell, and System Insights.
en
https://jumpcloud.com/wp…-57x57-light.png
JumpCloud
https://jumpcloud.com/blog/install-macos-software-autopkg-jumpcloud
Application deployment and patch management are crucial for security compliance and keeping system software up to date. Depending on the needs of a business, specific software versions or approved software titles may be required. Individual teams may also require different software tools, versions, and licensing. With the different requirements and needs of individuals and teams in mind, let's explore how JumpCloud and AutoPkg give IT administrators the tools to have flexibility and control, even in the most restrictive environments.

Security, Software, & Solutions

Maintaining and updating software titles is a difficult task in and of itself. Some organizations take a hands-off approach and provide select employees with administrative rights to install applications on their systems. Others restrict users from installing software entirely. Established IT teams provide their employees with optionality: in some cases, employees are given access to an approved repository from which they can install software of their choosing. Depending on the approach, IT teams can be left with the task of installing and updating software. Providing employees with administrative rights poses security concerns, namely the installation of unwanted or malicious software. Restricting employees' system access may reduce unintended software installations at the cost of increased IT workload. IT and security teams must walk a fine line between locking down their systems and providing their employees with the software they need to work effectively. If a restrictive approach is taken, the problem still remains: application management is an arduous task that even the best IT teams struggle to fully automate. Years back, working at a university help desk, I found that I was not alone in my endeavor to securely and effectively manage software across my systems. Through community Slack channels, I discovered that many admins used a tool called AutoPkg.

What is AutoPkg?

AutoPkg is a packaging framework that automates several of the manual tasks an admin would otherwise complete before deploying software to systems. Since beginning to use AutoPkg, I have saved countless hours testing and deploying software to my managed systems. The task of maintaining a library of approved software titles is made easier with AutoPkg. Providing employees with the ability to install approved applications, or installing those packages remotely, removes some of the burden from IT teams. Whether IT teams release software packages for employees to install at their leisure, or install applications remotely or through manual intervention, they should trust but verify their software sources. Automation does not come without risk: blindly trusting that "Download" button or recipe can be disastrous when automating actions across multiple systems.

AutoPkg: Automation and Autonomy

AutoPkg integrates with a number of management tools like Munki, a popular open source software deployment application. Munki and similar applications provide a framework for admins to define software loadouts and give employees a sanctioned way to install vetted software. AutoPkg gives admins a way to build and compile software .pkg installers. Munki then ingests those .pkg installers and deploys software to managed systems. IT teams who automate a process for niche software deployment are not alone: community-supported AutoPkg recipe repositories contain software sources and overrides to automate specific actions.
These repositories provide custom overrides for importing AutoPkg recipes into management suites. IT teams who have yet to define an application management process should investigate AutoPkg and the various tools with which it integrates. Finding the best approach to software management may be challenging, but tools like AutoPkg can reduce some of the overhead required to deploy and update applications on managed systems.

Using JumpCloud and AutoPkg

Organizations need a platform their IT teams can use to manage, unify, and secure their environment. JumpCloud is a cloud directory platform that integrates with devices running macOS, Linux, and Windows, secures networks through RADIUS, and provides LDAP services as well as holistic event logging. By pairing JumpCloud with the power of AutoPkg, admins are able not only to secure their Mac devices but also to automate package and software management. The JumpCloud AutoPkg Importer processes AutoPkg .pkg recipes, uploads those .pkgs to a distribution resource, and dynamically creates JumpCloud commands for scripted software deployments. The JumpCloud AutoPkg Importer is designed to help admins automate the tasks required to install .pkg files on JumpCloud-managed macOS systems.

Evaluate JumpCloud & our AutoPkg Importer

AutoPkg can be extended to JumpCloud with the JumpCloud AutoPkg Importer to help manage remote software and package installations and versioning. To learn more about the project, visit the JumpCloud AutoPkg Importer Wiki Page.
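As a concrete illustration of the automation described above, the sketch below runs a short list of AutoPkg .pkg recipes from the command line so the resulting installers can be handed off to a deployment tool. It assumes AutoPkg is installed and that the named recipes are available in your configured recipe repos; the recipe names are only examples:

    import subprocess

    RECIPES = ["Firefox.pkg", "GoogleChrome.pkg"]  # example recipe names

    def run_recipes(recipes=RECIPES):
        # 'autopkg run' builds each recipe; -v prints verbose progress.
        for recipe in recipes:
            subprocess.run(["autopkg", "run", "-v", recipe], check=True)

    if __name__ == "__main__":
        run_recipes()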
8585
dbpedia
2
94
https://managingosx.wordpress.com/author/managingosx/page/2/
en
GregN
https://secure.gravatar.com/avatar/e1088e2a7d9b1e45999c4ba13389b232?s=200&d=identicon&r=g
https://secure.gravatar.com/avatar/e1088e2a7d9b1e45999c4ba13389b232?s=200&d=identicon&r=g
[ "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-10-06T21:53:59-07:00
Read all of the posts by GregN on Managing OS X
en
https://s1.wp.com/i/favicon.ico
Managing OS X
http://managingosx.wordpress.com/
In my Wednesday session for MacSysAdmin 2020 Online – “This One Goes to 11” – (http://macsysadmin.se/program/program.html) I talk about the implications of macOS Big Sur’s version numbering. I didn’t talk in too much detail about how that might affect Munki admins specifically, and I’ll remedy that here. Continue reading “This One Goes to 11: macOS version comparisons and Munki” → In my Wednesday session for MacSysAdmin 2020 Online, I talk a bit about the dual-versioning of macOS Big Sur. Since the talk was recorded and submitted a few weeks ago, some things have changed! When I recorded the presentation, Big Sur was on beta 6. In that version of Big Sur, the platform module in the bundled Python reported Big Sur’s version as 11.0: # sw_vers ProductName: macOS ProductVersion: 11.0 BuildVersion: 20A5364e # /usr/bin/python WARNING: Python 2.7 is not recommended. This version is included in macOS for compatibility with legacy software. Future versions of macOS will not include Python 2.7. Instead, it is recommended that you transition to using 'python3' from within Terminal. Python 2.7.16 (default, Aug 24 2020, 12:22:49) [GCC Apple LLVM 12.0.0 (clang-1200.0.30.1) [+internal-os, ptrauth-isa=sign+stri on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import platform >>> platform.mac_ver() ('11.0', ('', '', ''), 'x86_64') In Big Sur beta 9, that behavior changed: Continue reading “MacSysAdmin 2020 update: This One Goes to 11” → MacSysAdmin is online this year and open to everyone! http://macsysadmin.se/program/program.html Each day will have some great new presentations from a wide array of speakers. I have a presentation today (Tuesday) and ones on Wednesday and Friday as well. There are also some great presentations from the archives! Check here tomorrow for some updates and expansions for my Wednesday presentation. Here’s a list of resources for Mac admins converting their scripts from Python 2 to Python 3. Python 3 framework https://www.python.org/downloads/mac-osx/ (generic/stock framework from python.org) https://www.dropbox.com/s/2amjix194163li6/Python3.framework.zip?dl=0 (relocatable framework with pip, PyObjC, xattr, and six pre-installed) python-modernize https://python-modernize.readthedocs.io/en/latest/ pylint https://pylint.readthedocs.io/en/latest/ Install python-modernize and pylint using pip: pip install modernize pip install pylint -or- Apple Installer pkg containing python-modernize and pylint: https://www.dropbox.com/s/3e4gxs4fx4s05q9/py3portingtools-1.0.pkg?dl=0 six https://six.readthedocs.io six is installed as part of Apple’s Python 2.7 install; if you download the relocatable Python 3 framework from the link above, it also includes (a newer version of) six. The Conservative Python 3 Porting Guide https://portingguide.readthedocs.io/en/latest/index.html Cheat Sheet: Writing Python 2-3 compatible code https://python-future.org/compatible_idioms.html One case-study https://medium.com/@boxed/moving-a-large-and-old-codebase-to-python3-33a5a13f8c99 https://github.com/munki/munki/releases/tag/v4.0.0RC1 This is a release candidate of Munki 4.0, a major architectural change to the Munki tools. Munki 4 removes the dependency on Apple’s Python, and includes its own copy of Python 3.7.4. Functionality is intended to be identical to Munki 3.6.4. See Introduction to Munki 4 for more information on the architectural changes. 
IMPORTANT NOTE: If you use AutoPkg, do not use the munkitools3.munki recipe to import this release, as it will not import the new embedded Python package and any clients upgraded with the results will be broken. A new munkitools4.munki recipe is available in the AutoPkg recipes repo. Previous betas for this code base were numbered as 3.7.0 betas.
Here’s a list of resources for the Python 3 Convert-a-thon at PSU MacAdmins 2019:
Python 3 framework
https://www.python.org/downloads/mac-osx/ (generic/stock framework from python.org)
https://www.dropbox.com/s/vjbb8zeqb7w1z2g/Python.framework.zip?dl=0 (relocatable framework with pip, PyObjC, xattr, and six pre-installed)
python-modernize
https://python-modernize.readthedocs.io/en/latest/
pylint
https://pylint.readthedocs.io/en/latest/
Install python-modernize and pylint using pip:
pip install modernize
pip install pylint
-or-
Apple Installer pkg containing python-modernize and pylint:
https://www.dropbox.com/s/nxk5uq8b1vg2xij/psumacpytools-1.0.pkg?dl=0
six
https://six.readthedocs.io
six is installed as part of Apple’s Python 2.7 install; if you download the relocatable Python 3 framework from the link above, it also includes (a newer version of) six.
The Conservative Python 3 Porting Guide
https://portingguide.readthedocs.io/en/latest/index.html
Cheat Sheet: Writing Python 2-3 compatible code
https://python-future.org/compatible_idioms.html
One case-study
https://medium.com/@boxed/moving-a-large-and-old-codebase-to-python3-33a5a13f8c99
If you are participating in my packaging workshop next week at Penn State, please prepare in advance by downloading these items. (Don’t install them, just download them):
Sample packages (new — added July 7): https://www.dropbox.com/s/lyhv517en1mo8gy/packages.zip?dl=0
Firefox disk image: https://download.mozilla.org/?product=firefox-latest&os=osx&lang=en-US
Google Chrome disk image: https://dl.google.com/chrome/mac/stable/GGRO/googlechrome.dmg
Suspicious Package: http://www.mothersruin.com/software/SuspiciousPackage/
Pacifist: https://www.charlessoft.com
WhiteBox Packages: http://s.sudre.free.fr/Software/files/Packages.dmg
munki-pkg: https://github.com/munki/munki-pkg/archive/master.zip
Sure, you can also try downloading these items the morning of the workshop, but why tempt the WiFi spirits?
https://github.com/munki/munki/releases/tag/v3.6.0
This is the official release of Munki 3.6, an important new release of the Munki tools. The primary new feature of Munki 3.6 is new versions of Managed Software Center.app and MunkiStatus.app, rewritten in Swift. These new applications are intended to behave nearly the same as the PyObjC-based applications they replace, with some speed and reliability improvements. Munki 3.6 will install (and is supported) on macOS 10.10+. Older macOS versions are no longer supported.
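Tying the two threads above together – Big Sur's jump from 10.x to 11.x and the Python 2-to-3 porting resources – here is a minimal sketch (not Munki's actual comparison code; version_tuple() is an illustrative helper) of why naive string comparison of macOS versions breaks, written so it runs unchanged under Python 2.7 and Python 3:

from __future__ import print_function  # keeps the example 2.7/3.x compatible

def version_tuple(version_string):
    """Split a dotted version like '10.16' or '11.0' into a tuple of ints."""
    return tuple(int(part) for part in version_string.split("."))

# Plain string comparison is misleading for dotted versions:
print("10.16" > "10.9")                                   # False -- lexicographic comparison lies
print(version_tuple("10.16") > version_tuple("10.9"))     # True  -- numeric comparison
print(version_tuple("11.0") > version_tuple("10.16"))     # True  -- Big Sur sorts after Catalina

The same numeric-tuple idea is the general-purpose answer whether a Big Sur system reports itself as 11.x or as a 10.16 compatibility version.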
8585
dbpedia
3
3
https://www.osnews.com/story/2972/interview-with-autopackages-project-leader/
en
Interview with Autopackage’s Project Leader – OSnews
https://www.osnews.com/i…avicon-32x32.png
https://www.osnews.com/i…avicon-32x32.png
[ "https://www.osnews.com/wp-content/uploads/2022/02/osnews-ukraine.png", "https://secure.gravatar.com/avatar/cb876ec005278bf5b01ecf1b62624bc0?s=72&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/e8ef902070391346f1a2f512a17bb0ff?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/bd5d14fb14d68f7b4b3ef0ac1c60d073?s=68&d=identicon&r=r", "https://www.osnews.com/images/emo/sad.gif", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/3902857753b6a1f99a8581f1ceb99f45?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/93025e80a042ee4f6bac973949273d36?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/bd5d14fb14d68f7b4b3ef0ac1c60d073?s=68&d=identicon&r=r", "https://www.osnews.com/images/emo/smile.gif", "https://secure.gravatar.com/avatar/1e084bf9a45e258e31f0197715b67ba1?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/923d10bc97028030e8e67e7db62658d1?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/070373dbd6c3a8a7de650f3c839e00d8?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/bd5d14fb14d68f7b4b3ef0ac1c60d073?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/?s=68&d=identicon&r=r", "https://secure.gravatar.com/avatar/bd5d14fb14d68f7b4b3ef0ac1c60d073?s=68&d=identicon&r=r" ]
[]
[]
[ "" ]
null
[ "Eugenia Loli" ]
null
en
/icons/apple-touch-icon.png
https://www.osnews.com/story/2972/interview-with-autopackages-project-leader/
8585
dbpedia
1
80
https://www.kali.org/tools/bloodhound.py/
en
Kali Linux Tools
https://www.kali.org/ima…es/kali-logo.svg
https://www.kali.org/ima…es/kali-logo.svg
[ "https://www.kali.org/images/kali-tools-icon-missing.svg" ]
[]
[]
[ "kali", "linux", "kalilinux", "Penetration", "Testing", "Penetration Testing", "Distribution", "Advanced" ]
null
[]
2024-08-06T00:00:00+00:00
en
https://www.kali.org/images/favicon.png
Kali Linux
https://www.kali.org/tools/bloodhound.py/
bloodhound.py
This package contains a Python based ingestor for BloodHound, based on Impacket. BloodHound.py currently has the following limitations:
* Supports most, but not all, BloodHound (SharpHound) features. Primary missing features are GPO local groups and some differences in session resolution between BloodHound and SharpHound.
* Kerberos authentication support is not yet complete, but can be used from the updatedkerberos branch.
This package installs the library for Python 3.
Installed size: 343 KB
How to install: sudo apt install bloodhound.py
Dependencies:
python3
python3-dnspython
python3-impacket
python3-ldap3
python3-pyasn1
bloodhound-python
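As a quick post-install sanity check, the bloodhound-python console script can be driven from Python. Note that everything beyond --help below is an assumption about typical BloodHound.py flags, not something stated in the package description; consult the tool's own help output for the real interface.

import subprocess

# Confirm the ingestor is on PATH; --help should be a safe flag to exercise.
subprocess.run(["bloodhound-python", "--help"], check=True)

# A hypothetical collection run -- the flag names below are assumptions,
# verify them against `bloodhound-python --help` before relying on them.
# subprocess.run(["bloodhound-python", "-d", "corp.local", "-u", "auditor",
#                 "-p", "REDACTED", "-c", "All"], check=True)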
8585
dbpedia
3
99
https://jfrog.com/help/r/jfrog-artifactory-documentation/upload-authenticated-pypi-packages-to-jfrog-artifactory
en
JFrog Help Center
https://jfrog.com/help/favicon.ico
https://jfrog.com/help/favicon.ico
[ "https://jfrog.com/help/internal/api/webapp/splash-image?v=199390e0" ]
[]
[]
[ "" ]
null
[]
null
en
https://jfrog.com/help/favicon.ico
null
8585
dbpedia
0
23
https://realpython.com/python-all-attribute/
en
Python's __all__: Packages, Modules, and Wildcard Imports – Real Python
https://files.realpython…698d61e0300d.jpg
https://files.realpython…698d61e0300d.jpg
[ "https://realpython.com/static/real-python-logo.893c30edea53.svg", "https://realpython.com/static/pytrick-dict-merge.4201a0125a5e.png", "https://files.realpython.com/media/Pythons-__all__-Set-Up-Your-Packages-and-Modules-for-Wildcard-Imports_Watermarked.698d61e0300d.jpg", "https://realpython.com/static/pytrick-dict-merge.4201a0125a5e.png", "https://realpython.com/cdn-cgi/image/width=862,height=862,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/Perfil_final1.9f896bc212f6.jpg", "https://realpython.com/cdn-cgi/image/width=862,height=862,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/Perfil_final1.9f896bc212f6.jpg", "https://realpython.com/cdn-cgi/image/width=959,height=959,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/PP.9b8b026f75b8.jpg", "https://realpython.com/cdn-cgi/image/width=800,height=800,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/gahjelle.470149ee709e.jpg", "https://realpython.com/cdn-cgi/image/width=400,height=400,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/VZxEtUor_400x400.7169c68e3950.jpg", "https://realpython.com/cdn-cgi/image/width=456,height=456,fit=crop,gravity=auto,format=auto/https://files.realpython.com/media/martin_breuss_python_square.efb2b07faf9f.jpg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://realpython.com/static/videos/lesson-locked.f5105cfd26db.svg", "https://files.realpython.com/media/Pythons-__all__-Set-Up-Your-Packages-and-Modules-for-Wildcard-Imports_Watermarked.698d61e0300d.jpg" ]
[]
[]
[ "" ]
null
[ "Real Python" ]
2024-03-04T14:00:00+00:00
In this tutorial, you'll learn about wildcard imports and the __all__ variable in Python. With __all__, you can prepare your packages and modules for wildcard imports, which are a quick way to import everything.
en
/static/favicon.68cbf4197b0c.png
https://realpython.com/python-all-attribute/
Importing Objects in Python When creating a Python project or application, you’ll need a way to access code from the standard library or third-party libraries. You’ll also need to access your own code from the multiple files that may make up your project. Python’s import system is the mechanism that allows you to do this. The import system lets you get objects in different ways. You can use: Explicit imports Wildcard imports In the following sections, you’ll learn the basics of both strategies. You’ll learn about the different syntax that you can use in each case and the result of running an import statement. Explicit Imports In Python, when you need to get a specific object from a module or a particular module from a package, you can use an explicit import statement. This type of statement allows you to bring the target object to your current namespace so that you can use the object in your code. To import a module by its name, you can use the following syntax: Python import module [as name] Copied! This statement allows you to import a module by its name. The module must be listed in Python’s import path, which is a list of locations where the path based finder searches when you run an import. The part of the syntax that’s enclosed in square brackets is optional and allows you to create an alias of the imported name. This practice can help you avoid name collisions in your code. As an example, say that you have the following module: Python calculations.py def add(a, b): return float(a + b) def subtract(a, b): return float(a - b) def multiply(a, b): return float(a * b) def divide(a, b): return float(a / b) Copied! This sample module provides functions that allow you to perform basic calculations. The containing module is called calculations.py. To import this module and use the functions in your code, go ahead and start a REPL session in the same directory where you saved the file. Then run the following code: Python >>> import calculations >>> calculations.add(2, 4) 6.0 >>> calculations.subtract(8, 4) 4.0 >>> calculations.multiply(5, 2) 10.0 >>> calculations.divide(12, 2) 6.0 Copied! The import statement at the beginning of this code snippet brings the module name to your current namespace. To use the functions or any other object from calculations, you need to use fully qualified names with the dot notation. Note: You can create an alias of calculations using the following syntax: Python import calculations as calc Copied! This practice allows you to avoid name clashes in your code. In some contexts, it’s also common practice to reduce the number of characters to type when using qualified names. For example, if you’re familiar with libraries like NumPy and pandas, then you’ll know that it’s common to use the following imports: Python import numpy as np import pandas as pd Copied! Using shorter aliases when you import modules facilitates using their content by taking advantage of qualified names. You can also use a similar syntax to import a Python package: Python import package [as name] Copied! In this case, Python loads the content of your package’s __init__.py file into your current namespace. If that file exports objects, then those objects will be available to you. Finally, if you want to be more specific in what you import into your current namespace, then you can use the following syntax: Python from module import name [as name] Copied! With this import statement, you can import specific names from a given module. 
This approach is recommended when you only need a few names from a long module that defines many objects or when you don’t expect name collisions in your code. To continue with the calculations module, you can import the needed function only: Python >>> from calculations import add >>> add(2, 4) 6.0 Copied! In this example, you only use the add() function. The from module import name syntax lets you import the target name explicitly. In this case, the rest of the functions and the module itself won’t be accessible in your namespace or scope. Wildcard Imports on Modules When you’re working with Python modules, a wildcard import is a type of import that allows you to get all the public names from a module in one go. This type of import has the following syntax: Python from module import * Copied! The name wildcard import derives from the asterisk at the end of the statement, which denotes that you want to import all the objects from module. Go back to your terminal window and restart your REPL session. Then, run the following code: Python >>> from calculations import * >>> dir() [ ... 'add', 'divide', 'multiply', 'subtract' ] Copied! In this code snippet, you first run a wildcard import. This import makes available all the names from the calculations modules and brings them to your current namespace. The built-in dir() function allows you to see what names are available in the current namespace. As you can confirm from the output, all the functions that live in calculations are now available. When you’re completely sure that you need all the objects that a given module defines, using a wildcard import is a quick solution. In practice, this situation is rare, and you just end up cluttering your namespace with unneeded objects and names. Using wildcard imports is explicitly discouraged in PEP 8 when they say: Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools. There is one defensible use case for a wildcard import, which is to republish an internal interface as part of a public API (for example, overwriting a pure Python implementation of an interface with the definitions from an optional accelerator module and exactly which definitions will be overwritten isn’t known in advance). (Source) The main drawback of wildcard import is that you don’t have control over the imported objects. You can’t be specific. Therefore, you can confuse the users of your code and clutter their namespace with unnecessary objects. Even though wildcard imports are discouraged, some libraries and tools use them. For example, if you search for applications built with Tkinter, then you’ll realize that many of the examples use the form: Python from tkinter import * Copied! This import gives you access to all the objects defined in the tkinter module, which is pretty convenient if you’re starting to learn how to use this tool. You may find many other tools and third-party libraries that use wildcard imports for code examples in their documentation, and that’s okay. However, in real-world projects, you should avoid this type of import. In practice, you can’t control how the users of your code will manage their imports. So, you better prepare your code for wildcard imports. You’ll learn how to do this in the upcoming sections. First, you’ll learn about using wildcard imports on packages. 
Wildcard Import and Non-Public Names Python has a well-established naming convention that allows you to tell the users of your code when a given name in a module is for internal or external use. If an object’s name starts with a single leading underscore, then that name is considered non-public, so it’s for internal use only. In contrast, if a name starts with a lowercase or uppercase letter, then that name is public and, therefore, is part of the module’s public API. Note: In Python, to define identifiers or names, you can use the uppercase and lowercase letters, the underscore (_), and the digits from 0 through 9. Note that you can’t use a digit as the first character in the name. When you have non-public names in a given module, you should know that wildcard imports won’t import those names. Say that you have the following module: Python shapes.py from math import pi as _pi class Circle: def __init__(self, radius): self.radius = _validate(radius) def area(self): return _pi * self.radius**2 class Square: def __init__(self, side): self.side = _validate(side) def area(self): return self.side**2 def _validate(value): if not isinstance(value, int | float) or value <= 0: raise ValueError("positive number expected") return value Copied! In this module, you have two non-public objects _pi and _validate(). You know this because they have a leading underscore in their names. If someone runs a wildcard import on this module, then the non-public names won’t be imported: Python >>> from shapes import * >>> dir() [ 'Circle', 'Square', ... ] Copied! If you take a look at the output of dir(), then you’ll note that only the Circle and Square classes are available in your current namespace. The non-public objects, _pi and _validate(), aren’t available. So, wildcard imports won’t import non-public names. Wildcard Import on Packages Up to this point, you know how wildcard imports work with modules. You can also use this type of import with packages. In that case, the syntax is the same, but you need to use a package name rather than a module name: Python from package import * Copied! Now, what happens when you run this type of import? You may expect that this import causes Python to search the file system, find the modules and subpackages that are present in the package, and import them. However, doing this file system search could take a long time. Additionally, importing modules might have unwanted side effects, because when you import a module, all the executable code in that module runs. Because of these potential issues, Python has the __all__ special variable, which will allow you to explicitly define the list of modules that you want to expose to wildcard import in a given package. You’ll explore the details in the next section. Preparing Your Packages for Wildcard Imports With __all__ Python has two different behaviors when dealing with wildcard imports on packages. Both behaviors depend on whether the __all__ variable is present in the package’s __init__.py file. If __init__.py doesn’t define __all__, then nothing happens when you run a wildcard import on the package. If __init__.py defines __all__, then the objects listed in it will be imported. To illustrate the first behavior, go ahead and create a new folder called shapes/. Inside the folder, create the following files: shapes/ ├── __init__.py ├── circle.py ├── square.py └── utils.py Leave the __init__.py file empty for now. Take the code of your shapes.py file and split it into the rest of the files. 
Click the collapsible section below to see how to do this: Python shapes/circle.py from math import pi as _pi from shapes.utils import validate class Circle: def __init__(self, radius): self.radius = validate(radius) def area(self): return _pi * self.radius**2 Copied! Python shapes/square.py from shapes.utils import validate class Square: def __init__(self, side): self.side = validate(side) def area(self): return self.side**2 Copied! Python shapes/utils.py def validate(value): if not isinstance(value, int | float) or value <= 0: raise ValueError("positive number expected") return value Copied! In this sample package, the __init__.py file doesn’t define the __all__ variable. So, if you run a wildcard import on this package, then you won’t import any name into your namespace: Python >>> from shapes import * >>> dir() [ '__annotations__', '__builtins__', ... ] Copied! In this example, the dir() function reveals that the wildcard import didn’t bring any name to your current namespace. The circle, square, and utils modules aren’t available in your namespace. If you don’t define __all__ in a package, the statement from package import * doesn’t import all the modules from the target package into the current namespace. In that case, the import statement only ensures that the package was imported and runs any code in __init__.py. If you want to prepare a Python package for wildcard imports, then you need to define the __all__ variable in the package’s __init__.py file. The __all__ variable should be a list of strings containing those names that you want to export from your package when someone uses a wildcard import. Go ahead and add the following line to the file: Python shapes/__init__.py __all__ = ["circle", "square"] Copied! By defining the __all__ variable in the __init__.py file, you establish the module names that a wildcard import will bring into your namespace. In this case, you only want to export the circle and square modules from your package. Now, run the following code in your interactive session: Python >>> from shapes import * >>> dir() [ ... 'circle', 'square' ] Copied! Now, when you run a wildcard import on your shapes package, the circle and square modules become available in your namespace. Note that the utils module isn’t available because you didn’t list it in __all__. It’s up to you as the package author to build this list and keep it up-to-date. Maintaining the list up-to-date is crucial when you release a new version of your package. In this case, it’s also important to note that you’ll get an AttributeError exception if __all__ contains undefined names. Note: When defining __all__, you must be aware that modules might be shadowed by locally defined names. For example, if you added a square() function to the __init__.py file, the function will shadow the square module. Finally, if you define __all__ as an empty list, then nothing will be exported from your package. It’s like not defining __all__ in the package. Exposing Names From Modules and Packages With __all__ You already know that when you run a wildcard import on a module, then you’ll import all the public constants, variables, functions, classes, and other objects in the module. Sometimes, this behavior is okay. However, in some cases, you need to have fine control over what the module exports. You can also use __all__ for this goal. Another interesting use case of __all__ is when you need to export specific names or objects from a package. In this case, you can also use __all__ in a slightly different way. 
In the following sections, you’ll learn how to use __all__ for controlling what names a module exports and how to export specific names from a package. Names From a Module You can use the __all__ variable to explicitly control what names a module exposes to wildcard imports. In this sense, __all__ allows you to establish a module’s public interface or API. This technique is also a way to explicitly communicate what the module’s API is. If you have a large module with many public names, then you can use __all__ to create a list of exportable names so that wildcard imports don’t pollute the namespace of your code’s users. In general, modules can have a few different types of names: Public names are part of the module’s public interface. Non-public names are for internal use only. Imported names are names that the module imports as public or non-public names. As you already know, public names are those that start with a lowercase or uppercase letter. Non-public names are those that start with a single leading underscore. Finally, imported names are those that you import as public names in a module. These names are also exported from that module. So, that’s why you’ll see imports like the following in many codebases: Python import sys as _sys Copied! In this example, you import the sys module as _sys. The as specifier lets you create an alias for the imported object. In this case, the alias is a non-public name. With this tiny addition to your import statement, you prevent sys from being exported when someone uses a wildcard import on the module. So, if you don’t want to export imported objects from a module, then use the as specifier and a non-public alias for the imported objects. Ideally, the __all__ list should only contain public names that are defined in the containing module. As an example, say that you have the following module containing functions and classes that allow you to make HTTP requests: Python webreader.py import requests __all__ = ["get_page_content", "WebPage"] BASE_URL = "http://example.com" def get_page_content(page): return _fetch_page(page).text def _fetch_page(page): url = f"{BASE_URL}/{page}" return requests.get(url) class WebPage: def __init__(self, page): self.response = _fetch_page(page) def get_content(self): return self.response.text Copied! In this sample module, you import the requests library. Next, you define the __all__ variable. In this example, __all__ includes the get_page_content() function and the WebPage class, which are public names. Note: You need to have the requests library installed on your current Python environment for the example above to work correctly. Note that the helper function _fetch_page() is for internal use only. So, you don’t want to expose it to wildcard imports. Additionally, you don’t want the BASE_URL constant or the imported requests module to be exposed to wildcard imports. Here’s how the module responds to a wildcard import: Python >>> from webreader import * >>> dir() [ 'WebPage', ... 'get_page_content' ] Copied! When you run a wildcard import on the webreader module, only the names listed in __all__ are imported. Now go ahead and comment out the line where you define __all__, restart your REPL session, and run the import again: Python >>> from webreader import * >>> dir() [ 'BASE_URL', 'WebPage', ... 'get_page_content', 'requests' ] Copied! A quick look at the output of dir() shows that now your module exports all the public names, including BASE_URL and even the imported requests library. 
The __all__ variable lets you have full control over what a module exposes to wildcard imports. However, note that __all__ doesn’t prevent you from importing specific names from a module using an explicit import: Python >>> from webreader import _fetch_page >>> dir() [ ... '_fetch_page' ] Copied! Note that you can use explicit import to bring any name from a given module, even non-public names as _fetch_page() in the example above. Names From a Package In the previous section, you learned how to use __all__ to define which objects are exposed to wildcard imports. Sometimes, you want to do something similar but at the package level. If you want to control the objects and names that a package exposes to wildcard imports, then you can do something like the following in the package’s __init__.py file: Python package/__init__.py from module_0 import name_0, name_1, name_2, name_3 from module_1 import name_4, name_5, name_6 __all__ = [ "name_0", "name_1", "name_2", "name_3", "name_4", "name_5", "name_6", ] Copied! The import statements tell Python to grab the names from each module in the package. Then, in __all__, you list the imported names as strings. This technique is great for those cases where you have a package with many modules, and you want to provide a direct path for imports. As an example of how this technique works in practice, get back to the shapes package and update the __init__.py file as in the code below: Python shapes/__init__.py from shapes.circle import Circle from shapes.square import Square __all__ = ["Circle", "Square"] Copied! In this update, you’ve added two explicit imports to get the Circle and Square classes from their respective module. Then, you add the class names as strings to the __all__ variable. Here’s how the package responds to wildcard imports now: Python >>> from shapes import * >>> dir() [ 'Circle', 'Square', ... ] Copied! Your shapes package exposes the Circle and Square classes to wildcard imports. These classes are what you’ve defined as the public interface of your package. Note how this technique facilitates direct access to names that otherwise you would have to import through qualified names. Exploring Alternative Use Cases of __all__ in Python Besides allowing you to control what your modules and packages expose to wildcard imports, the __all__ variable may serve other purposes. You can use __all__ to iterate over the names and objects that make up the public interface of a package or module. You can also take advantage of __all__ when you need to expose dunder names. Iterating Over a Package’s Interface Because __all__ is typically a list object, you can use it to iterate over the objects that make up a module’s interface. The advantage of using __all__ over dir() is that the package author has explicitly defined which names they consider to be part of the public interface of their package. If you iterate over __all__, you won’t need to filter out non-public names as you’d have to when you iterate over dir(module). For example, say that you have a module with a few similar classes that share the same interface. Here’s a toy example: Python vehicles.py __all__ = ["Car", "Truck"] class Car: def start(self): print("The car is starting") def drive(self): print("The car is driving") def stop(self): print("The car is stopping") class Truck: def start(self): print("The truck is starting") def drive(self): print("The truck is driving") def stop(self): print("The truck is stopping") Copied! In this module, you have two classes that represent vehicles. 
They share the same interface, so you can use them in similar places. You’ve also defined the __all__ variable, listing the two classes as strings. Now say that you want to use these classes in a loop. How can you do this? You can use __all__ as in the code below: Python >>> import vehicles >>> for v in vehicles.__all__: ... vehicle = getattr(vehicles, v)() ... vehicle.start() ... vehicle.drive() ... vehicle.stop() ... The car is starting The car is driving The car is stopping The truck is starting The truck is driving The truck is stopping Copied! In this example, you first import the vehicles module. Then, you start a for loop over the __all__ variable. Because __all__ is a list of strings, you can use the built-in getattr() function to access the specified objects from vehicles. This way, you’ve iterated over the classes that make up the module’s public API. Accessing Non-Public and Dunder Names When you’re writing modules and packages, sometimes you use module-level names that start and end with double underscores. These names are typically called dunder names. There are a few dunder constants, such as __version__ and __author__, that you may need to expose to wildcard imports. Remember that the default behavior is that these names aren’t imported because they start with a leading underscore. To work around this issue, you can explicitly list these names in your __all__ variable. To illustrate this practice, get back your webreader.py file and update it as in the code below: Python webreader.py import requests __version__ = "1.0.0" __author__ = "Real Python" __all__ = ["get_page_content", "WebPage", "__version__", "__author__"] BASE_URL = "http://example.com" def get_page_content(page): return _fetch_page(page).text # ... Copied! In this update, you define two module-level constants that use dunder names. The first constant provides information about the module’s version, and the second constant holds the author’s name. Here’s how a wildcard import works on this module: Python >>> from webreader import * >>> dir() [ 'WebPage', ... '__author__', ... '__version__', 'get_page_content' ] Copied! Now, when someone uses a wildcard import on the webreader module, they get the dunder variables imported into their namespace. Using __all__ in Python: Benefits and Best Practices Up to this point, you’ve learned a lot about the __all__ variable and how to use it in your code. While you don’t need to use __all__, it gives you complete control over what your packages and modules expose to wildcard imports. The __all__ variable is also a way to communicate to the users of your packages and modules which parts of your code they’re supposed to be using as the public interface. Here’s a quick summary of the main benefits that __all__ can provide: Control over what you expose to wildcard imports: Using __all__ allows you to explicitly specify the public interface of your packages and modules. This practice prevents accidental usage of objects that shouldn’t be used from outside the module. It provides a clear boundary between the module’s internal implementation and its public API. Enhance readability: Using __all__ allows other developers to quickly learn which objects make up the code’s API without examining the entire codebase. This improves code readability and saves time, especially for larger projects with multiple modules. Reduce namespace cluttering: Using __all__ allows you to list the names to be exposed to wildcard imports. 
This way, you prevent other developers from polluting their namespace with unnecessary or conflicting names. Even though wildcard imports are discouraged in Python, you have no way to control what the users of your code will do while using it. So, using __all__ is a good way to limit wrong uses of your code. Here’s a quick list of best practices for using __all__ in your code: Try to always define __all__ in your packages and modules. This variable provides you with explicit control over what other developers can import with wildcard imports. Take advantage of __all__ as a tool for explicitly defining the public interface of your packages and modules. This practice makes it clear to other developers which objects are intended for external use and which are for internal use only. Keep __all__ focused. The __all__ variables shouldn’t include every object in your module, just the ones that are part of the public API. Use __all__ in conjunction with good documentation. Clear documentation about the intended use and behavior of each object in the public API is the best complement to __all__. Be consistent in using __all__ across all your packages and modules. This practice allows other developers to better understand how to use your code. Regularly review and update __all__. The __all__ variable should always reflect the latest changes in your code’s API. Regularly maintaining __all__ ensures that your code remains clean and usable. Finally, remember that __all__ only affects the wildcard imports. If a user of your code imports a specific object from a package or module, then that object will be imported even if you don’t have it listed in __all__.
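As a compact recap of the recommendations above, here is a small module skeleton that keeps __all__ focused on the public names; the module and function names are illustrative only, not taken from the article.

"""stats_helpers -- illustrative module name, not from the article above."""

__all__ = ["mean", "spread"]  # the public API: keep this list short and up-to-date

def mean(values):
    """Public: arithmetic mean of a sequence of numbers."""
    values = _validate(values)
    return sum(values) / len(values)

def spread(values):
    """Public: difference between the largest and smallest value."""
    values = _validate(values)
    return max(values) - min(values)

def _validate(values):
    """Non-public helper: the leading underscore keeps it out of wildcard imports."""
    if not values:
        raise ValueError("expected a non-empty sequence")
    return values

A wildcard import of this module brings in only mean and spread, while explicit imports (for example, from stats_helpers import _validate) still work for anyone who deliberately reaches past the public API.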
8585
dbpedia
3
21
http://indy.cs.concordia.ca/auto/
en
AUTO
[]
[]
[]
[ "" ]
null
[ "Pankaj Kamthan" ]
null
null
SOFTWARE FOR CONTINUATION AND BIFURCATION PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS
This is the Home Page of the AUTO Web Site, established in January 1996.
ANNOUNCEMENTS
[November 30, 2019] Version 0.8 of AUTO-07p is available at GitHub.
[January 1, 2011] Version 0.8 of AUTO-07p is available at SourceForge.
INTRODUCTION
AUTO is software for continuation and bifurcation problems in ordinary differential equations, originally developed by Eusebius Doedel, with subsequent major contributions by several people, including Alan Champneys, Fabio Dercole, Thomas Fairgrieve, Yuri Kuznetsov, Bart Oldeman, Randy Paffenroth, Bjorn Sandstede, Xianjun Wang, and Chenghai Zhang.
AUTO can do a limited bifurcation analysis of algebraic systems of the form f(u,p) = 0, with f, u in R^n, and of systems of ordinary differential equations of the form u'(t) = f(u(t),p), with f, u in R^n, subject to initial conditions, boundary conditions, and integral constraints. Here p denotes one or more parameters. AUTO can also do certain continuation and evolution computations for parabolic PDEs. It also includes the software HOMCONT for the bifurcation analysis of homoclinic orbits. AUTO is quite fast and can benefit from multiple processors; it is therefore applicable to rather large systems of differential equations. For further information and details, see the AUTO Documentation.
AUTO STATUS/EVOLUTION
The following table presents the historical development of AUTO in chronological order.
AUTO AVAILABILITY/DISTRIBUTION
The AUTO package is available for UNIX/Linux-based computers.
AUTO-07P
AUTO-07p is the successor to both AUTO97 and AUTO2000. It includes new plotting utilities, namely PyPlaut and Plaut04. It also contains many of the features of AUTO2000, including the Python CLUI, some parallelization, dynamic memory allocation, and the ability to use user equation files written in C. The overall performance has improved, especially for systems where the Jacobian matrix is sparse. AUTO-07p is written in Fortran. At least a Fortran 90 compiler is required to compile AUTO-07p. One such compiler is the freely downloadable GNU Fortran 95 compiler (gfortran). Gfortran ships with most current Linux distributions.
Distribution: Download at GitHub.
AUTO DOCUMENTATION
The AUTO distribution includes a copy of the AUTO Manual in LaTeX, PostScript, and Portable Document Format (PDF).
AUTO APPLICATIONS
AUTO has been used in many scientific and engineering applications. A sample of applications can be found by searching on the Web for "bifurcation software AUTO".
RELATED SOFTWARE
Other software directly or indirectly related to AUTO includes DSTool, PyDSTool, XPPAUT, Content, MatCont, and DDE-BifTool.
RELATED LECTURE NOTES
Lecture Notes on Numerical Analysis of Nonlinear Equations. By Eusebius Doedel. Last Modified: Spring 2010.
CONTACT/FEEDBACK
If you have any comments, questions, or suggestions, please let us know by mailing "doedel at cse dot concordia dot ca" with "Subject: AUTO Related." An enquiry should include your full name and affiliation.
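For readers who prefer the two problem classes above typeset, here is the same statement as a short LaTeX fragment; it is purely a restatement of the description, not an additional capability.

% The two problem classes AUTO handles, restated from the description above.
\begin{align*}
  f(u, p) &= 0,          & f,\, u &\in \mathbb{R}^{n}, \\
  u'(t)   &= f(u(t), p), & f,\, u &\in \mathbb{R}^{n},
\end{align*}
% subject to initial conditions, boundary conditions, and integral
% constraints, where $p$ denotes one or more parameters.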
8585
dbpedia
0
74
https://breardon.home.blog/author/bartreardon9494/
en
Bart Reardon
https://secure.gravatar.com/avatar/be643e615764f00dae9d2bf43ddec828?s=200&d=identicon&r=g
https://secure.gravatar.com/avatar/be643e615764f00dae9d2bf43ddec828?s=200&d=identicon&r=g
[ "https://breardon.home.blog/wp-content/uploads/2023/03/outset.png_128x128402x.png?w=256", "https://breardon.home.blog/wp-content/uploads/2023/03/image-7-3-2023-at-9.07-pm.jpg?w=526", "https://breardon.home.blog/wp-content/uploads/2022/05/screen-shot-2022-05-19-at-11.24.50-am.png?w=1024", "https://user-images.githubusercontent.com/3598965/152978464-1b602a68-da97-431a-8f79-1d899cb4fccb.png", "https://breardon.home.blog/wp-content/uploads/2021/11/screen-shot-2021-11-01-at-12.32.39-pm.png?w=1024", "https://breardon.home.blog/wp-content/uploads/2021/11/screen-shot-2021-11-01-at-1.21.57-pm.png?w=1024", "https://breardon.home.blog/wp-content/uploads/2021/11/screen-shot-2021-11-01-at-12.56.59-pm.png?w=1024", "https://breardon.home.blog/wp-content/uploads/2021/09/screen-shot-2021-09-21-at-10.44.56-pm.png?w=862", "https://breardon.home.blog/wp-content/uploads/2021/09/screen-shot-2021-09-21-at-11.21.15-pm.png?w=1024", "https://breardon.home.blog/wp-content/uploads/2021/03/image.png?w=720", "https://breardon.home.blog/wp-content/uploads/2021/03/image-1.png?w=720", "https://breardon.home.blog/wp-content/uploads/2021/01/screen-shot-2021-01-20-at-1.40.11-pm.png?w=555", "https://breardon.home.blog/wp-content/uploads/2021/01/screen-shot-2021-01-20-at-8.46.12-pm.png?w=956", "https://breardon.home.blog/wp-content/uploads/2021/01/screen-shot-2021-01-20-at-8.46.52-pm.png?w=935", "https://breardon.home.blog/wp-content/uploads/2020/05/screenshot-from-2020-05-04-22-08-38.png?w=786", "https://breardon.home.blog/wp-content/uploads/2020/05/screenshot-from-2020-05-04-22-22-55-1.png?w=786", "https://breardon.home.blog/wp-content/uploads/2019/10/image.png?w=361", "https://breardon.home.blog/wp-content/uploads/2019/10/screen-shot-2019-10-25-at-4.56.22-pm.png?w=931", "https://breardon.home.blog/wp-content/uploads/2019/10/image-2.png?w=197", "https://breardon.home.blog/wp-content/uploads/2019/10/screen-shot-2019-10-25-at-4.59.34-pm.png?w=1024", "https://breardon.home.blog/wp-content/uploads/2019/10/image-4.png?w=411", "https://breardon.home.blog/wp-content/uploads/2019/04/cropped-epss.png?w=50", "https://breardon.home.blog/wp-content/uploads/2019/04/cropped-epss.png?w=50", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[ "Bart Reardon" ]
2023-03-07T22:39:36+11:00
Read all of the posts by Bart Reardon on Stuff about things
en
https://breardon.home.bl…ed-epss.png?w=32
Stuff about things
http://breardonhome.wordpress.com
This script scripts and installs pkgs at boot and/or login. Joseph Chilcote 2014 So says the explanatory comment at the top of version v1.0.0 of outset, a python script written by Joseph Chilcote and released in February 2015 to assist mac admins with running login scripts and package installs. Since then there have been two major releases with v2 solidifying the fundamentals of how the logic works and v3 moving to python 3, anticipating what eventually came to pass in 2022 with the complete removal of python 2 from macOS. The last minor update was released in December 2020. Outset provides a solution to a problem that many in the mac admin community struggle with, and that is providing a simple scheduler for running boot and login scripts on device, without needing to know exactly how launchd works. Places scripts or packages into one of the processing directories and Outset takes care of running them and how often. It does however come with the dependency of python and with macOS no longer having a default python interpreter installed out of the box, it is up to the admin to install one of the many pythons available resulting in a form of “python roulette” in order to get things going. Outset 4 is my attempt at answering two different problems. Firstly making the utility standalone with no required dependencies. The second is to continue to provide admins an easy way to run login and boot scripts and manage how they appear as login items under macOS 13 and newer. So In January 2023 I ported the existing outset v3.0.3 to Swift and through the early days of March I’ve been adjusting the code and how things operate and am happy to be at the stage where beta 1 is ready to see the light of day. Along with this is the move (with Joseph’s blessing) to Mac Admins Open Source, which was announced recently as a community repository supported by the Mac Admins Foundation offering code signing and notarizarion for open source projects used, and developed, by mac admins, for mac admins. With the move to swift there are many changes in operation so if you’re used to outset you may want to read the notes accompanying recent releases and read the wiki to see what’s changed. I welcome any constructive feedback or feature requests, or if you are able to, please consider contributing to the project. You can reach me on @bartreardon or check out the #outset channel on the macadmins slack. I can also be found on mastodon at https://aus.social/@bartreardon. 🙂 This post is based off a thread in the macadmins slack that it appears no-one has blogged about yet. A heartfelt thanks for those that contributed ideas. As admins we are often asked to deploy a variety of agents or system extensions onto the computers and devices we administer. These can make sense and often required, for example an inventory service or to meet some security requirement. But sometimes we’re asked to deploy software that has conflicts or duplicated functionality with existing agents with little argument to prevent or at least have a discussion about the reasons or purpose. The classic case is being required to install two or more security products, each of which at best overlap in a large percentage of function or at worst, conflict and fight against each other in a battle of dominance over the device, reducing performance, battery life and the user experience. As admins, our job is to protect the production environment and user experience of the systems we manage. 
What follows is a compiled list (in no particular order) of questions and expectations from a number of authors in the macadmin community that could be used the next time some team or higher up wants you to deploy the Next Great Thing™ they saw on an airport banner ad or product brochure. Considerations before deployment What’s the plan for ensuring that the latest version of this is always available quickly for deployment? Access to vendor download sites or location where the latest supported version is available for packaging Can other agents be removed due to duplicated functionality as a result of installing this? Who is responsible for testing? Ideally the owner of the agent should be involved in testing directly new versions on the latest OSes before rollout to the entire fleet. For agent related issues, which team is responsible for actioning incidents and requests? What resources are being provided to support the product. What’s the impact to end user experience with having this installed? What user impact testing has been performed? Has anyone run this for an appropriate amount of time on a daily workstation? What configurations and exceptions have you come up with to ensure that performance sensitive users are not adversely impacted by its installation? What features does this agent bring that we do not already have on our endpoints? Has testing already been done with this agent in combination with all the other ones we have installed? What was the result of that testing and who was involved? Who in the requesting team is running macOS, Linux or Windows as their primary desktop and is going to be in the canary group? What compliance requirement are being met by the agent? Provide documentation What’s the history of this product when it comes to timely OS compatibility updates? Does the vendor respond quickly to OS releases or do they lag? By how long, days months years? To they build and test against beta OS releases? What technical capability does it add that nothing else does? What organisational bottleneck does this attempt to work around? Full configuration information for Group Policy/MDM profiles will need to be provided so that no manual actions are required on the part of users for installation or updates. Expectations after deployment If this breaks critical workflows during a crucial time, what’s the process for approval to remove this from devices? Approval or justification may be sought after agent removal. What’s the plan for rolling back when this is panicking/crashing machines? The team owning this new agent understands that if there is a new OS update and the vendor states that a newer minimum version is required for compatibility for this version of the OS that there is an expectation that this agent (and any associated infrastructure updates) will need to happen in a timely fashion and not be a blocker for rolling out new OS versions? It is understood that if the agent prevents installation of a new OS update which has high enough severity CVEs for too long, due to -other- compliance requirements for OS updates in a timely fashion, that this product will be removed until it’s compatible again with the newer OS version It is considered more important that CVEs get patched than having this new agent installed. It’s been 5 months since the last update. Dialog has learned a few more tricks in that time. 
As workflows are capable of getting a bit more complex now, I’ve started a new repository which is going to contain a collection of scripts and is called, unimaginatively enough, Dialog-scripts. The purpose is to provide a collection of workflows that can be freely copied, modified and used as a basis for other workflows, and hopefully serve as a jumping off point for some of the more adventurous things Dialog can do. One of the more recent ones lets you send scripted updates to modify content without re-launching. Combined with a list view, this lets you show a list of steps and update with ongoing progress. I’ve used this as a basis for the first entry to the Dialog-scripts repo that acts as a user-friendly visualisation of Scripting OS X’s awesome Installomator that takes a list of app labels, then steps through them one by one providing feedback as it goes. Feel free to copy, modify, update, provide feedback or suggestions. There will be more to come 🙂. Dialog is a feature rich open source utility app written in SwiftUI that is intended as a way to provide user notifications and interaction from shell scripts, similar to cocoadialog or jamfHelper. The latest release can be found on the Dialog GitHub page https://github.com/bartreardon/Dialog and also in the Jamf Marketplace If you’re like me you’re not a huge fan of writing detailed documentation. While not much can be helped with environment specifics, there is a great selection of free ebooks from Apple that covers most of the hardware, OS and application software. For the last couple of years I’ve been making Apple eBooks available on Jamf Self Service and scoping to particular devices, e.g. everyone with a MacBook Pro gets access to the MacBook Pro Essentials book. In Jamf Pro, go to to the “eBooks” section under “Content Management”, click the “+New” button and search. Choose “eBook available in the iBooks Store”. You can preview the book in the Books app as well to make sure you have the one you are after. Useful books that I’ve found (search for these by title or author – Results may differ depending on what store you select, but these are whats available in the Australian Apple Books store) Hardware “Essentials” series Great to include for users of that device type, especially if they are moving from one type of device to another or if they are first time Mac users. MacBook Pro Essentials MacBook Air Essentials iMac Essentials Mac Pro Essentials Mac Mini Essentials “User Guide” series from Apple Inc. The User Guide series covers non Mac hardware as well as Applications for each platform and how to use them iPhone iPad Keynote Pages Numbers Final Cut Pro Apple Inc. – Business The Employee Starter Guides are a good resource to help new users get up to speed with using Apple devices in an organisational setting. While these can’t go into detail about a specific environment, they do provide some more context that is only applicable in those environments, such as Automated Enrolment Employee Starter Guide for Mac Employee Starter Guide for iOS and iPadOS “Everyone can Create” series from Apple Education The “Everyone Can Create” series is a great resource that goes into creating music, photos and video as well as some teacher guides Author Names Apple have several Author names so it can be useful to search by author to restrict your results. Apple Inc. Apple Inc. – Business Apple Education Final Words This is by no means an exhaustive list. 
Over time the books are updated but rather than update the existing eBook, Apple may publish a new book, for example there will be more than one “MacBook Pro Essentials”. It’s usually easy to tell which one is the latest by the cover. How It Started A bit over six months ago I had one of those bursts of inspiration that only comes at the most inopportune of times. The basis of the idea was to make an app that would display a message to my mac users that looked nice and was customisable with text and images. SwiftUI was on the list of things I wanted to learn and so Dialog was born. Two days later on the 11th March 2021 the first beta version was released on GitHub as a binary and the following day I wrote a blog post announcing it. The source was released on the 20th after clearance from my employer. The initial version was fairly basic at the time but did what I wanted it to do. Displayed a window where I could control the title, message, button labels and include an image (or icon). It was one fixed size as I wasn’t up to speed with SwiftUI’s declarative layout style and making it do what I wanted, but it was a start and we all have to start somewhere. How it’s going Six months later and I have been nerd sniped with features and created an app that I think does about as much as anyone (a mac admin at least) could want in an app who’s task is to display a message to the current user. Apart from what it started with, Dialog now displays selectable dropdown lists, has text entry, fully customisable text colours and font sizes, support for SF Symbols as an icon, can display another icon as an overlay to the main icon, built in markdown support, can display arbitrary window sizes, has background images, full screen display mode, can timeout after a specified number of seconds (with a custom timeout indicator because why not), text justification, banner images, can provide json output and last but not least, supports a good chunk of the command line arguments from jamfHelper making it easy to drop in replace and not require an admin to re-write existing scripts (too much…it does it’s best and there are some options not currently supported). All in an app thats fully self contained and less than 2MB in size. Along the way I’ve learned more about SwiftUI than I thought I ever would and have used what I have learned in other projects, contributing to Nudge v1.1.1 with a refactor in how its layout is presented by SwiftUI and making future support for arbitrary window sizes easier to implement in that utility. As is evidenced from the progress in Dialog over the last 6 months, I’m more than open to feature suggestions and bug reports. I have no immediate plans to slow down development and with this first SwiftUI app ticking along nicely I have ideas for a few more apps and utilities. Dialog is also in the Jamf Marketplace where it has been getting a semi respectable level of attention in terms of generating traffic (If you are using dialog, please leave a review) and the latest release can be found on the Dialog GitHub page https://github.com/bartreardon/Dialog-public If you check it out, please send through feedback or join the #dialog channel on the macadmins slack. 🙂 Why a dialog app? Two reasons, the tool I have been using to this point was coming up short on some features I wanted to have, and I wanted an excuse to get my teeth stuck into a SwiftUI project. 
Also, I wanted to finally get around to releasing a tool for the mac admin community that is hopefully useful (I have a number that are useful, just not released) 3 days later…Meet Dialog. The app has a simple premise, show a dialog with a title, some text, one or two buttons and optionally an image. I took inspiration in the UNIX philosophy of having one tool do one job. Everything Ic can do can be specified on the command line and the app will return different exit codes depending on what action the user takes. At its most basic, it looks like this: Which is pretty boring, fortunately you can customise all the options. The three actions a user can take are: 1 – Press the [OK] button (or hit <Enter> on their keyboard) 2 – Press the [Cancel] button (or hit <ESC>) 3 – Press the [More Information] button. Each action is closes the app with exit code of 0, 2 or 3. The [OK] and [More Information] buttons can optionally also redirect to a URL. The Title and Message areas can be specified of course. The icon image can be either a jpg or png from a file or a URL. If the image is square (e.g. a jpg) the corners are slightly rounded for a more pleasing look. Put that together you can make something interesting: Dialog is only supported on macOS 11 (universal) at this point in time. To check out a pre-release version, please visit: https://github.com/bartreardon/Dialog-public/blob/main/README.md Source code will be made available soon. <- Edit: Source is now available as of 20-03-2021 …your laptop battery and make you fans spin hard if you don’t jump when the updater shows up. Recognise that dialog? – Many apps pop up the same type, Lock icon with the App icon overlaid and the text “An update is ready to install. <AppName> is trying to add a new helper tool” Well if you’re lazy and don’t get to it, or it pops up when you’re not at your computer, you may find that the fans in your Mac are making noises. A quick check of Activity Monitor confirm that the app in question is taking up a good chunk of your CPU, but doing what exactly? Spinning wheels it turns out calling the same method repeatedly as long as the dialog is waiting to be dismissed. https://soundmacguy.wordpress.com/2018/02/25/microsoft-skype-definitely-and-teams-maybe-disabling-automatic-updates/ (thank you to Nathaniel Strauss over on the macadmins slack for pointing this out) Squirrel.framework is a common update framework unsed in Electron based apps like Atom, Discord, Microsoft Teams, Visual Studio Code and Signal among others. And it has a bug https://github.com/Squirrel/Squirrel.Mac/issues/247 Not so encouraging though… So, if you were like me and were wondering why – now you know. If you know anyone that works on Electron apps for Mac and has patched this issue, can you send it upstream please? 👍 As was announced at WWDC 2020, Apple will be releasing Macs later this year running on Apple Silicon based on the ARM64 architecture. This transition will hopefully have us running universal applications but also possibly forced to run some intel only apps transcoded through Rosetta 2, depending on vendor support. As a mac admin it might be handy to know how to discover what applications on your systems don’t have a version compiled for ARM (or intel 64 bit for that matter). 
This (very) simple script will go through all applications system_profiler knows about and report if the application binaries have no match for the current system architecture: #!/bin/bash IFS=$'\n' systemarch=$(uname -m) for apppath in $(system_profiler SPApplicationsDataType | grep 'Location:' | awk -F ": " '{print $NF}'); do apparch=$(mdls -raw -name kMDItemExecutableArchitectures "${apppath}") echo ${apparch} | grep -q ${systemarch} if [[ $? -ne 0 ]]; then echo "${apppath} has no native binary for ${systemarch}" echo "${apparch}" fi done uname -m tells us the current system architecture; mdls -raw -name kMDItemExecutableArchitectures /some/file.app tells us the architecture(s) that the app is compiled for, which could be ppc, i386, x86_64 or arm64. Sample output from the future might look something like this: /Applications/dosbox.app has no native binary for arm64 ( ppc, i386, "x86_64" ) You could use this as a basis for your own scripts, or perhaps instead check for apps that do match the host architecture or have multiple architectures (aka universal binaries). Happy scripting. macOS presents a wonderful graphical UI but there are times when we log on, copy files or perform other work via the command line over SSH. The default experience for generic logins is rather plain but we can jazz it up a little. The first option is to use the file /etc/motd (or Message Of The Day). This is simply a text file and whatever is in it will be displayed after login. It’s a bit of a misnomer as, without anything else, it is a static, unchanging file rather than something inherently “daily”, but with a bit of imagination one can embellish and update the file on a regular basis. For example: #!/bin/sh echo "You have logged on to $(hostname)" > /etc/motd echo "Today's date is $(date +"%d %b %Y")" >> /etc/motd echo "The weather for today is: $(curl -s 'wttr.in?format=3')" >> /etc/motd (A fairly simple script but it will do as a demonstration). Save this file to an appropriate location such as “/usr/local/motd_updater.sh“. Set permissions with “sudo chmod 744 /usr/local/motd_updater.sh“ Create a LaunchDaemon to run it once a day or whatever schedule you prefer – an example here – place it in /Library/LaunchDaemons/ and load it with sudo launchctl load /Library/LaunchDaemons/com.motd_update.plist (or whatever name you gave it) Now, whenever someone connects via ssh they will be greeted with a message that will update daily. This is great, but a user won’t see the message until AFTER login. What if you want to display a message BEFORE login, such as an Acceptable Use Policy? Fortunately that’s easy enough to do as well, as SSHD has a method for displaying a banner on connection. Much like MOTD, the banner is just a plain text file and you can tell SSHD to display it prior to prompting for a password. For a basic banner, edit the file “/etc/banner” and copy in the following: ----------------------- Conditions of Use WARNING: Your access to the system must be authorised. Unauthorised access may be prosecuted. By accessing this system, you agree that: - your actions may be monitored - you must abide by the MY ORG Code of Conduct and any Acceptable Use Policy ---------------------- edit the file “/etc/ssh/sshd_config”, find the line “#Banner /etc/banner“, uncomment it and save, then run “sudo launchctl kickstart -k system/com.openssh.sshd” to restart sshd. Now when someone logs in via SSH, they will see the text displayed in /etc/banner prior to entering their password.
Very handy for presenting things like acceptable use policies: As with /etc/motd, you can modify /etc/banner as you wish with a script or the like to keep it up to date as details change, even splash a bit of colour around with ANSI escape codes. Have Fun. If your org is one of the many that uses JIRA internally for tracking workflow or projects then you’ll know about issue collectors. I’ve used issue collectors before in munki and my post on that is still available here https://groups.google.com/d/msg/munki-dev/PwvrYaqKxGc/97w7G-USFwUJ For Jamf Self Service it needs a little bit of extra work as, unlike Managed Software Centre, there’s not a lot of interface modification you can do. That said, it’s still fairly simple to set up. Step 1 – create your issue collector in JIRA – consult your JIRA docco on how to do that, but the simplest is to set the “Prominent” trigger style and then pick a template. I went with “Custom” and selected the “Description” and “Attach File” custom fields. Add an appropriate trigger text and message. Step 2 – create a page to add the generated issue collector code to. Here’s a template I created earlier: <html> <head> </head> <body> </body> </html> Save it as feedback.html and place it on a web server somewhere. Step 3 – We aren’t done yet (believe it or not). We need to add in our issue collector code. Go grab it from the issue collector settings in JIRA and paste it in between the <body> and </body> tags. All going well, when you re-load your html file you should have a blank page with a JIRA feedback link at the top like this: That’s not super ideal though – we want to see the form straight away. A simple way to do this (because JIRA is weird and doesn’t let you easily just create a blank issue collector page) is to create an onload event and have a snippet of JS click the “Provide Feedback” link for us. Step 4 – Copy the following into the body of your ever-growing html file: <script> window.addEventListener('load', function() { document.getElementById('atlwdg-trigger').click(); }) </script> Now if you re-load your page it should pop up the issue collector straight away (bonus points if you have SSO enabled and it picks up who you are straight away – otherwise have a play around with creating an anonymous issue collector): Step 5 – Now jump onto your JSS and create a new bookmark and link it to the URL of the feedback html (icon shamelessly ripped off from the Apple Feedback Assistant) Boom. Optional – I also created a background image to display on the page so when someone submits their feedback and the form disappears, they see a happy message 😃
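The exit-code convention described earlier (0 for [OK], 2 for [Cancel], 3 for [More Information]) makes Dialog easy to drive from a management script. Below is a minimal Python sketch; it assumes the dialog binary is installed at /usr/local/bin/dialog and uses option names from around the time of these posts, which may differ in current releases:

```python
#!/usr/bin/env python3
"""Minimal sketch: drive Dialog from an admin script and branch on the exit code.
Assumes the dialog binary lives at /usr/local/bin/dialog; adjust for your install."""
import subprocess

DIALOG = "/usr/local/bin/dialog"  # assumption: path to the Dialog command line binary

result = subprocess.run(
    [
        DIALOG,
        "--title", "Updates available",
        "--message", "Updates will install in 10 minutes. OK to start now?",
        "--button1text", "OK",
        "--button2text", "Cancel",
    ],
    check=False,  # non-zero exit codes are expected user choices, not errors
)

# Per the post: 0 = OK, 2 = Cancel, 3 = More Information
if result.returncode == 0:
    print("User accepted - start the update now")
elif result.returncode == 2:
    print("User cancelled - defer the update")
elif result.returncode == 3:
    print("User asked for more information")
else:
    print(f"Unexpected exit code: {result.returncode}")
```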
8585
dbpedia
0
8
https://derflounder.wordpress.com/2020/07/31/pkgsigner-autopkg-processor-updated-for-python-3/
en
PkgSigner AutoPkg processor updated for Python 3
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://1.gravatar.com/avatar/d678374fabfd2ce5e42a8d2ee219c878fe28d4d27ba3bdfe0905bcdd49a78f9f?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9a6eb242728c9344e6078f49f7297e7bbe7b5c5af0b3f99952f35686499ef79c?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9851bc7e13a6a30c801e72cd65e1fcc49818a778abfbfc923093a7ae8d60564a?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/d01b71732017a03705b60dcd6ba6669a9b5148633fa12b8ae7531c3143604cc9?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/da3a0520ed1bfc83e1f3baa3c3947cf7f0ebb511790f996d7eabad8310adcdb1?s=48&d=identicon&r=G", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2020-07-31T00:00:00
A while back, I discussed how to incorporate installer package signing into AutoPkg workflows. The PkgSigner processor used in this workflow was originally written by Paul Suh and it uses Apple’s productsign tool to access a Developer ID Installer certificate stored in the login keychain. Like other processors and AutoPkg itself, PkgSigner needed updating to Python…
en
https://s1.wp.com/i/favicon.ico
Der Flounder
https://derflounder.wordpress.com/2020/07/31/pkgsigner-autopkg-processor-updated-for-python-3/
A while back, I discussed how to incorporate installer package signing into AutoPkg workflows. The PkgSigner processor used in this workflow was originally written by Paul Suh and it uses Apple’s productsign tool to access a Developer ID Installer certificate stored in the login keychain. Like other processors and AutoPkg itself, PkgSigner needed updating to Python 3 when Python 2 reached end-of-life in April 2020. This updating process has been completed, thanks to Nick McDonald. To make sure PkgSigner is consistently using the same Python environment across machines, PkgSigner has also been set to use the Python 3 install bundled with AutoPkg. For those who need it, I have a copy of the PkgSigner processor available via the link below:
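As a rough illustration of what the signing step amounts to, the sketch below shells out to Apple's productsign the way a post-build step might. The paths and certificate name are placeholders, the shebang simply reuses AutoPkg's bundled Python as described above, and this is not the actual PkgSigner processor code:

```python
#!/usr/local/autopkg/python
"""Rough sketch of the productsign call a package-signing step performs.
Placeholder paths and identity name; not the actual PkgSigner processor."""
import subprocess

unsigned_pkg = "/tmp/MyApp.pkg"                     # placeholder input package
signed_pkg = "/tmp/MyApp-signed.pkg"                # placeholder output package
identity = "Developer ID Installer: Example Corp"   # placeholder signing identity

# productsign reads the identity from the keychain of the user running the build
subprocess.run(
    ["/usr/bin/productsign", "--sign", identity, unsigned_pkg, signed_pkg],
    check=True,
)
print(f"Signed package written to {signed_pkg}")
```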
8585
dbpedia
2
16
https://almenscorner.io/tag/python/
en
almen's Intune corner
https://almenscorner.io/…om-457@0@f-1.jpg
https://almenscorner.io/…om-457@0@f-1.jpg
[ "https://almenscorner.io/content/images/size/w300/2023/08/almenscorner-1.png", "https://almenscorner.io/assets/images/tag-bg.svg?v=41ed630341", "https://almenscorner.io/content/images/size/w600/2024/03/_59028bba-9bae-49d3-ac9c-91d39df16767.jpeg", "https://almenscorner.io/content/images/size/w600/2022/09/mmglogo.png", "https://almenscorner.io/content/images/size/w600/2022/01/logicapp.png", "https://almenscorner.io/content/images/size/w600/2021/12/Screenshot-2021-12-14-at-16.00.40.png", "https://almenscorner.io/content/images/size/w600/2021/11/Screenshot-2021-11-17-at-21.26.19.png", "https://almenscorner.io/content/images/size/w600/2021/10/autopkg-1.png", "https://almenscorner.io/content/images/size/w600/2021/09/Screenshot-2021-09-24-at-15.18.44.png" ]
[]
[]
[ "" ]
null
[]
2024-04-02T00:00:00
en
https://almenscorner.io/…almenscorner.png
almen's Intune corner
https://almenscorner.io/tag/python/
8585
dbpedia
1
14
https://checkmarx.com/blog/automatic-execution-of-code-upon-package-download-on-python-package-manager/
en
Automatic Execution of Code Upon Package Download on Python Package Manager
https://checkmarx.com/wp…663349748796.png
https://checkmarx.com/wp…663349748796.png
[ "https://px.ads.linkedin.com/collect/?pid=6477&fmt=gif", "https://checkmarx.com/wp-content/uploads/2024/01/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/uploads/2024/05/CXone.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SAST.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SCA.svg", "https://checkmarx.com/wp-content/uploads/2024/05/AI.svg", "https://checkmarx.com/wp-content/uploads/2024/05/API-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/ASPM-icon.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Codebashing.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Container-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DAST.svg", "https://checkmarx.com/wp-content/uploads/2024/05/IaC-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SBOM.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SSCS.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Code-to-Cloud.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DevEx.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DigTrans.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Component-35.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search-mob.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search-mob.svg", "https://checkmarx.com/wp-content/uploads/2024/06/avatar_66.jpg", "https://checkmarx.com/wp-content/uploads/2022/08/Blog_python_automatic-execution.jpg", "https://checkmarx.com/wp-content/uploads/2022/08/carbon-990x1024-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/carbon-990x1024-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Group-2433-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Group-2433-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Picture1-2-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Picture1-2-1.png", "https://checkmarx.com/wp-content/uploads/2024/01/icon-x.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-x.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-yb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-yb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-ln.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-ln.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-fb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-fb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-citi-2.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-citi-2.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cisco-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cisco-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-accenture-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-accenture-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-wipro-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-wipro-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2021-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2021-1.svg", 
"https://checkmarx.com/wp-content/uploads/2024/01/logo-gartner-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-gartner-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2022-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2022-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-dev-insider-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-dev-insider-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo.svg" ]
[]
[]
[ "" ]
null
[ "Yehuda Gelb" ]
2022-08-26T10:00:00+00:00
A worrying feature in pip/PyPi allows code to automatically run when developers are merely downloading a package. Also, this feature is alarming due to the fact that a great deal of the malicious packages we are finding in the wild use this feature of code execution upon installation to achieve higher infection rates.
en
https://checkmarx.com/wp…vicon-32x32.webp
Checkmarx
https://checkmarx.com/blog/automatic-execution-of-code-upon-package-download-on-python-package-manager/
Automatic code execution is triggered upon downloading approximately one third of the packages on PyPi. A worrying feature in pip/PyPi allows code to automatically run when developers are merely downloading a package. Also, this feature is alarming due to the fact that a great deal of the malicious packages we are finding in the wild use this feature of code execution upon installation to achieve higher infection rates. It is important that python developers understand that package downloading can expose them to an increased risk of a supply chain attack. Intro When executing the well-known “pip install <package_name>” command, users may expect code to be run on their machine as part of the installation process. One source of such code usually resides in the setup.py file of python packages. When a python package is installed, pip, python’s package manager, tries to collect and process the metadata of this package, such as its version and the dependencies it needs in order to work properly. This process occurs automatically in the background by pip running the main setup.py script that comes as part of the package structure. setup.py example The purpose of setup.py is to provide a data structure for the package manager to understand how to handle the package. However, the setup.py file is still a regular python script that can contain any code the developer of the package would like. An attacker who understands this process can plant malicious code in the setup.py file, which would then execute automatically during the package’s installation. In fact, much of the malicious packages we are detecting contain malicious code in the setup.py file. What if we just download the package rather than install it? In addition to the “install” command, pip provides several more options, among them is the “download” command. This command is intended to allow users to download packages’ files without the need to install them. There could be various reasons someone would need this. For example, a developer may want to look into the package’s code before using it. A user may want or need to perform a security check, or perhaps even observe the setup.py file for any anomalies. As it turns out, executing the command “Pip download <package_name>” will run the setup.py file, as well as any potentially malicious code contained within it. It may surprise you, but this behavior is not a bug but rather a feature in the pip design. Users who intentionally only download a package do not expect code to run on their system automatically. As a matter of fact, this concern was expressed in an issue from 2014 on the pypa project https://github.com/pypa/pip/issues/1884, yet it was not addressed, and the issue continues to exist to this day. The .whl file type Python wheels are essentially .whl files that are part of the Python ecosystem and bring various performance benefits to the package installation process. But that is not the only thing that wheels bring to the table. In the past, when python code was built into a package, the result would be a tar.gz file that would then be published to the PyPi platform. tar.gz files include the setup.py file which is run upon download and installation. But suppose you’ve recently tried downloading or installing a Python package using pip. In that case, you may have noticed Python supplying you with a .whl file. 
The reason for this is that when developers build a python package using, for example, the “python -m build” command, newer versions of the build tooling automatically try to create a secondary .whl file in addition to the tar.gz file, both of which are then published together to the Python package index. When a user downloads or installs this package, PIP will by default deliver the .whl file to the user’s machine. The way wheels work cuts the setup.py execution out of the equation. Why is the setup.py still relevant? Even though pip defaults to using wheels instead of tar.gz files, malicious actors can still intentionally publish python packages without a .whl file. When a user downloads a python package from PyPi, pip will preferentially use the .whl file, but will fall back to the tar.gz file if the .whl file is lacking. Is there anything you can do about this? Currently, there are actions users can take to prevent automatic execution upon package download. One action is checking the package file contents at https://pypi.org/project/<package>/#files and observing if a .whl file is present. If there is a .whl file, the user can feel confident they will receive the .whl file, and no code will be executed on their machine. If there is only a tar.gz present, a user can use a safe method of download such as working directly with PyPi’s “simple” API: https://pypi.org/simple/<package-name>/. For example, when using the package listed above, prp1, a user can download it from the following link https://pypi.org/simple/prp1/. Conclusion Code execution upon installation is one of the features attackers use the most in open-source attacks. Developers opting to download, instead of installing, packages are reasonably expecting that no code will run on the machine upon downloading the files. However, PyPi includes a feature allowing just that: code execution on the user’s machine when all that was requested was a file download. It is possible to protect yourselves from suspicious packages by following the steps detailed above. As always, we are releasing similar blogs to help keep the open source ecosystem safe and raise awareness of this issue among python developers so they can avoid unwanted consequences.
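To make the mechanism concrete, here is a deliberately benign sketch of a setup.py whose module-level code runs whenever pip processes the sdist, i.e. on install and, as described above, on a plain pip download when no wheel was published. The package name is hypothetical:

```python
# setup.py of a hypothetical package "demo-pkg" - not malicious, purely illustrative.
# Any module-level code here runs when pip processes the sdist: on install,
# and also on `pip download demo-pkg` if no .whl file is available.
from setuptools import setup

print("This line executes while pip is collecting the sdist's metadata")

setup(
    name="demo-pkg",
    version="0.0.1",
    packages=[],
)
```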
8585
dbpedia
1
1
https://stackoverflow.com/questions/46419607/how-to-automatically-install-required-packages-from-a-python-script-as-necessary
en
How to automatically install required packages from a Python script as necessary?
https://cdn.sstatic.net/…g?v=73d79a89bded
https://cdn.sstatic.net/…g?v=73d79a89bded
[ "https://i.sstatic.net/d4HTC.jpg?s=64", "https://lh3.googleusercontent.com/-NzgIGIiK2VM/AAAAAAAAAAI/AAAAAAAAEW4/2cUX2QuzpAw/photo.jpg?sz=64", "https://i.sstatic.net/Z99mk.jpg?s=64", "https://i.sstatic.net/czF1r.gif?s=64", "https://i.sstatic.net/i7iLl.jpg?s=64", "https://i.sstatic.net/6JFOF.png?s=64", "https://i.sstatic.net/WWXSU.png?s=64", "https://i.sstatic.net/1reMo.jpg?s=64", "https://stackoverflow.com/posts/46419607/ivc/3f38?prg=e08b9898-bfc6-47fd-a404-d6b845fbe997" ]
[]
[]
[ "" ]
null
[]
2017-09-26T06:52:21
Is there anything in Python or Linux that basically instructs the system to "install whatever is necessary"? Basically I find it annoying to install python packages for each new script/sy...
en
https://cdn.sstatic.net/Sites/stackoverflow/Img/favicon.ico?v=ec617d715196
Stack Overflow
https://stackoverflow.com/questions/46419607/how-to-automatically-install-required-packages-from-a-python-script-as-necessary
Let's assume that your Python script is example.py: import os import time import sys import fnmatch import requests import urllib.request from bs4 import BeautifulSoup from multiprocessing.dummy import Pool as ThreadPool print('test') You can use pipreqs to automatically generate a requirements.txt file based on the import statements that the Python script(s) contain. To use pipreqs, assuming that you are in the directory where example.py is located: pip install pipreqs pipreqs . It will generate the following requirements.txt file: requests==2.23.0 beautifulsoup4==4.9.1 which you can install with: pip install -r requirements.txt You can use setuptools to install dependencies automatically when you install your custom project on a new machine. A requirements file works just fine if all you want to do is to install a few PyPI packages. Here is a nice comparison between the two. From the same link you can see that if your project has two dependent packages A and B, all you have to include in your setup.py file is a line install_requires=[ 'A', 'B' ] Of course, setuptools can do much more. You can include setups for external libraries (say C files), non-PyPI dependencies, etc. The documentation gives a detailed overview on installing dependencies. There is also a really good tutorial on getting started with python packaging. From their example, a typical setup.py file would look like this. from setuptools import setup setup(name='funniest', version='0.1', description='The funniest joke in the world', url='http://github.com/storborg/funniest', author='Flying Circus', author_email='[email protected]', license='MIT', packages=['funniest'], install_requires=[ 'markdown', ], zip_safe=False) In conclusion, it is so simple to get started with setuptools. This package can make it fairly easy to migrate your code to a new machine. Automatic requirements.txt updating approach I'm not really sure about auto installing what is necessary, but if you settle on using requirements.txt, there are 3 approaches: Generate requirements.txt after development, when we want to deploy it. It is performed by pip freeze > requirements.txt or pipreqs for a less messy result. Add every module to requirements.txt manually after each install. Install a manager that will handle requirements.txt updates for us. There are many answers for the 1st option on Stack Overflow, the 2nd option is self-explanatory, so I would like to describe the 3rd approach. There is a library called to-requirements.txt. To install it type this: pip install to-requirements.txt # Pip install to requirements.txt If you read the whole command at once you will see what it does. After installing, you should set it up. Run: requirements-txt setup It overrides the pip scripts so that each pip install or pip uninstall updates the requirements.txt file of your project automatically with required versions of packages. The overriding is made safely, so that after uninstalling this package pip will behave as usual. And you could customize the way it works. For example, disable it globally and activate it only for the required directories, activate it only for git repositories, or allow / disallow creating the requirements.txt file if it does not exist. Links: Documentation - https://requirements-txt.readthedocs.io/en/latest/ GitHub - https://github.com/VoIlAlex/requirements-txt PyPI - https://pypi.org/project/to-requirements.txt/
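Beyond requirements.txt tooling, some scripts check for their own dependencies at runtime and install whatever is missing before importing it. A minimal sketch of that pattern follows; it is generally discouraged for production code, since it silently modifies the environment the script runs in:

```python
"""Sketch: install a missing dependency at runtime before importing it.
Convenient for throwaway scripts, but production code should declare its
dependencies in requirements.txt or setup.py instead."""
import importlib
import subprocess
import sys


def ensure(package, module_name=None):
    """Import module_name (defaults to package), installing the package via pip if needed."""
    module_name = module_name or package
    try:
        return importlib.import_module(module_name)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return importlib.import_module(module_name)


requests = ensure("requests")
bs4 = ensure("beautifulsoup4", "bs4")  # package name differs from import name
print(requests.__version__, bs4.__name__)
```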
8585
dbpedia
0
19
https://scriptingosx.com/2020/02/wrangling-pythons/
en
Wrangling Pythons
https://i0.wp.com/script…=800%2C557&ssl=1
https://i0.wp.com/script…=800%2C557&ssl=1
[ "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2019/11/cropped-NewShebang-1.png?fit=248%2C248&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-Perseus.jpg?resize=800%2C510&ssl=1", "https://i0.wp.com/scriptingosx.com/wp-content/uploads/2020/02/WranglingPython-InstallDevToolsDialog.png?w=660&ssl=1" ]
[]
[]
[ "" ]
null
[ "Author ab" ]
2020-02-11T13:47:34+00:00
As I noted in my last Weekly News Summary, several open source projects for MacAdmins have completed their transition to Python 3. AutoPkg, JSSImport and outset announced Python 3 compatible versions last week and Munki already had the first Python 3 version last December. Why? Apple has included a version of Python 2 with Mac…
en
https://i0.wp.com/script…it=32%2C32&ssl=1
Scripting OS X
https://scriptingosx.com/2020/02/wrangling-pythons/
As I noted in my last Weekly News Summary, several open source projects for MacAdmins have completed their transition to Python 3. AutoPkg, JSSImport and outset announced Python 3 compatible versions last week and Munki already had the first Python 3 version last December. Why? Apple has included a version of Python 2 with Mac OS X since 10.2 (Jaguar). Python 3.0 was released in 2008 and it was not fully backwards compatible with Python 2. For this reason, Python 2 was maintained and updated alongside Python 3 for a long time. Python 2 was finally sunset on January 1, 2020. Nevertheless, presumably because of the compatibility issues, Apple has always pre-installed Python 2 with macOS and still does so in macOS 10.15 Catalina. With the announcement of Catalina, Apple also announced that in a “future version of macOS” there will be no pre-installed Python of any version. Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. Future versions of macOS won’t include scripting language runtimes by default, and might require you to install additional packages. If your software depends on scripting languages, it’s recommended that you bundle the runtime within the app. (macOS 10.15 Catalina Release Notes) This also applies to Perl and Ruby runtimes and other libraries. I will be focussing on Python because it is used more commonly for MacAdmin tools, but most of this post will apply equally to Perl and Ruby. Just mentally replace “Python” for your preferred language. The final recommendation is what AutoPkg and Munki are following: they are bundling their own Python runtime. How to get Python There is a second bullet in the Catalina release notes, though: Use of Python 2.7 isn’t recommended as this version is included in macOS for compatibility with legacy software. Future versions of macOS won’t include Python 2.7. Instead, it’s recommended that you run python3 from within Terminal. (51097165) This is great, right? Apple says there is a built-in Python 3! And it’s pre-installed? Just move all your scripts to Python 3 and you’ll be fine! Unfortunately, not quite. The python3 binary does exist on a ‘clean’ macOS, but it is only a stub tool, that will prompt a user to download and install the Command Line Developer Tools (aka “Developer Command Line Tools” or “Command Line Tools for Xcode”). This is common for many tools that Apple considers to be of little interest to ‘normal,’ non-developer users. Another common example is git. When you install Xcode, you will also get all the Command Line Developer Tools, including python3 and git. This is useful for developers, who may want to use Python scripts for build operation, or for individuals who just want to ‘play around’ or experiment with Python locally. For MacAdmins, it adds the extra burden of installing and maintaining either the Command Line Developer Tools or the full Xcode install. Python Versions, a multitude of Snakes After installing Xcode or the Command Line Developer Tools, you can check the version of python installed: (versions on macOS 10.15.3 with Xcode 11.3.1) > python --version Python 2.7.16 > python3 --version Python 3.7.3 When you go on the download page for Python.org, you will get Python 3.8.1 (as of this writing). But, on that download page, you will also find download links for “specific versions” which include (as of this writing) versions 3.8.1, 3.7.6, 3.6.10, 3.5.9, and the deprecated 2.7.17. 
The thing is, that Python isn’t merely split into two major release versions, which aren’t fully compatible with each other, but there are several minor versions of Python 3, which aren’t fully compatible with each other, but are still being maintained in parallel. Developers (individuals, teams, and organisations) that use Python will often hold on to a specific minor (and sometimes even patch) version for a project to avoid issues and bugs that might appear when changing the run-time. When you install the latest version of Munki, it will install a copy of the Python framework in /usr/local/munki/ and create a symbolic link to that python binary at /usr/local/munki/python. You can check its version as well: % /usr/local/munki/python --version Python 3.7.4 All the Python code files for Munki will have a shebang (the first line in the code file) of #!/usr/local/munki/python This ensures that Munki code files use this particular instance of Python and no other copy of Python that may have been installed on the system. The latest version of AutoPkg has a similar approach: > /usr/local/autopkg/python --version Python 3.7.5 In both cases the python binary is a symbolic link. This allows the developer to change the symbolic link to point to a different Python framework. The shebangs in the all the code files point to the symbolic link, which can be changed to point to a different Python framework. This is useful for testing and debugging. Could MacAdmins use this to point both tools to the same Python framework? Should they? The Bridge to macOS On top of all these different versions of Python itself, many scripts, apps, and tools written in Python rely on ‘Python modules.’ These are libraries (or frameworks) of code for a certain task, that can be downloaded and included with a Python installation to extend the functionality of Python. The most relevant of these modules for MacAdmins is the “Python Objective-C Bridge.” This module allows Python code to access and use the native macOS Cocoa and CoreFoundation Frameworks. This not only allows for macOS native GUI applications to be written in Python (e.g. AutoDMG and Munki’s Managed Software Center [update: MSC was re-written in Swift last year]), but also allows short scripts to access system functions. This is sometimes necessary to get a data that matches what macOS applications “see” rather than what the raw unix tools see. For example, the defaults tool can be used to read the value of property lists on disk. But those might not necessarily reflect the actual preference value an application sees, because that value might be controlled by a different plist file or configuration profile. (Shameless self-promotion) Learn more about Property lists, Preferences and Profiles You could build a tool with Swift or Objective-C that uses the proper frameworks to get the “real” preference value. Or you can use Python with the Objective-C bridge: #!/usr/bin/python from Foundation import CFPreferencesCopyAppValue print CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver") Three simple lines of Python code. This will work with the pre-installed Python 2.7, because Apple also pre-installs the Python Objective-C bridge with that. When you try this with the Developer Tools python3 you get an error: ModuleNotFoundError: No module named 'Foundation' This is because the Developer Tools do not include the Objective-C bridge in the installation. 
You could easily add it with: > sudo python3 -m pip install pyobjc But again, while this command is “easy” enough for a single user on a single Mac, it is just the beginning of a Minoan labyrinth of management troubles. Developers and MacAdmins, have to care about the version of the Python they install, as well as the list of modules and their versions, for each Python version. It is as if the Medusa head kept growing more smaller snakes for every snake you cut off. (Ok, I will ease off with Greek mythology metaphors.) You can get a list of modules included with the AutoPkg and the Munki project with: > /usr/local/munki/python -m pip list > /usr/local/autopkg/python -m pip list You will see that not only do Munki and AutoPkg include different versions of Python, but also a different list of modules. While Munki and AutoPkg share many modules, their versions might still differ. Snake Herding Solutions Apple’s advice in the Catalina Release Notes is good advice: It’s recommended that you bundle the runtime within the app. Rather than the MacAdmin managing a single version of Python and all the modules for every possible solution, each tool or application should provide its own copy of Python and its required modules. If you want to build your own Python bundle installer, you can use this script from Greg Neagle. This might seem wasteful. A full Python 3 Framework uses about 80MB of disk space, plus some extra for the modules. But it is the safest way to ensure that the tool or application gets the correct version of Python and all the modules. Anything else will quickly turn into a management nightmare. This is the approach that Munki and AutoPkg have chosen. But what about smaller, single script solutions? For example simple Python scripts like quickpkg or prefs-tool? Should I bundle my own Python framework with quickpkg or prefs-tool? I think that would be overkill and I am not planning to do that. I think the solution that Joseph Chilcote chose for the outset tool is a better approach for less complex Python scripts. In this case, the project is written to run with Python 3 and generic enough to not require a specific version or extra modules. An admin who wants to use this script or tool, can change the shebang (the first line in the script) to point to either the Developer Tool python3, the python3 from the standard Python 3 installer or a custom Python version, such as the Munki python. A MacAdmin would have to ensure that the python binary in the shebang is present on the Mac when the tool runs. You can also choose to provide your organization’s own copy Python with your chosen set of modules for all your management Python scripts and automations. You could build this with the relocatable Python tool and place it in a well-known location the clients. When updates for the Python run-time or modules are required, you can build and push them with your management system. (Thanks to Nathaniel Strauss for pointing out this needed clarifying.) When you build such scripts and tools, it is important to document which Python versions (and module versions) you have tested the tool with. (I still have to do that for my Python tools.) What about /usr/bin/env python? The env command will determine the path to the python binary in the current environment. (i.e. using the current PATH) This is useful when the script has to run in various environments where the location of the python binary is unknown. 
This is useful when developers want to use the same script in different environments across different computers, user accounts, and platforms. However, this renders the actual version of python that will interpret the script completely unpredictable. Not only is it impossible to predict which version of Python will interpret a script, but you cannot depend on any modules being installed (or their versions) either. For MacAdmin management scripts and tools, tighter control is necessary. You should use fixed, absolute paths in the shebang. Conclusion Managing Python runtimes might seem like a hopeless Sisyphean task. I believe Apple made the right choice not to pre-install Python any more. Whatever version and pre-selection of module versions Apple would have chosen, it would only have been the correct combination for a few Python solutions and developers. While it may seem wasteful to have a multitude of copies of the Python frameworks distributed throughout the system, it is the easiest and most manageable solution to ensure that each tool or application works with the expected combination of run-time and modules.
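For reference, the CFPreferencesCopyAppValue example shown earlier, rewritten for Python 3 with a fixed, absolute shebang. The shebang here points at Munki's bundled interpreter (which ships with the Objective-C bridge) purely as an example; it should point at whichever managed Python with PyObjC you actually deploy:

```python
#!/usr/local/munki/python
# Python 3 version of the screensaver idle-time example.
# Requires a Python that has the PyObjC bridge installed; the shebang is an
# example only and should match the Python you manage on your fleet.
from Foundation import CFPreferencesCopyAppValue

value = CFPreferencesCopyAppValue("idleTime", "com.apple.screensaver")
print(value)
```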
8585
dbpedia
0
81
https://medium.com/%40ruzin.saleem/packaging-python-lambda-functions-using-terraform-2da9f108cc6a
en
Packaging Python AWS Lambda Functions using Terraform
https://miro.medium.com/…ThD-QJaW-Zog.gif
https://miro.medium.com/…ThD-QJaW-Zog.gif
[ "https://miro.medium.com/v2/resize:fill:64:64/1*dmbNkD5D-u45r44go_cf0g.png", "https://miro.medium.com/v2/resize:fill:88:88/1*Qgi8ioEe3S-MLpBw7oTIDQ.jpeg", "https://miro.medium.com/v2/resize:fill:144:144/1*Qgi8ioEe3S-MLpBw7oTIDQ.jpeg" ]
[]
[]
[ "" ]
null
[ "Ruzin Saleem", "medium.com", "@ruzin.saleem" ]
2019-01-29T14:14:45.454000+00:00
Anyone who’s ever packaged a lambda function, especially one with dependencies, is familiar with how fiddly it can be. Either the zipped up function does not have the right directory structure or…
en
https://miro.medium.com/v2/5d8de952517e8160e40ef9841c781cdc14a5db313057fa3c3de41c6f5b494b19
Medium
https://medium.com/@ruzin.saleem/packaging-python-lambda-functions-using-terraform-2da9f108cc6a
Anyone who’s ever packaged a lambda function, especially one with dependencies, is familiar with how fiddly it can be. Either the zipped up function does not have the right directory structure, or it’s not world readable, or the dependencies haven’t properly been installed. Unless you have a robust CI/build process, you are going to run into issues. In the case of a complex Lambda deployment, a build/CI server is a must. However, as more and more of us move towards IaC, particularly Terraform, it makes sense to manage small lambda deployments within Terraform. If you can create a lambda resource with Terraform, why not handle the packaging of the code as well? Obviously, I am not the first one to think of this :P. In fact, there are a few Terraform modules available atm to package your lambdas up. The most popular one is Claranet’s Terraform AWS Lambda module. It’s very versatile in that it caters to most languages. However, versatility sometimes has drawbacks. When it comes to packaging of python functions and associated dependencies, I’ve seen some issues: The hashing of the source code directory, installation of the dependencies and packaging of the function is handled by THREE separate python scripts. This is normally fine but it is additional overhead, it’s kinda ugly and completely unnecessary since Terraform has a ‘data archive’ resource that handles hashing and packaging, a total of six lines of code as opposed to 300!! They don’t make use of virtualenv. This is a BIG flaw as you lose out on being able to package your function in an isolated python environment. As a result, you are more likely to run into problems with dependencies and versions, and indirectly permissions. All things we want to avoid! They use your current system python runtime instead of your AWS Lambda runtime to package your python function and install any dependencies. Say if your AWS Lambda function runtime is set to python3.7 and your system runtime is python2.7, you may run into issues again with libraries, dependencies and versions. For example, numpy, a very popular python library, is extremely version specific. Don’t get me wrong, the folks at Claranet have done a really nice job in terms of simplifying an ugly process and catering to several languages. However, I do think the python packaging can definitely be improved. So I rolled up my sleeves and stared at my screen for a few hours and built a terraform_aws_lambda_python module that addresses most of the limitations of Claranet’s Terraform AWS Lambda module. So how is my module different? The module uses a pair of Terraform’s data archive resources to hash and archive the source_code directory. Total lines of code: 6! And the added benefit of being completely managed within Terraform itself. It uses virtualenv to create an isolated python environment with the same runtime as that of your aws_lambda function. This eliminates any possible issues that may arise from a difference in runtimes and introduces the benefit of using an isolated python environment. Null_resource triggers are used to re-package the code if the hash of the source_code directory or the requirements.txt file changes. So if you make changes to your source code or add new libraries via requirements.txt, the module will repackage your source code and install any new dependencies. It can’t all be sunshine and bug-free surely! Terraform’s data archive resource has a known bug.
In the context of my module, this means that the data archive resource used to package the final code will keep re-hashing the code on every terraform plan/apply. There is a workaround to avoid this; however, as it has no impact on functionality other than being mildly annoying, I thought it best to wait for the official fix. Thanks for reading! And if you found it useful, please leave some claps on your way out. Thought of any improvements? Add a PR.
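For readers who want to see what the packaging step amounts to outside Terraform, here is a rough Python sketch of the same idea: install the dependencies next to the handler, then zip the result. The paths are placeholders and this is not the module's actual code; the module does this with virtualenv, null_resource triggers and the archive_file data source instead:

```python
"""Rough sketch of packaging a Python Lambda together with its dependencies.
Placeholder paths; the Terraform module discussed above automates this step."""
import shutil
import subprocess
import sys
from pathlib import Path

src = Path("src")      # placeholder: contains handler.py and requirements.txt
build = Path("build")  # staging directory that becomes the zip contents

shutil.rmtree(build, ignore_errors=True)
shutil.copytree(src, build)

# Install dependencies into the staging directory. Ideally sys.executable is the
# same minor version as the Lambda runtime (e.g. python3.7) to avoid mismatches.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "-r", str(build / "requirements.txt"),
    "--target", str(build),
])

# Zip the staging directory into lambda.zip, ready for upload.
shutil.make_archive("lambda", "zip", root_dir=str(build))
print("wrote lambda.zip")
```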
8585
dbpedia
2
98
https://coteditor.com/
en
CotEditor
https://coteditor.com/im…picon/512@2x.png
https://coteditor.com/im…picon/512@2x.png
[ "https://coteditor.com/img/appicon/128@2x.png", "https://coteditor.com/img/MacAppStore.svg", "https://coteditor.com/img/screenshots/screenshot@2x.png", "https://coteditor.com/img/screenshots/darkmode@2x.png", "https://coteditor.com/img/screenshots/tools@2x.png", "https://coteditor.com/img/screenshots/verticalOrientation@2x.png", "https://coteditor.com/img/screenshots/preferences@2x.png", "https://coteditor.com/img/icons/osx.svg", "https://coteditor.com/img/icons/speed.svg", "https://coteditor.com/img/icons/opensource.svg", "https://coteditor.com/img/icons/syntax.svg", "https://coteditor.com/img/icons/find.svg", "https://coteditor.com/img/icons/gui.svg", "https://coteditor.com/img/icons/autobackup.svg", "https://coteditor.com/img/icons/outline.svg", "https://coteditor.com/img/icons/split_view.svg", "https://coteditor.com/img/icons/char_inspector.svg", "https://coteditor.com/img/icons/script.svg", "https://coteditor.com/img/icons/incompatibles.svg", "https://coteditor.com/img/icons/cjk.svg" ]
[]
[]
[ "" ]
null
[]
null
Text Editor for macOS
en
favicon.png
https://coteditor.com
Syntax Highlighting Colorize more than 50 pre-installed major languages like HTML, PHP, Python, Ruby or Markdown. You can also create your own settings. Powerful Find & Replace Super powerful find and replace using the ICU regular expression engine. Setting via Click There are no complex configuration files that require geek knowledge. You can access all your settings including syntax definitions and themes from a standard settings window. Auto Backup You don't need to lose your unsaved data anymore. CotEditor backs up your documents automatically while editing. Outline Menu Extract specified lines with the predefined syntax, and you can jump to the corresponding line. Split Editor Split a window into multiple panes to see different parts of your document at the same time. Character Inspector Inspect the Unicode character data of each selected character in your document and display them in a popover. Scriptable Make your own macro in your favorite language, whether it is Python, Ruby, Perl, PHP, UNIX shell, AppleScript or JavaScript. Incompatible Characters Check and list up the characters in your document that cannot be converted into the desired encoding.
8585
dbpedia
1
22
https://community.jamf.com/t5/jamf-pro/autopkg-and-adobe-flash-esr/m-p/106598
en
AutoPKG and Adobe Flash ESR
[ "https://community.jamf.com/legacyfs/online/avatars/891268668ccc4a5c8572cfc177ce1048.png", "https://community.jamf.com/legacyfs/online/avatars/9ace4bd92c3546fda85d6d894e79c7c2.png", "https://community.jamf.com/legacyfs/online/avatars/891268668ccc4a5c8572cfc177ce1048.png", "https://community.jamf.com/skins/images/3C7018BFED3E064C6B0C86CAD438737B/responsive_peak/images/icon_anonymous_message.png", "https://community.jamf.com/skins/images/3C7018BFED3E064C6B0C86CAD438737B/responsive_peak/images/icon_anonymous_message.png", "https://community.jamf.com/html/@DB007B9D4B38359F399423E43927D581/assets/logo-jamf-blk.svg" ]
[]
[]
[ "" ]
null
[ "community.jamf.com", "user-id" ]
2015-02-26T14:35:40+00:00
We are looking at starting to use Autopkg in our environment, but we currently use Flash ESR for deployment and would like to see if we - 106598
en
https://community.jamf.com/html/@341C36E148083396DBCB6E6A9C18E572/assets/favicon.ico
Jamf Nation
https://community.jamf.com/t5/jamf-pro/autopkg-and-adobe-flash-esr/m-p/106598#M95715
We are looking at starting to use Autopkg in our environment, but we currently use Flash ESR for deployment and would like to see if we could make that work in Autopkg. I see on the autopkg github site that Anthony Reimer has posted a new recipe, where the direct URL for the flash ESR link can be copy and pasted, but would like if there was a more automated way to check if one is on the current version or not. I, unfortunately, before this Monday have had no experience with Python, to see if this would be possible. I have over the last few days given it my best shot to rewrite the existing AdobeFlashURLProvider.py file, learning Python as I go, but I think I've hit a wall as to the current level of Python knowledge I have at this point. I was wondering if there's anyone else out there who would like to use the Adobe Flash ESR for Autopkg and perhaps has some experience with Python to see where I've gone wrong in the code. I will copy the current version of the AdobeFlashURLProvider.py I have created to the bottom of this discussion. Also I'm aware there are probably a number of unneeded references in the current code, and that is mainly due to the fact that since I wasn't quite sure exactly what all the current code was doing, I pretty much just replaced the sections that parsed the Adobe XML with my new code that parses the HTML from the Adobe distribution page. I'm also going to paste in a second bit of code that is just the HTML parsing code on its own that I wrote, which I am trying to inject and replace the XML code in the original autopkg AdobeFlashURLProvider.py file. Thanks, Lee Weisbecker HTML Parsing Code #!/usr/bin/python2.7 import urllib2 from HTMLParser import HTMLParser import re UPDATE_HTML_URL = "http://adobe.com/products/flashplayer/fp_distribution3.html" DOWNLOAD_TEMPLATE_URL = "http://fpdownload.macromedia.com/get/flashplayer/current/licensing/mac/install_flash_player_%s_osx.dmg" FlashHTMLURL = urllib2.Request(UPDATE_HTML_URL) FlashURL = urllib2.urlopen(FlashHTMLURL) FlashHTML = FlashURL.read() class AdobeFlashHTML(HTMLParser): container = "" def handle_data(self, data): if data.find("Extended Support Release -") != -1: self.container += data return self.container Flash = AdobeFlashHTML() Flash.feed(FlashHTML) print Flash.container FlashVersionList = re.findall(r'd+.d+.d+.d+', Flash.container) FlashVersionStr = FlashVersionList[0] print (FlashVersionStr) FlashVersion = FlashVersionStr[0:2] print FlashVersion print DOWNLOAD_TEMPLATE_URL % FlashVersion Edited AdobeFlashURLProvider.py File #!/usr/bin/python2.7 # # # # # # # # """See docstring for AdobeFlashURLProvider class""" import urllib2 from HTMLParser import HTMLParser import re from autopkglib import Processor, ProcessorError all = ["AdobeFlashURLProvider"] UPDATE_HTML_URL = "http://adobe.com/products/flashplayer/fp_distribution3.html" DOWNLOAD_TEMPLATE_URL = "http://fpdownload.macromedia.com/get/flashplayer/current/licensing/mac/install_flash_player_%s_osx.dmg" class AdobeFlashHTML(HTMLParser): container = "" def handle_data(self, data): if data.find("Extended Support Release -") != -1: self.container += data return self.container class AdobeFlashURLProvider(Processor): """Provides URL to the latest Adobe Flash Player release.""" description = doc input_variables = { "url": { "required": False, "description": ("Override URL. If provided, this processor " "just returns without doing anything."), }, "version": { "required": False, "description": ("Specific version to download. 
If not defined, " "defaults to latest version.") }, } output_variables = { "url": { "description": "URL to the latest Adobe Flash Player release.", }, } def get_adobe_flash_dmg_url(self): '''Return the URL for the Adobe Flash DMG''' version = self.env.get("version") if not version: # Read update HTML try: FlashHTMLURL = urllib2.Request(UPDATE_HTML_URL) FlashURL = urllib2.urlopen(FlashHTMLURL) FlashHTML = FlashURL.read() except: raise ProcessorError( "Can't download %s" % (UPDATE_HTML_URL)) # Parse HTML data try: Flash = AdobeFlashHTML() FlashVersionList = re.findall(r'd+.d+.d+.d+', Flash.container) FlashVersionStr = FlashVersionList[0] except: raise Exception ("Can't read %s" % (FlashHTML)) # Extract version number from the HTML version = None if Len(FlashVersionStr) == 10: version = FlashVersionStr if not version: raise ProcessorError("Update HTML in unexpected format.") else: self.output("Using provided version %s" % version) # Use version number to build a download URL version = FlashVersionStr[0:2] return DOWNLOAD_TEMPLATE_URL % version def main(self): '''Return a download URL for latest Mac Flash Player''' if "url" in self.env: self.output("Using input URL %s" % self.env["url"]) return self.env["url"] = self.get_adobeflash_dmg_url() self.output("Found URL %s" % self.env["url"])
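Note that the forum formatting has stripped the backslashes from the regular expressions above (r'd+.d+.d+.d+' was presumably r'\d+\.\d+\.\d+\.\d+') and Len() should be lowercase len(). Below is a standalone, hedged sketch of the same version-scraping idea with those repairs; the Adobe URLs are the ones quoted in the post and are long dead, so treat this purely as an illustration rather than working tooling:

```python
#!/usr/bin/python2.7
# Hedged sketch of the version-extraction idea from the post, with the
# backslashes the forum formatting stripped restored. Flash and this Adobe
# distribution page no longer exist, so this is illustration only.
import re
import urllib2

UPDATE_HTML_URL = "http://adobe.com/products/flashplayer/fp_distribution3.html"
DOWNLOAD_TEMPLATE_URL = (
    "http://fpdownload.macromedia.com/get/flashplayer/current/licensing/"
    "mac/install_flash_player_%s_osx.dmg"
)

html = urllib2.urlopen(UPDATE_HTML_URL).read()

# Find the first full version string (e.g. "18.0.0.232") near the ESR label
match = re.search(r"Extended Support Release.*?(\d+\.\d+\.\d+\.\d+)", html, re.DOTALL)
if match:
    full_version = match.group(1)
    major_version = full_version.split(".")[0]  # more robust than slicing [0:2]
    print DOWNLOAD_TEMPLATE_URL % major_version
else:
    print "Could not find an ESR version string"
```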
8585
dbpedia
0
39
https://blogs.ethz.ch/heim/category/win-deployment/
en
Windows Deployment at Nick Heim
[]
[]
[]
[ "" ]
null
[]
null
Nick’s comments on Windows Deployment
en
https://blogs.ethz.ch/he…list/favicon.png
null
Download the actual Windows release. Get the MSI. But first, install all the prerequisites! A packaging machine is exposed to the internet and reaches out to dozens of servers on the net every day and should therefore be hardened and locked down. Recommended installation is per user into the profile which is used to run AutoPkg. This user profile should have no more than standard user rights. For this to work, the MSI has to be advertised with admin rights using the following command: msiexec /jm AutoPkgWin.msi CAUTION: This needs an elevated CMD shell! A PS console does not work! After this, the installer can be run with standard user rights. AutoPkg for Windows requires Windows 10 / Server 2016 or newer, 32- or 64-bit, and having Git installed is highly recommended so that managing recipe repositories is possible. Knowledge of Git itself is not required but helps. Tested only on 64-bit! Easy route: With this script (AutoPkg-PreReq-Installer), you can install everything needed in one run. Step-by-step instructions: The following software and tools are needed as prerequisites to run AutoPkg on Windows: Python 3.8.x or 3.10.x: Download (Caution: pythonnet is still not compatible with Python 3.9/3.10) (Python 3.10.x works with pythonnet v3.0.0-alpha2 with: pip install pythonnet --pre) Needed libraries: pyyaml, appdirs, msl.loadlib, pythonnet, comtypes, pywin32, certify If Python is present, those libs are automatically installed by the AutoPkg installer. Git (highly recommended): Download 7zip: Download Windows-Installer-SDK: Download. You have to select a version that fits your OS. This is necessary for some of the MSI-related processors. Download the web installer, choose a download directory and select at least: “MSI Tools”, “Windows SDK for Desktop C++ x86 Apps” and on x64 systems also “Windows SDK for Desktop C++ x64 Apps” (there will be some additional selections). Then install at minimum: “Windows SDK Desktop Tools x86-x86_en-us.msi” and “Windows SDK Desktop Tools x64-x86_en-us.msi” (x64 only). Find the install location (somewhere under C:\Program Files (x86)\Windows Kits…) and copy the Wi*.vbs and Msi*.exe files over to your MSITools folder. Register the 64-bit mergemod DLL: regsvr32 “C:\Program Files (x86)\Windows Kits\10\bin\xxx\x64\mergemod.dll” If the SDK is present, this COM DLL is automatically registered by the AutoPkg installer. Wix-Toolset: Download, version 3.11 should do it. Although I always use the latest development version. MSBuild: Download, THE Windows Make! Install command line: vs_buildtools.exe --add Microsoft.VisualStudio.Workload.MSBuildTools --quiet NANT: Download (Deprecated), this is one of the predecessors of MSBuild (which you should use when starting with a new build environment). Download the ZIP package, extract it and copy the “nant-0.92” folder to the Tools dir.
Naively, I downloaded that stuff and tried it on Windows, which instantly told me, that there were Python functions in use, which were OSX only and not available on Windows. Too bad. ☹ But at the end of February 19, light at the end of the tunnel! Max pointed me at the tweet mentioned earlier. And YES, with the modifications from Nick’s fork, it ran on Windows! From there to the system we have today, it was a long way. Almost 100 recipes and more than 2 dozen processors are doing a great way of saving time and creating much more reliable packages, we ever had before. So, if you want to try it out for yourself, in about half an hour, you can build a machine, that is ready for AutoPkg.
8585
dbpedia
1
75
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/package_module.html
en
ansible.builtin.package module – Generic OS package manager — Ansible Community Documentation
https://cdn2.hubspot.net…cs-left-rail.png
[ "https://docs.ansible.com/ansible/latest/_static/images/Ansible-Mark-RGB_White.png", "https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-left-rail.png" ]
[]
[]
[ "" ]
null
[]
null
en
../../../_static/images/Ansible-Mark-RGB_Black.png
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/package_module.html
Package name, or package specifier with version. Syntax varies with package manager. For example name-1.0 or name=1.0. Package names also vary with package manager; this module will not “translate” them per distribution. For example libyaml-dev, libyaml-devel. To operate on several packages this can accept a comma separated string of packages or a list of packages, depending on the underlying package manager. The required package manager module to use (dnf, apt, and so on). The default auto will use existing facts or try to auto-detect it. You should only use this field if the automatic selection is not working for some reason. Since version 2.17 you can use the ansible_package_use variable to override the automatic detection, but this option still takes precedence. Default: "auto" Forces a ‘global’ task that does not execute per host, this bypasses per host templating and serial, throttle and other loop considerations Conditionals will work as if run_once is being used, variables used will be from the first available host This action will not work normally outside of lockstep strategies
8585
dbpedia
2
36
https://jfrog.com/help/r/jfrog-artifactory-documentation/upload-authenticated-pypi-packages-to-jfrog-artifactory
en
JFrog Help Center
https://jfrog.com/help/favicon.ico
https://jfrog.com/help/favicon.ico
[ "https://jfrog.com/help/internal/api/webapp/splash-image?v=199390e0" ]
[]
[]
[ "" ]
null
[]
null
en
https://jfrog.com/help/favicon.ico
null
8585
dbpedia
0
97
https://www.elliotjordan.com/posts/docklib-outset/
en
Deploying and running docklib scripts using Outset
https://www.elliotjordan…avicon-32x32.png
https://www.elliotjordan…avicon-32x32.png
[ "https://www.elliotjordan.com/img/logo.svg", "https://www.elliotjordan.com/posts/images/docklib-outset-jamf-policy.png", "https://www.elliotjordan.com/img/back.svg", "https://www.elliotjordan.com/img/next.svg" ]
[]
[]
[ "" ]
null
[]
null
Guide for running docklib scripts on your Mac fleet using Outset, along with recommendations for packaging and deployment.
en
/favicon.ico
Elliot Jordan
https://www.elliotjordan.com/posts/docklib-outset/
My post on writing docklib scripts walked you through how to use Python and docklib to create a script for managing the macOS Dock, but stopped short of explaining the details of deploying and running the script on your Mac fleet. This post will cover one method for running your script: Outset. Outset is a Python-based tool that kicks off tasks at login or startup. In relation to macOS Dock management, Outset’s key benefits include easy execution in user context and flexible control of timing and recurrence. If you’re unfamiliar with creating your own LaunchAgents or if you’re managing multiple dissimilar login or startup scripts, Outset may be worthy of your consideration. Contents Our goal is to run your Dock script at user login on your managed Macs. The steps we’ll follow towards that goal include: Package your script into an installer Gather dependencies: Outset and Python 3 Deploy installers via your software management tool Plan for maintenance and future adjustments I’ll provide examples for Jamf and Munki in step 3, but you should be able to leverage any similar tool to achieve the same result. Let’s get started! Package your script First, you’ll need to build an installer that places your Dock management script at the desired location on your managed Macs. For this, you won’t be surprised that I recommend MunkiPkg. Dock scripts are ideal candidates for MunkiPkg due to their small size, text contents, and the many benefits of version control. A note about the name Perennial reminder: you can use MunkiPkg even if you're not using Munki to deploy the package. Packaging granularity Your script should be packaged separate from Outset itself, so that you can update either the script or Outset independently of each other. If you intend to run additional login or startup scripts with Outset, it’s up to you whether you create separate packages for those scripts or combine them with your Dock script. My advice is to package your scripts separately if the scripts might target different groups of Macs in the future. If all the scripts equally apply to all your Macs, a unified package will work fine. Create a MunkiPkg project First, navigate to where you store your package sources and create a new MunkiPkg project. cd ~/Developer/pkgs/ munkipkg --create outset_dock cd outset_dock Create a README.md file with a basic explanation of your package. Here’s an example from a previous post: # Outset - Dock Script The installer produced by this source project installs an [Outset](https://github.com/chilcote/outset) script that sets the standard Dock items at user login. This folder is a [MunkiPkg](https://github.com/munki/munki-pkg) project. After making changes, be sure to increment the version in build-info.plist, then build the project with the `munkipkg` tool. The resulting pkg file will appear in the build folder. Delete the scripts folder. rm -r scripts The Dock script will go in the payload rather than executing as a preinstall or postinstall script. This will allow Outset to execute our script with the proper timing and context. Create the needed payload structure for Outset to run your script. If your Dock script is idempotent and will run at every login, create a login-every folder. If your script should only run at the first/next login, create a login-once folder as shown below. mkdir -p payload/usr/local/outset/login-once/ Move your script into the login-every or login-once folder you just created. 
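If you do not already have a Dock script to drop in, a minimal idempotent sketch built on docklib might look like the following; it is only an illustration (not the script from my earlier post), and the app path and the MacAdmins Python shebang are placeholders to adapt to your environment.

#!/usr/local/bin/managed_python3
# Minimal idempotent Dock script sketch using docklib (illustration only).
# Adds Safari to the Dock if it is not already present.
from docklib import Dock

dock = Dock()  # reads the current user's Dock preferences
if dock.findExistingLabel("Safari", section="persistent-apps") == -1:
    item = dock.makeDockAppEntry("/Applications/Safari.app")
    dock.items["persistent-apps"].append(item)
    dock.save()  # writes the plist and relaunches the Dock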
Your project’s file system will now look like this: outset_dock ├── .gitignore ├── README.md ├── build ├── build-info.plist └── payload └── usr └── local └── outset └── login-once └── dock.py Make your script executable. This is required for Outset to run it. chmod +x payload/usr/local/outset/login-once/dock.py Customize the build-info.plist file. I typically adjust the bundle identifier to match my company’s reverse-domain convention, like com.pretendco.cpe.outset_dock. You can optionally configure signing if you have a signing certificate, but this step is not essential when deploying with a software management tool like Munki or Jamf Pro. Build the package. munkipkg . Inspect the package with Suspicious Package or another installer inspection utility. open -a "Suspicious Package" build/outset_dock-1.0.pkg Make sure the payload contains your Dock script (and nothing it shouldn’t). Gather dependencies Now that you’ve built an installer for your script, gather the installers for your script’s dependencies. Outset: You can get the latest release from GitHub. Python 3 with the docklib module: Outset requires Python 3. Deploying the MacAdmins Python “recommended” package is the easiest way to satisfy this requirement. If you plan on using a different Python installer, you’ll need to ensure the docklib module is installed, and you’ll need to modify your script’s shebang accordingly. Deploy installers Now it’s time to import all the installers into your software management tool. I’ll provide suggestions for deploying your installers via Munki and Jamf, but you should be able to use any similar software management tool to achieve the desired result. Munki steps are below; click here to skip to the Jamf steps. Blueprint for Munki deployment The following steps provide a template for deploying your script and its dependencies with Munki. This is only meant to be an example; please modify as needed for your environment. Import each of your installers into your Munki repo using munkiimport. Use the requires key to define the dependency relationships. munkiimport ~/Downloads/python_recommended-3.9.5.09222021234106.pkg munkiimport ~/Downloads/outset-3.0.3.pkg \ --requires=python_recommended munkiimport build/outset_dock-1.0.pkg \ --requires=python_recommended \ --requires=outset AutoPkg recipes available If you're an AutoPkg user, recipes are available to import new versions of these into Munki automatically: outset.munki.recipe MacAdminsPython.munki.recipe.yaml Rebuild your Munki catalogs. makecatalogs Add outset_dock as a managed install for a Munki manifest associated with one or more test Macs. manifestutil add-pkg outset_dock \ --manifest=dock_testers \ --section=managed_installs Use Managed Software Center or managedsoftwareupdate to check and install for Munki items on your test Mac(s). Log out of your test Mac, then log back in. Your Dock script should run. (Keep in mind that the script may not actually modify the Dock when it runs, depending on how you’ve designed your script. See the Outset logs for troubleshooting.) Also verify the script runs during your setup workflow, if you have a Mac or virtual machine you can use for factory-fresh provisioning testing. Once you are satisfied your script is working as designed on your test Mac(s), promote your outset_dock package to stable. /usr/libexec/PlistBuddy -c "Add :catalogs: string 'stable'" pkgsinfo/outset_dock-1.0.plist Add outset_dock to the desired production manifest. 
manifestutil add-pkg outset_dock \ --manifest=provisioning \ --section=managed_installs Verify intended behavior by provisioning a production Mac and observing the Dock configuration upon first login. The Jamf steps are below; click here to skip ahead. Blueprint for Jamf deployment The following steps provide a template for deploying your script and its dependencies with Jamf Pro. This is only meant to be an example; please modify as needed for your environment. Import each of your installers into your distribution points using Jamf Admin. Within Jamf Admin, you can set the Priority of the packages to control the order in which the packages install. For example: Package Priority Python 6 Outset 8 outset_dock 10 (default) Create a policy that installs all three packages, and set the scope to a group containing one or more test Macs. Set the policy triggers to Check-In and Enrollment, and the frequency to Once per computer. Know your triggers Remember that the timing of the policy only determines when your script, Outset, and Python are installed — not when your script actually runs. Therefore you should not use Jamf's Login trigger. Provisioning helpers If you use DEPNotify or SplashBuddy you can set a custom trigger for your policy to ensure it runs during the initial provisioning process. Use sudo jamf policy to install the items on your test Mac(s). Log out of your test Mac, then log back in. Your Dock script should run. (Keep in mind that the script may not actually modify the Dock when it runs, depending on how you’ve designed your script. See the Outset logs for troubleshooting.) Also verify the script runs during your setup workflow, if you have a Mac or virtual machine you can use for factory-fresh provisioning testing. Once you are satisfied your script is working as designed on your test Mac(s), scope your policy to your desired group of Macs, or to All Computers. Verify intended behavior by provisioning a production Mac and observing the Dock configuration upon first login. Plan for maintenance You’ll inevitably need to make adjustments to your script in the future. It’s important to lay the foundation now for smooth maintenance and collaboration later. Putting the MunkiPkg project for your Dock script in a Git repository and following a peer review process will improve visibility across your team and reduce likelihood of errors. See my earlier post for tips on collaborating on MunkiPkg projects in Git. When you make changes to your script, the process will look like this:
8585
dbpedia
0
4
https://checkmarx.com/blog/automatic-execution-of-code-upon-package-download-on-python-package-manager/
en
Automatic Execution of Code Upon Package Download on Python Package Manager
https://checkmarx.com/wp…663349748796.png
https://checkmarx.com/wp…663349748796.png
[ "https://px.ads.linkedin.com/collect/?pid=6477&fmt=gif", "https://checkmarx.com/wp-content/uploads/2024/01/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/uploads/2024/05/CXone.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SAST.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SCA.svg", "https://checkmarx.com/wp-content/uploads/2024/05/AI.svg", "https://checkmarx.com/wp-content/uploads/2024/05/API-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/ASPM-icon.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Codebashing.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Container-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DAST.svg", "https://checkmarx.com/wp-content/uploads/2024/05/IaC-Security.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SBOM.svg", "https://checkmarx.com/wp-content/uploads/2024/05/SSCS.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Code-to-Cloud.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DevEx.svg", "https://checkmarx.com/wp-content/uploads/2024/05/DigTrans.svg", "https://checkmarx.com/wp-content/uploads/2024/05/Component-35.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/logo.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search-mob.svg", "https://checkmarx.com/wp-content/themes/checkmarx//assets/images/icon-search-mob.svg", "https://checkmarx.com/wp-content/uploads/2024/06/avatar_66.jpg", "https://checkmarx.com/wp-content/uploads/2022/08/Blog_python_automatic-execution.jpg", "https://checkmarx.com/wp-content/uploads/2022/08/carbon-990x1024-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/carbon-990x1024-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Group-2433-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Group-2433-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Picture1-2-1.png", "https://checkmarx.com/wp-content/uploads/2022/08/Picture1-2-1.png", "https://checkmarx.com/wp-content/uploads/2024/01/icon-x.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-x.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-yb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-yb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-ln.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-ln.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-fb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/icon-fb.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-citi-2.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-citi-2.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cisco-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cisco-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-accenture-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-accenture-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-wipro-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-wipro-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2021-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2021-1.svg", 
"https://checkmarx.com/wp-content/uploads/2024/01/logo-gartner-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-gartner-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2022-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-cyber-2022-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-dev-insider-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo-dev-insider-1.svg", "https://checkmarx.com/wp-content/uploads/2024/01/logo.svg" ]
[]
[]
[ "" ]
null
[ "Yehuda Gelb" ]
2022-08-26T10:00:00+00:00
A worrying feature in pip/PyPi allows code to automatically run when developers are merely downloading a package. Also, this feature is alarming due to the fact that a great deal of the malicious packages we are finding in the wild use this feature of code execution upon installation to achieve higher infection rates.
en
https://checkmarx.com/wp…vicon-32x32.webp
Checkmarx
https://checkmarx.com/blog/automatic-execution-of-code-upon-package-download-on-python-package-manager/
Automatic code execution is triggered upon downloading approximately one third of the packages on PyPI. A worrying feature in pip/PyPI allows code to run automatically when developers are merely downloading a package. This feature is also alarming because a great many of the malicious packages we are finding in the wild use code execution upon installation to achieve higher infection rates. It is important that Python developers understand that package downloading can expose them to an increased risk of a supply chain attack. Intro When executing the well-known “pip install <package_name>” command, users may expect code to be run on their machine as part of the installation process. One source of such code usually resides in the setup.py file of Python packages. When a Python package is installed, pip, Python’s package manager, tries to collect and process the metadata of this package, such as its version and the dependencies it needs in order to work properly. This process occurs automatically in the background, with pip running the main setup.py script that comes as part of the package structure. setup.py example The purpose of setup.py is to provide a data structure for the package manager to understand how to handle the package. However, the setup.py file is still a regular Python script that can contain any code the developer of the package would like. An attacker who understands this process can plant malicious code in the setup.py file, which would then execute automatically during the package’s installation. In fact, many of the malicious packages we are detecting contain malicious code in the setup.py file. What if we just download the package rather than install it? In addition to the “install” command, pip provides several more options, among them the “download” command. This command is intended to allow users to download packages’ files without the need to install them. There could be various reasons someone would need this. For example, a developer may want to look into the package’s code before using it. A user may want or need to perform a security check, or perhaps even observe the setup.py file for any anomalies. As it turns out, executing the command “pip download <package_name>” will run the setup.py file, as well as any potentially malicious code contained within it. It may surprise you, but this behavior is not a bug but rather a feature of the pip design. Users who intentionally only download a package do not expect code to run on their system automatically. As a matter of fact, this concern was expressed in an issue from 2014 on the pypa project, https://github.com/pypa/pip/issues/1884, yet it was not addressed, and the issue continues to exist to this day. The .whl file type Python wheels are essentially .whl files that are part of the Python ecosystem and bring various performance benefits to the package installation process. But that is not the only thing that wheels bring to the table. In the past, when Python code was built into a package, the result would be a tar.gz file that would then be published to the PyPI platform. tar.gz files include the setup.py file, which is run upon download and installation. But suppose you’ve recently tried downloading or installing a Python package using pip. In that case, you may have noticed pip supplying you with a .whl file.
The reason for this is that when developers build a Python package using, for example, the “python -m build” command, newer versions of the build tooling automatically try to create a secondary .whl file in addition to the tar.gz file, and the two are then published together to the Python package index. When a user downloads or installs this package, pip will by default deliver the .whl file to the user’s machine. The way wheels work cuts the setup.py execution out of the equation. Why is the setup.py still relevant? Even though pip defaults to using wheels instead of tar.gz files, malicious actors can still intentionally publish Python packages without a .whl file. When a user downloads a Python package from PyPI, pip will preferentially use the .whl file, but will fall back to the tar.gz file if the .whl file is lacking. Is there anything you can do about this? Currently, there are actions users can take to prevent automatic execution upon package download. One action is checking the package file contents at https://pypi.org/project/<package>/#files and observing if a .whl file is present. If there is a .whl file, the user can feel confident they will receive the .whl file, and no code will be executed on their machine. If there is only a tar.gz present, a user can use a safe method of download such as working directly with PyPI’s “simple” API: https://pypi.org/simple/<package-name>/. For example, when using the package listed above, prp1, a user can download it from the following link: https://pypi.org/simple/prp1/. Conclusion Code execution upon installation is one of the features attackers use the most in open-source attacks. Developers opting to download packages, instead of installing them, reasonably expect that no code will run on the machine upon downloading the files. However, PyPI includes a feature allowing just that—code execution on the user’s machine when all that was requested was a file download. It is possible to protect yourself from suspicious packages by following the steps detailed above. As always, we are releasing similar blogs to help keep the open source ecosystem safe and raise the awareness of Python developers to this issue so they can avoid unwanted consequences.
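To make the mechanism concrete, here is a deliberately harmless illustration (not code from any package discussed above): any statement placed at module level in setup.py runs whenever pip processes the source distribution, which for sdists includes a plain pip download.

# setup.py of a hypothetical source-only package "demo-pkg".
# Module-level statements run whenever pip builds the package metadata,
# which happens on "pip install" and, for sdists, on "pip download" too.
from setuptools import setup

print("This statement executes during metadata preparation.")  # arbitrary code

setup(
    name="demo-pkg",
    version="0.1.0",
    py_modules=["demo"],
)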
8585
dbpedia
0
78
https://derflounder.wordpress.com/2024/02/04/building-distribution-packages-using-autopkg/
en
Building distribution packages using AutoPkg
https://derflounder.word…ipe_workflow.png
https://derflounder.word…ipe_workflow.png
[ "https://derflounder.wordpress.com/wp-content/uploads/2024/02/recipe_workflow.png?w=595", "https://derflounder.wordpress.com/wp-content/uploads/2024/02/recipe_workflow2.png?w=595", "https://1.gravatar.com/avatar/d678374fabfd2ce5e42a8d2ee219c878fe28d4d27ba3bdfe0905bcdd49a78f9f?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9a6eb242728c9344e6078f49f7297e7bbe7b5c5af0b3f99952f35686499ef79c?s=48&d=identicon&r=G", "https://0.gravatar.com/avatar/9851bc7e13a6a30c801e72cd65e1fcc49818a778abfbfc923093a7ae8d60564a?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/d01b71732017a03705b60dcd6ba6669a9b5148633fa12b8ae7531c3143604cc9?s=48&d=identicon&r=G", "https://1.gravatar.com/avatar/da3a0520ed1bfc83e1f3baa3c3947cf7f0ebb511790f996d7eabad8310adcdb1?s=48&d=identicon&r=G", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2024-02-04T00:00:00
I've been thinking about the issue of building installer packages using AutoPkg which are ready for installation using MDM commands. Installing an installer package via MDM command requires packages to have the following attributes: Signed with an Apple Developer ID Installer certificate Be a distribution installer package For criteria #2, this references the fact that…
en
https://s1.wp.com/i/favicon.ico
Der Flounder
https://derflounder.wordpress.com/2024/02/04/building-distribution-packages-using-autopkg/
I’ve been thinking about the issue of building installer packages using AutoPkg which are ready for installation using MDM commands. Installing an installer package via MDM command requires packages to have the following attributes: Signed with an Apple Developer ID Installer certificate Be a distribution installer package For criteria #2, this references the fact that there are two kinds of modern installer packages for macOS: Component packages: these are the standard type of installer package, which contain an archive of files to install and the information on where the files should be installed. Distribution packages: These packages can contain one or more component packages, and may also include additional resources to customize and control the user interface shown in the Installer application. By default, AutoPkg will build component packages using the PkgCreator processor or the AppPkgCreator processor. But there is a relatively straightforward way to create a distribution package while using an existing component package as a source, using the productbuild command. To create a distribution installer package from an existing component installer package, you would use a command similar to the one shown below: Note: If using a signed component installer package as a source, the resulting new distribution package will not be signed. If needed, you will need to sign the distribution package following its creation. For those who want to create distribution packages as part of an AutoPkg workflow, I’ve written a DistributionPackageCreator AutoPkg processor which is designed to perform the following tasks: Rename the existing AutoPkg-generated component package. Create a new distribution package from the AutoPkg-generated component package. Set the newly-created distribution package to have the original name of the AutoPkg-generated component package. For more details, please see below the jump. The DistributionPackageCreator processor is shown below, as well as being available via the following link: https://github.com/rtrouton/AutoPkg_Processors/tree/main/DistributionPackageCreator Update – 2-5-2024: It turns out that both @davidbpirie and I wrote practically identical AutoPkg processors. His processor is FlatToDistPkg (written in 2022) and it is available in his repo: https://github.com/autopkg/davidbpirie-recipes/tree/main/SharedProcessors When included in an AutoPkg recipe, the DistributionPackageCreator processor will locate AutoPkg-generated component packages by using the pkg_path variable and do the following: Rename the AutoPkg-generated component package from /path/to/package_name_here.pkg to /path/to/package_name_here-component.pkg Create a new distribution package from the AutoPkg-generated component package. Save the distribution package as /path/to/package_name_here.pkg, so that the name matches the original package. Note: Setting the distribution package’s name to match the original component package’s name allows AutoPkg to continue to work with the distribution installer package. To assist folks who want to use this processor, but don’t want to rewrite their existing .pkg recipes, I’ve written an example recipe to assist with this: the .distpkg recipe. The .distpkg recipe uses the DistributionPackageCreator processor and is designed to be placed in the AutoPkg workflow between a .pkg recipe and whatever else came next. In this case, the .pkg recipe would be a parent recipe for the .distpkg recipe.
In turn, the .distpkg recipe would be used as the parent recipe for whatever came next in the workflow. A good example would be if you wanted to create a signed distribution package. In that case, you could combine a .pkg recipe, a .distpkg recipe and a .sign recipe into the same workflow to produce a signed distribution package, which should meet all the necessary requirements to install the package via an MDM command. For those who want to use .distpkg recipes, there is an example recipe available via the link below: https://github.com/autopkg/rtrouton-recipes/blob/master/SharedProcessors/Example.distpkg.recipe
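For readers who just want to see the shape of the rename/productbuild/rename sequence described above, here is a rough Python sketch of those three steps; it is illustrative only (not the DistributionPackageCreator source), and the package path is a placeholder.

# Sketch of the three steps described above (illustrative only;
# not the DistributionPackageCreator source code).
import os
import subprocess

pkg_path = "/path/to/package_name_here.pkg"  # placeholder path
component_path = pkg_path.replace(".pkg", "-component.pkg")

# 1. Rename the AutoPkg-generated component package.
os.rename(pkg_path, component_path)

# 2. Wrap the component package in a new distribution package.
subprocess.run(
    ["/usr/bin/productbuild", "--package", component_path, pkg_path],
    check=True,
)

# 3. The distribution package now carries the original name, so later
#    steps (for example a .sign recipe) can keep using pkg_path.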
8585
dbpedia
1
18
https://maclabs.jazzace.ca/2019/09/14/core-or-custom-autopkg-processors.html
en
Anthony’s Mac Labs Blog
[]
[]
[]
[ "" ]
null
[]
2019-09-14T00:00:00
Anthony’s Mac Labs Blog : Anthony Reimer’s blog for Mac Admins that shares what he’s learned recently
null
Posted 2019 September 14 The core AutoPkg processors pack a lot of punch. Everyone who uses AutoPkg depends on them. But sometimes you need something more — or, if you know how to write Python, you can see a much easier and/or elegant solution if you just write some code. The Microsoft Office sets of recipes, which I have written about in many previous posts, provide examples of how you can do the same task with a custom processor or without. This post will look at downloading and gathering version information using both methods. As in previous Office recipe posts, I will refer to three different recipe solutions: The core AutoPkg recipes [github.com/autopkg/recipes in the MSOfficeUpdates folder]; Rich Trouton’s recipes [github.com/autopkg/rtrouton-recipes in product-specific folders whose name starts with “Microsoft” or “Office”, with child recipes for Munki from Ben Toms in the datajar-recipes repo], excluding the recipes for the full Office Suite; The “SKUless” recipes in Allister Banks’ personal (non-project) GitHub repo [github.com/arubdesu/office-recipes]. In previous articles, I referred to these as the core, rtrouton, and arubdesu recipes respectively, so I will continue that usage here. However, you may notice that I have added significant qualifiers this time when I specified which recipes are being referenced, primarily to keep the discussion tidier. I am only referring to the SKUless recipes in the arubdesu/office-recipes (the ones designed to download the entire suite) because they offer an approach that is different than the other two major recipe sets and thus are useful for study. (The remainder of that repo is a smörgåsbord of different techniques, sharing code in some cases with the core recipes.) Conversely, I’ve taken the rtrouton Office Suite recipes out of the mix because they are essentially rebranded arubdesu SKUless recipes; the product-specific recipes have a unified approach that is different than the Suite and the core recipes. So with those qualifiers out of the way, let’s look at how each set downloads the desired Office apps. Downloading and Processors The most common workflow we would see in any download recipe is: Determine the URI of the item we want to download; Download a copy of the item (if we don’t have the current version in hand already); Check the code signature of the downloaded item. The aforementioned recipes for Office all conform to that. They also do the right thing by inserting an EndOfCheckPhase processor in-between Steps 2 and 3 to properly handle running the recipe with the -c or --check option. The difference is that the core AutoPkg recipes use a custom-built processor to determine the download, and the arubdesu and rtrouton recipes use a core AutoPkg processor. So what are some of the advantages and disadvantages to using a custom processor over core processors (and vice versa)? 
Custom processor advantages: can deal with complex/unique download and/or versioning situations; customized for that product; can be coded to use human-friendly Input values; can be more efficient; allows addition of features not currently covered by core processors. Custom processor disadvantages: requires knowledge of Python to write a custom processor; requires knowledge of Python or a good testing scheme to audit a custom processor for trust; if you can’t write Python and the custom processor requires an update, you have to wait for someone else to do it. Core processor advantages: processors have already been vetted by hundreds of users; processors are well-documented (including changes) and perform common tasks; the recipe author does not need to be able to code in Python; easier to audit recipes for trust (especially if you don’t know Python). Core processor disadvantages: limited by what existing processors can do; may require extra steps to do the same thing as a custom processor does (if possible at all); often less efficient, code-wise (if you care about such things). Sidebar: While not applicable in this case, there is another variety of processor called a Shared Processor. It is a custom processor (usually general-purpose in nature) that is not part of the Core processors but is posted in GitHub and meant to be shared amongst recipes. Its advantages and disadvantages sit between Core and Custom. For more information on Shared Processors, see the AutoPkg wiki. In this case, the reason these recipes selected a custom processor over a core processor or vice versa boiled down to the source used to determine the location of the desired download. What’s Your Source? When writing AutoPkg recipes, we want as authoritative a source as possible for our downloads (and versioning, for that matter). If the application has an updating mechanism built in, our recipes are less likely to break if we use the same data source as that mechanism. This explains the presence of the GitHubReleasesInfoProvider and SparkleUpdateInfoProvider in the core processors. Both of those parse an update feed which will provide appropriate download links and version information for downloads hosted by GitHub or managed by Sparkle respectively. Microsoft rolls their own update mechanism: Microsoft AutoUpdate (MAU). The core recipes figured out how to parse the feed that MAU uses in order to download the software requested by the user — definitely an authoritative source. Using this feed gave the authors a lot of flexibility in supporting test builds such as Insider Slow and Insider Fast. Basically, as long as the processor authors were willing to write the code to support selecting those options via Input variables, users could access them with AutoPkg. This accounts for the large number of lines of code in the core recipes’ processor. This also gives the recipe user the most straightforward usage: they can use a combination of meaningful words like “Production”, “latest”, and “Excel2019” as input values to direct what to download. While the original Office 2011 recipes focussed on updaters (expecting that you would be manually downloading the full installer from your volume license portal and deploying that first), the current set of recipes supports downloading full installers for the most common individual apps. (A full chart is available in my May 2019 post.) The rtrouton and arubdesu recipes use a different source, but arguably just as authoritative.
Microsoft has assigned a number to each product in its arsenal (called an FWLink), such that if you type https://go.microsoft.com/fwlink/?linkid= and then the appropriate 6- or 7-digit FWLink number into your browser, it will download the installer for the most current version of the appropriate product.[1] The rtrouton and arubdesu recipes leverage this, and can therefore use the core URLDownloader processor. This methodology came in handy during the transition to Office 365/2019, when new FWLink numbers came into existence and the numbers you had been using may or may not have been pointing to the variant (2016 or 365/2019) that you needed or expected. With the arubdesu SKUless recipes, you could just change one input key in your override to download the correct product. In contrast, the core recipes required code changes to the custom processor. To summarize, here’s how each recipe set obtains their download: Download Collects Via Source core Custom processor Microsoft AutoUpdate XML rtrouton Core processor Microsoft FWLink arubdesu Core processor Microsoft FWLink Versioning The next thing to look at is obtaining version information for your download. There is a bit of a difference of opinion in the community about where in the recipe chain you should collect such information. From a purely philosophical point of view, it has been my position that download recipes should just do the steps I outlined earlier, and the AutoPkg documentation generally supports this stance: download recipes download, pkg recipes package, etcetera. Since most pkg recipes add version information to the package name, it is common to collect that information in the pkg recipe. But if you use a management system like Munki that can install items using formats other than packages (e.g., from an app inside a disk image), a pkg recipe may not be necessary. In those cases, collecting version information inside the download recipe seems sensible. It’s because of this that I have softened my stance on this issue, since one of the real powers of AutoPkg is feeding your management system. Collecting version information in a download recipe may add inefficiency, but it’s one less thing other users have to worry about when writing a child recipe for their management system. Regardless, you will see version information being collected in both download and pkg recipes out in the wild. Let’s look at how the three sets of Office recipes we are examining collect version information: Versioning Collects Via Source Format Recipe core Custom processor Microsoft AutoUpdate XML 16.x.build download rtrouton Core processors pkg contents 16.x.build pkg arubdesu Custom processor macadmins.software XML 16.x.x download Microsoft provides their downloads in pkg format[2] — not even wrapped in a disk image — and these do not have application version information available to be easily parsed (e.g., by the Versioner processor). So we either need another source or we have to do some spelunking. In the case of the core recipes, the MAU XML file that provided the download link also has a version number field, so the custom processor picks that information up along the way — that’s a sensible, efficient way to do it. The other two recipe sets do not parse that XML file, so they need another method. The arubdesu recipes chose to write a custom processor whose sole raison d’être is to collect the version information. 
It parses a different XML file, manually being maintained by Paul Bowden of Microsoft, that gives the simpler 16.x.x version number, and since Microsoft doesn’t do silly things like have more than one release of a point update (like a particularly fruit company with their OS updates), this value should also work well with management systems. The main objection I’ve heard to the use of this source for version numbers is that it is manually (not automatically) generated. That means it could be out of sync with the actual package you are downloading. In both the core recipes and arubdesu download recipes, gathering the version number via a custom processor allows those recipes to name the package with the version number included.[3] This is why the arubdesu download recipe gathers the version information before actually downloading the installer. For the core recipes, both those functions are within the same processor, so from a user perspective they happen simultaneously. The rtrouton recipes take another common approach: examine the download and get the version information from there. As long as the vendor hasn’t done something stupid with version numbers (by commission or omission), this is the most authoritative source. In the case of the main Office suite apps (Word, Excel, PowerPoint, etc.), you have to dig down a fair distance into the installer to get the version information, but it is there and it is in a repeatable, specific location. And what else is AutoPkg for if not to automate repetitive tasks? Rich cleverly figured out a way to extract that information using just the core processors. As an example, let’s take a look at his steps to download Microsoft Excel 365 and the processors he used: Step Recipe Processor Notes 1 download URLDownloader download pkg installer; name it Microsoft_Excel.pkg by default 2 EndOfCheckPhase included for those using the --check option 3 CodeSignatureVerifier verifies code signature of download 4 pkg FlatPkgUnpacker unpacks the installer into downloads/unpack directory inside the recipe cache 5 FileFinder find the filename of the pkg installer that has the Excel app inside of it 6 PkgPayloadUnpacker unpack the payload of the pkg installer located in the previous step into downloads/payload 7 Versioner extract the version information from the Excel app revealed by the previous processor (16.x.buildnumber format)[4] 8 PkgCopier copy the pkg originally downloaded, renaming it with the version number appended 9 PathDeleter delete the originally-downloaded pkg and all the unpacked versions, leaving just the renamed pkg The split between .download and .pkg recipes makes great sense here. The download recipe does fetch a pkg, but it is not in the desired format for Rich’s management system. So if you don’t need version information, you could use his download recipe as a parent. If you do, the pkg recipe can be your parent. And since the pkg recipes only use core processors, you don’t have to write any Python. Take Your Pick
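Whichever route you pick, it helps to know how little scaffolding a custom processor actually needs. The sketch below shows the general shape only (class attributes for description, input_variables and output_variables, plus a main() method); the class and variable names are made up and it is not one of the Office processors.

#!/usr/local/autopkg/python
# General shape of a custom AutoPkg processor (names are made up;
# this is not one of the Office recipe processors).
from autopkglib import Processor, ProcessorError

__all__ = ["ExampleVersionProvider"]


class ExampleVersionProvider(Processor):
    description = "Illustrative skeleton that fills in a version string."
    input_variables = {
        "product_name": {
            "required": True,
            "description": "Human-friendly product name to look up.",
        },
    }
    output_variables = {
        "version": {"description": "Version string found for the product."},
    }

    def main(self):
        product = self.env["product_name"]
        if not product:
            raise ProcessorError("product_name was empty")
        # A real processor would parse an update feed here; the skeleton
        # just sets a dummy value so the structure is clear.
        self.env["version"] = "0.0.0"


if __name__ == "__main__":
    PROCESSOR = ExampleVersionProvider()
    PROCESSOR.execute_shell()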
8585
dbpedia
0
42
https://managingosx.wordpress.com/category/python/
en
Python – Managing OS X
https://s0.wp.com/i/blank.jpg
https://s0.wp.com/i/blank.jpg
[ "https://managingosx.wordpress.com/wp-content/uploads/2014/04/xcode.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://s2.wp.com/i/logo/wpcom-gray-white.png", "https://pixel.wp.com/b.gif?v=noscript" ]
[]
[]
[ "" ]
null
[]
2023-07-20T03:40:17-07:00
Posts about Python written by GregN
en
https://s1.wp.com/i/favicon.ico
Managing OS X
https://managingosx.wordpress.com/category/python/
Overview Here are some early notes on making and restoring a High Sierra deployment image to an iMac Pro. “Wait, I thought imaging was dead! Especially imaging the iMac Pro with Secure Boot!” you may be thinking. My reply: “We’ll see, won’t we?” It’s early days here: we’re experimenting. Our experiments might lead to dead ends, or they might lead to useful results. Continue reading “Early notes on deploying images to iMac Pro” → If you will be attending my session at MacADUK 2017, you might find it useful to have copies of the sample Python code and scripts I’ll be talking about and demonstrating. I’ve set up a GitHub repo. The sample code is basically complete, but I might make some minor changes over the next several days. You can download the code samples here: https://github.com/gregneagle/macaduk2017/archive/master.zip or if you are familiar with Git, you can clone them locally: git clone https://github.com/gregneagle/macaduk2017.git Hope to see you in London! Like many people tasked with managing OS X/macOS machines, I use VMware Fusion to do a lot of testing. Fusion enables me to test in various versions of OS X, and to easily make changes and revert to a prior state. It’s a great tool. For some of the testing I do, it’s important to be able to quickly and easily build a VM that is configured just like the “real” machines I manage. There are a few way to do that. Since we build our machines by booting into a NetBoot image and using Graham Gilbert’s excellent Imagr (https://github.com/grahamgilbert/imagr) to restore an image, it’s great that we can also boot Fusion VMs from a NetBoot image. Continue reading “Stupid Tricks with createOSXinstallPkg and VMware Fusion” → A few days ago I made a simple tool for building packages available: munkipkg. https://github.com/munki/munki-pkg I got many comments and suggestions for additional features and all sorts of cool additions. Some have even been added to the tool already. But I would like to keep munkipkg a pretty simple, basic tool. The Luggage (https://github.com/unixorn/luggage) has been around for a while; if munkipkg is too simple for your needs, please have look at that. I also suggested to several people that if they had more complex needs than munkipkg could handle, it might make more sense to use autopkg, which supports very complex, customizable workflows. I could tell by the awkward silence that my suggestion was confusing to some — that they had trouble grokking how to use autopkg to build packages “from scratch”, using files and scripts on the local disk. So I created a GitHub repo demonstrating how to use autopkg in this manner. It’s here: https://github.com/gregneagle/autopkg-packaging-demo munkipkg comes with three demo package projects. Two of the packages install files, the third is a “payload-free” package that simply runs a script when installed. The autopkg-packaging-demo duplicates these packages, but uses autopkg to build them instead of munkipkg. (One could also imagine building these packages using either tool: the payload and scripts directories would be the same — in other words, you could have both a build-info.plist for munkipkg and a recipe for autopkg in the same package project directory.) Assuming you have autopkg installed, you can `git clone` the repo, or download and expand the zip file, and run the autopkg recipes within. I hope this clears up some confusion, and sparks some new ideas! 
https://github.com/munki/munki-pkg munkipkg is a simple tool for building packages in a consistent, repeatable manner from source files and scripts in a project directory. Files, scripts, and metadata are stored in a way that is easy to track and manage using a version control system like git. Another tool that solves a similar problem is Joe Block’s The Luggage (https://github.com/unixorn/luggage). If you are happily using The Luggage, you can probably safely ignore this tool. Though this tool may eventually be added to the set of tools installed with the Munki command-line tools, it’s not currently tied to Munki and can be run completely standalone. Learn more here. This post is based on a column I wrote for MacTech magazine in 2012. MacTech used to make older columns available online, but they haven’t done that for the past several years for some reason. I’m planning to go through my older columns and dust off and republish some that I think are still relevant or useful. Recently, we built a command-line tool using Python and the PyObjC bridge to control display mirroring. PyObjC supports a lot of OS X frameworks “out-of-the-box”, and accessing them from Python can be as simple as: import CoreFoundation But what if the problem you want to solve requires a framework that isn’t included with the PyObjC bindings? It turns out that you can create your own bindings. In this post we’ll explore this aspect of working with Python and OS X frameworks. OUR SAMPLE PROBLEM In my organization, we sometimes have a need to set displays to a certain ColorSync profile. The ColorSync profile to use for a given display is a per-user preference, so if you need to set it for all users of a machine, you can’t just manually set it while logged in as one user and call it good. If you are managing display profiles for a group of machines, or a conference room machine that has network logins, you need a way to manage display profiles for all users. Using MCX or doing some defaults scripting might come to mind. Let’s look at that possibility. Continue reading “Accessing More Frameworks with Python” → This post is based on a column I wrote for MacTech magazine in 2012. MacTech used to make older columns available online, but they haven’t done that for the past several years for some reason. I’m planning to go through my older columns and dust off and republish some that I think are still relevant or useful. Cocoa-Python, also referred to as PyObjC, is a set of Python modules and glue code that allow Python programmers to access many of Apple’s Cocoa frameworks. This allows you to do many things from Python scripting that might otherwise require compiling code in C/Objective-C. To access the Cocoa frameworks, you import them by name, just as you might import a regular Python module. A quick example: the CoreFoundation framework contains methods to work with user preferences, a bit like the /usr/bin/defaults tool. We can use the CFPreferencesCopyAppValue function in Python simply by importing CoreFoundation, and then calling it like we would a function from a “regular” Python module: #!/usr/bin/python import CoreFoundation print CoreFoundation.CFPreferencesCopyAppValue( "HomePage", "com.apple.Safari") If you run the above code, it will print the current home page you have set in Safari. We’ve successfully used an OS X framework from Python! Continue reading “Command-line tools via Python and Cocoa” →
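The bridge works in the write direction as well; here is a small sketch (assuming a Python with PyObjC available, and using Python 3 syntax rather than the Python 2 of the original column) that sets a preference and reads it back.

#!/usr/bin/env python3
# Sketch: writing and re-reading a preference via the CoreFoundation
# bridge (assumes PyObjC is available; Python 3 syntax).
import CoreFoundation

CoreFoundation.CFPreferencesSetAppValue(
    "HomePage", "https://example.com", "com.apple.Safari"
)
CoreFoundation.CFPreferencesAppSynchronize("com.apple.Safari")  # flush to disk

print(CoreFoundation.CFPreferencesCopyAppValue("HomePage", "com.apple.Safari"))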
8585
dbpedia
3
17
https://www.pling.com/s/ALL/browse%3Fcat%3D388%26page%3D44%26ord%3Dlatest
en
Programming
https://www.pling.com/st…g/pling-logo.png
https://www.pling.com/st…g/pling-logo.png
[ "https://pik.pling.com/pik.php?idsite=32&rec=1" ]
[]
[]
[ "" ]
null
[ "pling.com" ]
null
Planet Linux'ing Groups
en
/favicon.ico
https://www.pling.com/s/ALL/browse?cat=388&page=44&ord=latest
https://www.pling.com/s/ALL/browse?cat=388&page=44&ord=latest
8585
dbpedia
0
15
https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
en
setuptools 72.2.0.post20240814 documentation
https://setuptools.pypa.…-symbol-only.svg
https://setuptools.pypa.…-symbol-only.svg
[ "https://setuptools.pypa.io/en/latest/_static/logo.svg" ]
[]
[]
[ "" ]
null
[]
null
en
../_static/logo-symbol-only.svg
null
src-layout¶ The project should contain a src directory under the project root and all modules and packages meant for distribution are placed inside this directory: project_root_directory ├── pyproject.toml # AND/OR setup.cfg, setup.py ├── ... └── src/ └── mypkg/ ├── __init__.py ├── ... ├── module.py ├── subpkg1/ │ ├── __init__.py │ ├── ... │ └── module1.py └── subpkg2/ ├── __init__.py ├── ... └── module2.py This layout is very handy when you wish to use automatic discovery, since you don’t have to worry about other Python files or folders in your project root being distributed by mistake. In some circumstances it can be also less error-prone for testing or when using PEP 420-style packages. On the other hand you cannot rely on the implicit PYTHONPATH=. to fire up the Python REPL and play with your package (you will need an editable install to be able to do that). flat-layout¶ (also known as “adhoc”) The package folder(s) are placed directly under the project root: project_root_directory ├── pyproject.toml # AND/OR setup.cfg, setup.py ├── ... └── mypkg/ ├── __init__.py ├── ... ├── module.py ├── subpkg1/ │ ├── __init__.py │ ├── ... │ └── module1.py └── subpkg2/ ├── __init__.py ├── ... └── module2.py This layout is very practical for using the REPL, but in some situations it can be more error-prone (e.g. during tests or if you have a bunch of folders or Python files hanging around your project root). To avoid confusion, file and folder names that are used by popular tools (or that correspond to well-known conventions, such as distributing documentation alongside the project code) are automatically filtered out in the case of flat-layout: Reserved package names Reserved top-level module names Warning If you are using auto-discovery with flat-layout, setuptools will refuse to create distribution archives with multiple top-level packages or modules. This is done to prevent common errors such as accidentally publishing code not meant for distribution (e.g. maintenance-related scripts). Users that purposefully want to create multi-package distributions are advised to use Custom discovery or the src-layout. There is also a handy variation of the flat-layout for utilities/libraries that can be implemented with a single Python file: single-module distribution¶ A standalone module is placed directly under the project root, instead of inside a package folder: project_root_directory ├── pyproject.toml # AND/OR setup.cfg, setup.py ├── ... └── single_file_lib.py Finding simple packages¶ Let’s start with the first tool. find: (find_packages()) takes a source directory and two lists of package name patterns to exclude and include, and then returns a list of str representing the packages it could find. To use it, consider the following directory: mypkg ├── pyproject.toml # AND/OR setup.cfg, setup.py └── src ├── pkg1 │ └── __init__.py ├── pkg2 │ └── __init__.py ├── additional │ └── __init__.py └── pkg └── namespace └── __init__.py To have setuptools to automatically include packages found in src that start with the name pkg and not additional: setup.cfg [options] packages=find: package_dir= =src [options.packages.find] where=src include=pkg* # alternatively: `exclude = additional*` Note pkg does not contain an __init__.py file, therefore pkg.namespace is ignored by find: (see find_namespace: below). setup.py setup( # ... packages=find_packages( where='src', include=['pkg*'], # alternatively: `exclude=['additional*']` ), package_dir={"": "src"} # ... 
) Note pkg does not contain an __init__.py file, therefore pkg.namespace is ignored by find_packages() (see find_namespace_packages() below). pyproject.toml [tool.setuptools.packages.find] where=["src"] include=["pkg*"]# alternatively: `exclude = ["additional*"]` namespaces=false Note When using tool.setuptools.packages.find in pyproject.toml, setuptools will consider implicit namespaces by default when scanning your project directory. To avoid pkg.namespace from being added to your package list you can set namespaces = false. This will prevent any folder without an __init__.py file from being scanned. Important include and exclude accept strings representing glob patterns. These patterns should match the full name of the Python module (as if it was written in an import statement). For example if you have util pattern, it will match util/__init__.py but not util/files/__init__.py. The fact that the parent package is matched by the pattern will not dictate if the submodule will be included or excluded from the distribution. You will need to explicitly add a wildcard (e.g. util*) if you want the pattern to also match submodules. Finding namespace packages¶ setuptools provides find_namespace: (find_namespace_packages()) which behaves similarly to find: but works with namespace packages. Before diving in, it is important to have a good understanding of what namespace packages are. Here is a quick recap. When you have two packages organized as follows: /Users/Desktop/timmins/foo/__init__.py /Library/timmins/bar/__init__.py If both Desktop and Library are on your PYTHONPATH, then a namespace package called timmins will be created automatically for you when you invoke the import mechanism, allowing you to accomplish the following: >>> import timmins.foo >>> import timmins.bar as if there is only one timmins on your system. The two packages can then be distributed separately and installed individually without affecting the other one. Now, suppose you decide to package the foo part for distribution and start by creating a project directory organized as follows: foo ├── pyproject.toml # AND/OR setup.cfg, setup.py └── src └── timmins └── foo └── __init__.py If you want the timmins.foo to be automatically included in the distribution, then you will need to specify: setup.cfg [options] package_dir= =src packages=find_namespace: [options.packages.find] where=src find: won’t work because timmins doesn’t contain __init__.py directly, instead, you have to use find_namespace:. You can think of find_namespace: as identical to find: except it would count a directory as a package even if it doesn’t contain __init__.py file directly. setup.py setup( # ... packages=find_namespace_packages(where='src'), package_dir={"": "src"} # ... ) When you use find_packages(), all directories without an __init__.py file will be ignored. On the other hand, find_namespace_packages() will scan all directories. pyproject.toml [tool.setuptools.packages.find] where=["src"] When using tool.setuptools.packages.find in pyproject.toml, setuptools will consider implicit namespaces by default when scanning your project directory. After installing the package distribution, timmins.foo would become available to your interpreter. Warning Please have in mind that find_namespace: (setup.cfg), find_namespace_packages() (setup.py) and find (pyproject.toml) will scan all folders that you have in your project directory if you use a flat-layout. If used naïvely, this might result in unwanted files being added to your final wheel. 
For example, with a project directory organized as follows: foo ├── docs │ └── conf.py ├── timmins │ └── foo │ └── __init__.py └── tests └── tests_foo └── __init__.py final users will end up installing not only timmins.foo, but also docs and tests.tests_foo. A simple way to fix this is to adopt the aforementioned src-layout, or make sure to properly configure the include and/or exclude accordingly. Tip After building your package, you can have a look if all the files are correct (nothing missing or extra), by running the following commands: tar tf dist/*.tar.gz unzip -l dist/*.whl This requires the tar and unzip to be installed in your OS. On Windows you can also use a GUI program such as 7zip. pkg_resource style namespace package¶ This is the method setuptools directly supports. Starting with the same layout, there are two pieces you need to add to it. First, an __init__.py file directly under your namespace package directory that contains the following: __import__("pkg_resources").declare_namespace(__name__) And the namespace_packages keyword in your setup.cfg or setup.py: setup.cfg [options] namespace_packages=timmins setup.py setup( # ... namespace_packages=['timmins'] ) And your directory should look like this foo ├── pyproject.toml# AND/OR setup.cfg, setup.py └── src └── timmins ├── __init__.py └── foo └── __init__.py Repeat the same for other packages and you can achieve the same result as the previous section.
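Before building anything, you can sanity-check what the discovery helpers described above will pick up by calling them directly from the project root; a small sketch, assuming the src-layout example shown earlier:

# Quick discovery check from the project root (sketch, assuming the
# src-layout example above).
from setuptools import find_namespace_packages, find_packages

# Skips folders without __init__.py, so pkg.namespace is ignored.
print(find_packages(where="src", include=["pkg*"]))

# Treats any folder as a package, so implicit namespace packages show up.
print(find_namespace_packages(where="src"))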
8585
dbpedia
2
6
https://dev.splunk.com/enterprise/docs/releaseapps/packageapps/packagingtoolkit
en
[]
[]
[]
[ "" ]
null
[]
null
en
null
8585
dbpedia
3
40
https://python-poetry.org/
en
Python dependency management and packaging made easy
https://python-poetry.or…n-origami-32.png
https://python-poetry.or…n-origami-32.png
[ "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg", "https://python-poetry.org/images/logo-origami.svg" ]
[]
[]
[ "" ]
null
[]
null
Python dependency management and packaging made easy
/images/favicon-origami-32.png
https://python-poetry.org/
Libraries This chapter will tell you how to make your library installable through Poetry. Versioning Poetry requires PEP 440-compliant versions for all projects. While Poetry does not enforce any release convention, it used to encourage the use of semantic versioning within the scope of PEP 440 and supports version constraints that are especially suitable for semver. Note As an example, 1.0.0-hotfix.1 is not compatible with PEP 440. Configuration Poetry can be configured via the config command (see more about its usage here) or directly in the config.toml file that will be automatically created when you first run that command. Repositories Poetry supports the use of PyPI and private repositories for discovery of packages as well as for publishing your projects. By default, Poetry is configured to use the PyPI repository, for package installation and publishing. So, when you add dependencies to your project, Poetry will assume they are available on PyPI. This represents most cases and will likely be enough for most users. Private Repository Example Installing from private package sources By default, Poetry discovers and installs packages from PyPI.. Dependency specification Dependencies for a project can be specified in various forms, which depend on the type of the dependency and on the optional constraints that might be needed for it to be installed. Version constraints Caret requirements Caret requirements allow SemVer compatible updates to a specified version. Plugins Poetry supports using and building plugins if you wish to alter or expand Poetry’s functionality with your own. For example if your environment poses special requirements on the behaviour of Poetry which do not apply to the majority of its users or if you wish to accomplish something with Poetry in a way that is not desired by most users. In these cases you could consider creating a plugin to handle your specific logic.. Contributing to Poetry First off, thanks for taking the time to contribute! The following is a set of guidelines for contributing to Poetry on GitHub. FAQ Why is the dependency resolution process slow? While the dependency resolver at the heart of Poetry is highly optimized and should be fast enough for most cases, with certain sets of dependencies it can take time to find a valid solution. This is due to the fact that not all libraries on PyPI have properly declared their metadata and, as such, they are not available via the PyPI JSON API..
8585
dbpedia
1
6
https://github.com/AutoPackAI/autopack
en
AutoPackAI/autopack: Python library and CLI for AI tools
https://opengraph.githubassets.com/f6a945573a4e720beea18ff8ea0059132d5f8ce3d7eb1c886a6a843406733365/AutoPackAI/autopack
https://opengraph.githubassets.com/f6a945573a4e720beea18ff8ea0059132d5f8ce3d7eb1c886a6a843406733365/AutoPackAI/autopack
[]
[]
[]
[ "" ]
null
[]
null
Python library and CLI for AI tools. Contribute to AutoPackAI/autopack development by creating an account on GitHub.
en
https://github.com/fluidicon.png
GitHub
https://github.com/AutoPackAI/autopack
AutoPack is a Python library and CLI designed to interact with the AutoPack repository, a collection of tools for AI. It is designed to be agent-neutral with a simple interface. You can install AutoPack using pip: pip install autopack-tools AutoPack provides both a CLI and a Python library for interacting with the AutoPack repository. Search for Packs: autopack search {query} Install Packs: autopack install {Pack ID} The autopack Python library allows you to work with Packs programmatically. Key functionalities include: Search for Packs: pack_search(query) Get a Pack: get_pack(pack_id) Get all installed Packs: get_all_installed_packs() Install a Pack: install_pack(pack_id) Select packs using an LLM: select_packs(task_description, llm) For detailed examples and more information, refer to the AutoPack documentation. We welcome contributions to the AutoPack ecosystem. Here are some ways you can help: Create new tools! Expand the AutoPack repository by developing and submitting your own tools. Share your ideas and solutions with the AutoPack community. Try it out for yourself: Test AutoPack in your projects and provide feedback. Share your experiences, report bugs, and suggest improvements by opening issues on GitHub. Contribute code: Help improve AutoPack by opening pull requests. You can choose to work on unresolved issues or implement new features that you believe would enhance the functionality of the library. Please note that the AutoPack library is intentionally designed to be compact and straightforward. We appreciate your contributions and look forward to your involvement in making AutoPack a vibrant and valuable resource for the autonomous AI community.
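A minimal sketch of the library calls listed above. Importing get_pack from the top-level autopack module matches the project README; importing the other helpers the same way is an assumption, and the pack ID is the example ID used in the documentation.

# Sketch only: assumes `pip install autopack-tools` and network access to the
# AutoPack repository.
from autopack import get_pack, install_pack, pack_search

results = pack_search("web search")            # search the repository
pack_id = "erik-megarad/my_packs/web_search"   # author/repo/tool ID format from the docs
install_pack(pack_id)                          # download and install the pack
ToolClass = get_pack(pack_id)                  # returns the tool class
tool = ToolClass()                             # instantiate it for use in an agent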
8585
dbpedia
2
50
https://forum.elivelinux.org/t/calamares-vs-eliveinstaller-vs-others-brainstorming/1839
en
Calamares VS eliveinstaller VS others, brainstorming
https://forum.elivelinux…1e6c78d8ac6b.png
https://forum.elivelinux…1e6c78d8ac6b.png
[ "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/rebel450/48/2737_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/rebel450/48/2737_2.png", "https://forum.elivelinux.org/images/emoji/apple/wink.png?v=9", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/stoppy98/48/2337_2.png", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/stoppy98/48/2337_2.png", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/images/emoji/apple/thinking.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://forum.elivelinux.org/uploads/default/original/1X/9561182338289c3cf885635afd0549997b298477.gif?v=12", "https://forum.elivelinux.org/images/emoji/apple/thinking.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/images/emoji/apple/thinking.png?v=12", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://github.githubassets.com/favicons/favicon.svg", "https://avatars1.githubusercontent.com/u/1890328?s=280&v=4", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://forum.elivelinux.org/uploads/default/original/1X/ac29afbd127f0e054c55ada4bc676ed09bde96aa.gif?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/rebel450/48/2737_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/rebel450/48/2737_2.png", "https://forum.elivelinux.org/uploads/default/original/1X/ac29afbd127f0e054c55ada4bc676ed09bde96aa.gif?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/stoppy98/48/2337_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/rebel450/48/2737_2.png", "https://forum.elivelinux.org/images/emoji/apple/slight_smile.png?v=12", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/triantares/48/2550_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thetechrobo/48/3394_2.png", "https://forum.elivelinux.org/user_avatar/forum.elivelinux.org/thanatermesis/48/3_2.png", "https://main.elivecd.org/images/misc/e-tree-crop.png", "https://main.elivecd.org/images/misc/logo_e_big_static.png" ]
[]
[]
[ "" ]
null
[]
2020-02-18T09:20:57+00:00
Seriously, a Debian based distro without Synaptik oob. No offense, but you are a bit crzy, no ? Especially cuz you deny Calamares, too. Am afraid am getting tired of you, @Thanatermesis, because your behavior ain't …
en
https://forum.elivelinux…5b03_2_32x32.png
Elive Forums
https://forum.elivelinux.org/t/calamares-vs-eliveinstaller-vs-others-brainstorming/1839
Seriously, a Debian based distro without Synaptik oob. No offense, but you are a bit crzy, no ? Especially cuz you deny Calamares, too. Am afraid am getting tired of you, @Thanatermesis, because your behavior ain't any easy to support, just to be honest. You even seem not to think about the time we spend here for the project. CC mentioned to @yoda, @triantares, @Thanatermesis is not calamares a distro installer? if yes, i don't see the need to look at calamares because elive has its own (needed) installer just trying to install calamares in Live mode (fortunately its on the repos), first impressions: it takes some considerable extra size / dependencies to install doesn't launches at all, errors about conf files the only thing that I can see is its icon with name "install system", so... @Rebel450 are you suggesting calamares as a replacement of the package manager ? (and so it is a package manager or a distro-installer?), in such case, how much stable and userfriendly it works ? @Rebel450 basically the problem with Calamares is very simple: it is not the elive installer, and elive needs the elive installer to use it will require many many work and so many bugs to fix, and bugs that will appear to fix later but again, since is not the elive installer it will not have the elive features that are needed the elive installer is easy, anybody can install elive with it, and if is not, then it can (and should) be improved don't think that I deny because no reasons now, you are free to "not believe me" and trying it yourself, install calamares in Live mode in a virtual machine and try to use it for install elive, maybe you succeed! but for sure you will not install a same elive as we have (not finetune steps, not smart questions about specific things like a needed EFI partition if needed, not upgrade mode, not migration mode, not users updating, not packages-to-maintain-installed, not nvidia drivers included, not cleanups to make the system lighter, not many many many things), because there will be many things missing @Rebel450 and just for mention these things to others to know: @triantares @yoda I think there is a point to be made though and that is: Elive installation scripts gives a linear, ad hoc impression which "calamares" doesn't. I think we can contemplate beautifying the installer into an overall single application, giving a more "in control" feeling and oversight. Using python (@Thanatermesis I know, I know!!) and webkit come to mind here...... hailing @stoppy98 there. 
Well, in fact is half linear half dynamic, and there comes the bigger problem about port it to other platform In other words, the installer of elive is "very smart" changing the behaviour during the installation, for example: I just did an install before in a vbox, on which it was using reiserfs before (from 3.8.4 which is available), but since i installed the 32-bit version which includes and older kernel where the reiserfs support is buggy, the installer already knew that and told me that is going to proceed with a clean install instead, the migration mode "code" then changed to a new install There's many smart things in the installer that cannot happen in other words, and which of course doesn't has interfaces for that, for example if you manually partition your system, other installers will simply proceed with the installation, but the elive installer checks if everything satisfy the correct installation, for example: if you are using an encrypted partition, it requires a /boot partition (so it warns you asking for it) if you are using a filesystem like reiser4 which grub cannot read, same thing if you booted the computer with UEFI and you don't have an EFI partition, it will require it if you are using a GPT disk and you don't have a BIOS partition, it will suggest (not required) to include it there's too many things that cannot be done in a traditional single-interface installer (unless many many work involved on it, which of course, i don't have the resources to make it) Yeah I agree. One thing that may also be annoying is the installation interrupting at times to ask you something. The best thing would be asking everything in the beginning, so the user just let the installation run, being free to mind his business and come back later with everything ready to reboot. I could try to help. Btw, we should move this topic Ok about these, but for example prompting for "ehy do you want this software" at the end could be avoided i think. Same for "ehy do you want to keep this stuff", or even "ehy why the hell are you using a vga, we're in 2020" could be asked earlier (am i wrong?) agree, and in fact many things has been improved for that in the past actually maybe we can also include: asking for hostname asking for which partition to install grub (not sure if this one is possible because maybe depends of some installed files in the installed system) its just some extra things in the big TODO's we can btw try to locate these exact messages and make a list, somewhere in Research , and so will be more easy to improve the installer based on the possible ones yep i think that there's a Calamares related thread around? not sure... in suggestions maybe is a good place Another big BIG issue by using calamares (which i don't say that is bad software) or other installer, is that I constantly read reports from installations and fastly improve the installe with fixes or usability improvements, only a few ones (the most important to tell) are mentioned on the changelogs of the releases This means basically: i know the code i know where to search i know where it can be the cause i know the language and im very fluent with it im entirely familiarized with the naming conventions and the style of the code in the case of using a different installer, how will be possible to maintain the application always updated with fixes and improvements and fastly included in the repo ? 
(oh btw, another unique feature: the elive installer automatically updates itself with minimal resources requirements (since doesn't uses apt) everytime an updated version is available on the repo) so in short, if we want a bugged, outdated and of course less featured installer, let's switch to calamares or any other remember that elive has made many of own apps when others doesnt' satisfy the needs, examples: a wrapper for arandr to support "remember", startup loading, and primary screens an entire touchpad configurator (do we have any available? not at all), with some amazing features too many other things that i dont remember mentions @Rebel450 LOL And using calamares will put us in the pond of all those other Ubuntu derivates.........says a guy that even felt "back home" when confronted with the latest Slackware installer. All laughs aside, I do think we could try and get all those "zenity" pop-ups (visually) under control. It's all a matter of perception ....UX. hey, its not all about a simple (single) message popup in zenity there's more than that! see this example from the old installers VS updated versions, more exactly about the "create a new user" step, it was before like that: POPUP insert an username verify that the chars are correct, or repeat POPUP insert a password verify correct characters POPUP repeat verify they are the same POPUP autologin ? POPUP admin privileges? which have been moved to a single one, not using zenity but "yad" instead: POPUP set all the users data (including more features which we didn't had before) - that's all, all in a single one same example with the first "welcome" window which checkboxes of advanced-settings and guided install etc... Good point. I was super annoyed when I couldn't do something because elive was installing and I had to be there to answer its questions before the battery of the laptop died (I didn't have the charger then.) Problem: @Thanatermesis doesn't know Python IIRC so he can't maintain it. Gitea's another option, it looks a heck ton like github Actually there's a publicly hosted server (in case you dont want to rent a very cheap (gitea's very light) server) for gitea and it's free at gitea.com. Couldn't there be a way to ask these questions at the start of install, put in a temporary file, and pipe it into user-manager? Or is it not designed for a headless mode? (Might be smart to add it, it'd be useful for IT people bulk-creating users) Here's a link to the english one Or EFLs...considering we're using enlightenment it'd just make everything look in sync... I rest my case. Tho we couldn't put the password in a temporary file, that would be insecure. I find myself wondering how other distros do it.
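For what it is worth, the "ask everything up front" idea floated in the thread could look roughly like this; the installer itself is not written in Python, so this is only an illustration of the flow, the question names are invented, and, as noted above, a password should not be written to a plain temporary file.

import json
import tempfile

# Gather all answers before the long-running installation steps begin.
answers = {
    "hostname": input("Hostname: "),
    "username": input("Username: "),
    "autologin": input("Enable autologin? [y/N] ").strip().lower().startswith("y"),
}

# Hand the answers to the later, non-interactive steps via a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(answers, fh)
    print("Answers stored in", fh.name)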
8585
dbpedia
1
13
https://pypi.org/project/autopack-tools/0.1.1/
en
autopack-tools
https://pypi.org/static/…er.abaf4b19.webp
https://pypi.org/static/…er.abaf4b19.webp
[ "https://pypi.org/static/images/logo-small.8998e9d1.svg", "https://pypi-camo.freetls.fastly.net/fbc6597ba932a390c0e63d6a8396ae0523f33874/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f35623161323961643937376237333963316434373962643163613264623935393f73697a653d3530", "https://pypi-camo.freetls.fastly.net/fbc6597ba932a390c0e63d6a8396ae0523f33874/68747470733a2f2f7365637572652e67726176617461722e636f6d2f6176617461722f35623161323961643937376237333963316434373962643163613264623935393f73697a653d3530", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/blue-cube.572a5bfb.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi.org/static/images/white-cube.2351a86c.svg", "https://pypi-camo.freetls.fastly.net/ed7074cadad1a06f56bc520ad9bd3e00d0704c5b/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6177732d77686974652d6c6f676f2d7443615473387a432e706e67", "https://pypi-camo.freetls.fastly.net/8855f7c063a3bdb5b0ce8d91bfc50cf851cc5c51/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f64617461646f672d77686974652d6c6f676f2d6668644c4e666c6f2e706e67", "https://pypi-camo.freetls.fastly.net/df6fe8829cbff2d7f668d98571df1fd011f36192/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f666173746c792d77686974652d6c6f676f2d65684d3077735f6f2e706e67", 
"https://pypi-camo.freetls.fastly.net/420cc8cf360bac879e24c923b2f50ba7d1314fb0/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f676f6f676c652d77686974652d6c6f676f2d616734424e3774332e706e67", "https://pypi-camo.freetls.fastly.net/524d1ce72f7772294ca4c1fe05d21dec8fa3f8ea/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f6d6963726f736f66742d77686974652d6c6f676f2d5a443172685444462e706e67", "https://pypi-camo.freetls.fastly.net/d01053c02f3a626b73ffcb06b96367fdbbf9e230/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f70696e67646f6d2d77686974652d6c6f676f2d67355831547546362e706e67", "https://pypi-camo.freetls.fastly.net/67af7117035e2345bacb5a82e9aa8b5b3e70701d/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f73656e7472792d77686974652d6c6f676f2d4a2d6b64742d706e2e706e67", "https://pypi-camo.freetls.fastly.net/b611884ff90435a0575dbab7d9b0d3e60f136466/68747470733a2f2f73746f726167652e676f6f676c65617069732e636f6d2f707970692d6173736574732f73706f6e736f726c6f676f732f737461747573706167652d77686974652d6c6f676f2d5467476c6a4a2d502e706e67" ]
[]
[]
[ "" ]
null
[]
2023-06-25T23:11:49+00:00
Package Manager for AI Agent tools
en
/static/images/favicon.35549fe8.ico
PyPI
https://pypi.org/project/autopack-tools/
AutoPack Tools AutoPack is a repository for tools that are specifically designed for AI Agents. The autopack Python package is designed to facilitate the installation and usage of tools hosted in the AutoPack repository. Tools in AutoPack are called Packs. Note This is still in the alpha stage. It's roughly at MVP level, and things will not work, features aren't complete, and things will change. Be forewarned. Installation Install the autopack package from PyPI using pip: pip install autopack Or Poetry: poetry add autopack Usage Pack IDs Each pack in the AutoPack repository is identified by a fully qualified path based on its GitHub repository. This format ensures uniqueness, prevents namespace collisions, and allows for easy identification of the source code location. Importantly, it enables us to uniquely refer to a pack while keeping pack names intuitive and understandable for an LLM. For example, the ID of a pack named web_search hosted in the GitHub repository erik-megarad/my_packs would be: erik-megarad/my_packs/web_search This format allows us to use the pack's name, web_search, within an Agent, making it convenient and straightforward to reference the desired tool. Manual Tool Installation You can manually install a pack using the following command: autopack install author/repo_name/tool_name Using with LangChain To use a tool with LangChain, you can retrieve it using the get_pack function from the autopack module: from autopack import get_pack tool = get_pack("author/repo_name/pack_name") # Add the tool to the 'packs' argument when instantiating your AgentExecutor agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=[tool()]) Using with Auto-GPT We are actively working on improving the integration with Auto-GPT. Stay tuned for updates! To use autopack with Auto-GPT, you can edit the file autogpt/main.py and add autopack to the COMMAND_CATEGORIES list. TODOs This project is still in its early stages, and there are several features and enhancements that need to be implemented. If you are interested and willing to contribute, the following list provides a good starting point: Tool search functionality within the CLI Tool search capability from Python Optional automatic pack installation in the get_pack function Tools for Agents to independently search for, install, and utilize other Tools Tool for utilizing the pack selection API of the AutoPack repository Optional contribution of feedback back to the AutoPack repository Development For information on how to contribute to AutoPack, please refer to the CONTRIBUTING.md file. Feel free to modify this README to provide more specific details about your project and its functionalities.
8585
dbpedia
2
11
https://www.activestate.com/blog/the-top-10-automl-python-packages-to-automate-your-machine-learning-tasks/
en
Top 10 AutoML Python packages to automate your machine learning tasks
https://cdn.activestate.…tools-Python.jpg
https://cdn.activestate.…tools-Python.jpg
[ "https://cdn.activestate.com/wp-content/uploads/2023/10/ActiveState.png", "https://cdn.activestate.com/wp-content/uploads/2021/03/AutoML-tools-Python-1024x512.jpg", "https://cdn.activestate.com//wp-content/uploads/2021/03/CRISP-DM.png", "https://cdn.activestate.com//wp-content/uploads/2021/03/InstallRuntime-1.png", "https://cdn.activestate.com//wp-content/uploads/2021/03/pandasProfiling.png", "https://cdn.activestate.com//wp-content/uploads/2021/03/LudwigSentiment.png", "https://cdn.activestate.com//wp-content/uploads/2021/03/LudwigExperiment.png", "https://cdn.activestate.com//wp-content/uploads/2021/03/NNI.png", "https://cdn.activestate.com/wp-content/uploads/2024/08/FinServRisks.jpg", "https://cdn.activestate.com/wp-content/uploads/2024/08/CloudSecurity.jpg", "https://cdn.activestate.com/wp-content/uploads/2024/08/FoxHenhouse.jpg" ]
[]
[]
[ "" ]
null
[ "Nicolas Bohorquez" ]
2021-03-18T21:49:12+00:00
Automate many of the most time and resource consuming machine learning tasks with these 10 best AutoML Python tools for ML engineers.
en
ActiveState
https://www.activestate.com/blog/the-top-10-automl-python-packages-to-automate-your-machine-learning-tasks/
#1–Pandas Profiling Pandas profiling allows you to perform a quick EDA with just a few lines of code, and it’s a useful way to start the AutoML process. The results are easy to read and share, but it won’t replace the detailed analysis that an experienced data scientist could produce from the same dataset. The EDA takes raw data and correlates datasets in addition to identifying variables, types, ranges, and missing values. Pandas profiling creates a report (from a pandas dataframe) that contains several descriptive statistics for each variable, as shown below: import pandas as pd from pandas_profiling import ProfileReport df = pd.read_csv("titanic.csv") profile = ProfileReport(df, title="Pandas Profiling Titanic Report") profile.to_file("eda_titanic.html") profile.to_file("eda_titanic.json") Figure 2: a variable descriptive analysis of the Titanic dataset using pandas profiling #2–Snorkel Snorkel is useful for classification tasks that start with data that is incomplete or that have a complete lack of target labels. As such, snorkel provides a set of tools for: Automatically labeling data Transforming data for data augmentation purposes Slicing data in order to monitor specific subsets of the dataset All of these come in handy in a variety of situations. For example, you may have a problem with a particular dataset because you have a set of variables but no target. It would take ages to label things by hand, but Snorkel can do it automatically using Labeling Functions (LFs), which are functions based on heuristic and programmatic rules that assign labels to datasets. This process is known as a weak supervision approach: from snorkel.labeling import labeling_function, LFAnalysis, PandasLFApplier from utils import load_spam_dataset df_train, df_test = load_spam_dataset() ABSTAIN = -1 SPAM = 1 @labeling_function() def check(x): return SPAM if "check" in x.text.lower() else ABSTAIN @labeling_function() def check_out(x): return SPAM if "check out" in x.text.lower() else ABSTAIN lfs = [check_out, check] applier = PandasLFApplier(lfs=lfs) L_train = applier.apply(df=df_train) LFAnalysis(L=L_train, lfs=lfs).lf_summary() In the previous example, we define two “toy” LFs that are applied to the spam dataset (the ABSTAIN and SPAM label constants follow the convention used in the Snorkel tutorials). The L_train dataframe contains the target value calculated from the LFs per row. A great use case for snorkel is the analysis of assigned labels, which includes measures for coverage, correct/incorrect and empirical accuracy. #3–MLBox Suppose that you have a clean dataset ready for some supervised learning.
In this case you can use a single box solution to: Preprocess some variables (like encoding the categorical ones or deal with missing values) Test some algorithms and tune the hyperparameters Well, all this work can be automated using MLBox, which can: Detect the kind of job required (regression or classification) Select the best algorithm Try several hyperparameter combinations in order to maximize the power of the algorithm import pandas as pd from mlbox.preprocessing import * from mlbox.optimisation import * from mlbox.prediction import * from sklearn.datasets import load_boston dataset = load_boston() opt = Optimiser() space = {'fs__strategy':{"search":"choice","space":["variance","rf_feature_importance"]}, 'est__colsample_bytree':{"search":"uniform", "space":[0.3,0.7]} } data = {"train" : pd.DataFrame(dataset.data), "target" : pd.Series(dataset.target)} best = opt.optimise(space, data, max_evals = 5) Predictor().fit_predict(best, data) In the above example, I’ve chosen some simple configurations to be tried for the optimization process. With the default configuration, the above code will generate a ‘save folder’ containing the predictions and feature importances. The downside of MLBox is that it only works on supervised tasks, and that the feature engineering is quite basic. #4–H20 While the CRISP-DM process helps automate the ML process, there’s another solution that provides the option to automate just about everything to do with creating and deploying a model from just about any data. H2O.ai is a complete suite of tools that manages the entire cycle of data analysis, including: Data cleaning Model evaluation Deployment The AutoML module even allows you to get a leaderboard of the algorithms ‘automagically,’ and includes visualization and interpretability of the results. H2O provides both Python and R clients. The comprehensive documentation and tutorials (also available in Spanish) walk you through the entire process. The AutoML module also employs a web Graphical User Interface (GUI) so you can just point and click to choose parameters. And it even scales very well to enterprise-level deployments (including Hadoop, Spark and Kubernetes, of course). And best of all it’s open source, so you have no excuse not to try it. #5–TPOT Another way to automate ML is to use a Data Science Assistant like TPOT, which stands for Tree-based Pipeline Optimization Tool. After you have cleansed your data, TPOT can help with: Feature engineering (preprocessing, selection, construction) Model selection Hyperparameter tuning from tpot import TPOTClassifier from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split digits = load_digits() X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, train_size=0.75, test_size=0.25) pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5,random_state=42, verbosity=2) pipeline_optimizer.fit(X_train, y_train) print(pipeline_optimizer.score(X_test, y_test)) pipeline_optimizer.export('tpot_exported_pipeline.py') After setup, TPOT will explore a large number of combinations, and show partial results as it runs. But unless your problem is a trivial one (like in our example), don’t expect to get results instantly. 
Optimization Progress: 13%|████████████▎| 16/120 [00:38<03:51, 2.23s/pipeline] Optimization Progress: 71%|█████████████████████████████████████████████████████████████████▏ | 85/120 [03:53<00:27, 1.26pipeline/s] Best pipeline: KNeighborsClassifier(input_matrix, n_neighbors=2, p=2, weights=distance) 0.9955555555555555 A nice feature of TPOT is that it integrates with Dask to parallelize the training tasks. This will definitely help to accelerate the results, while optimizing the use of your machine resources. Also, the team behind TPOT is experimenting with supporting neural networks and deep learning through PyTorch. The main disadvantage of TPOT is that it is not able to process categorical data automatically. Instead, you’ll need to encode it first. #6–Autokeras If you have some experience with deep learning, you’ve probably heard about Keras, which provides an abstraction layer on top of TensorFlow. Autokeras includes building blocks for classification and regression of text, images and structured data. After you choose a high level architecture for your model, Autokeras will tune the model for you. It’s designed to be as simple as the scikit-learn API. The best part, in my opinion, is that it can be integrated with two key resources: TensorFlow Cloud, so you can effortlessly run your models in Google’s cloud, and TRAINS, an ML experiment manager that helps to track and share your models. The downside is that it includes neither visualizations for the performance of the model nor interpretability, like other tools. #7–Ludwig Let’s consider the case in which you don’t want to code at all, but you understand the concepts related to ML tasks. Well, Uber has released an AutoML tool named Ludwig just for you. It deploys on top of TensorFlow, and allows you to build deep learning models from it’s Command Line Interface (CLI) with a simple text file and some data as input. Ludwig supports several data types as input and output. The model architecture is defined by the combination of input and output types, and several combinations are allowed: audio input + binary output = speaker verification category, numerical and binary inputs + numerical output = regression category, numerical and binary inputs + binary output = fraud detection image input + category output = image classifier image input + text output = image captioning text input + category output = text classifier text input + sequence output = named entity recognition / summarization timeseries input + numerical output = forecasting model For example, to build a simple model for sentiment analysis using the IMDB dataset, we could create a config.yml like this: input_features: - name: review type: text level: word encoder: parallel_cnn output_features: - name: sentiment type: category training: epochs: 5 And run a full experiment with a single command: ludwig experiment --dataset IMDB\ Dataset.csv --config_file config.yml Ludwig will create a results folder for our experiment that contains a description, probabilities and statistics. Another great feature is that you can just as easily visualize the results: ludwig visualize --visualization learning_curves --training_statistics results/experiment_run_0/training_statistics.json The above code produces some nice matplotlib charts with metrics for the experiment: If you decide you do want to do some coding after all, Ludwig also provides a Python interface that’s as powerful and simple as it’s CLI tool. 
The nice thing about Ludwig is that people with just a business perspective and basic ML concepts can experiment with deep learning without ever learning to code. #8–AutoGluon Amazon released AutoGluon in order to “Truly democratize machine learning, and make the power of deep learning available to all developers,” which means that it will help with some of the most complicated tasks of building a deep learning model in exchange for just a few lines of code. Those tasks include: Classifying the data Formatting vectors Defining the number of layers Defining the model architecture Hyperparameter optimization The package provides examples for Natural Language Processing, image classification and object detection, among others. It doesn’t include visualizations or experiment statistics, but the API is designed to be similar to scikit-learn, so it’s easy to understand. #9–Neural Network Intelligence Microsoft also built a tool to democratize access to ML: Neural Network Intelligence (NNI), which provides you with access to a complete suite of tools including: Functions to automate feature engineering (using gradient-based search algorithms) Model architecture Hyperparameter tuning Model compression Experiment dispatch And, it can run across different environments, including local machines, remote machines, Kubeflow, Azure Machine Learning, and other hybrid clouds. NNI works on top of several ML frameworks and libraries (including scikit-learn, TensorFlow, PyTorch, MXNet XGBoost, etc.), and includes a CLI, a Python API, and a Web GUI. It also provides several example scenarios that are pretty easy to run and try out with a simple command. As you can see, an experiment launched from a local machine can be configured using the web GUI, with results being ‘automagically’ displayed in the web console. It doesn’t take a lot more effort to launch it to a more sophisticated environment. #10–AutoGL Tsinghua University developed an AutoGL as a tool to tackle graph-based problems. AutoGL makes it possible to automate: Feature engineering Model training Hyperparameter optimization Model ensemble AutoGL works on top of PyTorch and creates a solver, which has to be parametrized with a time limit in which to finish the entire process. Coming features like Graph Boosting & Bagging, as well as link prediction, make this tool both interesting to use now and promising for the future. Conclusions ML projects are both complicated and time-intensive. But the community has addressed several repetitive tasks with automation tools that can not only save time but also significantly streamline just about any analytical pipeline. Of course, no tool will be able to replace business knowledge, common sense and data experience. That’s why AutoML tools are so powerful: they allow those with business and data expertise to exploit the power of ML without needing to learn the nuts and bolts of algorithms and hyperparameters. For those that don’t have the business or data expertise, a final word of caution: before using these powerful tools, you’ll definitely want to develop at least an intuition about the data available, and the kinds of results you might expect. Otherwise, you might end up with garbage in; garbage out. Test out the AutoML tools in this post by downloading and installing our AutoML Tools runtime environment for Windows or Linux. If you’re one of the many engineers using Python to build your algorithms, ActivePython is the right choice for your projects. 
ActivePython comes bundled with the most popular machine learning Python packages so you don’t waste time on configuration – just install ActivePython and you’re ready to go. Related Reads Top 10 Python Packages for Machine Learning
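Since H2O is described in the list above without an accompanying code sample, here is a minimal sketch of its AutoML API. It assumes h2o is installed (pip install h2o) and reuses the Titanic CSV from the pandas profiling example, with Survived as an illustrative target column.

import h2o
from h2o.automl import H2OAutoML

h2o.init()                                         # start or connect to a local H2O instance
frame = h2o.import_file("titanic.csv")             # same dataset as the earlier example
frame["Survived"] = frame["Survived"].asfactor()   # treat the target as categorical

aml = H2OAutoML(max_models=10, seed=1)             # bound the search for a quick run
aml.train(y="Survived", training_frame=frame)
print(aml.leaderboard.head())                      # ranked leaderboard of trained models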
8585
dbpedia
2
85
https://www.snapgene.com/
en
Software for everyday molecular biology
https://cdn.snapgene.com…ne-dotmatics.png
https://cdn.snapgene.com…ne-dotmatics.png
[ "https://cdn.snapgene.com/assets/17.12.5/assets/images/common/dotmatics-logo-white.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/platform.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-Prism.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-Geneious.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-SnapGene.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-LabArchives.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-Protein-Metrics.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-OMIQ.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-FCS-Express.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/dotmatics/lab-archives/top-bar/PreNav-Logo-nQuery.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/logo/logo-snapgene.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-hero-1.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-hero-2.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-hero-3.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/marketing/snapgene-academy-logo.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-homepage-new-3.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-homepage-new-2.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-plasmid-files.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-video.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-user-guide.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-viewer.svg", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-northwestern-university.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-university-of-cambridge.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-fred-hutchinson-cancer-center.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-emory-university.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/homepage/homepage-university-of-north-carolina.png", "https://cdn.snapgene.com/assets/17.12.5/assets/images/snapgene/logo/logo-snapgene.svg" ]
[]
[]
[ "" ]
null
[ "GSL Biotech LLC" ]
null
SnapGene offers the fastest and easiest way to plan, visualize, and document DNA cloning and PCR. You can easily annotate features and design primers.
en
https://cdn.snapgene.com…e-icon-57x57.png
SnapGene
https://www.snapgene.com/
2 Visualize Your Process Cloning is easier when you can see what you are doing. The intuitive interface offers you unparalleled visibility into your work, simplifying often complex tasks. Learn More Explore SnapGene Academy Master SnapGene and key concepts in cloning with our new online learning center, SnapGene Academy. Containing over 50 video tutorials taught by scientific experts, SnapGene Academy helps you advance your skills across multiple molecular biology courses. Learn More Discover What’s New in SnapGene 7.2. SnapGene 7.2 provides a new visualization of primer homodimer structures and enhancements to file management, allowing tabs to be organized in multiple windows using drag and drop, and improvements to interacting with files in projects. Learn More Coronavirus Resources Download genomes of common coronaviruses including SARS, MERS and COVID-19, as well as primers and probes for the SARS-CoV-2 genome. Learn More User Guide Comprehensive knowledge base with step by step guides showing you how to perform key tasks in SnapGene.
8585
dbpedia
0
73
https://blogs.ethz.ch/heim/2022/05/04/installing-autopkg-on-windows/
en
Installing AutoPkg on Windows at Nick Heim
[]
[]
[]
[ "" ]
null
[]
2022-05-04T00:00:00
Nick’s comments on Windows Deployment
en
https://blogs.ethz.ch/he…list/favicon.png
https://blogs.ethz.ch/heim/2022/05/04/installing-autopkg-on-windows/
Download the actual Windows release. Get the MSI. But first, install all the prerequisites! A packaging machine is exposed to the internet and reaches out to dozens of servers on the net every day, and should therefore be hardened and downlocked. Recommended installation is per user into the profile which is used to run AutoPkg. This user profile should have no more than standard user rights. For this to work, the MSI has to be advertised with admin rights and the following command: msiexec /jm AutoPkgWin.msi CAUTION: This needs an elevated CMD-shell! PS-console does not work! After this, the installer can be run with standard user rights. AutoPkg for Windows requires Windows 10 / Server 2016 or newer, Windows 32 or 64bit, and having Git installed is highly recommended so that managing recipe repositories is possible. Knowledge of Git itself is not required but helps. Tested only on 64bit! Easy route: With this script (AutoPkg-PreReq-Installer), you can install everything needed in one run. Step-by-step instructions: The following software and tools are needed as prerequisites to run AutoPkg on Windows: Python 3.8.x or 3.10.x: Download (Caution: pythonnet is still not compatible with Python 3.9/3.10) (Python 3.10.x works with pythonnet v3.0.0-alpha2 with: pip install pythonnet --pre) Needed libraries: pyyaml, appdirs, msl.loadlib, pythonnet, comtypes, pywin32, certify If Python is present, those libs are automatically installed by the AutoPkg installer. Git (highly recommended): Download 7zip: Download Windows-Installer-SDK: Download. You have to select a version that fits your OS. This is necessary for some of the MSI-related processors. Download the webinstaller, choose a download directory and select at least: “MSI Tools”, “Windows SDK for Desktop C++ x86 Apps” and on x64 systems also “Windows SDK for Desktop C++ x64 Apps” (there will be some additional selections). Then install at minimum: “Windows SDK Desktop Tools x86-x86_en-us.msi” and “Windows SDK Desktop Tools x64-x86_en-us.msi” (x64 only). Find the install location (somewhere under C:\Program Files (x86)\Windows Kits…). Copy the Wi*.vbs and Msi*.exe files over to your MSITools folder. Register the 64bit mergemod DLL: regsvr32 "C:\Program Files (x86)\Windows Kits\10\bin\xxx\x64\mergemod.dll" If the SDK is present, this COM DLL is automatically registered by the AutoPkg installer. Wix-Toolset: Download, version 3.11 should do it. Although I always use the latest development version. MSBuild: Download, THE Windows Make! Install commandline: vs_buildtools.exe --add Microsoft.VisualStudio.Workload.MSBuildTools --quiet
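Once those pieces are installed, a small check script can confirm that the Python-side libraries import cleanly. The PyPI-name-to-import-name mapping below is an assumption based on the list above (pythonnet imports as clr, pywin32 provides win32api, and "certify" is presumably certifi).

import importlib

# PyPI name -> import name (mapping assumed, see note above).
modules = {
    "pyyaml": "yaml",
    "appdirs": "appdirs",
    "msl.loadlib": "msl.loadlib",
    "pythonnet": "clr",
    "comtypes": "comtypes",
    "pywin32": "win32api",
    "certifi": "certifi",
}
for pypi_name, import_name in modules.items():
    try:
        importlib.import_module(import_name)
        print(f"{pypi_name}: OK")
    except ImportError as exc:
        print(f"{pypi_name}: missing ({exc})")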
8585
dbpedia
3
26
https://www.jetbrains.com/help/idea/creating-and-optimizing-imports.html
en
IntelliJ IDEA Documentation
https://resources.jetbra…meta/preview.png
https://resources.jetbra…meta/preview.png
[ "https://resources.jetbrains.com/help/img/idea/2024.2/add-imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.quickfixBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.actions.more.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/wildcard-imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.codeInsight.intentionBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.add.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.codeInsight.intentionBulb.png", "https://resources.jetbrains.com/help/img/idea/2024.2/optimize-imports.png", "https://resources.jetbrains.com/help/img/idea/2024.2/app.expui.general.settings.svg", "https://resources.jetbrains.com/help/img/idea/2024.2/optimize-imports-before-commit.png", "https://resources.jetbrains.com/help/img/idea/2024.2/reformat-file-dialog.png" ]
[]
[]
[ "" ]
null
[]
null
Basic procedures to create and optimize imports in IntelliJ IDEA. Learn more how to import the missing import or XML namespace.
en
https://jetbrains.com/ap…e-touch-icon.png
IntelliJ IDEA Help
https://www.jetbrains.com/help/idea/creating-and-optimizing-imports.html
Auto import If you're using a class, a static method, or a static field that you haven't imported yet, the IDE shows you a tooltip prompting to add a missing import statement so that you don't have to add it manually. Press Alt+Enter to accept the suggestion. If there's more than one possible source of import, pressing Alt+Enter will open the list of suggestions. To change the background color for import tooltip, press Ctrl+Alt+S and go to Editor | Color Scheme | General | Popups and Hints | Question hint. Automatically add import statements You can configure the IDE to automatically add import statements if there are no options to choose from. Press Ctrl+Alt+S to open settings and then select Editor | General | Auto Import. Select the Add unambiguous imports on the fly checkbox, and apply the changes. When you are pasting blocks of code that contain references to classes or static methods and fields that are not yet imported, the IDE automatically inserts the missing import statements. If you want to change that, from the Insert imports on paste list, select Ask to confirm every insertion or Never to insert import statements manually. Import packages instead of single classes IntelliJ IDEA suggests importing single classes by default. You can change the settings to import entire packages instead. Press Ctrl+Alt+S to open settings and then select Editor | Code Style | Java | Imports. Clear the Use single class import checkbox, and apply the changes. Disable auto import If you want to completely disable auto-import, make sure that: All import tooltips are disabled. The automatic insertion of import statements is disabled. Exclude classes and packages from auto import The list of import suggestions may include classes and packages that you don't need. You can exclude redundant entries from automatic import so that the list of suggestions contains only relevant items. Press Ctrl+Alt+S to open settings and then select Editor | General | Auto Import. In the Exclude from auto-import and completion section, click Alt+Insert, and specify a class or a package that you want to exclude. You can also select whether you want to exclude items from the current project or from all projects (globally). Exclude a class or a package on the fly Press Alt+Enter on a missing class to open the list of import suggestions. Click the right arrow next to a package and select an item (a class or an entire package) that you want to exclude. In the Exclude from auto-import and completion section of the Auto Import dialog, select whether you want to exclude items from the current project or from all projects, and apply the changes. Optimize imports The Optimize Imports feature helps you remove unused imports and organize import statements in the current file or in all files in a directory at once according to the rules specified in Settings | Editor | Code Style | <language> | Imports. Optimize all imports Select a file or a directory in the Project tool window (View | Tool Windows | Project). Do any of the following: In the main menu, go to Code | Optimize Imports (or press Ctrl+Alt+O). From the context menu, select Optimize Imports. (If you've selected a directory) Choose whether you want to optimize imports in all files in the directory, or only in locally modified files (if your project is under version control), and click Run. Remove unused imports Place the caret at the unused import statement and press Alt+Enter or use the icon. Unused statements are greyed out by default. 
From the list of suggestions, select Remove unused imports. Optimize imports when committing changes to Git If your project is under version control, you can instruct IntelliJ IDEA to optimize imports in modified files before committing them to VCS. Press Ctrl+K or select Git | Commit from the main menu. Click and in the commit message area, select the Optimize imports checkbox. Automatically optimize imports on save You can configure the IDE to optimize imports in modified files automatically when your changes are saved. Press Ctrl+Alt+S to open settings and then select Tools | Actions on Save. Enable the Optimize imports option. Additionally, from the All file types list, select the types of files in which you want to optimize imports. Apply the changes and close the dialog. Optimize imports when reformatting a file You can tell IntelliJ IDEA to optimize imports in a file every time it is reformatted. Open the file in the editor, press Ctrl+Alt+Shift+L, and make sure the Optimize imports checkbox is selected in the Reformat File dialog that opens. After that every time you press Ctrl+Alt+L in this project, IntelliJ IDEA will optimize its imports automatically. Optimize imports on the fly You can also configure the IDE to automatically optimize imports. IntelliJ IDEA will remove or modify import statements according to the rules specified in Settings | Editor | Code Style | <language> | Imports as you work in the editor. Press Ctrl+Alt+S to open settings and then select Editor | General | Auto Import. Enable the Optimize imports on the fly option and apply the changes. Last modified: 28 June 2024
8585
dbpedia
3
4
https://github.com/coapp/coapp.org/blob/master/src/dynamic/tutorials/building-a-package.html.md
en
coapp.org/src/dynamic/tutorials/building-a-package.html.md at master · coapp/coapp.org
https://opengraph.githubassets.com/89fd9a6899c4f3d99c16761dbbab1ba90968753a9300c62d686285de05d4ff4a/coapp/coapp.org
https://opengraph.githubassets.com/89fd9a6899c4f3d99c16761dbbab1ba90968753a9300c62d686285de05d4ff4a/coapp/coapp.org
[]
[]
[]
[ "" ]
null
[]
null
Website. Contribute to coapp/coapp.org development by creating an account on GitHub.
en
https://github.com/fluidicon.png
GitHub
https://github.com/coapp/coapp.org/blob/master/src/dynamic/tutorials/building-a-package.html.md
layout title version article How to Build a Package 1.0 This tutorial tells you how to set up a CoApp build environment and how to create NuGet packages for your libraries and other software components. The requirements to produce native NuGet packages using CoApp are as follows: Windows Vista, Windows 7, or Windows 8 You need these or later versions of Windows because you need Visual Studio 2012 and PowerShell 3.0 Visual Studio 2012 or 2010 PowerShell 3.0 Windows 8 - Installed by default Windows 7 or Windows Vista - Install from http://www.microsoft.com/en-us/download/details.aspx?id=34595 NuGet 2.5 or later 2.5 Release Candidate : https://nuget.codeplex.com/releases/view/104451 Install the Visual Studio Integration component (V6) CoApp PowerShell Tools Beta : http://downloads.coapp.org/files/CoApp.Tools.Powershell.msi Optional: Notepad++ : http://notepad-plus-plus.org/download/v6.3.2.html Language File : http://downloads.coapp.org/files/autopackage.xml AutoPackage is the CoApp tool you use to create native NuGet packages. You get AutoPackage when you install the "CoApp PowerShell Tools" described above in Requirements. AutoPackage is a PowerShell module. It contains Powershell cmdlets that you can use from either the command line or batch files. The primary cmdlet used in this tutorial is: Installation is simple -- as long as you have PowerShell 3.0 installed, just download and run the CoApp PowerShell tools MSI installer. Then, close that PowerShell Window, and open a new one. This step is not needed on updates. Updating the tools to the latest version Once you have the CoApp PowerShell tools installed, you can update to the latest stable version: Or update to the latest beta version: Input to Write-NuGetPackage is an AutoPackage script that you must provide. AutoPackage scripts are written in a 'PropertySheet' domain-specific language similar to Cascading Style Sheets. Refer to the AutoPackage Script Format in the Reference tab of CoApp.org for a complete description of AutoPackage scripts. Together, AutoPackage and AutoPackage scripts greatly simplify the package creation process by handling most of the complexity involved. The package creation process using AutoPackage is developer friendly and requires no XML file editing. The C++ REST SDK provides a good example to demonstate the packaging process using CoApp. First step is to download and unpack the source code for the SDK, which you can find at http://downloads.coapp.org/files/CPP_Rest_SDK_Example.zip When you've finished the unpacking process, you'll see that the SDK comes in three variants one each for Visual Studio 2010, Visual Studio 2012 and the Windows Store Apps. Traverse down any of the variants and you'll find large numbers of header and library files. AutoPackage's reason for being is to manage all of these components for you during the packaging process. The first thing we need to do is create the AutoPackage script. Use your editor and begin with two simple nodes. First the nuget node. This is our high-level node that defines everything we put into our NuGet package. The second is the nuspec node. Nuspec is the designation that the NuGet project uses to specify all of the metadata needed to build and manage a package. Now, let's begin filling in the metadata we need to define for the project. First we need to include basic information including identification (name, version), links (project, license, etc), description, summary, icon, and so on. 
A few things are worth noting in this early version of the AutoPackage script. First, quotes around strings are not necessary unless the string contains a comma or a semicolon. So, the following assignment is a valid alternative to the one in the example: Second If you need to write a string the extends more than a single line, use the @" .... "; string literal. And finally, make sure you identify your package as native code as opposed to managed code. You do this by including "native" in the tag node definition as shown in the example. Making this designation helps users find your libraries more easily. Now that we've defined all of the basic metadata needed for the project, let's look at the complexity that is inherent in creating native packages. Almost all C/C++ libraries have many flavors depending on a large set of factors, such as the platforms you intend to target, whether you're building a debug or production release, what toolset you're targeting and so on. The following list shows some of the variables, that we refer to as pivots. * Platform : x86, x64, Arm ... * Configuration : Debug, Release * Toolset: VC11, VC10, VC9, ... VC6 * Linkage: dynamic, static, LTCG, SxS * Calling Convention: cdecl, stdcall * Application Type: Win8, Win8 phone, Desktop ... * Character Set : UTF8, UTF16, Unicode ... Let's look at a few of these pivots more closely. Linkage, for example: you can specify whether you want your output library to be a dynamic link type, which is a popular format, static, which is useful for some things, Link-Time Compiler Generated (LTCG), which is useful for improving performance using Profile-Guided Optimization, or Side-by-Side (SxS). Calling conventions are used less commonly now, but sometimes libraries are packaged using cdecl, stdcall or both. Application types include Windows 8 Server, Windows 8 Phone, and a variety of desktops. The point is, you can define as many pivots as you need to provide the widest usefulness of your packages. The list of pivots is not a finite set, so AutoPackage is designed to let you define however many nodes you need to cover all the pivots of your particular packages. Just define all the nodes you need, specify the conditions for each build (i.e., what combination of pivot values to use), and AutoPackage handles everything else. You can see the commonly handled configurations in the AutoPackage reference The files node is what lets you specify the contents of your packages. So let's take a look at the AutoPackage script to see where the files node fits in the scheme of things. You can see that it resides at the same level as the nuspec node. Where nuspec specifies the metadata for the NuGet packages, the files node specifies the contents. Now let's fill in the build variables we want to use to create our C++ REST SDK packages. The files node is what lets you make sure all of your files are placed in the correct location, for example, all of your header files, all of your binaries, they all need to go into the correct location at the time of build and deployment. First, we'll define some location macros that will make the script more readable and easier to work with. With these macros, we're identifying that we want to create packages for the three SDKs supported by C++. Remember, these are the Windows Store App, the Visual Studio 2010 and the Visual Studio 2012 SDKs. Now let's gather all of the include files we need to conduct our builds. 
With some investigation, you'll see that the include files for all three SDKs are the same, so we only need to specify one of them. In this case, all of the header files are located in the include directory itself. If they had been organized in folders underneath the include directory, you would need to specify this using '**' (a recursive wildcard) in the include entry. If your project uses header files from multiple locations, you can also list multiple locations in the include node. However, for this tutorial, none of these additional designations is necessary. Now, let's add a node to specify what documentation to include with the release.

Next, we need to define the specific conditions for which we want to build. Here's where we return to the concept of pivots. For this example, let's start by building our packages to be deployed on x64 platforms, using the Visual Studio 2010 toolset, and let's make our output a "debug" release for the project. These three conditions are the pivots for our first package, and they are expressed as a condition on a group of entries in the files node. Note that the Visual Studio build variables $(Platform), $(PlatformToolset), and $(Configuration) correspond to these pivots but are not used directly in this script. For this particular configuration, you can then set the locations for where to store the library, symbols, and binary files for the package you're creating. In this example, we're instructing AutoPackage to ensure that the library file ${SDK_2010}lib\x64\Debug\casablanca100.lib is stored in the proper lib directory. The same is true for the symbols file, which can later be uploaded to a symbols server, and the binary DLL file.

Let's go ahead now and specify all the remaining configurations for the Visual Studio 2010 family of packages. This means creating pivots for all the combinations of platforms and releases that can be built by Visual Studio 2010. Now let's do the same thing for all of the Visual Studio 2012 builds. Note that for these we've added the desktop pivot value, which is for desktop (i.e., non-Windows RT) applications. The difference here is that under Visual Studio 2010, building for Windows RT isn't an option; the compilations are by default for the desktop. With Visual Studio 2012, you now have the choice of building for either the desktop or for Windows RT. Now that we've covered all of the desktop builds, we can do the same for Windows RT. So now we've covered all of the variants we can handle for the set of pivot points: platform, toolset, configuration, and target application type.

The final piece we want to add to the file is a targets node. For now it only includes a definition that can be used later to help software pick up what it actually needs. This will be covered in a later tutorial, so just include this node for now. This completes creation of the AutoPackage script for the C++ REST SDK packages.

#### Producing the Outputs

In the process we're following for this tutorial, three packages get generated:

* Main package - contains source files, headers, and binaries; used by developers
* Redist package - contains binaries; used by developers and by those who are installing packages
* Symbols package - contains symbol information; used by developers for debugging

To produce the packages, run the Write-NuGetPackage cmdlet from a PowerShell prompt, where cpprestsdk.autopkg is the script you've just written.
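Assuming you saved the script as cpprestsdk.autopkg in the current directory, the invocation is simply:

```
Write-NuGetPackage .\cpprestsdk.autopkg
```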
The result is a set of .nupkg files. These are all just zip files that you can uncompress to see their contents.

#### Consuming NuGet Packages

Consuming NuGet packages from Visual Studio is straightforward:

* Click on "Manage NuGet References"
* Choose the packages you want to consume
* Start coding

Let's go through the steps together and create a Win32 console application. Begin by starting a new project in Visual Studio and call it "TestApp." Now, go to your Solution Explorer on the right-hand side of your screen and open "Manage NuGet References." You'll only see this if you're running NuGet 2.5 or later, as stated in the Requirements section of this document. [screenshot: menu item] Go to the online section and have it search the different package sources. Choose the C++ REST SDK package. On the right-hand side of your screen, you'll see all of the text we added to the AutoPackage script. [screenshot: C++ REST SDK package] Include the package in your project. You'll see that NuGet handles any dependencies you'll need, in this case the redist package. Close that window, which will take you back to your coding window.

Next, drop the sample code into your workspace; it's the source of one of the examples that ships with the SDK itself. Notice that Visual C++ is not complaining about missing references like http_client. Such references are resolved automatically through external dependencies. Click on the external dependencies link on the right-hand side of your screen and you see all of the references that have been included from the C++ REST SDK package that we created. We didn't have to set up anything manually; it was all set up by the package itself, and it was set up for the conditions under which we are compiling, namely Win32 and debug. If you have a special set of includes that only works for one set of conditions, they will be there.

Now select Build. Once your build is complete, cd to the output directory and look at the files. [screenshot: output directory] You can see the file TestApp.exe and the library casablanca110.dll. This means that the package did the "right thing" and made sure the appropriate DLL was in our build directory. That happened because the bin entry in our AutoPackage script for this configuration made sure this version of the DLL was properly stored, and the matching lib entry made sure we linked against the correct library.

Now, let's test the program. Run TestApp, telling it to search for the term "coapp" and store the result in the file Test.html. Finally, display the results. [screenshot: search results] You see that the search results returned by TestApp represent a valid search output that includes CoApp.org and other references to coapp, and that the display is being pulled from the file we told TestApp to create, namely Test.html. This demonstrates that we were able to build, compile, link, and run a program all without ever having to look at the Visual Studio project files and without having to modify any of the properties by hand, including not having to specify a source directory, library files, or include directories. All we relied on was the contents of the package, and everything worked.

#### Looking Under the Covers

If you're curious, you can see how the process works. Go back to the TestApp directory and cd to the packages directory. Show its contents, then drill down through the build directory to the native directory. Use your editor to look at cpprestsdk.0.6.0.7.targets to see the complexity that AutoPackage has managed for you.
This is a Visual Studio properties file, and you can see that AutoPackage has taken care of a lot of detail you never need to worry about. All you need to do is use it.

#### What's Next?

Look for additional tutorials soon covering:
8585
dbpedia
1
68
https://technology.siprep.org/autopkg-recipe-writing-things-to-look-out-for/
en
AutoPkg recipe writing: things to look out for
https://i0.wp.com/techno…=512%2C623&ssl=1
https://i0.wp.com/techno…=512%2C623&ssl=1
[ "https://secure.gravatar.com/avatar/5f3c8e98c10dc7e9d805d08a8c8e65c8?s=32&d=mm&r=g" ]
[]
[]
[ "" ]
null
[]
2017-05-11T17:16:56+00:00
AutoPkg is a cool project for Mac admins (in theory, Windows admins could use it, too, and there are even a few Windows recipes). Although it’s a flexible framework that can be applied in man…
en
https://i0.wp.com/techno…it=26%2C32&ssl=1
St. Ignatius College Prep Tech Blog
https://technology.siprep.org/autopkg-recipe-writing-things-to-look-out-for/
AutoPkg is a cool project for Mac admins (in theory, Windows admins could use it, too, and there are even a few Windows recipes). Although it’s a flexible framework that can be applied in many different ways, what it’s most useful for is automating the tedious process of going to a website, downloading a new version of the software, and then importing that download into whatever you’re using to push updates out to your Mac clients. For a while, I was using existing recipes (there are many, so this is a totally valid approach), but eventually there was software I didn’t see recipes for, so I started writing my own recipes. At first, I just started by copying existing templates and modifying certain parts (the download URL, or the regular expressions to search for within the search URL). Here are some things I noticed, in case you ever want to write your own recipes and run into these issues.

Arguments need to be separate

I ran into this issue where I was trying to purge the destination before unarchiving a .zip file, but it didn’t seem to be working. Even though the archive_path and destination_path seemed to work fine without being in the Arguments dictionary, the purge_destination key wasn’t registering until I put them all into the Arguments dictionary, as I should have from the start… so, remember to always put all arguments in an actual Arguments dictionary. Example:

    <dict>
        <key>Processor</key>
        <string>Unarchiver</string>
        <key>Arguments</key>
        <dict>
            <key>purge_destination</key>
            <true/>
            <key>archive_path</key>
            <string>%RECIPE_CACHE_DIR%/downloads/%NAME%.zip</string>
            <key>destination_path</key>
            <string>%RECIPE_CACHE_DIR%/%NAME%/</string>
        </dict>
    </dict>

Code signature verification within disk images

When you’re doing code signature verification on a disk image, you don’t have to explicitly use the DmgMounter processor to mount the disk image. Instead, you can just treat the .dmg as a folder that includes the bundle to be verified. Here’s an example (where %pathname% refers to the downloaded .dmg):

    <dict>
        <key>Processor</key>
        <string>CodeSignatureVerifier</string>
        <key>Arguments</key>
        <dict>
            <key>input_path</key>
            <string>%pathname%/DiskMaker*.app</string>
            <key>requirement</key>
            <string>identifier "net.gete.diskmakerx" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2U4ZFMT67D"</string>
        </dict>
    </dict>

Dealing with regular expressions

If you’re not a regex expert, some of the regular expression searches for the URLTextSearcher processor may look like gibberish to you. A few tips to help with that, apart from (or maybe in addition to?) reading up on all the details of the Python regex documentation:
8585
dbpedia
0
32
https://ask.replit.com/t/python-problem-with-new-template/61456
en
Python: Problem with new template
https://global.discourse…2_2_1024x497.png
https://global.discourse…2_2_1024x497.png
[ "https://global.discourse-cdn.com/business7/uploads/replitteams/optimized/3X/2/7/27a4dcb5bc739ac03552834983bfeb0ab9c9c472_2_517x251.png", "https://emoji.discourse-cdn.com/twitter/frowning_face.png?v=12", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://sea2.discourse-cdn.com/business7/user_avatar/ask.replit.com/firepup650/48/4680_2.png", "https://avatars.discourse-cdn.com/v4/letter/n/e9c0ed/48.png", "https://avatars.discourse-cdn.com/v4/letter/t/e68b1a/48.png" ]
[]
[]
[ "" ]
null
[ "system Closed" ]
2023-09-07T16:05:56+00:00
Problem description: After transferring an old Python repl to the new template (cloning from Github repository into a new repl) I’m running into a somewhat old problem again: When I hit the Run button poetry starts to &hellip;
en
https://global.discourse…7693_2_32x32.png
Replit Ask
https://ask.replit.com/t/python-problem-with-new-template/61456
Problem description: After transferring an old Python repl to the new template (cloning from a GitHub repository into a new repl) I’m running into a somewhat old problem again: When I hit the Run button, poetry starts to install a package, gardenlinux. After the successful installation nothing happens: the program doesn’t run and the Run button stays activated, i.e. it’s actually a Stop button. There’s also huge CPU utilisation. Running the file from the shell works fine. As with the old problem, the strange thing is that the repl is a pure Python repl, so no installation of packages is needed. And I don’t even know what gardenlinux does. Unfortunately, the old solution doesn’t work.

Expected behavior: Hitting Run just runs the selected file (current.py). No installation of the gardenlinux package.

Actual behavior: See above.

Steps to reproduce: I don’t know how to replicate except trying to run the repl in question.

Bug appears at this link: https://replit.com/@TimsbimPython/Morsels

Browser: Safari
OS: MacOS
Device (Android, iOS, n/a leave blank): MacBook Pro
Plan (Free, Hacker, Pro Plan): Teams Pro

Hey all! I was able to reproduce the issue and let the team know. I believe this has to do with our new Modules system that encapsulates configuration for languages such as Python onto our own infrastructure. This seems to prevent users from configuring the packager in the .replit file. I will follow up once I have an update.
8585
dbpedia
3
88
https://coteditor.com/
en
CotEditor
https://coteditor.com/im…picon/512@2x.png
https://coteditor.com/im…picon/512@2x.png
[ "https://coteditor.com/img/appicon/128@2x.png", "https://coteditor.com/img/MacAppStore.svg", "https://coteditor.com/img/screenshots/screenshot@2x.png", "https://coteditor.com/img/screenshots/darkmode@2x.png", "https://coteditor.com/img/screenshots/tools@2x.png", "https://coteditor.com/img/screenshots/verticalOrientation@2x.png", "https://coteditor.com/img/screenshots/preferences@2x.png", "https://coteditor.com/img/icons/osx.svg", "https://coteditor.com/img/icons/speed.svg", "https://coteditor.com/img/icons/opensource.svg", "https://coteditor.com/img/icons/syntax.svg", "https://coteditor.com/img/icons/find.svg", "https://coteditor.com/img/icons/gui.svg", "https://coteditor.com/img/icons/autobackup.svg", "https://coteditor.com/img/icons/outline.svg", "https://coteditor.com/img/icons/split_view.svg", "https://coteditor.com/img/icons/char_inspector.svg", "https://coteditor.com/img/icons/script.svg", "https://coteditor.com/img/icons/incompatibles.svg", "https://coteditor.com/img/icons/cjk.svg" ]
[]
[]
[ "" ]
null
[]
null
Text Editor for macOS
en
favicon.png
https://coteditor.com
Syntax Highlighting: Colorize more than 50 pre-installed major languages like HTML, PHP, Python, Ruby, or Markdown. You can also create your own settings. Powerful Find & Replace: Super powerful find and replace using the ICU regular expression engine. Setting via Click: There are no complex configuration files that require geek knowledge. You can access all your settings, including syntax definitions and themes, from a standard settings window. Auto Backup: You don't need to lose your unsaved data anymore. CotEditor backs up your documents automatically while editing. Outline Menu: Extract specified lines using the predefined syntax, and jump to the corresponding line. Split Editor: Split a window into multiple panes to see different parts of your document at the same time. Character Inspector: Inspect the Unicode character data of each selected character in your document and display it in a popover. Scriptable: Make your own macro in your favorite language, whether it is Python, Ruby, Perl, PHP, UNIX shell, AppleScript, or JavaScript. Incompatible Characters: Check and list the characters in your document that cannot be converted into the desired encoding.
8585
dbpedia
2
93
https://docs.ros.org/en/foxy/How-To-Guides/Ament-CMake-Documentation.html
en
cmake user documentation — ROS 2 Documentation: Foxy documentation
[ "https://docs.ros.org/en/foxy/_static/foxy-small.png" ]
[]
[]
[ "" ]
null
[]
null
en
../_static/favicon.ico
https://docs.ros.org/en/foxy/How-To-Guides/Ament-CMake-Documentation.html
Basics A basic CMake outline can be produced using ros2 pkg create <package_name> on the command line. The basic build information is then gathered in two files: the package.xml and the CMakeLists.txt. The package.xml must contain all dependencies and a bit of metadata to allow colcon to find the correct build order for your packages, to install the required dependencies in CI as well as provide the information for a release with bloom. The CMakeLists.txt contains the commands to build and package executables and libraries and will be the main focus of this document. Basic project outline The basic outline of the CMakeLists.txt of an ament package contains: cmake_minimum_required(VERSION3.5) project(my_project) ament_package() The argument to project will be the package name and must be identical to the package name in the package.xml. The project setup is done by ament_package() and this call must occur exactly once per package. ament_package() installs the package.xml, registers the package with the ament index, and installs config (and possibly target) files for CMake so that it can be found by other packages using find_package. Since ament_package() gathers a lot of information from the CMakeLists.txt it should be the last call in your CMakeLists.txt. Although it is possible to follow calls to ament_package() by calls to install functions copying files and directories, it is simpler to just keep ament_package() the last call. ament_package can be given additional arguments: CONFIG_EXTRAS: a list of CMake files (.cmake or .cmake.in templates expanded by configure_file()) which should be available to clients of the package. For an example of when to use these arguments, see the discussion in Adding resources. For more information on how to use template files, see the official documentation. CONFIG_EXTRAS_POST: same as CONFIG_EXTRAS, but the order in which the files are added differs. While CONFIG_EXTRAS files are included before the files generated for the ament_export_* calls the files from CONFIG_EXTRAS_POST are included afterwards. Instead of adding to ament_package, you can also add to the variable ${PROJECT_NAME}_CONFIG_EXTRAS and ${PROJECT_NAME}_CONFIG_EXTRAS_POST with the same effect. The only difference is again the order in which the files are added with the following total order: files added by CONFIG_EXTRAS files added by appending to ${PROJECT_NAME}_CONFIG_EXTRAS files added by appending to ${PROJECT_NAME}_CONFIG_EXTRAS_POST files added by CONFIG_EXTRAS_POST Adding files and headers There are two main targets to build: libraries and executables which are built by add_library and add_executable respectively. With the separation of header files and implementation in C/C++, it is not always necessary to add both files as argument to add_library/ add_executable. The following best practice is proposed: if you are building a library, put all headers which should be usable by clients and therefore must be installed into a subdirectory of the include folder named like the package, while all other files (.c/.cpp and header files which should not be exported) are inside the src folder. 
* only cpp files are explicitly referenced in the call to add_library or add_executable
* allow to find headers via

target_include_directories(my_target PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)

This adds all files in the folder ${CMAKE_CURRENT_SOURCE_DIR}/include to the public interface during build time and all files in the include folder (relative to ${CMAKE_INSTALL_DIR}) when being installed. In principle, using generator expressions here is not necessary if both folders are called include and top-level with respect to ${CMAKE_CURRENT_SOURCE_DIR} and ${CMAKE_INSTALL_DIR}, but it is very common.

Adding Dependencies

There are two ways to link your packages against a new dependency. The first and recommended way is to use the ament macro ament_target_dependencies. As an example, suppose we want to link my_target against the linear algebra library Eigen3.

find_package(Eigen3 REQUIRED)
ament_target_dependencies(my_target Eigen3)

It includes the necessary headers and libraries and their dependencies to be correctly found by the project. It will also ensure that the include directories of all dependencies are ordered correctly when using overlay workspaces. The second way is to use target_link_libraries. The recommended way in modern CMake is to only use targets, exporting and linking against them. CMake targets are namespaced, similar to C++. For instance, Eigen3 defines the target Eigen3::Eigen. At least until Crystal Clemmys, target names are not supported in the ament_target_dependencies macro. Sometimes it will be necessary to call the target_link_libraries CMake function. In the example of Eigen3, the call should then look like

find_package(Eigen3 REQUIRED)
target_link_libraries(my_target Eigen3::Eigen)

This will also include necessary headers, libraries and their dependencies, but in contrast to ament_target_dependencies it might not correctly order the dependencies when using overlay workspaces.

Note: It should never be necessary to find_package a library that is not explicitly needed but is a dependency of another dependency that is explicitly needed. If that is the case, file a bug against the corresponding package.

Building a Library

When building a reusable library, some information needs to be exported for downstream packages to easily use it.

ament_export_targets(my_libraryTargets HAS_LIBRARY_TARGET)
ament_export_dependencies(some_dependency)

install(
  DIRECTORY include/
  DESTINATION include
)

install(
  TARGETS my_library
  EXPORT my_libraryTargets
  LIBRARY DESTINATION lib
  ARCHIVE DESTINATION lib
  RUNTIME DESTINATION bin
  INCLUDES DESTINATION include
)

Here, we assume that the folder include contains the headers which need to be exported. Note that it is not necessary to put all headers into a separate folder, only those that should be included by clients. Here is what’s happening in the snippet above: The ament_export_targets macro exports the targets for CMake. This is necessary to allow your library’s clients to use the target_link_libraries(client my_library::my_library) syntax. ament_export_targets can take an arbitrary list of targets named as EXPORT in an install call and an additional option HAS_LIBRARY_TARGET, which adds potential libraries to environment variables. The ament_export_dependencies macro exports dependencies to downstream packages. This is necessary so that the user of the library does not have to call find_package for those dependencies, too. The first install command installs the header files which should be available to clients.
Warning: Calling ament_export_targets, ament_export_dependencies, or other ament commands from a CMake subdirectory will not work as expected. This is because the CMake subdirectory has no way of setting necessary variables in the parent scope where ament_package is called.

The last large install command installs the library. Archive and library files will be exported to the lib folder, runtime binaries will be installed to the bin folder, and the path to installed headers is include.

Note: Windows DLLs are treated as runtime artifacts and installed into the RUNTIME DESTINATION folder. It is therefore advised to not leave out the RUNTIME install even when developing libraries on Unix based systems.

Regarding the include directory, the install command only adds information to CMake; it does not actually install the includes folder. This is done by copying the headers via install(DIRECTORY <dir> DESTINATION <dest>) as described above. The EXPORT notation of the install call requires additional attention: It installs the CMake files for the my_library target. It is named exactly like the argument in ament_export_targets and could be named like the library. However, this will then prohibit using the ament_target_dependencies way of including your library. To allow for full flexibility, it is advised to prepend the export target with something like <target>Targets. All install paths are relative to CMAKE_INSTALL_PREFIX, which is already set correctly by colcon/ament. There are two additional functions which can be used but are superfluous for target based installs:

ament_export_include_directories(include)
ament_export_libraries(my_library)

The first macro marks the directory of the exported include directories (this is achieved by INCLUDES DESTINATION in the target install call). The second macro marks the location of the installed library (this is done by the HAS_LIBRARY_TARGET argument in the call to ament_export_targets). Some of the macros can take different types of arguments for non-target exports, but since the recommended way for modern CMake is to use targets, we will not cover them here. Documentation of these options can be found in the source code itself.

Compiler and linker options

ROS 2 targets compilers which comply with the C++14 and C99 standards until at least Crystal Clemmys. Newer versions might be targeted in the future and are referenced here. Therefore it is customary to set the corresponding CMake flags:

if(NOT CMAKE_C_STANDARD)
  set(CMAKE_C_STANDARD 99)
endif()
if(NOT CMAKE_CXX_STANDARD)
  set(CMAKE_CXX_STANDARD 14)
endif()

To keep the code clean, compilers should throw warnings for questionable code and these warnings should be fixed. It is recommended to at least cover the following warning levels: For Visual Studio, the default W1 warnings are kept. For GCC and Clang, -Wall -Wextra -Wpedantic are required and -Wshadow -Werror are advisable (the latter makes warnings errors). Although modern CMake advises to add compiler flags on a target basis, i.e. call target_compile_options(my_target PRIVATE -Wall), it is at the moment recommended to use the directory level function add_compile_options(-Wall) to not clutter the code with target-based compile options for all executables and tests.

Building libraries on Windows

Since Linux, Mac and Windows are all officially supported platforms, to have maximum impact any package should also build on Windows.
The Windows library format enforces symbol visibility: Every symbol which should be used from a client has to be explicitly exported by the library (and data symbols need to be implicitly imported). To keep this compatible with Clang and GCC builds, it is advised to use the logic in the GCC wiki. To use it for a package called my_library: Copy the logic in the link into a header file called visibility_control.hpp. Replace DLL by MY_LIBRARY (for an example, see visibility control of rviz_rendering). Use the macros “MY_LIBRARY_PUBLIC” for all symbols you need to export (i.e. classes or functions). In the project CMakeLists.txt use: target_compile_definitions(my_libraryPRIVATE"MY_LIBRARY_BUILDING_LIBRARY") For more details, see Windows Symbol Visibility in the Windows Tips and Tricks document. Adding resources Especially when developing plugins or packages which allow plugins it is often essential to add resources to one ROS package from another (e.g. a plugin). Examples can be plugins for tools using the pluginlib. This can be achieved using the ament index (also called “resource index”). The ament index explained For details on the design and intentions, see here In principle, the ament index is contained in a folder within the install/share folder of your package. It contains shallow subfolders named after different types of resources. Within the subfolder, each package providing said resource is referenced by name with a “marker file”. The file may contain whatever content necessary to obtain the resources, e.g. relative paths to the installation directories of the resource, it may also be simply empty. To give an example, consider providing display plugins for RViz: When providing RViz plugins in a project named my_rviz_displays which will be read by the pluginlib, you will provide a plugin_description.xml file, which will be installed and used by the pluginlib to load the plugins. To achieve this, the plugin_description.xml is registered as a resource in the resource_index via pluginlib_export_plugin_description_file(rviz_commonplugins_description.xml) When running colcon build, this installs a file my_rviz_displays into a subfolder rviz_common__pluginlib__plugin into the resource_index. Pluginlib factories within rviz_common will know to gather information from all folders named rviz_common__pluginlib__plugin for packages that export plugins. The marker file for pluginlib factories contains an install-folder relative path to the plugins_description.xml file (and the name of the library as marker file name). With this information, the pluginlib can load the library and know which plugins to load from the plugin_description.xml file. As a second example, consider the possibility to let your own RViz plugins use your own custom meshes. Meshes get loaded at startup time so that the plugin owner does not have to deal with it, but this implies RViz has to know about the meshes. To achieve this, RViz provides a function: register_rviz_ogre_media_exports(DIRECTORIES<my_dirs>) This registers the directories as an ogre_media resource in the ament index. In short, it installs a file named after the project which calls the function into a subfolder called rviz_ogre_media_exports. The file contains the install folder relative paths to the directories listed in the macros. On startup time, RViz can now search for all folders called rviz_ogre_media_exports and load resources in all folders provided. These searches are done using ament_index_cpp (or ament_index_py for Python packages). 
In the following sections we will explore how to add your own resources to the ament index and provide best practices for doing so. Querying the ament index If necessary, it is possible to query the ament index for resources via CMake. To do so, there are three functions: ament_index_has_resource: obtain a prefix path to the resource if it exists with the following parameters: var: the output parameter: fill this variable with FALSE if the resource does not exist or the prefix path to the resource otherwise resource_type: The type of the resource (e.g. rviz_common__pluginlib__plugin) resource_name: The name of the resource which usually amounts to the name of the package having added the resource of type resource_type (e.g. rviz_default_plugins) ament_index_get_resource: Obtain the content of a specific resource, i.e. the contents of the marker file in the ament index. var: the output parameter: filled with the content of the resource marker file if it exists. resource_type: The type of the resource (e.g. rviz_common__pluginlib__plugin) resource_name: The name of the resource which usually amounts to the name of the package having added the resource of type resource_type (e.g. rviz_default_plugins) PREFIX_PATH: The prefix path to search for (usually, the default ament_index_get_prefix_path() will be enough). Note that ament_index_get_resource will throw an error if the resource does not exist, so it might be necessary to check using ament_index_has_resource. ament_index_get_resources: Get all packages which registered resources of a specific type from the index var: Output parameter: filled with a list of names of all packages which registered a resource of resource_type resource_type: The type of the resource (e.g. rviz_common__pluginlib__plugin) PREFIX_PATH: The prefix path to search for (usually, the default ament_index_get_prefix_path() will be enough). Adding to the ament index Defining a resource requires two bits of information: a name for the resource which must be unique, a layout of the marker file, which can be anything and could also be empty (this is true for instance for the “package” resource marking a ROS 2 package) For the RViz mesh resource, the corresponding choices were: rviz_ogre_media_exports as name of the resource, install path relative paths to all folders containing resources. This will already enable you to write the logic for using the corresponding resource in your package. To allow users to easily register resources for your package, you should furthermore provide macros or functions such as the pluginlib function or rviz_ogre_media_exports function. To register a resource, use the ament function ament_index_register_resource. This will create and install the marker files in the resource_index. As an example, the corresponding call for rviz_ogre_media_exports is the following: ament_index_register_resource(rviz_ogre_media_exportsCONTENT${OGRE_MEDIA_RESOURCE_FILE}) This installs a file named like ${PROJECT_NAME} into a folder rviz_ogre_media_exports into the resource_index with content given by variable ${OGRE_MEDIA_RESOURCE_FILE}. The macro has a number of parameters that can be useful: the first (unnamed) parameter is the name of the resource, which amounts to the name of the folder in the resource_index CONTENT: The content of the marker file as string. This could be a list of relative paths, etc. CONTENT cannot be used together with CONTENT_FILE. CONTENT_FILE: The path to a file which will be use to create the marker file. 
The file can be a plain file or a template file expanded with configure_file(). CONTENT_FILE cannot be used together with CONTENT. PACKAGE_NAME: The name of the package/library exporting the resource, which amounts to the name of the marker file. Defaults to ${PROJECT_NAME}. AMENT_INDEX_BINARY_DIR: The base path of the generated ament index. Unless really necessary, always use the default ${CMAKE_BINARY_DIR}/ament_cmake_index. SKIP_INSTALL: Skip installing the marker file. Since only one marker file exists per package, it is usually a problem if the CMake function/macro gets called twice by the same project. However, for large projects it might be best to split up calls registering resources. Therefore, it is best practice to let a macro registering a resource such as register_rviz_ogre_media_exports.cmake only fill some variables. The real call to ament_index_register_resource can then be added within an ament extension to ament_package. Since there must only ever be one call to ament_package per project, there will always only be one place where the resource gets registered. In the case of rviz_ogre_media_exports this amounts to the following strategy: The macro register_rviz_ogre_media_exports takes a list of folders and appends them to a variable called OGRE_MEDIA_RESOURCE_FILE. Another macro called register_rviz_ogre_media_exports_hook calls ament_index_register_resource if ${OGRE_MEDIA_RESOURCE_FILE} is non-empty. The register_rviz_ogre_media_exports_hook.cmake file is registered as an ament extension in a third file register_rviz_ogre_media_exports_hook-extras.cmake via calling ament_register_extension("ament_package""rviz_rendering" "register_rviz_ogre_media_exports_hook.cmake") The files register_rviz_ogre_media_exports.cmake and register_rviz_ogre_media_exports_hook-extra.cmake are registered as CONFIG_EXTRA with ament_package().
8585
dbpedia
3
30
https://pmd.github.io/
en
PMD
https://pmd.github.io/favicon.ico
https://pmd.github.io/favicon.ico
[ "https://pmd.github.io/img/pmd-logo-white-600px.png" ]
[]
[]
[ "PMD", "Java", "Salesforce.com Apex", "Code Analyzer", "Clean Code", "Software Development" ]
null
[]
null
PMD is a source code analyzer. It finds unused variables, empty catch blocks, unnecessary object creation, and so forth.
en
/favicon.ico
null
About PMD

PMD is an extensible multilanguage static code analyzer. It finds common programming flaws like unused variables, empty catch blocks, unnecessary object creation, and so forth. It's mainly concerned with Java and Apex, but supports 16 other languages. It comes with 400+ built-in rules. It can be extended with custom rules. It uses JavaCC and Antlr to parse source files into abstract syntax trees (ASTs) and runs rules against them to find violations. Rules can be written in Java or using an XPath query. Currently, PMD supports Java, JavaScript, Salesforce.com Apex and Visualforce, Kotlin, Swift, Modelica, PLSQL, Apache Velocity, JSP, WSDL, Maven POM, HTML, XML and XSL. Scala is supported, but there are currently no Scala rules available. Additionally, it includes CPD, the copy-paste detector. CPD finds duplicated code in Coco, C/C++, C#, Dart, Fortran, Gherkin, Go, Groovy, HTML, Java, JavaScript, JSP, Julia, Kotlin, Lua, Matlab, Modelica, Objective-C, Perl, PHP, PLSQL, Python, Ruby, Salesforce.com Apex and Visualforce, Scala, Swift, T-SQL, Typescript, Apache Velocity, WSDL, XML and XSL.
8585
dbpedia
2
1
https://github.com/autopkg/autopkg
en
autopkg/autopkg: Automating packaging and software distribution on macOS.
https://opengraph.githubassets.com/67518126e50698c690451b23ac900eb1a273ed27c3a5d64ae241b4e7f545b5b3/autopkg/autopkg
https://opengraph.githubassets.com/67518126e50698c690451b23ac900eb1a273ed27c3a5d64ae241b4e7f545b5b3/autopkg/autopkg
[ "https://camo.githubusercontent.com/7d770c433d6198d89f8c1e2f187b904a9721d176259d0e97157337741cc8e837/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f636f64652532307374796c652d626c61636b2d3030303030302e737667", "https://github.com/autopkg/autopkg/actions/workflows/tests.yaml/badge.svg", "https://avatars.githubusercontent.com/u/700560?s=64&v=4", "https://avatars.githubusercontent.com/u/1202655?s=64&v=4", "https://avatars.githubusercontent.com/u/119358?s=64&v=4", "https://avatars.githubusercontent.com/u/7801391?s=64&v=4", "https://avatars.githubusercontent.com/u/3882687?s=64&v=4", "https://avatars.githubusercontent.com/u/2439367?s=64&v=4", "https://avatars.githubusercontent.com/u/24377?s=64&v=4", "https://avatars.githubusercontent.com/u/19357?s=64&v=4", "https://avatars.githubusercontent.com/u/694298?s=64&v=4", "https://avatars.githubusercontent.com/u/3740088?s=64&v=4", "https://avatars.githubusercontent.com/u/969572?s=64&v=4", "https://avatars.githubusercontent.com/u/2464974?s=64&v=4", "https://avatars.githubusercontent.com/u/1134568?s=64&v=4", "https://avatars.githubusercontent.com/u/202334?s=64&v=4" ]
[]
[]
[ "" ]
null
[]
null
Automating packaging and software distribution on macOS. - autopkg/autopkg
en
https://github.com/fluidicon.png
GitHub
https://github.com/autopkg/autopkg
Latest release is here. AutoPkg is an automation framework for macOS software packaging and distribution, oriented towards the tasks one would normally perform manually to prepare third-party software for mass deployment to managed clients. These tasks typically involve at least several of the following steps: downloading an application and/or updates for it, usually via a web browser extracting them from a multitude of archive formats adding site-specific configuration adding sane versioning information "fixing" poorly-written installer scripts saving these modifications back to a compressed disk image or installer package importing these into a software distribution system like Munki, Jamf Pro, FileWave, etc. customizing the associated metadata for such a system with site-specific data, post-installation scripts, version info or other metadata Often these tasks follow similar patterns for each individual application, and when managing many applications this becomes a daily task full of sub-tasks that one must remember (and/or maintain documentation for) about exactly what had to be done for a successful deployment of every update for every managed piece of software. With AutoPkg, we define these steps in a "Recipe" file in plist or yaml format, run automatically instead of by hand, and shared with others. Install the latest release. AutoPkg requires macOS, and Git is highly recommended to have installed so that autopkg can use git to can manage recipe repositories. Knowledge of Git itself is not required. AutoPkg is tested on the current macOS release. It may work on older releases, but is not actively tested on older releases. Git can be installed via Apple's command-line developer tools package, which can be prompted for installation by simply typing 'git' in a Terminal window (OS X 10.9 or later). Since AutoPkg 2.0, Python 2 is no longer supported. The installer linked above contains a bundled version of Python 3 and all needed dependencies. A getting started guide is available here. Frequently Asked Questions (and answers!) are here. See the wiki for more documentation.
8585
dbpedia
3
47
https://learn.microsoft.com/en-us/mem/configmgr/apps/deploy-use/packages-and-programs
en
Packages and programs - Configuration Manager
https://learn.microsoft.…-graph-image.png
https://learn.microsoft.…-graph-image.png
[]
[]
[]
[ "" ]
null
[]
2022-10-04T00:00:00+00:00
Support deployments that use legacy packages and programs with Configuration Manager.
en
https://learn.microsoft.com/en-us/mem/configmgr/apps/deploy-use/packages-and-programs
Packages and programs in Configuration Manager Applies to: Configuration Manager (current branch) Configuration Manager continues to support packages and programs that were used in Configuration Manager 2007. A deployment that uses packages and programs might be more suitable than an application when you deploy any of the following tools or scripts: Administrative tools that don't install an application on a computer "One-off" scripts that don't need to be continually monitored Scripts that run on a recurring schedule and can't use global evaluation When you migrate packages from an earlier version of Configuration Manager, you can deploy them in your Configuration Manager hierarchy. After migration is complete, the packages appear in the Packages node in the Software Library workspace. You can modify and deploy these packages in the same way you did by using software distribution. The Import Package from Definition Wizard remains in Configuration Manager to import legacy packages. Advertisements are converted to deployments when you migrate from Configuration Manager 2007 to a Configuration Manager hierarchy. Packages can use some new features of Configuration Manager, including distribution point groups and monitoring. You can't deploy Microsoft Application Virtualization (App-V) applications with packages and programs in Configuration Manager. To distribute virtual applications, create them as Configuration Manager applications. For more information, see Deploy App-V virtual applications. Create a package and program Use the Create Package and Program wizard In the Configuration Manager console, go to the Software Library workspace, expand Application Management, and select the Packages node. In the Home tab of the ribbon, in the Create group, choose Create Package. On the Package page of the Create Package and Program Wizard, specify the following information: Name: Specify a name for the package with a maximum of 50 characters. Description: Specify a description for this package with a maximum of 128 characters. Manufacturer (optional): Specify a manufacturer name to help you identify the package in the Configuration Manager console. This name can be a maximum of 32 characters. Language (optional): Specify the language version of the package with a maximum of 32 characters. Version (optional): Specify a version number for the package with a maximum of 32 characters. This package contains source files: This setting indicates whether the package requires source files to be present on client devices. By default, the wizard doesn't enable this option, and Configuration Manager doesn't use distribution points for the package. When you select this option, specify the package content to distribute to distribution points. Source folder: If the package contains source files, choose Browse to open the Set Source Folder dialog box, and then specify the location of the source files for the package. Note The computer account of the site server must have read access permissions to the source folder that you specify. Windows limits the source path to 256 characters or less. This limit applies to package source as well as applications. For more information, see Naming Files, Paths, and Namespaces. If you want to pre-cache content on a client, specify the Architecture and Language of the package. For more information, see Configure pre-cache content. On the Program Type page of the Create Package and Program Wizard, select the Standard program type for computers. 
Or you can skip this step and create a program later. Tip To create a new program for an existing package, first select the package. Then, in the Home tab, in the Package group, choose Create Program to open the Create Program Wizard. The Program for device type is a legacy option that only applies to mobile devices, which aren't currently managed by Configuration Manager. Custom icons for packages Starting in version 2203, add custom icons for packages. These icons appear in Software Center when you deploy the package and program. Instead of a default icon, a custom icon can improve the user experience to better identify the software. On the General tab of package properties, in the section for the icon, select Browse. Select an icon from the default shell library, or browse to another file in a local or network path. It supports the following file types: Programs (.exe) Libraries (.dll) Icons (.ico) Images (.png, .jpeg, .jpg) The file doesn't need to be on clients that you target with the deployment. Configuration Manager includes the image with the deployment policy. The maximum file size for an image is 256 KB. Icons can have pixel dimensions of up to 512 x 512. When clients receive the deployment policy, they'll display the icon in Software Center. Note To take full advantage of new Configuration Manager features, after you update the site, also update clients to the latest version. While new functionality appears in the Configuration Manager console when you update the site and console, the complete scenario isn't functional until the client version is also the latest. Create a program On the Program Type page of the Create Package and Program Wizard, choose Standard Program, and then choose Next. On the Standard Program page, specify the following information: Name: Specify a name for the program with a maximum of 50 characters. Note The program name must be unique within a package. After you create a program, you can't modify its name. Command Line: Enter the command line to use to start this program, or choose Browse to browse to the file location. If you don't specify an extension for a file name, Configuration Manager attempts to use .com, .exe, and .bat as possible extensions. When the client runs the program, Configuration Manager searches for the file in the following locations: Within the package The local Windows folder The local %path% If it can't find the file, the program fails. Startup folder (optional): Specify the folder from which the program runs, up to 127 characters. This folder can be an absolute path on the client. It can also be a path that's relative to the distribution point folder that contains the package. Run: Specify the mode in which the program runs on client computers. Select one of the following options: Normal: The program runs in the normal mode based on system and program defaults. This mode is the default. Minimized: The program runs minimized on client devices. Users might see installation activity in the notification area or on the taskbar. Maximized: The program runs maximized on client devices. Users see all installation activity. Hidden: The program runs hidden on client devices. Users don't see any installation activity. Program can run: Specify whether the program runs only when a user is signed in, only when no user is signed in, or regardless of whether a user is signed in to the client computer. Run mode: Specify whether the program runs with administrative permissions or with the permissions of the user who's currently signed in. 
Allow users to view and interact with the program installation: Use this setting, if available, to specify whether to allow users to interact with the program installation. This option is only available if the following conditions are met: Program can run setting is Only when a user is logged on or Whether or not a user is logged on Run mode setting is to Run with administrative rights Drive mode: Specify information about how this program runs on the network. Choose one of the following options: Runs with UNC name: Specify that the program runs with a Universal Naming Convention (UNC) name. This setting is the default. Requires drive letter: Specify that the program requires a drive letter to fully qualify its location. For this setting, Configuration Manager can use any available drive letter on the client. This setting requires the deployment to use the Deployment option Run program from distribution point and the package's Data Access option enabled to Copy the content in this package to a package share on distribution points. Requires specific drive letter: Specify that the program requires a specific drive letter that you specify to fully qualify its location. For example, Z:. If the client is already using the specified drive letter, the program doesn't run. This setting requires the deployment to use the Deployment option Run program from distribution point and the package's Data Access option enabled to Copy the content in this package to a package share on distribution points. Reconnect to distribution point at log on: Indicate whether the client reconnects to the distribution point when the user signs in. By default, the wizard doesn't enable this option. On the Requirements page of the Create Package and Program Wizard, specify the following information: Run another program first: Identify a package and program that runs before this package and program runs. Platform requirements: Select This program can run on any platform or This program can run only on specified platforms. Then choose the OS versions that clients must have to install this package and program. Note When you run a task sequence from boot media or PXE, Configuration Manager ignores this option. The task sequence runs as though the option This program can run on any platform is selected. Estimated disk space: Specify the amount of disk space that the program requires to run on the computer. The default setting is Unknown. If necessary, specify a whole number greater than or equal to zero. If you set a value, also select units for the value. Maximum allowed run time (minutes): Specify the maximum time that you expect the program to run on the client computer. The default value is 120 minutes. Only use whole numbers greater than zero. Important If the targeted computers to which you deploy this program have a maintenance window, a conflict could occur if the Maximum allowed run time is longer than the scheduled maintenance window. If you set the maximum run time to Unknown, the program starts to run during the maintenance window. It then continues to run as needed after the maintenance window is closed. If you set the maximum run time to a specific period that's greater than the length of any available maintenance window, then the client doesn't run the program. If you set this value to Unknown, Configuration Manager sets the maximum allowed run time as 12 hours (720 minutes). 
Note If the program exceeds the maximum run time, Configuration Manager stops it if the following conditions are met: You enable the option to Run with administrative rights You don't enable the option to Allow users to view and interact with the program installation Deploy packages and programs In the Configuration Manager console, go to the Software Library workspace, expand Application Management, and select the Packages node. Select the package that you want to deploy. In the Home tab of the ribbon, in the Deployment group, choose Deploy. On the General page of the Deploy Software Wizard, specify the name of the package and program that you want to deploy. Select the collection to which you want to deploy the package and program, and any optional comments. To store the package content on the collection's default distribution point group, select the option to Use default distribution point groups associated to this collection. If you didn't associate this collection with a distribution point group, this option is unavailable. On the Content page, choose Add. Select the distribution points or distribution point groups to which you want to distribute the content for this package and program. On the Deployment Settings page, configure the following settings: Purpose: Choose one of the following options: Available: The user sees the published package and program in Software Center and can install it on demand. Required: The package and program is deployed automatically, according to the configured schedule. In Software Center, you can track its deployment status and install it before the deadline. Note If multiple users are signed into the device, package and task sequence deployments may not appear in Software Center. Send wake-up packets: If you set the deployment purpose to Required and select this option, the site first sends a wake-up packet to computers at the installation deadline time. Before you can use this option, configure computers for Wake On LAN. For more information, see How to configure Wake on LAN. Allow clients on a metered Internet connection to download content after the installation deadline, which might incur additional costs Note When you deploy a package and program, the option to Pre-deploy software to the user's primary device isn't available. On the Scheduling page, configure when to deploy this package and program to client devices. The options on this page vary depending on whether you set the deployment action to Available or Required. For Required deployments, configure the rerun behavior for the program from the Rerun behavior drop-down menu. Choose from the following options: Rerun behavior Description Never rerun deployed program The client won't rerun the program. This behavior happens even if the program originally failed or if the program files are changed. Always rerun program The client always reruns the program when the deployment is scheduled. This behavior happens even if the program has already successfully run. It's useful with recurring deployments when you update the program. Rerun if failed previous attempt The client reruns the program when the deployment is scheduled, only if it failed on the previous run attempt. Rerun if succeeded on previous attempt The client reruns the program only if it previously ran successfully on the client. This behavior is useful with recurring deployments when you routinely update the program, and each update requires the previous update to be successfully installed. 
On the User Experience page, specify the following information: Allow users to run the program independently of assignments: Users can install this software from Software Center regardless of any scheduled installation time. Software installation: Allows the software to be installed outside of any configured maintenance windows. System restart (if required to complete the installation): If the software installation requires a device restart to finish, allow this action to happen outside of any configured maintenance windows. Embedded devices: When you deploy packages and programs to Windows Embedded devices that are write-filter-enabled, you can specify that they install packages and programs on the temporary overlay and commit changes later. Alternately, commit the changes on the installation deadline or during a maintenance window. When you commit changes on the installation deadline or during a maintenance window, a restart is required, and the changes persist on the device. Note When you deploy a package or program to a Windows Embedded device, make sure that the device is a member of a collection that has a configured maintenance window. For more information about how maintenance windows are used when you deploy packages and programs to Windows Embedded devices, see Creating Windows Embedded applications. On the Distribution Points page, specify the following information: Deployment options: Specify the action that a client when it uses a distribution point in its current boundary group. Also select the action for the client when it uses a distribution point from a neighbor boundary group or the default site boundary group. Important If you configure the deployment option to Run program from distribution point, make sure to enable the option to Copy the content in this package to a package share on distribution points on the Data Access tab of the package properties. Otherwise the package is unavailable to run from distribution points. Allow clients to use distribution points from the default site boundary group: When this content isn't available from any distribution point in the current or neighbor boundary groups, enable this option to let them try distribution points in the site default boundary group. Complete the wizard. View the deployment in the Deployments node of the Monitoring workspace and in the details pane of the package deployment tab when you select the deployment. For more information, see Monitor packages and programs. Monitor packages and programs To monitor package and program deployments, use the same procedures that you use to monitor applications as detailed in Monitor applications. Packages and programs also include a number of built-in reports, which enable you to monitor information about the deployment status of packages and programs. These reports have the report category of Software Distribution - Packages and Programs and Software Distribution - Package and Program Deployment Status. For more information about how to configure reporting in Configuration Manager, see Introduction to reporting. Manage packages and programs In the Software Library workspace, expand Application Management, and select the Packages node. Select the package that you want to manage, and then choose a management task. Create Prestage Content File Opens the Create Prestaged Content File Wizard, to create a file that contains the package content. Use this file to manually import the package to a remote distribution point. 
Manage packages and programs

In the Software Library workspace, expand Application Management, and select the Packages node. Select the package that you want to manage, and then choose a management task:

- Create Prestage Content File: Opens the Create Prestaged Content File Wizard, to create a file that contains the package content. Use this file to manually import the package to a remote distribution point. This action is useful when you have low network bandwidth between the site server and the distribution point.
- Create Program: Opens the Create Program Wizard, to create a new program for this package.
- Export: Opens the Export Package Wizard, to export the selected package and its content to a file. Use this file to import the package to another hierarchy.
- Deploy: Opens the Deploy Software Wizard, to deploy the selected package and program to a collection. For more information, see Deploy packages and programs.
- Distribute content: Opens the Distribute Content Wizard, to send the content for a package and program to selected distribution points or distribution point groups.
- Import: Opens the Import Package Wizard, to import a previously exported package from a .zip file.
  Tip: When you import an object in the Configuration Manager console, it imports to the current folder. In earlier versions, Configuration Manager always put imported objects in the root node.
- Update Distribution Points: Updates distribution points with the latest content for the selected package and program.

Next steps
8585
dbpedia
0
12
https://www.elliotjordan.com/posts/autopkg-https/
en
Switch AutoPkg recipes to HTTPS
https://www.elliotjordan…avicon-32x32.png
https://www.elliotjordan…avicon-32x32.png
[ "https://www.elliotjordan.com/img/logo.svg", "https://www.elliotjordan.com/img/next.svg" ]
[]
[]
[ "" ]
null
[]
null
A script that helps AutoPkg recipe authors use HTTPS in download recipes, and context about why using HTTPS is important.
en
/favicon.ico
Elliot Jordan
https://www.elliotjordan.com/posts/autopkg-https/
AutoPkg recipes automate and codify the often tedious tasks involved in packaging and distributing Mac software. Central to AutoPkg’s greatness are the many built-in security measures that verify you’re getting the software you intend — including code signature verification, embedded trust information in overrides, and the autopkg audit command. AutoPkg recipe authors should also follow another important security practice: use HTTPS URLs instead of HTTP whenever possible. Whether downloading actual software or downloading metadata about the software, using an HTTPS URL helps prevent person-in-the-middle attacks and keep your organization’s software pipeline secure. In particular, the arguments and input variables used by the URLDownloader, URLTextSearcher, and SparkleUpdateInfoProvider processors should use HTTPS if the option is available, and recipe authors should perform periodic checks to detect when software developers (or their CDNs) begin offering HTTPS downloads. The security benefits aren’t just theoretical; a few years ago, security researchers demonstrated an attack targeting Mac apps using insecure Sparkle feeds. Ben Toms wrote a good article detailing the Mac admin community’s response to the vulnerability. HTTPS Spotter Checking for the existence of HTTPS URLs can be tedious if you manage more than a handful of AutoPkg recipes, so I’ve written a Python tool called HTTPS Spotter that will automate the process for you. The source code is on GitHub and embedded below. Requirements To use the script, you’ll need Git and AutoPkg installed. Steps Clone the script to your Mac (substitute the path to your source, if not ~/Developer). git clone https://gist.github.com/66d1c8772baf5f731bb8ddf263f33401.git ~/Developer/https_spotter Run the script with --help to see usage information. /usr/local/autopkg/python ~/Developer/https_spotter/https_spotter.py --help Now run the script again, pointing it to your repository of AutoPkg recipes: /usr/local/autopkg/python ~/Developer/https_spotter/https_spotter.py ~/Developer/your-autopkg-recipes You’ll see output that might look like this: ../homebysix-recipes/NeoFinder/NeoFinder.download.recipe Replace: http://www.cdfinder.de/en/downloads.html With: https://www.cdfinder.de/en/downloads.html ../homebysix-recipes/FontFinagler/FontFinagler.download.recipe Replace: http://www.markdouma.com/fontfinagler/version.xml With: https://www.markdouma.com/fontfinagler/version.xml 2 suggested changes. To apply, run again with --auto. Run the script again with the --auto flag in order to automatically apply the changes, or apply the changes manually in your preferred text editor. Test the modified recipes prior to committing/pushing the changes to your public repo on GitHub. tip Here's a one-liner that will run recently-modified recipes in "check only" mode: find * -iname "*.recipe" -mtime -1 -exec autopkg run -vvcq "{}" '+' Source code The script is below. Suggestions or improvements are welcome!
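The HTTPS Spotter source itself is embedded in the original post rather than reproduced here. As a rough stand-in, the following standard-library-only sketch illustrates the same general idea; it is not the author's script. It assumes recipes are XML plists with an Input dictionary, only checks top-level Input values that start with http://, and simply reports URLs that also answer over https://. The real tool covers more cases (for example, URLs inside processor arguments) and can apply the changes for you with --auto.

```python
"""Rough sketch of an HTTP-to-HTTPS checker for AutoPkg recipes.

This is NOT the HTTPS Spotter script described above (use the linked Gist
for the real tool). It is a minimal illustration of the general idea: find
http:// values in recipe Input dictionaries and report the ones that also
respond over https://.
"""
import plistlib
import sys
import urllib.request
from pathlib import Path


def https_works(url, timeout=10):
    """Return True if the https:// version of the URL answers at all.

    urlopen verifies TLS certificates by default, which is the point of
    switching to HTTPS in the first place.
    """
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except Exception:
        return False


def suggest_https(recipe_dir):
    """Scan *.recipe plists under recipe_dir and print suggested swaps."""
    for recipe in sorted(Path(recipe_dir).rglob("*.recipe")):
        try:
            plist = plistlib.loads(recipe.read_bytes())
        except Exception:
            continue  # skip files that aren't valid plists
        for value in (plist.get("Input") or {}).values():
            if isinstance(value, str) and value.startswith("http://"):
                candidate = "https://" + value[len("http://"):]
                if https_works(candidate):
                    print(f"{recipe}\n  Replace: {value}\n  With:    {candidate}")


if __name__ == "__main__":
    suggest_https(sys.argv[1] if len(sys.argv) > 1 else ".")
```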
8585
dbpedia
1
48
https://docs.python.org/3/howto/logging-cookbook.html
en
Logging Cookbook
https://docs.python.org/…tic/og-image.png
https://docs.python.org/…tic/og-image.png
[ "https://docs.python.org/3/_static/py.svg", "https://docs.python.org/3/_static/py.svg", "https://docs.python.org/3/_static/py.svg" ]
[]
[]
[ "" ]
null
[]
null
Author, Vinay Sajip <vinay_sajip at red-dove dot com>,. This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference info...
en
../_static/py.svg
Python documentation
https://docs.python.org/3/howto/logging-cookbook.html
Logging Cookbook¶ Author: Vinay Sajip <vinay_sajip at red-dove dot com> This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources. Using logging in multiple modules¶ Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module: import logging import auxiliary_module # create logger with 'spam_application' logger = logging.getLogger('spam_application') logger.setLevel(logging.DEBUG) # create file handler which logs even debug messages fh = logging.FileHandler('spam.log') fh.setLevel(logging.DEBUG) # create console handler with a higher log level ch = logging.StreamHandler() ch.setLevel(logging.ERROR) # create formatter and add it to the handlers formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger logger.addHandler(fh) logger.addHandler(ch) logger.info('creating an instance of auxiliary_module.Auxiliary') a = auxiliary_module.Auxiliary() logger.info('created an instance of auxiliary_module.Auxiliary') logger.info('calling auxiliary_module.Auxiliary.do_something') a.do_something() logger.info('finished auxiliary_module.Auxiliary.do_something') logger.info('calling auxiliary_module.some_function()') auxiliary_module.some_function() logger.info('done with auxiliary_module.some_function()') Here is the auxiliary module: import logging # create logger module_logger = logging.getLogger('spam_application.auxiliary') class Auxiliary: def __init__(self): self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary') self.logger.info('creating an instance of Auxiliary') def do_something(self): self.logger.info('doing something') a = 1 + 1 self.logger.info('done doing something') def some_function(): module_logger.info('received a call to "some_function"') The output looks like this: 2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary 2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary 2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary 2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something 2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something 2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something 2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something 2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function() 2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to 'some_function' 2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function() Logging from multiple threads¶ Logging from multiple threads requires no special effort. 
The following example shows logging from the main (initial) thread and another thread: import logging import threading import time def worker(arg): while not arg['stop']: logging.debug('Hi from myfunc') time.sleep(0.5) def main(): logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d%(threadName)s%(message)s') info = {'stop': False} thread = threading.Thread(target=worker, args=(info,)) thread.start() while True: try: logging.debug('Hello from main') time.sleep(0.75) except KeyboardInterrupt: info['stop'] = True break thread.join() if __name__ == '__main__': main() When run, the script should print something like the following: 0 Thread-1 Hi from myfunc 3 MainThread Hello from main 505 Thread-1 Hi from myfunc 755 MainThread Hello from main 1007 Thread-1 Hi from myfunc 1507 MainThread Hello from main 1508 Thread-1 Hi from myfunc 2010 Thread-1 Hi from myfunc 2258 MainThread Hello from main 2512 Thread-1 Hi from myfunc 3009 MainThread Hello from main 3013 Thread-1 Hi from myfunc 3515 Thread-1 Hi from myfunc 3761 MainThread Hello from main 4017 Thread-1 Hi from myfunc 4513 MainThread Hello from main 4518 Thread-1 Hi from myfunc This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course. Multiple handlers and formatters¶ Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example: import logging logger = logging.getLogger('simple_example') logger.setLevel(logging.DEBUG) # create file handler which logs even debug messages fh = logging.FileHandler('spam.log') fh.setLevel(logging.DEBUG) # create console handler with a higher log level ch = logging.StreamHandler() ch.setLevel(logging.ERROR) # create formatter and add it to the handlers formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') ch.setFormatter(formatter) fh.setFormatter(formatter) # add the handlers to logger logger.addHandler(ch) logger.addHandler(fh) # 'application' code logger.debug('debug message') logger.info('info message') logger.warning('warn message') logger.error('error message') logger.critical('critical message') Notice that the ‘application’ code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh. The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: Unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and remain dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler to debug. Logging to multiple destinations¶ Let’s say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. 
Let’s also assume that the file should contain timestamps, but the console messages should not. Here’s how you can achieve this: import logging # set up logging to file - see previous section for more details logging.basicConfig(level=logging.DEBUG, format='%(asctime)s%(name)-12s%(levelname)-8s%(message)s', datefmt='%m-%d %H:%M', filename='/tmp/myapp.log', filemode='w') # define a Handler which writes INFO messages or higher to the sys.stderr console = logging.StreamHandler() console.setLevel(logging.INFO) # set a format which is simpler for console use formatter = logging.Formatter('%(name)-12s: %(levelname)-8s%(message)s') # tell the handler to use this format console.setFormatter(formatter) # add the handler to the root logger logging.getLogger('').addHandler(console) # Now, we can log to the root logger, or any other logger. First the root... logging.info('Jackdaws love my big sphinx of quartz.') # Now, define a couple of other loggers which might represent areas in your # application: logger1 = logging.getLogger('myapp.area1') logger2 = logging.getLogger('myapp.area2') logger1.debug('Quick zephyrs blow, vexing daft Jim.') logger1.info('How quickly daft jumping zebras vex.') logger2.warning('Jail zesty vixen who grabbed pay from quack.') logger2.error('The five boxing wizards jump quickly.') When you run this, on the console you will see root : INFO Jackdaws love my big sphinx of quartz. myapp.area1 : INFO How quickly daft jumping zebras vex. myapp.area2 : WARNING Jail zesty vixen who grabbed pay from quack. myapp.area2 : ERROR The five boxing wizards jump quickly. and in the file you will see something like 10-22 22:19 root INFO Jackdaws love my big sphinx of quartz. 10-22 22:19 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim. 10-22 22:19 myapp.area1 INFO How quickly daft jumping zebras vex. 10-22 22:19 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack. 10-22 22:19 myapp.area2 ERROR The five boxing wizards jump quickly. As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations. This example uses console and file handlers, but you can use any number and combination of handlers you choose. Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that the directory exists and that you have the permissions to create and update files in it. Custom handling of levels¶ Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. 
Let’s look at a scenario where you want to arrange things as follows: Send messages of severity INFO and WARNING to sys.stdout Send messages of severity ERROR and above to sys.stderr Send messages of severity DEBUG and above to file app.log Suppose you configure logging with the following JSON: { "version":1, "disable_existing_loggers":false, "formatters":{ "simple":{ "format":"%(levelname)-8s - %(message)s" } }, "handlers":{ "stdout":{ "class":"logging.StreamHandler", "level":"INFO", "formatter":"simple", "stream":"ext://sys.stdout" }, "stderr":{ "class":"logging.StreamHandler", "level":"ERROR", "formatter":"simple", "stream":"ext://sys.stderr" }, "file":{ "class":"logging.FileHandler", "formatter":"simple", "filename":"app.log", "mode":"w" } }, "root":{ "level":"DEBUG", "handlers":[ "stderr", "stdout", "file" ] } } This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding a filters section parallel to formatters and handlers: { "filters":{ "warnings_and_below":{ "()":"__main__.filter_maker", "level":"WARNING" } } } and changing the section on the stdout handler to add it: { "stdout":{ "class":"logging.StreamHandler", "level":"INFO", "formatter":"simple", "stream":"ext://sys.stdout", "filters":["warnings_and_below"] } } A filter is just a function, so we can define the filter_maker (a factory function) as follows: def filter_maker(level): level = getattr(logging, level) def filter(record): return record.levelno <= level return filter This converts the string argument passed in to a numeric level, and returns a function which only returns True if the level of the passed in record is at or below the specified level. Note that in this example I have defined the filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different module. 
With the filter added, we can run main.py, which in full is: import json import logging import logging.config CONFIG = ''' { "version": 1, "disable_existing_loggers": false, "formatters": { "simple": { "format": "%(levelname)-8s - %(message)s" } }, "filters": { "warnings_and_below": { "()" : "__main__.filter_maker", "level": "WARNING" } }, "handlers": { "stdout": { "class": "logging.StreamHandler", "level": "INFO", "formatter": "simple", "stream": "ext://sys.stdout", "filters": ["warnings_and_below"] }, "stderr": { "class": "logging.StreamHandler", "level": "ERROR", "formatter": "simple", "stream": "ext://sys.stderr" }, "file": { "class": "logging.FileHandler", "formatter": "simple", "filename": "app.log", "mode": "w" } }, "root": { "level": "DEBUG", "handlers": [ "stderr", "stdout", "file" ] } } ''' def filter_maker(level): level = getattr(logging, level) def filter(record): return record.levelno <= level return filter logging.config.dictConfig(json.loads(CONFIG)) logging.debug('A DEBUG message') logging.info('An INFO message') logging.warning('A WARNING message') logging.error('An ERROR message') logging.critical('A CRITICAL message') And after running it like this: python main.py2>stderr.log >stdout.log We can see the results are as expected: $ more *.log :::::::::::::: app.log :::::::::::::: DEBUG - A DEBUG message INFO - An INFO message WARNING - A WARNING message ERROR - An ERROR message CRITICAL - A CRITICAL message :::::::::::::: stderr.log :::::::::::::: ERROR - An ERROR message CRITICAL - A CRITICAL message :::::::::::::: stdout.log :::::::::::::: INFO - An INFO message WARNING - A WARNING message Configuration server example¶ Here is an example of a module using the logging configuration server: import logging import logging.config import time import os # read initial config file logging.config.fileConfig('logging.conf') # create and start listener on port 9999 t = logging.config.listen(9999) t.start() logger = logging.getLogger('simpleExample') try: # loop through logging calls to see the difference # new configurations make, until Ctrl+C is pressed while True: logger.debug('debug message') logger.info('info message') logger.warning('warn message') logger.error('error message') logger.critical('critical message') time.sleep(5) except KeyboardInterrupt: # cleanup logging.config.stopListening() t.join() And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration: #!/usr/bin/env python import socket, sys, struct with open(sys.argv[1], 'rb') as f: data_to_send = f.read() HOST = 'localhost' PORT = 9999 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print('connecting...') s.connect((HOST, PORT)) print('sending config...') s.send(struct.pack('>L', len(data_to_send))) s.send(data_to_send) s.close() print('complete') Dealing with handlers that block¶ Sometimes you have to get your logging handlers to do their work without blocking the thread you’re logging from. This is common in web applications, though of course it also occurs in other scenarios. A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer’s control (for example, a poorly performing mail or network infrastructure). 
But almost any network-based handler can block: Even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control). One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in their code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code. The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it’s passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing. The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit. An example of using these two classes follows (imports omitted): que = queue.Queue(-1) # no limit on size queue_handler = QueueHandler(que) handler = logging.StreamHandler() listener = QueueListener(que, handler) root = logging.getLogger() root.addHandler(queue_handler) formatter = logging.Formatter('%(threadName)s: %(message)s') handler.setFormatter(formatter) listener.start() # The log output will display the thread which generated # the event (the main thread) rather than the internal # thread which monitors the internal queue. This is what # you want to happen. root.warning('Look out!') listener.stop() which, when run, will produce: MainThread: Look out! Note Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers, it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async code is used in an application, to use the above approach for logging, so that any blocking code runs only in the QueueListener thread. Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener’s constructor. When this is done, the listener compares the level of each message with the handler’s level, and only passes a message to a handler if it’s appropriate to do so. Sending and receiving logging events across a network¶ Let’s say you want to send logging events across a network, and handle them at the receiving end. 
A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end: import logging, logging.handlers rootLogger = logging.getLogger('') rootLogger.setLevel(logging.DEBUG) socketHandler = logging.handlers.SocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT) # don't bother with a formatter, since a socket handler sends the event as # an unformatted pickle rootLogger.addHandler(socketHandler) # Now, we can log to the root logger, or any other logger. First the root... logging.info('Jackdaws love my big sphinx of quartz.') # Now, define a couple of other loggers which might represent areas in your # application: logger1 = logging.getLogger('myapp.area1') logger2 = logging.getLogger('myapp.area2') logger1.debug('Quick zephyrs blow, vexing daft Jim.') logger1.info('How quickly daft jumping zebras vex.') logger2.warning('Jail zesty vixen who grabbed pay from quack.') logger2.error('The five boxing wizards jump quickly.') At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example: import pickle import logging import logging.handlers import socketserver import struct class LogRecordStreamHandler(socketserver.StreamRequestHandler): """Handler for a streaming logging request. This basically logs the record using whatever logging policy is configured locally. """ def handle(self): """ Handle multiple requests - each expected to be a 4-byte length, followed by the LogRecord in pickle format. Logs the record according to whatever policy is configured locally. """ while True: chunk = self.connection.recv(4) if len(chunk) < 4: break slen = struct.unpack('>L', chunk)[0] chunk = self.connection.recv(slen) while len(chunk) < slen: chunk = chunk + self.connection.recv(slen - len(chunk)) obj = self.unPickle(chunk) record = logging.makeLogRecord(obj) self.handleLogRecord(record) def unPickle(self, data): return pickle.loads(data) def handleLogRecord(self, record): # if a name is specified, we use the named logger rather than the one # implied by the record. if self.server.logname is not None: name = self.server.logname else: name = record.name logger = logging.getLogger(name) # N.B. EVERY record gets logged. This is because Logger.handle # is normally called AFTER logger-level filtering. If you want # to do filtering, do it at the client end to save wasting # cycles and network bandwidth! logger.handle(record) class LogRecordSocketReceiver(socketserver.ThreadingTCPServer): """ Simple TCP socket-based logging receiver suitable for testing. """ allow_reuse_address = True def __init__(self, host='localhost', port=logging.handlers.DEFAULT_TCP_LOGGING_PORT, handler=LogRecordStreamHandler): socketserver.ThreadingTCPServer.__init__(self, (host, port), handler) self.abort = 0 self.timeout = 1 self.logname = None def serve_until_stopped(self): import select abort = 0 while not abort: rd, wr, ex = select.select([self.socket.fileno()], [], [], self.timeout) if rd: self.handle_request() abort = self.abort def main(): logging.basicConfig( format='%(relativeCreated)5d%(name)-15s%(levelname)-8s%(message)s') tcpserver = LogRecordSocketReceiver() print('About to start TCP server...') tcpserver.serve_until_stopped() if __name__ == '__main__': main() First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like: About to start TCP server... 59 root INFO Jackdaws love my big sphinx of quartz. 
59 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim. 69 myapp.area1 INFO How quickly daft jumping zebras vex. 69 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack. 69 myapp.area2 ERROR The five boxing wizards jump quickly. Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization. Running a logging socket listener in production¶ To run a logging listener in production, you may need to use a process-management tool such as Supervisor. Here is a Gist which provides the bare-bones files to run the above functionality using Supervisor. It consists of the following files: File Purpose The web application uses Gunicorn, which is a popular web application server that starts multiple worker processes to handle requests. This example setup shows how the workers can write to the same log file without conflicting with one another — they all go through the socket listener. To test these files, do the following in a POSIX environment: Download the Gist as a ZIP archive using the Download ZIP button. Unzip the above files from the archive into a scratch directory. In the scratch directory, run bash prepare.sh to get things ready. This creates a run subdirectory to contain Supervisor-related and log files, and a venv subdirectory to contain a virtual environment into which bottle, gunicorn and supervisor are installed. Run bash ensure_app.sh to ensure that Supervisor is running with the above configuration. Run venv/bin/python client.py to exercise the web application, which will lead to records being written to the log. Inspect the log files in the run subdirectory. You should see the most recent log lines in files matching the pattern app.log*. They won’t be in any particular order, since they have been handled concurrently by different worker processes in a non-deterministic way. You can shut down the listener and the web application by running venv/bin/supervisorctl -c supervisor.conf shutdown. You may need to tweak the configuration files in the unlikely event that the configured ports clash with something else in your test environment. Adding contextual information to your logging output¶ Sometimes you want logging output to contain contextual information in addition to the parameters passed to the logging call. For example, in a networked application, it may be desirable to log client-specific information in the log (e.g. remote client’s username, or IP address). Although you could use the extra parameter to achieve this, it’s not always convenient to pass the information in this way. While it might be tempting to create Logger instances on a per-connection basis, this is not a good idea because these instances are not garbage collected. While this is not a problem in practice, when the number of Logger instances is dependent on the level of granularity you want to use in logging an application, it could be hard to manage if the number of Logger instances becomes effectively unbounded. Using LoggerAdapters to impart contextual information¶ An easy way in which you can pass contextual information to be output along with logging event information is to use the LoggerAdapter class. This class is designed to look like a Logger, so that you can call debug(), info(), warning(), error(), exception(), critical() and log(). 
These methods have the same signatures as their counterparts in Logger, so you can use the two types of instances interchangeably. When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here’s a snippet from the code of LoggerAdapter: def debug(self, msg, /, *args, **kwargs): """ Delegate a debug call to the underlying logger, after adding contextual information from this adapter instance. """ msg, kwargs = self.process(msg, kwargs) self.logger.debug(msg, *args, **kwargs) The process() method of LoggerAdapter is where the contextual information is added to the logging output. It’s passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions of these to use in the call to the underlying logger. The default implementation of this method leaves the message alone, but inserts an ‘extra’ key in the keyword argument whose value is the dict-like object passed to the constructor. Of course, if you had passed an ‘extra’ keyword argument in the call to the adapter, it will be silently overwritten. The advantage of using ‘extra’ is that the values in the dict-like object are merged into the LogRecord instance’s __dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here is a simple example: class CustomAdapter(logging.LoggerAdapter): """ This example adapter expects the passed in dict-like object to have a 'connid' key, whose value in brackets is prepended to the log message. """ def process(self, msg, kwargs): return '[%s] %s' % (self.extra['connid'], msg), kwargs which you can use like this: logger = logging.getLogger(__name__) adapter = CustomAdapter(logger, {'connid': some_conn_id}) Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages. Using objects other than dicts to pass contextual information¶ You don’t need to pass an actual dict to a LoggerAdapter - you could pass an instance of a class which implements __getitem__ and __iter__ so that it looks like a dict to logging. This would be useful if you want to generate values dynamically (whereas the values in a dict would be constant). Using Filters to impart contextual information¶ You can also add contextual information to log output using a user-defined Filter. Filter instances are allowed to modify the LogRecords passed to them, including adding additional attributes which can then be output using a suitable format string, or if needed a custom Formatter. For example in a web application, the request being processed (or at least, the interesting parts of it) can be stored in a threadlocal (threading.local) variable, and then accessed from a Filter to add, say, information from the request - say, the remote IP address and remote user’s username - to the LogRecord, using the attribute names ‘ip’ and ‘user’ as in the LoggerAdapter example above. In that case, the same format string can be used to get similar output to that shown above. 
Here’s an example script: import logging from random import choice class ContextFilter(logging.Filter): """ This is a filter which injects contextual information into the log. Rather than use actual contextual information, we just use random data in this demo. """ USERS = ['jim', 'fred', 'sheila'] IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1'] def filter(self, record): record.ip = choice(ContextFilter.IPS) record.user = choice(ContextFilter.USERS) return True if __name__ == '__main__': levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL) logging.basicConfig(level=logging.DEBUG, format='%(asctime)-15s%(name)-5s%(levelname)-8s IP: %(ip)-15s User: %(user)-8s%(message)s') a1 = logging.getLogger('a.b.c') a2 = logging.getLogger('d.e.f') f = ContextFilter() a1.addFilter(f) a2.addFilter(f) a1.debug('A debug message') a1.info('An info message with %s', 'some parameters') for x in range(10): lvl = choice(levels) lvlname = logging.getLevelName(lvl) a2.log(lvl, 'A message at %s level with %d%s', lvlname, 2, 'parameters') which, when run, produces something like: 2010-09-06 22:38:15,292 a.b.c DEBUG IP: 123.231.231.123 User: fred A debug message 2010-09-06 22:38:15,300 a.b.c INFO IP: 192.168.0.1 User: sheila An info message with some parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f ERROR IP: 127.0.0.1 User: jim A message at ERROR level with 2 parameters 2010-09-06 22:38:15,300 d.e.f DEBUG IP: 127.0.0.1 User: sheila A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,300 d.e.f ERROR IP: 123.231.231.123 User: fred A message at ERROR level with 2 parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1 User: jim A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 parameters 2010-09-06 22:38:15,300 d.e.f DEBUG IP: 192.168.0.1 User: jim A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,301 d.e.f ERROR IP: 127.0.0.1 User: sheila A message at ERROR level with 2 parameters 2010-09-06 22:38:15,301 d.e.f DEBUG IP: 123.231.231.123 User: fred A message at DEBUG level with 2 parameters 2010-09-06 22:38:15,301 d.e.f INFO IP: 123.231.231.123 User: fred A message at INFO level with 2 parameters Use of contextvars¶ Since Python 3.7, the contextvars module has provided context-local storage which works for both threading and asyncio processing needs. This type of storage may thus be generally preferable to thread-locals. The following example shows how, in a multi-threaded environment, logs can populated with contextual information such as, for example, request attributes handled by web applications. For the purposes of illustration, say that you have different web applications, each independent of the other but running in the same Python process and using a library common to them. How can each of these applications have their own log, where all logging messages from the library (and other request processing code) are directed to the appropriate application’s log file, while including in the log additional contextual information such as client IP, HTTP request method and client username? 
Let’s assume that the library can be simulated by the following code: # webapplib.py import logging import time logger = logging.getLogger(__name__) def useful(): # Just a representative event logged from the library logger.debug('Hello from webapplib!') # Just sleep for a bit so other threads get to run time.sleep(0.01) We can simulate the multiple web applications by means of two simple classes, Request and WebApp. These simulate how real threaded web applications work - each request is handled by a thread: # main.py import argparse from contextvars import ContextVar import logging import os from random import choice import threading import webapplib logger = logging.getLogger(__name__) root = logging.getLogger() root.setLevel(logging.DEBUG) class Request: """ A simple dummy request class which just holds dummy HTTP request method, client IP address and client username """ def __init__(self, method, ip, user): self.method = method self.ip = ip self.user = user # A dummy set of requests which will be used in the simulation - we'll just pick # from this list randomly. Note that all GET requests are from 192.168.2.XXX # addresses, whereas POST requests are from 192.16.3.XXX addresses. Three users # are represented in the sample requests. REQUESTS = [ Request('GET', '192.168.2.20', 'jim'), Request('POST', '192.168.3.20', 'fred'), Request('GET', '192.168.2.21', 'sheila'), Request('POST', '192.168.3.21', 'jim'), Request('GET', '192.168.2.22', 'fred'), Request('POST', '192.168.3.22', 'sheila'), ] # Note that the format string includes references to request context information # such as HTTP method, client IP and username formatter = logging.Formatter('%(threadName)-11s%(appName)s%(name)-9s%(user)-6s%(ip)s%(method)-4s%(message)s') # Create our context variables. These will be filled at the start of request # processing, and used in the logging that happens during that processing ctx_request = ContextVar('request') ctx_appname = ContextVar('appname') class InjectingFilter(logging.Filter): """ A filter which injects context-specific information into logs and ensures that only information for a specific webapp is included in its log """ def __init__(self, app): self.app = app def filter(self, record): request = ctx_request.get() record.method = request.method record.ip = request.ip record.user = request.user record.appName = appName = ctx_appname.get() return appName == self.app.name class WebApp: """ A dummy web application class which has its own handler and filter for a webapp-specific log. """ def __init__(self, name): self.name = name handler = logging.FileHandler(name + '.log', 'w') f = InjectingFilter(self) handler.setFormatter(formatter) handler.addFilter(f) root.addHandler(handler) self.num_requests = 0 def process_request(self, request): """ This is the dummy method for processing a request. It's called on a different thread for every request. We store the context information into the context vars before doing anything else. 
""" ctx_request.set(request) ctx_appname.set(self.name) self.num_requests += 1 logger.debug('Request processing started') webapplib.useful() logger.debug('Request processing finished') def main(): fn = os.path.splitext(os.path.basename(__file__))[0] adhf = argparse.ArgumentDefaultsHelpFormatter ap = argparse.ArgumentParser(formatter_class=adhf, prog=fn, description='Simulate a couple of web ' 'applications handling some ' 'requests, showing how request ' 'context can be used to ' 'populate logs') aa = ap.add_argument aa('--count', '-c', type=int, default=100, help='How many requests to simulate') options = ap.parse_args() # Create the dummy webapps and put them in a list which we can use to select # from randomly app1 = WebApp('app1') app2 = WebApp('app2') apps = [app1, app2] threads = [] # Add a common handler which will capture all events handler = logging.FileHandler('app.log', 'w') handler.setFormatter(formatter) root.addHandler(handler) # Generate calls to process requests for i in range(options.count): try: # Pick an app at random and a request for it to process app = choice(apps) request = choice(REQUESTS) # Process the request in its own thread t = threading.Thread(target=app.process_request, args=(request,)) threads.append(t) t.start() except KeyboardInterrupt: break # Wait for the threads to terminate for t in threads: t.join() for app in apps: print('%s processed %s requests' % (app.name, app.num_requests)) if __name__ == '__main__': main() If you run the above, you should find that roughly half the requests go into app1.log and the rest into app2.log, and the all the requests are logged to app.log. Each webapp-specific log will contain only log entries for only that webapp, and the request information will be displayed consistently in the log (i.e. the information in each dummy request will always appear together in a log line). This is illustrated by the following shell output: ~/logging-contextual-webapp$ python main.py app1 processed51 requests app2 processed49 requests ~/logging-contextual-webapp$ wc -l *.log 153 app1.log 147 app2.log 300 app.log 600 total ~/logging-contextual-webapp$ head -3 app1.log Thread-3(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-3(process_request) app1 webapplib jim192.168.3.21 POST Hello from webapplib! Thread-5(process_request) app1 __main__ jim192.168.3.21 POST Request processing started ~/logging-contextual-webapp$ head -3 app2.log Thread-1(process_request) app2 __main__ sheila192.168.2.21 GET Request processing started Thread-1(process_request) app2 webapplib sheila192.168.2.21 GET Hello from webapplib! Thread-2(process_request) app2 __main__ jim192.168.2.20 GET Request processing started ~/logging-contextual-webapp$ head app.log Thread-1(process_request) app2 __main__ sheila192.168.2.21 GET Request processing started Thread-1(process_request) app2 webapplib sheila192.168.2.21 GET Hello from webapplib! Thread-2(process_request) app2 __main__ jim192.168.2.20 GET Request processing started Thread-3(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-2(process_request) app2 webapplib jim192.168.2.20 GET Hello from webapplib! Thread-3(process_request) app1 webapplib jim192.168.3.21 POST Hello from webapplib! Thread-4(process_request) app2 __main__ fred192.168.2.22 GET Request processing started Thread-5(process_request) app1 __main__ jim192.168.3.21 POST Request processing started Thread-4(process_request) app2 webapplib fred192.168.2.22 GET Hello from webapplib! 
Thread-6(process_request) app1 __main__ jim192.168.3.21 POST Request processing started ~/logging-contextual-webapp$ grep app1 app1.log| wc -l 153 ~/logging-contextual-webapp$ grep app2 app2.log| wc -l 147 ~/logging-contextual-webapp$ grep app1 app.log| wc -l 153 ~/logging-contextual-webapp$ grep app2 app.log| wc -l 147 Imparting contextual information in handlers¶ Each Handler has its own chain of filters. If you want to add contextual information to a LogRecord without leaking it to other handlers, you can use a filter that returns a new LogRecord instead of modifying it in-place, as shown in the following script: import copy import logging def filter(record: logging.LogRecord): record = copy.copy(record) record.user = 'jim' return record if __name__ == '__main__': logger = logging.getLogger() logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter('%(message)s from %(user)-8s') handler.setFormatter(formatter) handler.addFilter(filter) logger.addHandler(handler) logger.info('A log message') Logging to a single file from multiple processes¶ Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.) This section documents this approach in more detail and includes a working socket receiver which can be used as a starting point for you to adapt in your own applications. You could also write your own handler which uses the Lock class from the multiprocessing module to serialize access to the file from your processes. The existing FileHandler and subclasses do not make use of multiprocessing at present, though they may do so in the future. Note that at present, the multiprocessing module does not provide working lock functionality on all platforms (see https://bugs.python.org/issue3770). Alternatively, you can use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process – the implementation would be analogous) it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements: # You'll need these imports in your own code import logging import logging.handlers import multiprocessing # Next two import lines for this demo only from random import choice, random import time # # Because you'll want to define the logging configurations for listener and workers, the # listener and worker process functions take a configurer parameter which is a callable # for configuring logging for that process. 
These functions are also passed the queue, # which they use for communication. # # In practice, you can configure the listener however you want, but note that in this # simple example, the listener does not apply level or filter logic to received records. # In practice, you would probably want to do this logic in the worker processes, to avoid # sending events which would be filtered out between processes. # # The size of the rotated files is made small so you can see the results easily. def listener_configurer(): root = logging.getLogger() h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10) f = logging.Formatter('%(asctime)s%(processName)-10s%(name)s%(levelname)-8s%(message)s') h.setFormatter(f) root.addHandler(h) # This is the listener process top-level loop: wait for logging events # (LogRecords)on the queue and handle them, quit when you get a None for a # LogRecord. def listener_process(queue, configurer): configurer() while True: try: record = queue.get() if record is None: # We send this as a sentinel to tell the listener to quit. break logger = logging.getLogger(record.name) logger.handle(record) # No level or filter logic applied - just do it! except Exception: import sys, traceback print('Whoops! Problem:', file=sys.stderr) traceback.print_exc(file=sys.stderr) # Arrays used for random selections in this demo LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] LOGGERS = ['a.b.c', 'd.e.f'] MESSAGES = [ 'Random message #1', 'Random message #2', 'Random message #3', ] # The worker configuration is done at the start of the worker process run. # Note that on Windows you can't rely on fork semantics, so each process # will run the logging configuration code when it starts. def worker_configurer(queue): h = logging.handlers.QueueHandler(queue) # Just the one handler needed root = logging.getLogger() root.addHandler(h) # send all messages, for demo; no other level or filter logic applied. root.setLevel(logging.DEBUG) # This is the worker process top-level loop, which just logs ten events with # random intervening delays before terminating. # The print messages are just so you know it's doing something! def worker_process(queue, configurer): configurer(queue) name = multiprocessing.current_process().name print('Worker started: %s' % name) for i in range(10): time.sleep(random()) logger = logging.getLogger(choice(LOGGERS)) level = choice(LEVELS) message = choice(MESSAGES) logger.log(level, message) print('Worker finished: %s' % name) # Here's where the demo gets orchestrated. Create the queue, create and start # the listener, create ten workers and start them, wait for them to finish, # then send a None to the queue to tell the listener to finish. 
def main(): queue = multiprocessing.Queue(-1) listener = multiprocessing.Process(target=listener_process, args=(queue, listener_configurer)) listener.start() workers = [] for i in range(10): worker = multiprocessing.Process(target=worker_process, args=(queue, worker_configurer)) workers.append(worker) worker.start() for w in workers: w.join() queue.put_nowait(None) listener.join() if __name__ == '__main__': main() A variant of the above script keeps the logging in the main process, in a separate thread: import logging import logging.config import logging.handlers from multiprocessing import Process, Queue import random import threading import time def logger_thread(q): while True: record = q.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) def worker_process(q): qh = logging.handlers.QueueHandler(q) root = logging.getLogger() root.setLevel(logging.DEBUG) root.addHandler(qh) levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] loggers = ['foo', 'foo.bar', 'foo.bar.baz', 'spam', 'spam.ham', 'spam.ham.eggs'] for i in range(100): lvl = random.choice(levels) logger = logging.getLogger(random.choice(loggers)) logger.log(lvl, 'Message no. %d', i) if __name__ == '__main__': q = Queue() d = { 'version': 1, 'formatters': { 'detailed': { 'class': 'logging.Formatter', 'format': '%(asctime)s%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'level': 'INFO', }, 'file': { 'class': 'logging.FileHandler', 'filename': 'mplog.log', 'mode': 'w', 'formatter': 'detailed', }, 'foofile': { 'class': 'logging.FileHandler', 'filename': 'mplog-foo.log', 'mode': 'w', 'formatter': 'detailed', }, 'errors': { 'class': 'logging.FileHandler', 'filename': 'mplog-errors.log', 'mode': 'w', 'level': 'ERROR', 'formatter': 'detailed', }, }, 'loggers': { 'foo': { 'handlers': ['foofile'] } }, 'root': { 'level': 'DEBUG', 'handlers': ['console', 'file', 'errors'] }, } workers = [] for i in range(5): wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,)) workers.append(wp) wp.start() logging.config.dictConfig(d) lp = threading.Thread(target=logger_thread, args=(q,)) lp.start() # At this point, the main process could do some useful work of its own # Once it's done that, it can wait for the workers to terminate... for wp in workers: wp.join() # And now tell the logging thread to finish up, too q.put(None) lp.join() This variant shows how you can e.g. apply configuration for particular loggers - e.g. the foo logger has a special handler which stores all events in the foo subsystem in a file mplog-foo.log. This will be used by the logging machinery in the main process (even though the logging events are generated in the worker processes) to direct the messages to the appropriate destinations. Using concurrent.futures.ProcessPoolExecutor¶ If you want to use concurrent.futures.ProcessPoolExecutor to start your worker processes, you need to create the queue slightly differently. 
Instead of queue = multiprocessing.Queue(-1) you should use queue = multiprocessing.Manager().Queue(-1) # also works with the examples above and you can then replace the worker creation from this: workers = [] for i in range(10): worker = multiprocessing.Process(target=worker_process, args=(queue, worker_configurer)) workers.append(worker) worker.start() for w in workers: w.join() to this (remembering to first import concurrent.futures): with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor: for i in range(10): executor.submit(worker_process, queue, worker_configurer) Deploying Web applications using Gunicorn and uWSGI¶ When deploying Web applications using Gunicorn or uWSGI (or similar), multiple worker processes are created to handle client requests. In such environments, avoid creating file-based handlers directly in your web application. Instead, use a SocketHandler to log from the web application to a listener in a separate process. This can be set up using a process management tool such as Supervisor - see Running a logging socket listener in production for more details. Using file rotation¶ Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler: import glob import logging import logging.handlers LOG_FILENAME = 'logging_rotatingfile_example.out' # Set up a specific logger with our desired output level my_logger = logging.getLogger('MyLogger') my_logger.setLevel(logging.DEBUG) # Add the log message handler to the logger handler = logging.handlers.RotatingFileHandler( LOG_FILENAME, maxBytes=20, backupCount=5) my_logger.addHandler(handler) # Log some messages for i in range(20): my_logger.debug('i = %d' % i) # See what files are created logfiles = glob.glob('%s*' % LOG_FILENAME) for filename in logfiles: print(filename) The result should be 6 separate files, each with part of the log history for the application: logging_rotatingfile_example.out logging_rotatingfile_example.out.1 logging_rotatingfile_example.out.2 logging_rotatingfile_example.out.3 logging_rotatingfile_example.out.4 logging_rotatingfile_example.out.5 The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased. Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value. Use of alternative formatting styles¶ When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two new formatting approaches: string.Template (added in Python 2.4) and str.format() (added in Python 2.6). Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. 
Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template. Here’s an example console session to show the possibilities: >>> import logging >>> root = logging.getLogger() >>> root.setLevel(logging.DEBUG) >>> handler = logging.StreamHandler() >>> bf = logging.Formatter('{asctime}{name}{levelname:8s}{message}', ... style='{') >>> handler.setFormatter(bf) >>> root.addHandler(handler) >>> logger = logging.getLogger('foo.bar') >>> logger.debug('This is a DEBUG message') 2010-10-28 15:11:55,341 foo.bar DEBUG This is a DEBUG message >>> logger.critical('This is a CRITICAL message') 2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message >>> df = logging.Formatter('$asctime $name ${levelname} $message', ... style='$') >>> handler.setFormatter(df) >>> logger.debug('This is a DEBUG message') 2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message >>> logger.critical('This is a CRITICAL message') 2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message >>> Note that the formatting of logging messages for final output to logs is completely independent of how an individual logging message is constructed. That can still use %-formatting, as shown here: >>> logger.error('This is an%s%s%s', 'other,', 'ERROR,', 'message') 2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message >>> Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the actual logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. There is, however, a way that you can use {}- and $- formatting to construct your individual log messages. Recall that for a message you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes: class BraceMessage: def __init__(self, fmt, /, *args, **kwargs): self.fmt = fmt self.args = args self.kwargs = kwargs def __str__(self): return self.fmt.format(*self.args, **self.kwargs) class DollarMessage: def __init__(self, fmt, /, **kwargs): self.fmt = fmt self.kwargs = kwargs def __str__(self): from string import Template return Template(self.fmt).substitute(**self.kwargs) Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual “message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”. It’s a little unwieldy to use the class names whenever you want to log something, but it’s quite palatable if you use an alias such as __ (double underscore — not to be confused with _, the single underscore used as a synonym/alias for gettext.gettext() or its brethren). 
The above classes are not included in Python, though they’re easy enough to copy and paste into your own code. They can be used as follows (assuming that they’re declared in a module called wherever): >>> from wherever import BraceMessage as __ >>> print(__('Message with {0}{name}', 2, name='placeholders')) Message with 2 placeholders >>> class Point: pass ... >>> p = Point() >>> p.x = 0.5 >>> p.y = 0.5 >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', ... point=p)) Message with coordinates: (0.50, 0.50) >>> from wherever import DollarMessage as __ >>> print(__('Message with $num $what', num=2, what='placeholders')) Message with 2 placeholders >>> While the above examples use print() to show how the formatting works, you would of course use logger.debug() or similar to actually log using this approach. One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes. If you prefer, you can use a LoggerAdapter to achieve a similar effect to the above, as in the following example: import logging class Message: def __init__(self, fmt, args): self.fmt = fmt self.args = args def __str__(self): return self.fmt.format(*self.args) class StyleAdapter(logging.LoggerAdapter): def log(self, level, msg, /, *args, stacklevel=1, **kwargs): if self.isEnabledFor(level): msg, kwargs = self.process(msg, kwargs) self.logger.log(level, Message(msg, args), **kwargs, stacklevel=stacklevel+1) logger = StyleAdapter(logging.getLogger(__name__)) def main(): logger.debug('Hello, {}', 'world!') if __name__ == '__main__': logging.basicConfig(level=logging.DEBUG) main() The above script should log the message Hello, world! when run with Python 3.8 or later. Customizing LogRecord¶ Every logging event is represented by a LogRecord instance. When an event is logged and not filtered out by a logger’s level, a LogRecord is created, populated with information about the event and then passed to the handlers for that logger (and its ancestors, up to and including the logger where further propagation up the hierarchy is disabled). Before Python 3.2, there were only two places where this creation was done: Logger.makeRecord(), which is called in the normal process of logging an event. This invoked LogRecord directly to create an instance. makeLogRecord(), which is called with a dictionary containing attributes to be added to the LogRecord. This is typically invoked when a suitable dictionary has been received over the network (e.g. in pickle form via a SocketHandler, or in JSON form via an HTTPHandler). This has usually meant that if you need to do anything special with a LogRecord, you’ve had to do one of the following. Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using setLoggerClass() before any loggers that you care about are instantiated. Add a Filter to a logger or handler, which does the necessary special manipulation you need when its filter() method is called. The first approach would be a little unwieldy in the scenario where (say) several different libraries wanted to do different things. 
Each would attempt to set its own Logger subclass, and the one which did this last would win. The second approach works reasonably well for many cases, but does not allow you to e.g. use a specialized subclass of LogRecord. Library developers can set a suitable filter on their loggers, but they would have to remember to do this every time they introduced a new logger (which they would do simply by adding new packages or modules and doing logger = logging.getLogger(__name__) at module level). It’s probably one too many things to think about. Developers could also add the filter to a NullHandler attached to their top-level logger, but this would not be invoked if an application developer attached a handler to a lower-level library logger — so output from that handler would not reflect the intentions of the library developer. In Python 3.2 and later, LogRecord creation is done through a factory, which you can specify. The factory is just a callable you can set with setLogRecordFactory(), and interrogate with getLogRecordFactory(). The factory is invoked with the same signature as the LogRecord constructor, as LogRecord is the default setting for the factory. This approach allows a custom factory to control all aspects of LogRecord creation. For example, you could return a subclass, or just add some additional attributes to the record once created, using a pattern similar to this: old_factory = logging.getLogRecordFactory() def record_factory(*args, **kwargs): record = old_factory(*args, **kwargs) record.custom_attribute = 0xdecafbad return record logging.setLogRecordFactory(record_factory) This pattern allows different libraries to chain factories together, and as long as they don’t overwrite each other’s attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique should only be used when the use of a Filter does not provide the desired result. Subclassing QueueHandler and QueueListener- a ZeroMQ example¶ Subclass QueueHandler¶ You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ ‘publish’ socket. In the example below,the socket is created separately and passed to the handler (as its ‘queue’): import zmq # using pyzmq, the Python binding for ZeroMQ import json # for serializing records portably ctx = zmq.Context() sock = zmq.Socket(ctx, zmq.PUB) # or zmq.PUSH, or other suitable value sock.bind('tcp://*:5556') # or wherever class ZeroMQSocketHandler(QueueHandler): def enqueue(self, record): self.queue.send_json(record.__dict__) handler = ZeroMQSocketHandler(sock) Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket: class ZeroMQSocketHandler(QueueHandler): def __init__(self, uri, socktype=zmq.PUB, ctx=None): self.ctx = ctx or zmq.Context() socket = zmq.Socket(self.ctx, socktype) socket.bind(uri) super().__init__(socket) def enqueue(self, record): self.queue.send_json(record.__dict__) def close(self): self.queue.close() Subclass QueueListener¶ You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ ‘subscribe’ socket. 
Here’s an example: class ZeroMQSocketListener(QueueListener): def __init__(self, uri, /, *handlers, **kwargs): self.ctx = kwargs.get('ctx') or zmq.Context() socket = zmq.Socket(self.ctx, zmq.SUB) socket.setsockopt_string(zmq.SUBSCRIBE, '') # subscribe to everything socket.connect(uri) super().__init__(socket, *handlers, **kwargs) def dequeue(self): msg = self.queue.recv_json() return logging.makeLogRecord(msg) Subclassing QueueHandler and QueueListener- a pynng example¶ In a similar way to the above section, we can implement a listener and handler using pynng, which is a Python binding to NNG, billed as a spiritual successor to ZeroMQ. The following snippets illustrate – you can test them in an environment which has pynng installed. Just for variety, we present the listener first. Subclass QueueListener¶ # listener.py import json import logging import logging.handlers import pynng DEFAULT_ADDR = "tcp://localhost:13232" interrupted = False class NNGSocketListener(logging.handlers.QueueListener): def __init__(self, uri, /, *handlers, **kwargs): # Have a timeout for interruptability, and open a # subscriber socket socket = pynng.Sub0(listen=uri, recv_timeout=500) # The b'' subscription matches all topics topics = kwargs.pop('topics', None) or b'' socket.subscribe(topics) # We treat the socket as a queue super().__init__(socket, *handlers, **kwargs) def dequeue(self, block): data = None # Keep looping while not interrupted and no data received over the # socket while not interrupted: try: data = self.queue.recv(block=block) break except pynng.Timeout: pass except pynng.Closed: # sometimes happens when you hit Ctrl-C break if data is None: return None # Get the logging event sent from a publisher event = json.loads(data.decode('utf-8')) return logging.makeLogRecord(event) def enqueue_sentinel(self): # Not used in this implementation, as the socket isn't really a # queue pass logging.getLogger('pynng').propagate = False listener = NNGSocketListener(DEFAULT_ADDR, logging.StreamHandler(), topics=b'') listener.start() print('Press Ctrl-C to stop.') try: while True: pass except KeyboardInterrupt: interrupted = True finally: listener.stop() Subclass QueueHandler¶ # sender.py import json import logging import logging.handlers import time import random import pynng DEFAULT_ADDR = "tcp://localhost:13232" class NNGSocketHandler(logging.handlers.QueueHandler): def __init__(self, uri): socket = pynng.Pub0(dial=uri, send_timeout=500) super().__init__(socket) def enqueue(self, record): # Send the record as UTF-8 encoded JSON d = dict(record.__dict__) data = json.dumps(d) self.queue.send(data.encode('utf-8')) def close(self): self.queue.close() logging.getLogger('pynng').propagate = False handler = NNGSocketHandler(DEFAULT_ADDR) # Make sure the process ID is in the output logging.basicConfig(level=logging.DEBUG, handlers=[logging.StreamHandler(), handler], format='%(levelname)-8s%(name)10s%(process)6s%(message)s') levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL) logger_names = ('myapp', 'myapp.lib1', 'myapp.lib2') msgno = 1 while True: # Just randomly select some loggers and levels and log away level = random.choice(levels) logger = logging.getLogger(random.choice(logger_names)) logger.log(level, 'Message no. %5d' % msgno) msgno += 1 delay = random.random() * 2 + 0.5 time.sleep(delay) You can run the above two snippets in separate command shells. If we run the listener in one shell and run the sender in two separate shells, we should see something like the following. 
In the first sender shell: $ python sender.py DEBUG myapp 613 Message no. 1 WARNING myapp.lib2 613 Message no. 2 CRITICAL myapp.lib2 613 Message no. 3 WARNING myapp.lib2 613 Message no. 4 CRITICAL myapp.lib1 613 Message no. 5 DEBUG myapp 613 Message no. 6 CRITICAL myapp.lib1 613 Message no. 7 INFO myapp.lib1 613 Message no. 8 (and so on) In the second sender shell: $ python sender.py INFO myapp.lib2 657 Message no. 1 CRITICAL myapp.lib2 657 Message no. 2 CRITICAL myapp 657 Message no. 3 CRITICAL myapp.lib1 657 Message no. 4 INFO myapp.lib1 657 Message no. 5 WARNING myapp.lib2 657 Message no. 6 CRITICAL myapp 657 Message no. 7 DEBUG myapp.lib1 657 Message no. 8 (and so on) In the listener shell: $ python listener.py Press Ctrl-C to stop. DEBUG myapp 613 Message no. 1 WARNING myapp.lib2 613 Message no. 2 INFO myapp.lib2 657 Message no. 1 CRITICAL myapp.lib2 613 Message no. 3 CRITICAL myapp.lib2 657 Message no. 2 CRITICAL myapp 657 Message no. 3 WARNING myapp.lib2 613 Message no. 4 CRITICAL myapp.lib1 613 Message no. 5 CRITICAL myapp.lib1 657 Message no. 4 INFO myapp.lib1 657 Message no. 5 DEBUG myapp 613 Message no. 6 WARNING myapp.lib2 657 Message no. 6 CRITICAL myapp 657 Message no. 7 CRITICAL myapp.lib1 613 Message no. 7 INFO myapp.lib1 613 Message no. 8 DEBUG myapp.lib1 657 Message no. 8 (and so on) As you can see, the logging from the two sender processes is interleaved in the listener’s output. An example dictionary-based configuration¶ Below is an example of a logging configuration dictionary - it’s taken from the documentation on the Django project. This dictionary is passed to dictConfig() to put the configuration into effect: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format': '{levelname}{asctime}{module}{process:d}{thread:d}{message}', 'style': '{', }, 'simple': { 'format': '{levelname}{message}', 'style': '{', }, }, 'filters': { 'special': { '()': 'project.logging.SpecialFilter', 'foo': 'bar', }, }, 'handlers': { 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'simple', }, 'mail_admins': { 'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', 'filters': ['special'] } }, 'loggers': { 'django': { 'handlers': ['console'], 'propagate': True, }, 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': False, }, 'myproject.custom': { 'handlers': ['console', 'mail_admins'], 'level': 'INFO', 'filters': ['special'] } } } For more information about this configuration, you can see the relevant section of the Django documentation. Using a rotator and namer to customize log rotation processing¶ An example of how you can define a namer and rotator is given in the following runnable script, which shows gzip compression of the log file: import gzip import logging import logging.handlers import os import shutil def namer(name): return name + ".gz" def rotator(source, dest): with open(source, 'rb') as f_in: with gzip.open(dest, 'wb') as f_out: shutil.copyfileobj(f_in, f_out) os.remove(source) rh = logging.handlers.RotatingFileHandler('rotated.log', maxBytes=128, backupCount=5) rh.rotator = rotator rh.namer = namer root = logging.getLogger() root.setLevel(logging.INFO) root.addHandler(rh) f = logging.Formatter('%(asctime)s%(message)s') rh.setFormatter(f) for i in range(1000): root.info(f'Message no. 
{i+1}') After running this, you will see six new files, five of which are compressed: $ ls rotated.log* rotated.log rotated.log.2.gz rotated.log.4.gz rotated.log.1.gz rotated.log.3.gz rotated.log.5.gz $ zcat rotated.log.1.gz 2023-01-20 02:28:17,767 Message no. 996 2023-01-20 02:28:17,767 Message no. 997 2023-01-20 02:28:17,767 Message no. 998 A more elaborate multiprocessing example¶ The following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario. In the example, the main process spawns a listener process and some worker processes. Each of the main process, the listener and the workers have three separate configurations (the workers all share the same configuration). We can see logging in the main process, how the workers log to a QueueHandler and how the listener implements a QueueListener and a more complex logging configuration, and arranges to dispatch events received via the queue to the handlers specified in the configuration. Note that these configurations are purely illustrative, but you should be able to adapt this example to your own scenario. Here’s the script - the docstrings and the comments hopefully explain how it works: import logging import logging.config import logging.handlers from multiprocessing import Process, Queue, Event, current_process import os import random import time class MyHandler: """ A simple handler for logging events. It runs in the listener process and dispatches events to loggers based on the name in the received record, which then get dispatched, by the logging system, to the handlers configured for those loggers. """ def handle(self, record): if record.name == "root": logger = logging.getLogger() else: logger = logging.getLogger(record.name) if logger.isEnabledFor(record.levelno): # The process name is transformed just to show that it's the listener # doing the logging to files and console record.processName = '%s (for %s)' % (current_process().name, record.processName) logger.handle(record) def listener_process(q, stop_event, config): """ This could be done in the main process, but is just done in a separate process for illustrative purposes. This initialises logging according to the specified configuration, starts the listener and waits for the main process to signal completion via the event. The listener is then stopped, and the process exits. """ logging.config.dictConfig(config) listener = logging.handlers.QueueListener(q, MyHandler()) listener.start() if os.name == 'posix': # On POSIX, the setup logger will have been configured in the # parent process, but should have been disabled following the # dictConfig call. # On Windows, since fork isn't used, the setup logger won't # exist in the child, so it would be created and the message # would appear - hence the "if posix" clause. logger = logging.getLogger('setup') logger.critical('Should not appear, because of disabled logger ...') stop_event.wait() listener.stop() def worker_process(config): """ A number of these are spawned for the purpose of illustration. In practice, they could be a heterogeneous bunch of processes rather than ones which are identical to each other. This initialises logging according to the specified configuration, and logs a hundred messages with random levels to randomly selected loggers. A small sleep is added to allow other processes a chance to run. 
This is not strictly needed, but it mixes the output from the different processes a bit more than if it's left out. """ logging.config.dictConfig(config) levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] loggers = ['foo', 'foo.bar', 'foo.bar.baz', 'spam', 'spam.ham', 'spam.ham.eggs'] if os.name == 'posix': # On POSIX, the setup logger will have been configured in the # parent process, but should have been disabled following the # dictConfig call. # On Windows, since fork isn't used, the setup logger won't # exist in the child, so it would be created and the message # would appear - hence the "if posix" clause. logger = logging.getLogger('setup') logger.critical('Should not appear, because of disabled logger ...') for i in range(100): lvl = random.choice(levels) logger = logging.getLogger(random.choice(loggers)) logger.log(lvl, 'Message no. %d', i) time.sleep(0.01) def main(): q = Queue() # The main process gets a simple configuration which prints to the console. config_initial = { 'version': 1, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'level': 'INFO' } }, 'root': { 'handlers': ['console'], 'level': 'DEBUG' } } # The worker process configuration is just a QueueHandler attached to the # root logger, which allows all messages to be sent to the queue. # We disable existing loggers to disable the "setup" logger used in the # parent process. This is needed on POSIX because the logger will # be there in the child following a fork(). config_worker = { 'version': 1, 'disable_existing_loggers': True, 'handlers': { 'queue': { 'class': 'logging.handlers.QueueHandler', 'queue': q } }, 'root': { 'handlers': ['queue'], 'level': 'DEBUG' } } # The listener process configuration shows that the full flexibility of # logging configuration is available to dispatch events to handlers however # you want. # We disable existing loggers to disable the "setup" logger used in the # parent process. This is needed on POSIX because the logger will # be there in the child following a fork(). config_listener = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'detailed': { 'class': 'logging.Formatter', 'format': '%(asctime)s%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' }, 'simple': { 'class': 'logging.Formatter', 'format': '%(name)-15s%(levelname)-8s%(processName)-10s%(message)s' } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': 'simple', 'level': 'INFO' }, 'file': { 'class': 'logging.FileHandler', 'filename': 'mplog.log', 'mode': 'w', 'formatter': 'detailed' }, 'foofile': { 'class': 'logging.FileHandler', 'filename': 'mplog-foo.log', 'mode': 'w', 'formatter': 'detailed' }, 'errors': { 'class': 'logging.FileHandler', 'filename': 'mplog-errors.log', 'mode': 'w', 'formatter': 'detailed', 'level': 'ERROR' } }, 'loggers': { 'foo': { 'handlers': ['foofile'] } }, 'root': { 'handlers': ['console', 'file', 'errors'], 'level': 'DEBUG' } } # Log some initial events, just to show that logging in the parent works # normally. 
logging.config.dictConfig(config_initial) logger = logging.getLogger('setup') logger.info('About to create workers ...') workers = [] for i in range(5): wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(config_worker,)) workers.append(wp) wp.start() logger.info('Started worker: %s', wp.name) logger.info('About to create listener ...') stop_event = Event() lp = Process(target=listener_process, name='listener', args=(q, stop_event, config_listener)) lp.start() logger.info('Started listener') # We now hang around for the workers to finish their work. for wp in workers: wp.join() # Workers all done, listening can now stop. # Logging in the parent still works normally. logger.info('Telling listener to stop ...') stop_event.set() lp.join() logger.info('All done.') if __name__ == '__main__': main() Inserting a BOM into messages sent to a SysLogHandler¶ RFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant section of the specification.) In Python 3.1, code was added to SysLogHandler to insert a BOM into the message, but unfortunately, it was implemented incorrectly, with the BOM appearing at the beginning of the message and hence not allowing any pure-ASCII component to appear before it. As this behaviour is broken, the incorrect BOM insertion code is being removed from Python 3.2.4 and later. However, it is not being replaced, and if you want to produce RFC 5424-compliant messages which include a BOM, an optional pure-ASCII sequence before it and arbitrary Unicode after it, encoded using UTF-8, then you need to do the following: Attach a Formatter instance to your SysLogHandler instance, with a format string such as: 'ASCII section\ufeffUnicode section' The Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM – the byte-string b'\xef\xbb\xbf'. Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding). Replace the Unicode section with whatever placeholders you like; if the data which appears there after substitution contains characters outside the ASCII range, that’s fine – it will be encoded using UTF-8. The formatted message will be encoded using UTF-8 encoding by SysLogHandler. If you follow the above rules, you should be able to produce RFC 5424-compliant messages. If you don’t, logging may not complain, but your messages will not be RFC 5424-compliant, and your syslog daemon may complain. Implementing structured logging¶ Although most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program (without needing complex regular expressions to parse the log message). This is straightforward to achieve using the logging package. 
There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner: import json import logging class StructuredMessage: def __init__(self, message, /, **kwargs): self.message = message self.kwargs = kwargs def __str__(self): return '%s >>> %s' % (self.message, json.dumps(self.kwargs)) _ = StructuredMessage # optional, to improve readability logging.basicConfig(level=logging.INFO, format='%(message)s') logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456)) If the above script is run, it prints: message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"} Note that the order of items might be different according to the version of Python used. If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example: import json import logging class Encoder(json.JSONEncoder): def default(self, o): if isinstance(o, set): return tuple(o) elif isinstance(o, str): return o.encode('unicode_escape').decode('ascii') return super().default(o) class StructuredMessage: def __init__(self, message, /, **kwargs): self.message = message self.kwargs = kwargs def __str__(self): s = Encoder().encode(self.kwargs) return '%s >>> %s' % (self.message, s) _ = StructuredMessage # optional, to improve readability def main(): logging.basicConfig(level=logging.INFO, format='%(message)s') logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603')) if __name__ == '__main__': main() When the above script is run, it prints: message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]} Note that the order of items might be different according to the version of Python used. Customizing handlers with dictConfig()¶ There are times when you want to customize logging handlers in particular ways, and if you use dictConfig() you may be able to do this without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don’t offer built-in support. You can customize handler creation using a plain function such as: def owned_file_handler(filename, mode='a', encoding=None, owner=None): if owner: if not os.path.exists(filename): open(filename, 'a').close() shutil.chown(filename, *owner) return logging.FileHandler(filename, mode, encoding) You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'default': { 'format': '%(asctime)s%(levelname)s%(name)s%(message)s' }, }, 'handlers': { 'file':{ # The values below are popped from this dictionary and # used to create the handler, set the handler's level and # its formatter. '()': owned_file_handler, 'level':'DEBUG', 'formatter': 'default', # The values below are passed to the handler creator callable # as keyword arguments. 'owner': ['pulse', 'pulse'], 'filename': 'chowntest.log', 'mode': 'w', 'encoding': 'utf-8', }, }, 'root': { 'handlers': ['file'], 'level': 'DEBUG', }, } In this example I am setting the ownership using the pulse user and group, just for the purposes of illustration. 
Putting it together into a working script, chowntest.py: import logging, logging.config, os, shutil def owned_file_handler(filename, mode='a', encoding=None, owner=None): if owner: if not os.path.exists(filename): open(filename, 'a').close() shutil.chown(filename, *owner) return logging.FileHandler(filename, mode, encoding) LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'default': { 'format': '%(asctime)s%(levelname)s%(name)s%(message)s' }, }, 'handlers': { 'file':{ # The values below are popped from this dictionary and # used to create the handler, set the handler's level and # its formatter. '()': owned_file_handler, 'level':'DEBUG', 'formatter': 'default', # The values below are passed to the handler creator callable # as keyword arguments. 'owner': ['pulse', 'pulse'], 'filename': 'chowntest.log', 'mode': 'w', 'encoding': 'utf-8', }, }, 'root': { 'handlers': ['file'], 'level': 'DEBUG', }, } logging.config.dictConfig(LOGGING) logger = logging.getLogger('mylogger') logger.debug('A debug message') To run this, you will probably need to run as root: $ sudo python3.3 chowntest.py $ cat chowntest.log 2013-11-05 09:34:51,128 DEBUG mylogger A debug message $ ls -l chowntest.log -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log Note that this example uses Python 3.3 because that’s where shutil.chown() makes an appearance. This approach should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3 versions, you would need to implement the actual ownership change using e.g. os.chown(). In practice, the handler-creating function may be in a utility module somewhere in your project. Instead of the line in the configuration: '()': owned_file_handler, you could use e.g.: '()': 'ext://project.util.owned_file_handler', where project.util can be replaced with the actual name of the package where the function resides. In the above working script, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable is resolved by dictConfig() from the ext:// specification. This example hopefully also points the way to how you could implement other types of file change - e.g. setting specific POSIX permission bits - in the same way, using os.chmod(). Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one of the rotating file handlers, or a different type of handler altogether. Using particular formatting styles throughout your application¶ In Python 3.2, the Formatter gained a style keyword parameter which, while defaulting to % for backward compatibility, allowed the specification of { or $ to support the formatting approaches supported by str.format() and string.Template. Note that this governs the formatting of logging messages for final output to logs, and is completely orthogonal to how an individual logging message is constructed. Logging calls (debug(), info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. 
There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. There have been suggestions to associate format styles with specific loggers, but that approach also runs into backward compatibility problems because any existing code could be using a given logger name and using %-formatting. For logging to work interoperably between any third-party libraries and your code, decisions about formatting need to be made at the level of the individual logging call. This opens up a couple of ways in which alternative formatting styles can be accommodated. Using LogRecord factories¶ In Python 3.2, along with the Formatter changes mentioned above, the logging package gained the ability to allow users to set their own LogRecord subclasses, using the setLogRecordFactory() function. You can use this to set your own subclass of LogRecord, which does the Right Thing by overriding the getMessage() method. The base class implementation of this method is where the msg % args formatting happens, and where you can substitute your alternate formatting; however, you should be careful to support all formatting styles and allow %-formatting as the default, to ensure interoperability with other code. Care should also be taken to call str(self.msg), just as the base implementation does. Refer to the reference documentation on setLogRecordFactory() and LogRecord for more information. Using custom message objects¶ There is another, perhaps simpler way that you can use {}- and $- formatting to construct your individual log messages. You may recall (from Using arbitrary objects as messages) that when logging you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes: class BraceMessage: def __init__(self, fmt, /, *args, **kwargs): self.fmt = fmt self.args = args self.kwargs = kwargs def __str__(self): return self.fmt.format(*self.args, **self.kwargs) class DollarMessage: def __init__(self, fmt, /, **kwargs): self.fmt = fmt self.kwargs = kwargs def __str__(self): from string import Template return Template(self.fmt).substitute(**self.kwargs) Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual “message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”. If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization). Examples of this approach are given below. Firstly, formatting with str.format(): >>> __ = BraceMessage >>> print(__('Message with {0}{1}', 2, 'placeholders')) Message with 2 placeholders >>> class Point: pass ... >>> p = Point() >>> p.x = 0.5 >>> p.y = 0.5 >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p)) Message with coordinates: (0.50, 0.50) Secondly, formatting with string.Template: >>> __ = DollarMessage >>> print(__('Message with $num $what', num=2, what='placeholders')) Message with 2 placeholders >>> One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. 
So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes shown above. Configuring filters with dictConfig()¶ You can configure filters using dictConfig(), though it might not be obvious at first glance how to do it (hence this recipe). Since Filter is the only filter class included in the standard library, and it is unlikely to cater to many requirements (it’s only there as a base class), you will typically need to define your own Filter subclass with an overridden filter() method. To do this, specify the () key in the configuration dictionary for the filter, specifying a callable which will be used to create the filter (a class is the most obvious, but you can provide any callable which returns a Filter instance). Here is a complete example: import logging import logging.config import sys class MyFilter(logging.Filter): def __init__(self, param=None): self.param = param def filter(self, record): if self.param is None: allow = True else: allow = self.param not in record.msg if allow: record.msg = 'changed: ' + record.msg return allow LOGGING = { 'version': 1, 'filters': { 'myfilter': { '()': MyFilter, 'param': 'noshow', } }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'filters': ['myfilter'] } }, 'root': { 'level': 'DEBUG', 'handlers': ['console'] }, } if __name__ == '__main__': logging.config.dictConfig(LOGGING) logging.debug('hello') logging.debug('hello - noshow') This example shows how you can pass configuration data to the callable which constructs the instance, in the form of keyword parameters. When run, the above script will print: changed: hello which shows that the filter is working as configured. A couple of extra points to note: If you can’t refer to the callable directly in the configuration (e.g. if it lives in a different module, and you can’t import it directly where the configuration dictionary is), you can use the form ext://... as described in Access to external objects. For example, you could have used the text 'ext://__main__.MyFilter' instead of MyFilter in the above example. As well as for filters, this technique can also be used to configure custom handlers and formatters. See User-defined objects for more information on how logging supports using user-defined objects in its configuration, and see the other cookbook recipe Customizing handlers with dictConfig() above. Customized exception formatting¶ There might be times when you want to do customized exception formatting - for argument’s sake, let’s say you want exactly one line per logged event, even when exception information is present. You can do this with a custom formatter class, as shown in the following example: import logging class OneLineExceptionFormatter(logging.Formatter): def formatException(self, exc_info): """ Format an exception so that it prints on a single line. 
""" result = super().formatException(exc_info) return repr(result) # or format into one line however you want to def format(self, record): s = super().format(record) if record.exc_text: s = s.replace('\n', '') + '|' return s def configure_logging(): fh = logging.FileHandler('output.txt', 'w') f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|', '%d/%m/%Y %H:%M:%S') fh.setFormatter(f) root = logging.getLogger() root.setLevel(logging.DEBUG) root.addHandler(fh) def main(): configure_logging() logging.info('Sample message') try: x = 1 / 0 except ZeroDivisionError as e: logging.exception('ZeroDivisionError: %s', e) if __name__ == '__main__': main() When run, this produces a file with exactly two lines: 28/01/2015 07:21:23|INFO|Sample message| 28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n File "logtest7.py", line 30, in main\n x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'| While the above treatment is simplistic, it points the way to how exception information can be formatted to your liking. The traceback module may be helpful for more specialized needs. Speaking logging messages¶ There might be situations when it is desirable to have logging messages rendered in an audible rather than a visible format. This is easy to do if you have text-to-speech (TTS) functionality available in your system, even if it doesn’t have a Python binding. Most TTS systems have a command line program you can run, and this can be invoked from a handler using subprocess. It’s assumed here that TTS command line programs won’t expect to interact with users or take a long time to complete, and that the frequency of logged messages will be not so high as to swamp the user with messages, and that it’s acceptable to have the messages spoken one at a time rather than concurrently, The example implementation below waits for one message to be spoken before the next is processed, and this might cause other handlers to be kept waiting. Here is a short example showing the approach, which assumes that the espeak TTS package is available: import logging import subprocess import sys class TTSHandler(logging.Handler): def emit(self, record): msg = self.format(record) # Speak slowly in a female English voice cmd = ['espeak', '-s150', '-ven+f3', msg] p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) # wait for the program to finish p.communicate() def configure_logging(): h = TTSHandler() root = logging.getLogger() root.addHandler(h) # the default formatter just returns the message root.setLevel(logging.DEBUG) def main(): logging.info('Hello') logging.debug('Goodbye') if __name__ == '__main__': configure_logging() sys.exit(main()) When run, this script should say “Hello” and then “Goodbye” in a female voice. The above approach can, of course, be adapted to other TTS systems and even other systems altogether which can process messages via external programs run from a command line. Buffering logging messages and outputting them conditionally¶ There might be situations where you want to log messages in a temporary area and only output them if a certain condition occurs. For example, you may want to start logging debug events in a function, and if the function completes without errors, you don’t want to clutter the log with the collected debug information, but if there is an error, you want all the debug information to be output as well as the error. 
Here is an example which shows how you could do this using a decorator for your functions where you want logging to behave this way. It makes use of the logging.handlers.MemoryHandler, which allows buffering of logged events until some condition occurs, at which point the buffered events are flushed - passed to another handler (the target handler) for processing. By default, the MemoryHandler flushed when its buffer gets filled up or an event whose level is greater than or equal to a specified threshold is seen. You can use this recipe with a more specialised subclass of MemoryHandler if you want custom flushing behavior. The example script has a simple function, foo, which just cycles through all the logging levels, writing to sys.stderr to say what level it’s about to log at, and then actually logging a message at that level. You can pass a parameter to foo which, if true, will log at ERROR and CRITICAL levels - otherwise, it only logs at DEBUG, INFO and WARNING levels. The script just arranges to decorate foo with a decorator which will do the conditional logging that’s required. The decorator takes a logger as a parameter and attaches a memory handler for the duration of the call to the decorated function. The decorator can be additionally parameterised using a target handler, a level at which flushing should occur, and a capacity for the buffer (number of records buffered). These default to a StreamHandler which writes to sys.stderr, logging.ERROR and 100 respectively. Here’s the script: import logging from logging.handlers import MemoryHandler import sys logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None): if target_handler is None: target_handler = logging.StreamHandler() if flush_level is None: flush_level = logging.ERROR if capacity is None: capacity = 100 handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler) def decorator(fn): def wrapper(*args, **kwargs): logger.addHandler(handler) try: return fn(*args, **kwargs) except Exception: logger.exception('call failed') raise finally: super(MemoryHandler, handler).flush() logger.removeHandler(handler) return wrapper return decorator def write_line(s): sys.stderr.write('%s\n' % s) def foo(fail=False): write_line('about to log at DEBUG ...') logger.debug('Actually logged at DEBUG') write_line('about to log at INFO ...') logger.info('Actually logged at INFO') write_line('about to log at WARNING ...') logger.warning('Actually logged at WARNING') if fail: write_line('about to log at ERROR ...') logger.error('Actually logged at ERROR') write_line('about to log at CRITICAL ...') logger.critical('Actually logged at CRITICAL') return fail decorated_foo = log_if_errors(logger)(foo) if __name__ == '__main__': logger.setLevel(logging.DEBUG) write_line('Calling undecorated foo with False') assert not foo(False) write_line('Calling undecorated foo with True') assert foo(True) write_line('Calling decorated foo with False') assert not decorated_foo(False) write_line('Calling decorated foo with True') assert decorated_foo(True) When this script is run, the following output should be observed: Calling undecorated foo with False about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... Calling undecorated foo with True about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... about to log at ERROR ... about to log at CRITICAL ... 
Calling decorated foo with False about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... Calling decorated foo with True about to log at DEBUG ... about to log at INFO ... about to log at WARNING ... about to log at ERROR ... Actually logged at DEBUG Actually logged at INFO Actually logged at WARNING Actually logged at ERROR about to log at CRITICAL ... Actually logged at CRITICAL
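If you don't need the decorator machinery shown above, the same buffering behaviour can be had from MemoryHandler directly. The following is a rough sketch assuming only the standard library; the logger name and messages are illustrative, not part of the original example:

import logging
from logging.handlers import MemoryHandler

# Buffer up to 100 records; they are passed to the target only when an ERROR
# (or worse) is seen, when the buffer fills up, or when the handler is closed.
target = logging.StreamHandler()
buffering_handler = MemoryHandler(100, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger('buffered-example')
logger.setLevel(logging.DEBUG)
logger.addHandler(buffering_handler)

logger.debug('held in the buffer')
logger.info('also held in the buffer')
logger.error('this record causes everything buffered so far to be flushed')

buffering_handler.close()  # by default, closing flushes any remaining buffered records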
8585
dbpedia
3
51
https://www.geeksforgeeks.org/car-package-in-r/
en
Car package in R
https://media.geeksforge…_200x200-min.png
https://media.geeksforge…_200x200-min.png
[ "https://media.geeksforgeeks.org/gfg-gg-logo.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/wp-content/uploads/20230215133304/car1.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215133724/car2.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215135329/car3.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215135441/car4.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215135545/car5.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215135647/car6.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215140128/car7.png", "https://media.geeksforgeeks.org/wp-content/uploads/20230215141342/car8.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/Google-news.svg", "https://media.geeksforgeeks.org/auth-dashboard-uploads/new-premium-rbanner-us.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/suggestChangeIcon.png", "https://media.geeksforgeeks.org/auth-dashboard-uploads/createImprovementIcon.png" ]
[]
[]
[ "Data Structures", "Algorithms", "Python", "Java", "C", "C++", "JavaScript", "Android Development", "SQL", "Data Science", "Machine Learning", "PHP", "Web Development", "System Design", "Tutorial", "Technical Blogs", "Interview Experience", "Interview Preparation", "Programming", "Competitive Programming", "Jobs", "Coding Contests", "GATE CSE", "HTML", "CSS", "React", "NodeJS", "Placement", "Aptitude", "Quiz", "Computer Science", "Programming Examples", "GeeksforGeeks Courses", "Puzzles", "SSC", "Banking", "UPSC", "Commerce", "Finance", "CBSE", "School", "k12", "General Knowledge", "News", "Mathematics", "Exams" ]
null
[ "GeeksforGeeks" ]
2023-07-31T03:37:33
A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.
en
https://media.geeksforge…/gfg_favicon.png
GeeksforGeeks
https://www.geeksforgeeks.org/car-package-in-r/
In R, there are several packages available that provide functions and tools to perform various tasks related to data analysis and visualization. The “car” package is one of the packages that provide functions and tools for regression analysis. In this article, we will provide a comprehensive and in-depth guide to the “car” package in R, including how to install and load the package, how to use the functions it provides, and how to visualize the results. Installing the “car” Package Before we can start using the functions provided by the “car” package, we need to install it. Installing the “car” package can be done using the following code: install.packages("car") Loading the “car” Package Once the “car” package is installed, we need to load it into the R environment. Loading the “car” package can be done using the following code: library(car) Example 1: Simple Linear Regression Step 1: Data Preparation For just demonstration purposes, we will use the “mtcars” dataset that is available in the R environment. The “mtcars” contains data on 32 different models of cars and includes information like miles per gallon (mpg), number of cylinders (cyl), horsepower (hp), weight (wt), and acceleration (qsec). Output: mpg cyl disp hp Min. :10.40 Min. :4.000 Min. : 71.1 Min. : 52.0 1st Qu.:15.43 1st Qu.:4.000 1st Qu.:120.8 1st Qu.: 96.5 Median :19.20 Median :6.000 Median :196.3 Median :123.0 Mean :20.09 Mean :6.188 Mean :230.7 Mean :146.7 3rd Qu.:22.80 3rd Qu.:8.000 3rd Qu.:326.0 3rd Qu.:180.0 Max. :33.90 Max. :8.000 Max. :472.0 Max. :335.0 drat wt qsec vs Min. :2.760 Min. :1.513 Min. :14.50 Min. :0.0000 1st Qu.:3.080 1st Qu.:2.581 1st Qu.:16.89 1st Qu.:0.0000 Median :3.695 Median :3.325 Median :17.71 Median :0.0000 Mean :3.597 Mean :3.217 Mean :17.85 Mean :0.4375 3rd Qu.:3.920 3rd Qu.:3.610 3rd Qu.:18.90 3rd Qu.:1.0000 Max. :4.930 Max. :5.424 Max. :22.90 Max. :1.0000 am gear carb Min. :0.0000 Min. :3.000 Min. :1.000 1st Qu.:0.0000 1st Qu.:3.000 1st Qu.:2.000 Median :0.0000 Median :4.000 Median :2.000 Mean :0.4062 Mean :3.688 Mean :2.812 3rd Qu.:1.0000 3rd Qu.:4.000 3rd Qu.:4.000 Max. :1.0000 Max. :5.000 Max. :8.000 Step 2: Simple Linear Regression The “car” package provides functions for performing linear regression analysis. To perform simple linear regression analysis, we can use the lm() function. For example, if we want to perform a simple linear regression analysis to predict the miles per gallon (mpg) of a car based on its weight (wt), we can use the following code: Step 3: Model Summary Once we have fit a linear regression model, we can get a summary of the model using the summary() function. The summary of the model provides information such as the coefficients of the regression equation, the R-squared value, and the p-value of the regression coefficients. To get the summary of the model, we can use the following code: Output: Call: lm(formula = mpg ~ wt, data = mtcars) Residuals: Min 1Q Median 3Q Max -4.5432 -2.3647 -0.1252 1.4096 6.8727 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 37.2851 1.8776 19.858 < 2e-16 *** wt -5.3445 0.5591 -9.559 1.29e-10 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 3.046 on 30 degrees of freedom Multiple R-squared: 0.7528, Adjusted R-squared: 0.7446 F-statistic: 91.38 on 1 and 30 DF, p-value: 1.294e-10 Step 4: Plotting the Regression Line The “car” package also provides functions for visualizing the results of the regression analysis. To plot the regression line, we can use the abline() function. 
The abline() function takes the parameters of the regression equation as inputs and plots the regression line on a scatter plot of the data. To plot the regression line, we can use the following code: Output: Step 5: Residual Plot The residual plot is a plot of the residuals (the differences between the observed values and the predicted values) against the fitted values. The residual plot can be used to assess the assumptions of the linear regression model, such as the assumption of homoscedasticity (constant variance of the residuals). To plot the residual plot, we can use the residplot() function provided by the “car” package. The residplot() function takes the linear regression model as an input and plots the residual Output: Test stat Pr(>|Test stat|) wt 3.258 0.002860 ** Tukey test 3.258 0.001122 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Example 2: Multiple Linear Regression Step 1: Fit a multiple linear regression model to predict mpg based on wt and hp Output: Call: lm(formula = mpg ~ wt + hp, data = mtcars) Residuals: Min 1Q Median 3Q Max -3.941 -1.600 -0.182 1.050 5.854 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 37.22727 1.59879 23.285 < 2e-16 *** wt -3.87783 0.63273 -6.129 1.12e-06 *** hp -0.03177 0.00903 -3.519 0.00145 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.593 on 29 degrees of freedom Multiple R-squared: 0.8268, Adjusted R-squared: 0.8148 F-statistic: 69.21 on 2 and 29 DF, p-value: 9.109e-12 Step 2: Plot the model Output: Above is the scatter plot of the residuals against the fitted values. The plot is used to check for the presence of nonlinearity in the model. The Normal Q-Q plot checks for the assumption of normality by comparing the distribution of the residuals to a normal distribution. The above is a plot of the square root of the absolute values of the standardized residuals against the fitted values. It is used to check for the presence of heteroscedasticity (unequal variances of the residuals). If it shows a horizontal line with equally spread points, then it suggests that the residuals have equal variance across the range of fitted values. Above is the plot of the standardized residuals against the leverage of each observation. The leverage is a measure of how much an observation affects the regression line. The plot is used to identify influential observations, which are observations that have a high leverage and a large standardized residual. Influential observations can have a significant impact on the regression results and should be examined more closely. Step 3: Plot the model Output: Test stat Pr(>|Test stat|) wt 3.1383 0.0039785 ** hp 2.4973 0.0186653 * Tukey test 3.7968 0.0001466 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Example 3: Anova and diagnostic Plots Step 1: Perform ANOVA on the model Output: A anova: 3 × 5 Df Sum Sq Mean Sq F value Pr(>F) <int> <dbl> <dbl> <dbl> <dbl> wt 1 847.72525 847.725250 126.04109 4.488360e-12 hp 1 83.27418 83.274183 12.38133 1.451229e-03 Residuals 29 195.04775 6.725785 NA NA The ANOVA function in the above example performs an analysis of variance (ANOVA) on the model that tests the significance of the overall model and the individual variables in the model. 
Step 2: Plot partial regression plots for the variables in the model The plot function creates four plots of the partial regression plots for variables in the model which is useful for understanding the relationship between each variable and the response variable. The partial regression plot shows the relationship between the response variable and each predictor variable which has control for other predictor variables in the model. It allows you to see the unique contribution of each variable to the model and to identify any potential interactions between the variables. Output: The above four diagnostic plots are used to check for the assumptions of linear regression models. The Residuals vs Fitted plot checks for the assumption of linearity by showing the relationship between the residuals and the fitted values. The Normal Q-Q plot checks for the assumption of normality by comparing the distribution of the residuals to a normal distribution. The Scale-Location plot checks for the assumption of homoscedasticity by showing the relationship between the square root of the standardized residuals and the fitted values. The Cook’s Distance plot checks for influential observations by showing the influence of each observation on the regression coefficients. These plots are important for evaluating the fit of a linear regression model and identifying potential issues, such as outliers or influential observations. The above shows the influence of each observation on the model fit. Leverage is a measure of how much an observation differs from the average of the predictor variables. It can be used to identify potential influential observations. It suggests a well-fit linear regression model as the points in the plot are randomly distributed, with no points having both high leverage and high residuals. Conclusion In conclusion, the car package in R provides a variety of functions and tools for conducting regression analysis and diagnostic plots. In this tutorial, we covered examples of using the car package for regression analysis. By working through the above examples you should now have a good understanding of how to use the car package for regression analysis in R. Whether you’re a beginner or an experienced R user, the car package is a valuable tool for conducting regression analysis and exploring relationships between variables in your data.
8585
dbpedia
1
33
http://www.science.smith.edu/~jcrouser/SDS293/labs/python-intro.html
en
Introduction to python
[]
[]
[]
[ "" ]
null
[]
null
null
For most analyses, the first step involves importing a data set into python. For this class, a lot of the data comes from the ISLR package. Unfortunately this isn't available for python, so I've exported the data to CSV to make things easier. We can use the read_csv() function from the pandas library to import it. We begin by loading in the Auto data set. Nothing happens when you run this, but now the data is available in your environment. To view the data, we can either print the entire dataset by typing its name, or we can just look at the first few rows with the head() function. Now that we have the data, we can begin to learn things about it. For example, if we want to know how many rows and columns the DataFrame contains: This tells us that the data has 392 observations, or rows, and nine variables, or columns. The .dtypes attribute tells us that most of the variables are numeric or integer, although the name variable is a character vector. Often, we want to know some basic things about variables in our data. Calling the describe() method on a DataFrame will give you an idea of some of the distributions of your variables. The summary suggests that origin might be better thought of as a factor. It only seems to have three possible values, 1, 2 and 3. If we read the documentation about the data we will learn that these numbers correspond to where the car is from: 1. American, 2. European, 3. Japanese. So let's cast that variable into a categorical variable using the astype() function. If we want to include a summary of this variable when we call .describe(), we need to let python know we want ALL the variables (not just the quantitative ones): The basic idea is that you need to initialize a plot with ggplot() and then add "geoms" (short for geometric objects) to the plot. The ggplot package is based on the Grammar of Graphics, a famous book on data visualization theory. It is a way to map attributes in your data (like variables) to "aesthetics" on the plot. The parameter aes() is short for aesthetic. For more about the ggplot2 syntax, view the documentation using the help() function. There are also great online resources for ggplot2, like ggplot from ŷhat. The cylinders variable is stored as a numeric vector, so python has treated it as quantitative. However, since there are only a small number of possible values for cylinders, one may prefer to treat it as a qualitative variable. We can turn it into a factor, again using an astype() call.
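Putting the steps above together, a minimal sketch of the workflow in pandas might look like the following (the file name Auto.csv is an assumption; the lab's actual notebook cells are not reproduced here):

import pandas as pd

# Import the data set exported from the ISLR package (file name assumed)
Auto = pd.read_csv('Auto.csv')

print(Auto.head())        # first few rows
print(Auto.shape)         # (rows, columns), e.g. (392, 9)
print(Auto.dtypes)        # most columns numeric; 'name' is stored as strings
print(Auto.describe())    # distributions of the quantitative variables

# Treat origin and cylinders as qualitative (categorical) variables
Auto['origin'] = Auto['origin'].astype('category')
Auto['cylinders'] = Auto['cylinders'].astype('category')

# Ask for ALL columns in the summary, not just the quantitative ones
print(Auto.describe(include='all'))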